Generate point cloud from depth image #26
Yes. To convert the depth image to a point cloud, you also need the rotated unit vectors and the translation vector from the extrinsics. All of this is exposed via aptly named topics published by the camera node. You will of course have to port this to your specific scenario, but the basis for how to do this computation is in the linked code.
By using the exact framework in ex-cartesian.cpp, I was able to create a point cloud. About half of the points turn out right, but the other half look like they have a coordinate-system issue. You can see this in the images below: the green cloud is the original reference cloud and the white cloud is the one recreated using ex-cartesian.cpp. This is the code I am using.
I am not sure what is throwing off the conversion. Let me know if you see anything funny. Another quick question about converting Cartesian coordinates back to image coordinates: is there a way to use something similar to a pinhole camera model?
@saikns What software version are you using on the camera?
libo3d3xx - 0.4.4
The code looks OK upon inspection. The image certainly looks funny -- no computed points in -y space, as if the points are rotated around z by 180 degrees and around y by 90 (or something???). It is not clear why this affects only some of the points. We need to dig deeper. Can you run this code outside of ROS? It is probably easiest to generate your point clouds, write them to a PCD file, then load them in a viewer.
Not that I am aware of -- i.e., the camera firmware does not expose the intrinsics in this way.
I'm currently working on a patch version of the library that will be compatible with the latest IFM firmware release. While doing this development, the only unit test that is failing is the Cartesian computation test. I believe something is incorrect somewhere. I am not sure if it is how I am doing the computation or the data that are being returned by the sensor. @graugans I can use your help here and perhaps we should tie in the quant team (Christian's group). Something is wrong. I am doing this testing outside of ROS, so, that is not an issue. In the attached PNG screenshots, the green cloud is the Cartesian data as computed by the sensor. The white cloud is the Cartesian data as computed by my code. I have attached three views of the same data from a scene taken in my office. I have also attached a zip file that contains:
The code that generated the data is here.
I tried to reproduce the issue at my place. For that purpose, I extended the live viewer example of libo3d3xx to display the off-board computed Cartesian data and the unit vectors. I could not see a problem with firmwares 1.4.1639 and 1.6.2038, all tested with a camera application without rectification. However, with the latest development firmware 1.5.698 I could basically reproduce Tom's results: there is a general translation error visible, and also a somewhat larger error on the left margin of the image.

@tpanzarella, is it possible that the errors we see with the 1.5.698 firmware are a result of the extended image chunk headers, such that an offset error causes those large displacements at the beginning of each image row? Also, might this cause the extrinsic calibration to be wrong within the library?

@saikns, unfortunately this does not help with your initial problem. Have you already tried to reproduce your issue with the sample code provided by @tpanzarella? Could you provide a dump of your camera configuration so we can reproduce the error you are seeing?
@cfreundl This is a good thought. I plan to implement the code to deal with the new chunk headers today. Once I have that done, I will test again and report back.
The extrinsics have always been a sticky issue. Basically, if you store a custom extrinsic calibration on the camera, all bets are off. However, applying the extrinsics (as stored by the factory) to the radial distance image and the unit vectors to compute the Cartesian data, then transforming into the camera frame, is well defined.
@cfreundl I believe your assertions were correct. I updated the code to correctly handle the v2 image chunk headers in this commit. The Cartesian Computation unit tests are passing again -- against both v2 and v1 headers. Additionally, I re-ran my test from above, and as you can see, the data look good: Many thanks for your assistance with this. Not sure this helps @saikns so much -- feels like something else is going on there. The updated code is still living on my dev-0.4.9 branch. I still have a few updates to put in before I release and merge it to the HEAD. |
As a side note to my last comment, you can clearly see that I am not rectifying the images either, so I am pretty confident that this was the fix necessary to deal with the issue uncovered while looking into @saikns' issue.
I am using the depth image to detect irregularities or obstacles in the path of an AGV with an IFM O3D303 mounted. The process involves removing all depth data other than obstacles from the image through a mask created by gradient-based processing. I would like to convert this selective depth image to a point cloud. Is it possible to do this using this package?
P.S. I have a working version of this using OpenNI to convert a selective depth image to a point cloud; I am looking for similar functionality.