Hey, does anybody know if it's possible to use a Lytro and export the .ltr files into ReCap? This might be a bit out there, but it seems that the data captured with the light field sensor would be conducive to 3D mesh construction. I'm not an engineer, nor do I fully understand all of the technology behind this, but if the light field sensor captures the direction of the photons entering the sensor, and you can output true 3D images from the Lytro with a single image, it seems like mesh creation would be entirely possible and possibly much more accurate. Does anybody know if anyone is working on this at all? Is the camera actually capturing a mesh from a single perspective?
Thanks!
Related
http://www.winstonmoy.com/2013/10/seene-ios-its-like-lytro-and-kinect-had-a-threesome-w-google-maps/
Interesting, thank you for sharing
No mesh, but it is tracking basic points, corners, and features across multiple frames to work out where the perspective comes from.
More accurate? I believe not yet.
The number of points it tracks is far lower than what photogrammetry uses to match pixels, or what laser scanners capture per second.
When the sensors improve, we might see something that matches the other technologies.
Cheers,
Mitko
OK, thanks. I ask because the light field sensor seems to capture depth information, so it seems you could arrive at a mesh using that depth information in much the same way that, say, a Faro laser scanner measures the distance travelled by its laser beam. But given the limited number of microlenses in the light field sensor, maybe that is what you mean by the limited number of points being tracked. Thank you for the response! I'm new to photogrammetry. Do you have any good reading on how photogrammetry works?
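For what it's worth, the "depth information to mesh" idea above boils down to back-projecting a per-pixel depth map into a 3D point cloud, which you could then mesh. Here's a minimal Python sketch of that step using a simple pinhole camera model. This is purely illustrative: the function name, the intrinsics (fx, fy, cx, cy), and the depth values are made up, and this isn't anything the Lytro software actually exposes.

```python
# Hypothetical sketch: back-project a per-pixel depth map (as a light
# field camera or laser scanner might estimate) into 3D points using
# a pinhole camera model. All names and numbers are illustrative.

def depth_to_points(depth, fx, fy, cx, cy):
    """Convert a 2D depth map (list of rows, depths in metres) into
    (x, y, z) points in the camera coordinate frame.

    fx, fy: focal lengths in pixels; cx, cy: principal point."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:  # skip pixels with no valid depth estimate
                continue
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# Toy 2x2 depth map: a flat surface 1 m away, principal point at (0.5, 0.5)
pts = depth_to_points([[1.0, 1.0], [1.0, 1.0]], fx=1.0, fy=1.0, cx=0.5, cy=0.5)
print(pts)  # four 3D points, one per pixel
```

A real pipeline would then run surface reconstruction (e.g. Poisson or Delaunay-based meshing) over the resulting point cloud; the point density, as noted above, is what limits how accurate that mesh can be.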
Some interesting reading here.
https://www.lytro.com/downloads/resources/renng-thesis.pdf