Photosynth at SIGGRAPH
Today’s coolest session at SIGGRAPH in Boston features Photosynth, a sneak preview held by Microsoft Live Labs. Simply put, it’s about assembling a lot of digital photos, applying algorithms to extract distinctive features, and linking these together into a kind of large 3D model by calculating 3D positions from adjacent images.
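The “linking features together” step presumably boils down to matching feature descriptors between overlapping photos. As a rough illustration (not Microsoft’s actual method), here is a minimal nearest-neighbour matcher with Lowe’s ratio test, the standard trick for rejecting ambiguous matches; the descriptors below are made-up toy data:

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour descriptor matching with Lowe's ratio test.
    desc_a: (n, d) descriptors from photo A, desc_b: (m, d) from photo B.
    Returns a list of (i, j) index pairs judged reliable."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, runner_up = order[0], order[1]
        # Accept only if the best match is clearly better than the runner-up;
        # otherwise the feature is ambiguous and should be discarded.
        if dists[best] < ratio * dists[runner_up]:
            matches.append((i, int(best)))
    return matches

# Toy example: two descriptors in A, three candidates in B.
a = np.array([[0.0, 0.0], [10.0, 10.0]])
b = np.array([[0.1, 0.0], [10.0, 10.1], [5.0, 5.0]])
print(ratio_test_matches(a, b))  # [(0, 0), (1, 1)]
```

With tens of thousands of descriptors per photo you would of course swap the brute-force loop for a k-d tree or similar, but the acceptance criterion stays the same.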
Of course the algorithm is secret, but I would guess they use something like SIFT (Scale-Invariant Feature Transform) to detect image features regardless of scale and orientation, and then build a point cloud by intersecting the camera rays through matched features.
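The point-cloud part can be sketched with a toy triangulation: once the same feature is matched in two photos, you have two camera rays, and the 3D point is roughly the midpoint of their segment of closest approach. This is my illustration under that assumption, not Photosynth’s actual solver, and all coordinates are invented:

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint of the shortest segment between rays c1 + t1*d1 and c2 + t2*d2.
    c1, c2: camera centres; d1, d2: viewing directions toward the feature."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = c2 - c1
    dot = d1 @ d2
    # Normal equations for minimising |(c1 + t1*d1) - (c2 + t2*d2)|^2.
    A = np.array([[1.0, -dot], [-dot, 1.0]])
    rhs = np.array([d1 @ b, -(d2 @ b)])
    t1, t2 = np.linalg.solve(A, rhs)
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))

# Two cameras a metre apart, both looking at a point 2 m away.
c1 = np.array([0.0, 0.0, 0.0])
c2 = np.array([1.0, 0.0, 0.0])
p = np.array([0.5, 0.5, 2.0])
print(triangulate_midpoint(c1, p - c1, c2, p - c2))  # ~[0.5 0.5 2.0]
```

In practice the rays never intersect exactly (noisy feature positions, imperfect camera calibration), which is why the midpoint formulation, or a proper bundle adjustment over all photos at once, is used instead of a naive intersection.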
The technology looks awesome, but I think it faces several technical issues. How many photos are needed to make it look good? My guess would be: a lot. Also, how will the algorithm cope with photos from different batches? Their example photos were taken at the same time, but what if some are taken in daylight, some at night, and some with obstructions such as tourists posing in front of the scenery? If this technology is made available on a site where everyone can upload their own images, I hope the photos are approved before insertion, or else it will all collapse as people add silly stuff.
Nevertheless, it is still a very interesting idea with many applications. For instance, what happens if every photo of the world is inserted into Photosynth and then linked with Google Earth? Mind-boggling!