Wednesday, March 19, 2008

Photosynth and how the 'collective image memory' is harvested

We're building a collective digital memory with all those:
  • votes and ratings
  • comments and blogs
  • tags and bookmarks

We can put this data on Google Maps and provide strong links between place and time, as well as invent applications that use this data to create new environments. We don't even need to use the common map metaphor: IBM's wonderful tool 'Many Eyes' allows us to analyse data in interactive graphs and visualisations. Data can be processed as simple XML, allowing for automated feeds of information and graphic representation such as the example below:
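To make that concrete, here's a minimal sketch of consuming such an automated XML feed with Python's standard library. The feed format, element names, and tags below are invented for illustration, not from any real service:

```python
import xml.etree.ElementTree as ET

# Hypothetical feed: element and tag names are invented for illustration.
feed = """
<feed>
  <item><tag>cowdenbeath</tag><count>12</count></item>
  <item><tag>forth-bridge</tag><count>34</count></item>
  <item><tag>fife</tag><count>7</count></item>
</feed>
"""

root = ET.fromstring(feed)

# Pull out (tag, count) pairs ready for mapping or graphing.
data = [(item.findtext("tag"), int(item.findtext("count")))
        for item in root.findall("item")]

# Most-used tags first.
for tag, count in sorted(data, key=lambda pair: -pair[1]):
    print(f"{tag}: {count}")
```

Once the data is in that shape, feeding it to a charting or mapping tool is the easy part.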

And then there is some next-level image-onomy, or whatever new paradigm term we need to invent, that Photosynth ushers in - a technology originally developed by Blaise Aguera y Arcas and acquired by Microsoft.

It allows a feed of photos to build up a map of the earth and its places, not just using flyover images from aeroplanes or satellite data but using our own photographs and even illustrations. Photosynth uses public images, and it doesn't matter whether these photos are taken by a £10 disposable camera or a posh SLR - it can stitch them together and produce a never-ending tapestry that allows you to move around geographic areas and locations with ease.

With Photosynth you can:

  • Walk or fly through a scene to see photos from any angle.
  • Seamlessly zoom in or out of a photo whether it's megapixels or gigapixels in size.
  • See where pictures were taken in relation to one another.
  • Find similar photos to the one you're currently viewing.
  • Send a collection - or a particular view of one - to a friend.
Zooming in might have you moving through 10 photos using your own as a starting point. Your landscape shot of the fair ex-mining town of Cowdenbeath on your digital camera might be part of a family of 1,000 photos of Cowdenbeath. Using this pool of images like stepping stones in the middle of a pond, you can zoom in deeper and deeper to find the Forth Road Bridge in detail when it was just a red speck on your own photo.
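One way to picture that stepping-stone zoom is as a walk through a graph of photos linked by visual overlap. This is only an illustrative sketch (the photo names and overlap links are made up), not Photosynth's actual algorithm:

```python
from collections import deque

# Invented example data: two photos are linked when they share
# enough matched detail to be stitched together.
overlaps = {
    "my_landscape": ["town_square", "hillside"],
    "town_square": ["my_landscape", "bridge_far"],
    "hillside": ["my_landscape"],
    "bridge_far": ["town_square", "bridge_close"],
    "bridge_close": ["bridge_far", "bridge_detail"],
    "bridge_detail": ["bridge_close"],
}

def zoom_path(start, target):
    """Breadth-first search for the shortest chain of overlapping
    photos from your own shot to a distant detail."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in overlaps.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(zoom_path("my_landscape", "bridge_detail"))
```

Here the shortest chain runs from your landscape shot through the town square and two closer bridge photos to the detail shot - each step a different person's photograph.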

Photosynth takes data from everyone - from the collective memory of what the world looks like. A model of the entire earth emerges as our own photos get tagged with other people's metadata and the mesh of linking becomes tighter and stronger. The network effect continually enriches the space and easily provides cross-user and cross-model experiences and information.
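That 'mesh of linking' can be sketched as a simple structure in which photos from different people become connected the moment they share a tag. All the user names, filenames, and tags below are invented for illustration:

```python
from collections import defaultdict

# Invented example: each (user, photo) pair carries that user's tags.
photos = {
    ("alice", "IMG_001"): {"cowdenbeath", "highstreet"},
    ("bob", "DSC_882"): {"cowdenbeath", "fife"},
    ("carol", "P100"): {"fife", "bridge"},
}

# Index photos by tag: this inverse mapping is what links
# strangers' photos together into one mesh.
by_tag = defaultdict(set)
for photo, tags in photos.items():
    for tag in tags:
        by_tag[tag].add(photo)

def linked(photo):
    """Photos from other users that share at least one tag."""
    return {other
            for tag in photos[photo]
            for other in by_tag[tag]
            if other != photo}

print(linked(("alice", "IMG_001")))
```

Every new tagged photo adds edges to this mesh, which is the network effect in miniature: Alice's photo links to Bob's through 'cowdenbeath', Bob's links to Carol's through 'fife', and so on.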

This is the real semantic web, or 'Web 3.0', alongside the Social Graph developing through the use of people networks. These inferences are taking on a life of their own, and one can only wonder at what Web 5.0 might be.

There's a great demo hosted by TED where Blaise runs through the application to jaw-dropping effect.

The Photosynth team modestly states: "Our software takes a large collection of photos of a place or an object, analyzes them for similarities, and displays them in a reconstructed three-dimensional space."

This experience is on the web to try right now - but be warned, Mac fans: it's PC only for now.
