Wednesday, October 5, 2011

Apple Siri. The Butlers are coming

Siri
Web 2.0 democratised e-publishing and data creation in a friendly way for the masses, and lo and behold there are 182 million websites on the net in 2011. Some of these websites hold the lion's share of the content (Facebook, Flickr, Google, Amazon etc.) but collectively it's a grand publishing of human thoughts, artefacts, wishes and desires. What a wonder!

Creating content is one thing, but leveraging insights across it is harder. In truth there is still too much information for humans to use effectively, and we find ourselves to be a gear in the machine rather than the driver: connecting systems together, cutting, pasting and rekeying.

I want to ask simple questions of my computers and have powerful background processing bring me the answer. Questions like "Which famous guitarists endorse products but don't use them in their live shows?" A query like this would require text analysis of the question to understand its meaning, scouring the net for famous guitarists, checking which brands they claim to use in endorsements, checking their live 'kit' on websites, picture recognition of the guitars they are actually playing, comparison of statements against reality, and finally a weighted response based on the volume of data processed. Not easy, and lots of key tapping.
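
Even the last step alone hints at the plumbing involved. As a toy illustration of just the statements-versus-reality comparison, with entirely invented guitarists and brands, the heart of it is a set difference per player:

    # Toy sketch of the comparison step only; all names and data are invented.
    claimed = {
        "Guitarist A": {"BrandX", "BrandY"},   # brands endorsed in statements
        "Guitarist B": {"BrandZ"},
    }
    spotted = {
        "Guitarist A": {"BrandY"},             # brands recognised in live photos
        "Guitarist B": {"BrandZ"},
    }

    for player, brands in claimed.items():
        gap = brands - spotted.get(player, set())   # endorsed but never seen on stage
        if gap:
            print(f"{player} endorses {sorted(gap)} but wasn't seen using them live")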

Voice Control on the Bat Computer
It won't always be this way.

Batman's computer has been serving him for years (in the fictional world of DC Comics), controlled by his voice and helping him fight crime. He simply asks the computer a question while he is driving or smashing supervillains' heads together, and his Batcomputer gets back to him with the summary. Questions like "Cross-reference the known toxins that the Joker uses with chemical factories in the vicinity of Poison Ivy's locations over the past three months" are answered with ease. If clarification is needed then it asks Batman. All achieved using natural language as the interface.

Digital buddies, assistants and advisors are here already for consumers, albeit mostly in the form of recommendation engines and advertising systems. Last.fm helps reduce millions of bands down to something I might like based on my previous listening, while Amazon advises me of books and products I might enjoy based on my previous activity.

These systems help us save time and slash the options and possibilities down to something we can handle. The flood of data falls below our eyeline and we can concentrate on the richer questions and answers.

For me the biggest aspect of the new iPhone 4S release was Siri, the virtual assistant. As innocuous as it might appear on the surface (fixing calendars, looking up the weather, setting reminders), I think it is one of the first believable assistants that interacts with consumers in a rich way.



Over time this service will grow to understand your accent, tone of voice and mood. It might voluntarily ask you what's wrong, or question your commands if it thinks you are acting irrationally. It will potentially develop its own personality, and it will answer more and more complex queries. Multiple Siris may even communicate and negotiate with one another to save their 'owners' from corresponding back and forth needlessly. Young children who can't type and older people may begin interacting with computers in richer ways. Siri may find its way into robots and other household devices beyond mobiles.

It's exciting and this is only the beginning. Others have tried to provide this kind of service but none have had the design and user base that Apple have in order to make it 'stick'.

I'll be watching this one carefully.


Thursday, July 14, 2011

Circles and Ladders: the Google+ Contact Classification Paradigm

Grouping contacts is an impossible feat, isn't it? We have to add Bob to Sport, Work, Musician, 'Allowed to Call after 10pm' and all those other groups we never keep up to date.

Like all user-supplied, up-front people-classification systems, Google+ Circles can quickly turn into hierarchy ladders when you manage your contacts with them, especially in a social context.


Contact grouping, grading and intimacy-scoring questions arise, like: "Why am I not in your Personal folder?", "Why am I only in Acquaintances?", "Why am I not in group X?"

It makes for unhappiness, not to mention all the manual labour of managing those connections.

The flat monism of classifying all your people simply as 'friends', and allowing the system (not you) to speculatively match between profiles managed by the identity owner, is elegant. It causes fewer arguments over status and how other people classify you.

Baboons would be relieved to have such a thing. 

Flattish, loosely coupled metadata overlaps between people, such as: attended the same school, favourite band is X, graduated in Kent, holidayed in France, has a photo of Mt Everest, provide a more resilient model in the end, for both programmatic and humanistic reasons. It is also a more natural petri dish for harmonious social groups when developing new services.

The degree to which this metadata is enhanced as you interact with 'your people' defines the living, breathing classification of what they mean or meant to you. It allows for relationship management (manual and automatic) between the people you already know, and it also allows for the emergence of machine dialogue such as 'People You Should Know'.
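
A minimal sketch of what that overlap scoring could look like, with invented people and attributes (this is not Google's algorithm, just the shape of the idea):

    # Score contacts by shared metadata instead of filing them into fixed groups.
    def overlap(a: set, b: set) -> float:
        """Jaccard similarity: shared attributes over all attributes."""
        return len(a & b) / len(a | b) if a | b else 0.0

    me = {"school:Kent", "band:TheKinks", "holiday:France"}
    people = {
        "Bob":   {"school:Kent", "band:TheKinks", "topic:sport"},
        "Alice": {"band:TheKinks", "photo:Everest"},
        "Carol": {"topic:gardening"},
    }

    # 'People You Should Know': rank everyone by overlap with my own metadata.
    for name in sorted(people, key=lambda n: -overlap(me, people[n])):
        print(f"{name}: {overlap(me, people[name]):.2f}")

Because the scores are continuous and recomputed as the metadata grows, nobody gets demoted from a named folder; relationships just drift up and down the ranking through use.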

It's dynamic and weighted through use - it's no longer the leaden categories of 'Work', 'Home', and 'France'.

Stop Motion Animation on Mobile Devices

I thought my recent foray into stop motion animation using mobile devices would be worth a few words. 

Timeline

Prehistory: A while ago I bought a cheap model of our solar system from a physics supply shop in South India. It was rusted and had been sitting there in the odd shop for ages. I shipped it back to the UK, where it took up space for a while without good reason. Recently, before moving house, I found a reprieve for it by deciding to use it in a stop motion music video.
This post is a look at how I got from that idea to a finished stop motion animation video, and at the software, hardware and people observations along the way.

Step 1: Choose the track
First things first: choose the track, one I had in both English and Spanish, for double the search-engine and geo-juice. Easy.

Step 2: Storyboard the idea and the props

If you can't draw like your comic book heroes then a photo-based approach to creating a storyboard is a good method. I wanted something on mobile that would let me quickly shoot, arrange and annotate the scenes.

The unbeatable Cinemek Storyboard on iPhone does this job better than any other; I'm not aware of any competition. You can just shoot and annotate anywhere, meaning that when inspiration strikes you can move things along quickly. Projects in their early inception stage do depend on thinking done away from desks and desktops. The Cinemek UI is simple, and storyboard arranging has a great 'physics' feel and touch, making it easy to swap the scenes around.

The screenshots below show how you can use it to:
  • arrange scenes
  • add camera movements against scenes: tracking, pans, zooms, focus, lighting - your still images can come to life
  • add notes and titles







I went and bought a new lamp, some special 'daylight' bulbs, black fabric, plastic bin bags, props, glue and wire, and imported an iPhone 4 stand from the US (the Naja King). This stand allowed me to hang or stand the iPhone 4 camera at all the different angles I imagined I would need. A word of note on the Naja: it can drive you mad with the slight movement it has once a new shape 'settles', skewing the framing downwards after positioning as it responds to gravity. The Naja has more flexibility and positioning capability than a Gorilla stand but less sturdiness; a trade-off.

Naja King Flexible iPhone Stand



Step 3: Choose stop motion apps and prepare to shoot!




Stop Motion Recorder
iTimeLapse
I must have bought and tried out every stop motion app in the Apple App Store.

The best fits for the job seemed to be Stop Motion Recorder and iTimeLapse.

What turned out to be most important in the end was how reliable a program was. Losing 30 minutes of work every 5 hours is not acceptable, even if the application has all the whiz-bang features. iTimeLapse wasn't reliable enough: after losing footage to periodic crashes, I relegated it to backup duty only. The trade-off with Stop Motion Recorder was its 12 frames per second and low capture resolution.
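
For scale (assuming a roughly three-and-a-half-minute song): at 12 frames per second the full video needs 12 × 210 = 2,520 usable stills, so losing even a few minutes of captures to a crash sets you back noticeably.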


Step 4: Shoot


People-ware: After doing a session or two on my own trying to get finished scenes, I knew that an assistant with nimble hands could help me separate the shooting and set-adjusting roles. Too much movement back and forth between camera and set eats time and makes the whole thing slow.

I hired someone I knew who studies fine art to help out with further set-making and the shoot. This was a wise move. 


We slogged it out and got most of the first half of the video. Stop motion animation is always more effort than you imagine. It's finicky! People say you should double estimates on average - with stop motion you treble it and budget for extra medication.

High spec for iTimeLapse
After my time with the assistant had expired I had to shoot the remainder myself, mostly the solar system scenes. I noticed that iTimeLapse had released a new version of the app and it was more stable. Quandary: do I double the quality of image capture halfway through the shoot, or keep the look and feel I have?

Using the logic that the video was in two halves (1: man in the house; 2: man in the solar system), I convinced myself that the resolution could change for the second half.

I looked more seriously into iTimeLapse's audio trigger for taking a photo, i.e. shout "now!" and, click, a shot is taken. It worked well once the sound levels were tweaked, and even the misfires caused by my thumping around the 'scene' positioning things were easily edited out.
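
The idea is simple enough to sketch. The following is not iTimeLapse's implementation, just a minimal amplitude-triggered capture loop, assuming the sounddevice and opencv-python packages and a webcam on device 0:

    # Minimal audio-triggered capture: a loud "now!" grabs a frame.
    import cv2
    import numpy as np
    import sounddevice as sd

    THRESHOLD = 0.3                     # normalised RMS level that counts as a shout
    camera = cv2.VideoCapture(0)        # webcam standing in for the phone camera
    shot = 0

    def on_audio(indata, frames, time, status):
        global shot
        rms = float(np.sqrt(np.mean(indata ** 2)))
        if rms > THRESHOLD:             # a real version would debounce repeats
            ok, frame = camera.read()
            if ok:
                cv2.imwrite(f"frame_{shot:04d}.jpg", frame)
                shot += 1

    with sd.InputStream(channels=1, callback=on_audio):
        input("Shout to capture a frame; press Enter to stop.\n")
    camera.release()

Tuning THRESHOLD is the same sound-level tweaking the app needed; too low and your footsteps take photos for you.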

I managed to get the rest of the footage.



Step 4.5: Decide whether to continue developing the project solely on mobile

This was quite a quick decision.

iMovie for iOS (the major mobile video compositing tool) wasn't suitable because:

  • It has some memory problems and performance issues
  • I simply needed a bigger screen to see what I had shot. It was time to take a good look.
  • No ability to run plugins in-line and a lack of features
  • I am more capable with Final Cut Express than iMovie, so pre-producing in mobile iMovie and importing into desktop iMovie wasn't a draw either

Maybe an iPad would have helped with the bigger screen but at the present time nothing beats editing and compositing on a powerful machine with a big screen.

Desktop and big screen win then.

Step 5: Import
QuickTime import of Image Sequences
I imported the stop-motion data from my iPhone and quickly filled the iPhoto library up with thousands of photos only slightly different from one another (my wife took it well given it is a shared Mac). I spent some time fixing the orientation of some of the shots, due to the phone gyroscope flipping periodically, and then they were ready to use.
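
Were I doing this today, the orientation pass could be scripted rather than done by hand. A sketch using Pillow's exif_transpose, which applies the EXIF orientation flag the phone writes (the folder name is hypothetical):

    # Batch-fix photo orientation using the EXIF flag each frame carries.
    from pathlib import Path
    from PIL import Image, ImageOps

    for path in sorted(Path("frames").glob("*.jpg")):
        with Image.open(path) as im:
            upright = ImageOps.exif_transpose(im)   # rotate/flip per EXIF tag
            upright.load()                          # pull pixels before the file closes
        upright.save(path)                          # overwrite with the corrected frame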

Final Cut Express does not support importing individual frames to create video, so I had to buy QuickTime Pro to get a good method of creating footage from a sequence of photos. QuickTime Pro also allowed me to specify the exact dimensions of the photos. The mobile apps had cut and trim problems when they exported video, which is why I was working with photos rather than video by this point.
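
For anyone retracing this without QuickTime Pro, the same image-sequence-to-video step can be scripted. A sketch assuming ffmpeg is installed and the stills follow a numbered naming scheme (both assumptions, not what I actually used):

    # Stitch numbered stills into a 12 fps QuickTime movie with ffmpeg.
    import subprocess

    subprocess.run([
        "ffmpeg",
        "-framerate", "12",              # playback rate matching the capture rate
        "-i", "frames/shot_%04d.jpg",    # assumed numbering scheme for the photos
        "-s", "640x480",                 # force exact output dimensions
        "-pix_fmt", "yuv420p",           # broadly compatible pixel format
        "scene01.mov",
    ], check=True)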

Step 6: Edit


Final Cut Express
Final Cut Express is a really good tool once you put some time in to learn it. My audio editing and compositing experience translated to video editing concepts, so I had a quicker start when I first used it.

I have one of the KB rubber keyboard covers that shows all the Final Cut Express shortcuts, which is really useful. Shortcuts for me, however, are only a band-aid: there are so many pieces of software I use that the only real solution for being quick with all my apps would be voice recognition.

I used only two plugins, very sparingly: a contrast and brightness adjustment, to make sure the plastic bin bags representing the cosmos looked dark enough, and a plugin called Lock and Load Express, which smooths out shaky footage. Using Lock and Load has a trade-off though, as it selects the subset of each frame that best stitches with the next frame; it finds a path of subset squares through the stills, almost like threading a needle. When shooting at 12 frames a second this was quite noticeable from a textural and resolution standpoint, so I restricted it to the manual stop-motion zoom effects I had tried, which were very shaky.
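
For the curious, the 'threading a needle' idea can be sketched. This is not Lock and Load's actual algorithm, only a translation-only toy using OpenCV: track where a reference patch from the first frame lands in each later frame, then crop so the patch stays put:

    # Crop-window stabilisation sketch: compensate for frame-to-frame drift.
    import cv2

    def stabilise(frames, margin=40):
        """Yield frames cropped so a central reference patch stays fixed."""
        it = iter(frames)
        first = next(it)
        h, w = first.shape[:2]
        px, py = w // 2 - 50, h // 2 - 50
        patch = cv2.cvtColor(first[py:py + 100, px:px + 100], cv2.COLOR_BGR2GRAY)
        yield first[margin:h - margin, margin:w - margin]
        for frame in it:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            scores = cv2.matchTemplate(gray, patch, cv2.TM_CCOEFF_NORMED)
            _, _, _, (bx, by) = cv2.minMaxLoc(scores)   # best match location
            dx, dy = bx - px, by - py                   # how far the scene drifted
            x = min(max(margin + dx, 0), 2 * margin)    # clamp crop to the frame
            y = min(max(margin + dy, 0), 2 * margin)
            yield frame[y:y + h - 2 * margin, x:x + w - 2 * margin]

The margin is exactly the trade-off mentioned above: every stabilised frame gives up a border of pixels, which is why the texture and resolution loss shows at 12 fps.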

Lock and Load Express

Step 7: Master and Export


All done in Final Cut Express. The export options are very simple and let you select high-speed broadband as the likely consumption method, producing a QuickTime file small enough to upload to YouTube and other services.

I had an English and a Spanish version of the video, so I did one export each, muting the other language's track.

Step 8: Promote

To get it indexed properly by the 'machine' I did the following:

  • Used Spanish metadata for the Spanish-language version wherever it was posted
  • Put direct hyperlinks into the YouTube metadata pointing to my website
  • Did a short music blog on Blogger and pinged FeedBurner
  • Put the video on musician profile aggregator sites such as Artist Data and MusicSubmit
  • Wrote this post to improve search-engine links; it is a technical article creating a cross-domain link from science and technology into the arts and music
  • Put it on my music website (the front page, the videos section and the album page it was taken from)
  • Updated my YouTube channel to have this as the default video
  • Put it on MySpace
  • Did a status update on my personal Facebook and my Facebook music page
  • Put it on Twitter with hashtags to catch those searching for video and animation

I could have done more submissions to other sites (Yahoo, MetaCafe) but I couldn't be bothered; YouTube, Google, Facebook and Twitter are the behemoths for video discovery. If need be I'll add more node juice later on.

I'll keep an eye on Google and YouTube Analytics for now to see how it gets on and whether the Spanish or the English version gets the most eyeballs.

Final
So is it possible to shoot and edit stop motion animation entirely on mobile? 

The short answer is that for preparation and capture mobile is preferable, but for editing and mastering the final piece you still need a big machine.



Enjoy the video!

    
The final video is here in English
and here in Spanish




Tuesday, May 24, 2011

Rebuilding Iberian Motorways with Slime Mould

Wet machines and soft computers planning road routes organically.

Place your 'problem' in a bag and shake it to get the answer!






Although this was done on a simple flat map, there is no reason why it couldn't be a 3D model with variable temperatures and other conditions throughout. Landscapes can then be modelled more accurately so that the organism finds target paths that are efficient from a biological standpoint.

These types of navigational problems* were among the first tackled by 'hard' machines (computers like the one you are reading this with) when they were first developed, and it's nice to see bio-computing developing along an initially parallel path.


---------------------------



* Travelling Salesman Problem: what is the shortest route that visits each city exactly once and then returns to the starting city? See more classic computing problems here.
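
As an aside, the classic problem fits in a few lines of code. A minimal nearest-neighbour heuristic (greedy, not optimal) over approximate coordinates for some Iberian cities:

    # Greedy nearest-neighbour tour for the Travelling Salesman Problem.
    import math

    cities = {"Madrid": (40.4, -3.7), "Lisbon": (38.7, -9.1),
              "Porto": (41.1, -8.6), "Seville": (37.4, -6.0)}

    def dist(a, b):
        return math.dist(cities[a], cities[b])   # crude straight-line distance

    def nearest_neighbour(start):
        tour, left = [start], set(cities) - {start}
        while left:
            nxt = min(left, key=lambda c: dist(tour[-1], c))  # closest unvisited city
            tour.append(nxt)
            left.remove(nxt)
        return tour + [start]                                 # return to the start

    print(nearest_neighbour("Madrid"))

The slime mould, in effect, runs its own wet version of heuristics like this, with nutrient gradients doing the arithmetic.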









Thursday, May 19, 2011

Wearable and Always On Computing

I'll be surprised if 2011 doesn't see something further happen around the wearable computing space.  We need to stop tinkering with metal boxes and facilitate direct interaction with the world a bit more.

There are two social dynamics to this kind of interfacing:

1. Broadcast the display externally on walls, tables, car bonnets or bodies (not private) OR
2. Broadcast internally on glasses or hidden earpieces (i.e. privately).

I think both approaches are preferable to the current head-down-into-a-mobile neck stretch. Mobiles are private devices, and tablets/iPads a bit less so, but both are metal objects you have to hold in front of your face and carry around. The world is only there in the periphery when using devices like these.

Directly communicating with others and including the web as a 'third voice' is still not an elegant flow when taken out of presentation theatres and onto buses and high streets.

Pervasive and wearable computing will bring an always-on environment for audio and video. The machines will listen to you 24/7 and parse what you say. The video components will continually record and pattern-match the objects around you. Forget Amazon recommendations when the data you can input is your whole day! We won't need to key in data about ourselves like monkeys with typewriters. Spines everywhere will rejoice as we lift our heads to look back at the world once more.

The demos from MIT Wearable Computing Team in 2009 still look fantastic and the prototype only cost around $300 back then.



The TED talk - Pattie Maes' lab at MIT, spearheaded by Pranav Mistry



The interface ideas







The evolution of Steve Mann's private eye-glass display