I'll be surprised if 2011 doesn't see something further happen in the wearable computing space. We need to stop tinkering with metal boxes and facilitate more direct interaction with the world.
There are two social dynamics to this kind of interfacing:
1. Broadcast the display externally on walls, tables, car bonnets or bodies (not private) OR
2. Broadcast internally on glasses or hidden earpieces (i.e. privately).
I think both approaches are preferable to the current head-down-into-a-mobile neck stretch. Mobiles are private devices, and tablets/iPads a bit less so, but both are metal objects you have to put in front of your face and carry around. The world is only there in your periphery when you use devices like these.
Directly communicating with others and including the web as a 'third voice' is still not an elegant flow when taken out of presentation theatres and onto buses and high streets.
Pervasive and wearable computing will bring an always-on environment for audio and video. The machines will listen to you 24/7 and parse what you say. The video components will continually record and pattern-match the objects around you. Forget Amazon's recommendations when the data you can input is your whole day! We won't need to key in data about ourselves like monkeys with typewriters. Spines everywhere will rejoice as we lift our heads to look back at the world once more.
The demos from the MIT Wearable Computing Team in 2009 still look fantastic, and the prototype cost only around $300 back then.
- The TED talk: Pattie Maes' lab at MIT, spearheaded by Pranav Mistry
- The interface ideas
- The evolution of Steve Mann's private eye glass display
You freely allow others to share a piece of your home-hub broadband connection in return for free access to theirs and, importantly, free access to BT's public Wi-Fi spots.
It's potentially the world's largest Wi-Fi community, and my iPad, without a telecom data card, is begging me to join and download the iPhone app to activate it.
Those Wi-Fi hotspots were mostly aimed at businessmen, but this seems much more democratising... and free. Let's be frank: mobile data is definitely the easiest way to feed your smartphone at the moment, but it does seem we are now at the stage where the large towers telecoms companies use to throw your data signals through the air will be replaced by a million people's home wireless hubs. It's social, decentralised computing, and the model is good.
I notice my existing mobile data provider, O2, has capped my mobile data usage, and ten days before the end of the month I find it all used up and my speed throttled. If BT FON could take some of the load off, that would help somewhat.
I wonder what the telecoms companies will make of it?
I think this is an excellent play by British Telecom. If something like this could gain momentum it would be quite a disrupter, but without the numbers the experience will be poor as I transition between Bob Smith's hub and wait another minute until I can use a little bit of Mary's down the road. If switching between free hub-pimping and mobile data is seamless, then maybe the problems aren't so great.
I'd be reading the small print on the security and privacy implications, but this is definitely one to watch.
I'd be grateful for insights and comments on this topic.
There seem to be two phases to harmonising sCRM and CRM domains.
1. Don't integrate using software - use people initially.
Use the CRM tools in the cloud (FB comments, Twitter, Get Satisfaction etc) where the customer/prospect is operating and backfill insights and information manually into your CRM system.
This is low risk, with no internal technology investment; the cost is mostly around people/staff. It's good education all round and a reminder that a company's systems are no longer only the ones sitting on its premises.
2. Automated integration of sCRM data/insights into the 'master' back end CRM and related systems.
This, of course, is much more difficult. Some outlines of touchpoints and suggested automation are below:
Identity: At its simplest, you can integrate customers' web2.0 IDs into your back-end CRM profile and perhaps use the campaign management in your CRM app to deliver to, and chat over, the web2.0 channels. Automated campaign management tools spamming customers' web2.0 spaces is clearly dangerous, as the personal touch may be lost, so simply using your CRM tool as a front end to web2.0 activity might retain the personal touch and allow your CSAs to remain in a single workspace (a minimal sketch of this linkage appears after this list).
Profiling: A unified view of web2.0 profiles alongside internal CRM profiles. Very useful, but prone to legal restrictions. Again, the CRM system can pull in customers' public preferences/likes/dislikes and use them for customer support and behavioural targeting solutions.
Analytics: Merge analytics observations on your own web/IP properties with those in the cloud. This area is nascent at present, but the integration between Salesforce and Radian6 certainly looks interesting for examining click-throughs from the cloud.
Metadata, Taxonomy and KMS: This is a tough nut to crack and the sharp end of the semantic web. Ideally the taxonomy that your customers develop online should inform your southbound knowledge management and product descriptions. Behavioural targeting, SEO/SEM and customer support will be more successful if you can develop a commonality between how you describe your domain and how it's actually talked about in the real world.
Connect Listening with the Raising of Work/Suggestion Tickets Distributed to the Rest of the Business: Use the listening platform data to feed directly into how you raise suggestions, complaints or issues into the business. This avoids some retyping and paraphrasing, but still requires staff to manage.
Run Listening Platforms Across Internal Channels such as Wikis and Forums: This could be automated to filter hot topics and prioritise work, and many forum providers are moving into the semantic space to enable just that.
Involvement in Design: Beyond providing a wiki or a Facebook group and harvesting opinions on features via listening platforms, how do we get customers deeply involved in the creation of the products and services we design?
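To make the Identity touchpoint concrete, here is a minimal sketch of linking a customer's public web2.0 IDs to a master CRM record. All class and field names are hypothetical, invented for illustration rather than taken from any CRM vendor's actual API.

```python
# A minimal sketch of the 'Identity' touchpoint: attaching public
# web2.0 identities to an internal CRM record. All names here are
# hypothetical, not any vendor's actual API.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class SocialIdentity:
    network: str       # e.g. "twitter", "facebook", "getsatisfaction"
    handle: str        # the customer's public username on that network
    profile_url: str   # link a CSA can open from a single workspace

@dataclass
class CrmProfile:
    customer_id: str
    name: str
    identities: list = field(default_factory=list)

    def link_identity(self, identity: SocialIdentity) -> None:
        """Backfill a public web2.0 ID into the master CRM record."""
        if identity not in self.identities:
            self.identities.append(identity)

# Usage: a CSA links a customer's public Twitter handle by hand,
# exactly as phase 1 suggests, before any automation is attempted.
profile = CrmProfile(customer_id="C-1042", name="Jane Doe")
profile.link_identity(SocialIdentity("twitter", "@janedoe",
                                     "https://twitter.com/janedoe"))
print(profile)
```

The point of keeping the structure this small is that phase 1 (people, not software) and phase 2 (automation) can share the same record shape; automation simply starts calling link_identity instead of a human doing it.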
The big question... is deep automated integration between sCRM and CRM an expensive and proprietary folly at the present time?
Do we end up raising projects titled 'Integrate with the Internet'?
There was a distinct ethereal hum as I finally added an iPad to my iPhone, iPod, PowerBook, iMac and G5 kingdom.
I had to answer some questions for myself on something that felt 'game changing' - magic almost.
What is the magic in the hardware, form factor and software of the iPad?
Given its size, it travels from room to room with you. The only other computing device that really does that is a feature/modern phone. Laptops get parked, which is bad for a prospective command centre and true on-body device.
The role of phones will change. The size of displays on pocket devices has limits (technical and social) that the iPad immediately begins to highlight. Phones may be more of a cousin to your car key, wallet and GPS device than the final form factor of the utopian 'life controller'. Holographic technology would, of course, help phones reassert themselves as social devices rather than private ones.
It works great as you walk, used with only one hand (unlike a laptop) for the essential workflow tasks of the OS (cut, copy, paste, search, open, close); in fact most programs and tasks are possible one-handed, given multi-touch to provide context. It also works really well with two hands, allowing richer expression and depth of interaction.
No wires, and it fits everywhere - that's a fail for most laptops, which demand height as well as depth on a desk or surface. The iPad's display and controls are merged into one flat surface.
The resolution is great - it looks like an interactive magazine.
It wins the war hands down as the master input device for contacts and calendar control. Calendar co-working was never convenient with phones or laptops. Laptops feel 'official' and have to be crowded round with a bent neck (or worse, via ping-pong mail). The iPad calendar experience is one where you co-author with your spouse, friend or colleague sitting next to you... on the couch, standing in the kitchen, and so on. Passing the iPad back and forth so everyone can add their details makes for good 'ownership' of events and to-dos, and will prompt healthier use. This form factor, so far, is the best capture tool, and it's social - just as calendars are meant to be.
The e-book, e-comic and e-magazine have arrived - they never truly had before. The iPad lets you zoom in on articles, photos, paintings or maps. Hand it to a friend. Magazines can't compete... physical books also look like they will get it in the neck if iBooks is anything to go by.
Contact data and references (URLs etc.) really are ubiquitous and synchronised across all my equipment. Retyping was hurting the mass uptake of computing - we need to do away with it. A combination of MobileMe, wireless and Apple's core products (iTunes, iPhoto et al) ensures that data, preferences and references are available, accessible and synced no matter where you are. No mean feat.
Resolution of expression for multi-touch fingers. So much more satisfying for almost all application experiences, from innocuous address book management and browsing to rich real-time control of audio/visual applications.
The form factor of the iPad makes it a truly social device
The laptop and the mobile are personal computers - we don't physically share them with others much at all. They are social in that they can enable 'remotely social' experiences, but it's a private affair.
This is where the iPad and form factors like it have the potential to shine... families passing it round to arrange the trip to the lakes, band members trimming the email marketing list collectively, waiters allowing customers to select their choice and then taking the device back to send it wirelessly to the kitchen, putting it in grandma's lap to see slideshows.
but it's not all honey...
The weight - any more weight would be a fail, but the iPad just gets away with it.
Heat - beware of your iPad in strong sunlight... it heats up quickly and then forbids you from using it until it cools down.
Power and Charging - Non native chargers that work for iPhones don't fare well with the iPad.
Utilising other devices - I'm not a fan of the choose-Wi-Fi-only or Wi-Fi-plus-3G-SIM approach. It's a bad fit, as the iPad demographic probably already has mobile data contracts and doesn't want another. I'd have much rather paid an extra levy per month on my network provider bill (O2) to use my iPhone as a modem. The iPad is a natural main console for all your computing, so it would have been nice to see utilisation of slave devices such as the iPhone available out of the box - specifically from Apple rather than a 3rd-party integrator.
Upfront user profiles - given the inherently social capability of the device, it is a miss not to have controls for multiple user profiles.
We're building a collective digital memory with all those:
votes and ratings
comments and blogs
tags and bookmarks
We can put this data on Google Maps and provide strong links between place and time, as well as invent applications that use this data to create new environments. We don't even need to use the common map metaphor to see our data: IBM's wonderful tool 'Many Eyes' allows us to analyse data in interactive graphs and visualisations. The data can be described in simple XML, allowing for automated feeds of information and graphic representation.
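As a flavour of what consuming such a feed might look like, here is a minimal sketch that parses a small XML dataset into label/value pairs ready for charting. The element names (dataset, row, label, value) are invented for the example - this is not the actual Many Eyes schema.

```python
# A minimal sketch of consuming a simple XML data feed.
# The element names below are invented for illustration;
# they are not the actual Many Eyes schema.

import xml.etree.ElementTree as ET

SAMPLE_FEED = """
<dataset name="tags-per-day">
  <row><label>Mon</label><value>120</value></row>
  <row><label>Tue</label><value>95</value></row>
  <row><label>Wed</label><value>143</value></row>
</dataset>
"""

def parse_feed(xml_text):
    """Turn the XML feed into (label, value) pairs ready to chart."""
    root = ET.fromstring(xml_text)
    return [(row.findtext("label"), float(row.findtext("value")))
            for row in root.findall("row")]

print(parse_feed(SAMPLE_FEED))
# [('Mon', 120.0), ('Tue', 95.0), ('Wed', 143.0)]
```

The appeal of simple XML like this is exactly the automation angle above: any public dataset exposed this way can be polled and re-rendered without a human in the loop.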
And then there is some next-level image-onomy, or whatever new paradigm term we need to invent, that Photosynth ushers in - a technology acquired by Microsoft and originally developed by Blaise Aguera y Arcas.
It allows a feed of photos to build up a map of the earth and its places, not just using flyover images from aeroplanes or satellite data but using our own photographs and even illustrations. Photosynth uses public images, and it doesn't matter whether the photos were taken with a £10 disposable camera or a posh SLR - it can stitch them together and produce a never-ending tapestry that lets you move around geographic areas and locations with ease.
With Photosynth you can:
Walk or fly through a scene to see photos from any angle.
Seamlessly zoom in or out of a photo whether it's megapixels or gigapixels in size.
See where pictures were taken in relation to one another.
Find similar photos to the one you're currently viewing.
Send a collection - or a particular view of one - to a friend.
Zooming in might have you moving through ten photos, using your own as a starting point. Your landscape shot of the fair ex-mining town of Cowdenbeath might be part of a family of 1,000 photos of Cowdenbeath. Using this pool of images like stepping stones across a pond, you can step and zoom deeper and deeper until you find the Forth Road Bridge in detail, when it was just a red speck on your own photo.
Photosynth takes data from everyone - from the collective memory of what the world looks like. A model of the entire earth emerges as our own photos get tagged with other people's metadata and the mesh of linking becomes tighter and stronger. The network effect continually enriches the space and easily provides cross-user and cross-model experiences and information.
This is the real semantic web, or 'Web 3.0', alongside the social graph developing through the use of people networks. These inferences are taking on a life of their own, and one can only wonder what Web 5.0 might be.
There's a great demo hosted by TED where Blaise runs through the application to jaw-dropping effect.
Photosynth modestly states: "Our software takes a large collection of photos of a place or an object, analyzes them for similarities, and displays them in a reconstructed three-dimensional space."
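To give a taste of that "analyzes them for similarities" step, here is a minimal sketch that counts matching local features between two photos using OpenCV's ORB detector. This is a generic feature-matching recipe under my own assumptions (including the hypothetical filenames), not Photosynth's actual pipeline, which builds full 3D reconstructions on top of matches like these.

```python
# A minimal sketch of the 'analyse photos for similarities' step:
# matching local features between two photos with OpenCV's ORB
# detector. A generic recipe, not Photosynth's actual pipeline.

import cv2

def match_score(path_a, path_b):
    """Return a rough similarity score: the number of good
    feature matches between two images."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create()
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0

    # Hamming distance suits ORB's binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)

    # Keep only reasonably close matches.
    return sum(1 for m in matches if m.distance < 40)

# Two photos of the same scene should score far higher than two
# unrelated ones (filenames here are placeholders).
print(match_score("bridge_1.jpg", "bridge_2.jpg"))
```

Scale a score like this across millions of public photos and you get the raw material for the kind of stitching and linking described above.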
This experience is on the web to try right now, but be warned, Mac fans - the web experience is PC-only for now.
One of the biggest problems in biologic computing today is the predictability of bacteria's movements...
How about this simple, relatively cheap project that a cross-functional team at a university might be able to do...
- Get feeds from the IBM public visualisation tool - specifically pictures of datasets. They can be social network activity feeds (or hub/colony-esque data). Here's one of many examples.
- Grab the visualisations of them as well as the raw data.
- Run some visual pattern-matching software to compare these images against bacterium imagery, at varying scales of magnification.
- Do the same pattern matching on the numbers (a minimal sketch of this step follows the list).
- See if anything interesting pops up in the pattern matching.
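For the numeric side of the comparison, a minimal sketch might correlate a social-activity series against a bacterial-growth series with a plain Pearson correlation. Both series below are invented placeholders; real numbers would come from the public XML feeds on one side and lab measurements on the other.

```python
# A minimal sketch of the numeric pattern-matching step: comparing
# a social-network activity series against a bacterial-growth series.
# Both series are invented placeholders for illustration.

import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

social_activity = [12, 15, 23, 40, 66, 101, 150, 201]        # posts/hour
colony_growth = [1.0, 1.4, 2.1, 3.2, 4.9, 7.4, 11.1, 16.6]   # optical density

# A high absolute score flags a candidate pattern worth a human look.
print(f"correlation: {pearson(social_activity, colony_growth):.3f}")
```

Nothing here proves a connection, of course - it just automates the "see if anything interesting pops up" step so humans only review the flagged matches.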
Developing continually running real-world feeds (APIs/RSS or otherwise) from these types of public systems into visual biology computing resources would be potentially useful.
Their benefits are that they are:
- continually updated, free, and described in simple XML
- a constant public feed, allowing large elements of automation in such a project (bar the human analysis of 'matches' found by the system)
- non-proprietary in nature, and will 'out', in the end, for generating useful patterns versus custom, expensive data capture
If you are up to stuff like this then I'd love to know about it. Mail me.