Friday, December 12, 2008
Hunting Robots and How to Survive a Robot Uprising
I recommend boning up on robot pursuit avoidance now. How to Survive a Robot Uprising is the book you need. You'll read it in an hour, and it may save your life one day - or at least give you suggestions on how to get your leg out of an annoyed hoover.
Thursday, November 6, 2008
Epitaphs have been written for the now defunct Mars Phoenix Lander
Popular Vote:
1. Veni, vidi, fodi. (I came, I saw, I dug) - Graham Vosloo
2. So long and thanks for all the ice. - D. Adams
3. It is enough for me. But for you, I plead: go farther, still. - Fernando Rojas
Editorial College:
1. I dug my own grave. And analyzed it. - Dorwinrin
2. Error 404: Lander Not Found!* - Fred Rogers
3. Water, water, everywhere, and not a drop that isn't already sublimating into the thin, frigid atmosphere. - Dylan Tweney
You'll find them above, along with a link to the best blogeulogy to Phoenix.
Thursday, October 30, 2008
Mars Phoenix Lander - Twittering its demise
As it approaches the end of its mission and is no longer mobile, the Twitter voice of Phoenix conveys the inner dialogue of a unit resigning itself to becoming an artefact of Mars. Please do subscribe to some of the NASA streams on this, as they are delightfully engaging.
-----------------
What is MarsPhoenix doing? In its own words....
I should stay well-preserved in this cold. I'll be humankind's monument here for centuries, eons, until future explorers come for me ;-) about 2 hours ago
I'm not mobile, so here I'll stay. My mission will draw to an end soon, and I can't imagine a greater place to be than here. about 2 hours ago
When I go to sleep, the mission team will post occasional updates here for me. Results of science analyses, for example. about 3 hours ago
There is a future mission @CaptainAnderson. @MarsScienceLab is being built right now at JPL for launch next year. I hope you'll all follow. about 3 hours ago
Martian seasons are very long @mridul and winter here is really tough. Next spring is one year away. Next summer is May 2010. about 3 hours ago
It's not quite the robot AI of tomorrow (it's a NASA tech for now), but it does trigger my fascination with this lander - and an emotional response that I feel as a result of the robot monologue.
Robots talking on other worlds.
Tuesday, October 14, 2008
The Singularity is Near - When Humans Transcend Biology
Okay, so the economy graphs have been going up and down recently, a bit like a fairground rollercoaster or the preferred sea patterns of surfers, but there is one graph that has only been on the increase since 1900.
It's the tech curve graph.
Recently I was browsing in a book store and, increasingly, I gravitate to the easy-read science section while my girlfriend stalks the psychology and brain-related materials. One book stuck out, literally and semantically, for me - it was the big black one called The Singularity Is Near: When Humans Transcend Biology (Viking Penguin, ISBN 0-670-03384-7) by Raymond Kurzweil. Within seconds of looking at the index I knew I was going to buy it. Kurzweil is a futurist who has been involved in numerous fields including speech recognition, text-to-speech synthesis and AI, and who also developed electronic keyboards (aka the Kurzweil synth series). He has distilled what he has learned into this book, which tries to predict where we are going while backing it up with a serious amount of empirical data. I learned today that the book is also going to be a movie.
I'm halfway through, but even when the book isn't in front of me it's in the back of my mind.
In the first quarter of the book, data is aggregated on all the facets of technology that are experiencing exponential growth. The most famous of these is Moore's Law, concerning the number of transistors that can be placed inexpensively on an integrated circuit. This has increased exponentially, doubling approximately every two years.
[Chart: Moore's Law - The Fifth Paradigm: calculations per second per $1,000 (logarithmic plot)]
Be sure to keep in mind that these are log plots, so a straight line is really an exponential curve - like the ones shown below, screaming off the charts and through the ceiling.
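If you want a feel for this yourself, here's a quick sketch of my own (not one of the book's charts) plotting a Moore's-Law-style capability that doubles every two years, on both linear and logarithmic axes:

```python
# Quick sketch: a capability that doubles every two years looks explosive
# on a linear axis but is a dead-straight line on a logarithmic one -
# exactly the shape of the book's charts.
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(1970, 2011)
capability = 2.0 ** ((years - 1970) / 2.0)  # doubles every two years

fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(10, 4))
ax_lin.plot(years, capability)
ax_lin.set_title("Linear axis: through the ceiling")
ax_log.semilogy(years, capability)
ax_log.set_title("Log axis: a straight line")
for ax in (ax_lin, ax_log):
    ax.set_xlabel("Year")
plt.tight_layout()
plt.show()
```

The log-axis panel comes out dead straight, which is exactly what makes Kurzweil's charts so striking.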
The book goes on to illustrate the point that technology is exploding on all fronts, with plots of other technologies experiencing exponential change:
- Dynamic RAM size (smallest feature sizes decreasing exponentially)
- Dynamic RAM price performance (improving exponentially)
- Average Transistor price (decreasing exponentially)
- Transistor Manufacturing costs (decreasing exponentially)
- Microprocessor clock speeds (increasing exponentially)
- Microprocessor costs (decreasing exponentially)
- Transistors per microprocessor (increasing exponentially)
- Processor performance (increasing exponentially)
- DNA sequencing costs per base pair (decreasing exponentially)
- Random Access Memory bits per dollar (increasing exponentially)
- Magnetic data storage bits per dollar (increasing exponentially)
- Wireless Internet and phone services price performance (increasing exponentially)
- Number of Internet hosts (increasing exponentially)
- Bytes of Internet traffic (increasing exponentially)
- Internet backbone bandwidth (increasing in a very terraced, quasi-exponential manner)
- Mechanical device sizes (decreasing exponentially)
- Number of scientific citations for nanotechnology research (increasing exponentially)
- Number of U.S. nanotech patents (increasing exponentially)
All the graphs for this book are online: http://singularity.com/charts/
Reading through each of these sections and seeing the actual data leaves you in no doubt that we are in a period of cataclysmic transformation. You may have had a hunch that computers are bigger and more powerful and may someday replicate human thinking, but Kurzweil shows you that it's nigh-on inevitable.
There are four key themes in his book:
- That a technological-evolutionary point known as "the singularity" exists as an achievable goal for humanity (the exact nature of the point is an arbitrarily high level of technology).
- That through a law of accelerating returns, technology is progressing toward the singularity at an exponential rate.
- That the functionality of the human brain is quantifiable in terms of technology that we can build in the near future.
- That medical advancements could keep a significant number of his generation (Baby Boomers) alive long enough for the exponential growth of technology to intersect and surpass the processing of the human brain.
The date he gives for the Singularity is 2045 - I hope I'm around then.
Here are some delicious predictions from the book:
2010 (coming right up)
- Supercomputers will have the same raw power as human brains (although not yet the equivalently flexible software).
- Computers will disappear as distinct physical objects, meaning many will have nontraditional shapes and/or will be embedded in clothing and everyday objects.
- Full-immersion audio-visual virtual reality will exist.
- Advertisements will utilize a new technology whereby two ultrasonic beams can be targeted to intersect at a specific point, delivering a localized sound message that only a single person can hear. This was depicted in the films Minority Report and Back to the Future 2.
2014:
- Automatic house-cleaning robots will have become common (this feels like miles away from my view).
2018:
- 10^13 bits of computer memory - roughly the equivalent of the memory space in a single human brain - will cost $1,000. (In reality, many cognitive scientists believe the human capacity for long-term memory has no theoretical limit caused by the physical structures of the brain.)
2020:
- Personal computers will have the same processing power as human brains.
The 2020s:
- Computers less than 100 nm in size will be possible.
- Accurate computer simulations of the entire human brain will exist, thanks to hyper-accurate, nanotech-guided brain scans, and the workings of the brain will be understood.
- Nanobots capable of entering the bloodstream to "feed" cells and extract waste will exist (though not necessarily be in wide use) by the end of this decade. They will make the normal mode of human food consumption obsolete, so humans who have injected these nanobots into their bloodstream will move on from a normal human metabolism and become humanoid androids. Eventually, according to Kurzweil, a large percentage of humans will evolve by this process into androids.
- A computer will pass the Turing test by the last year of the decade (2029), meaning that it is a Strong AI that can think like a human (though the first A.I. is likely to be the equivalent of a kindergartner). This first A.I. will be built around a computer simulation of a human brain, made possible by previous, nanotech-guided brain scanning.
- The most likely year for the debut of advanced nanotechnology.
- Some military UAVs and land vehicles will be 100% computer-controlled.
2030:
- Mind uploading becomes possible.
- Nanomachines could be directly inserted into the brain and could interact with brain cells to totally control incoming and outgoing signals. As a result, truly full-immersion virtual reality could be generated without the need for any external equipment. Afferent nerve pathways could be blocked, totally canceling out the "real" world and leaving the user with only the desired virtual experience.
- Using brain nanobots, recorded or real-time brain transmissions of a person's daily life, known as "experience beamers", will be available for other people to experience remotely. This is very similar to how the characters in Being John Malkovich were able to enter the mind of Malkovich and see the world through his eyes.
- Recreational uses aside, nanomachines in people's brains will allow them to greatly expand their cognitive, memory and sensory capabilities, to directly interface with computers, and to "telepathically" communicate with other, similarly augmented humans via wireless networks.
- The same nanotechnology should also allow people to alter the neural connections within their brains, changing the underlying basis of their intelligence, memories and personality.
- Human Body 3.0 (as Kurzweil calls it) comes into existence. It lacks a fixed, corporeal form and can alter its shape and external appearance at will via foglet-like nanotechnology. Organs are also replaced by superior cybernetic implants.
- People spend most of their time in full-immersion virtual reality (Kurzweil has cited The Matrix as a good example of what the advanced virtual worlds will be like, without the dystopian twist).
2045: The Singularity
- $1000 buys a computer a billion times more intelligent than every human combined. This means that average and even low-end computers are hugely smarter than even highly intelligent, unenhanced humans.
- The Singularity occurs as artificial intelligences surpass human beings as the smartest and most capable life forms on the Earth. Technological development is taken over by the machines, who can think, act and communicate so quickly that normal humans cannot even comprehend what is going on; thus the machines, acting in concert with those humans who have evolved into humanoid androids, achieve effective world domination.
- The machines enter into a "runaway reaction" of self-improvement cycles, with each new generation of A.I.s appearing faster and faster. From this point onwards, technological advancement is explosive, under the control of the machines, and thus cannot be accurately predicted.
- The Singularity is an extremely disruptive, world-altering event that forever changes the course of human history. The extermination of humanity by violent machines is unlikely (though not impossible) because sharp distinctions between man and machine will no longer exist, thanks to the existence of cybernetically enhanced humans and uploaded humans.
Post-2045: "Waking up" the Universe
- The physical bottom limit to how small computer transistors can be shrunk is reached. From this moment onwards, computers can only be made more powerful if they are made larger in size.
- Because of this, A.I.s convert more and more of the Earth's matter into engineered, computational substrate capable of supporting more A.I.s, until the whole Earth is one gigantic computer (though some areas will remain set aside as nature preserves).
- At this point, the only possible way to increase the intelligence of the machines any further is to begin converting all of the matter in the universe into similar massive computers. A.I.s radiate out into space in all directions from the Earth, breaking down whole planets, moons and meteoroids and reassembling them into giant computers. This, in effect, "wakes up" the universe as all the inanimate "dumb" matter (rocks, dust, gases, etc.) is converted into structured matter capable of supporting life (albeit synthetic life).
- With the entire universe made into a giant, highly efficient supercomputer, A.I./human hybrids (so integrated that, in truth, they are a new category of "life") would have both supreme intelligence and physical control over the universe. Kurzweil suggests that this would open up all sorts of new possibilities, including abrogation of the laws of physics, interdimensional travel, and a possible infinite extension of existence (true immortality).
Monday, October 13, 2008
Mobile Widgets and Chips
My laugh was short-lived though as the next day I caught myself saying the word "widget" at least 14 times an hour.
Widgets are the new chips alright.
Remember the old days when the media majors wanted you to go to their own portals and have your whole Internet experience from within their domain?
They all (MSN, Yahoo, AOL, BT et al) had this approach to try and hold onto the customer. It was the shopping mall where they'd have liked to lock the door when you entered, because once you left they couldn't successfully advertise to you, track you and sell you things. This would be a fine approach if one web service could really offer us everything, but clearly the world of the web isn't like that anymore.
I use hotmail, facebook, google (nearly all their applications), flickr, myspace and a tonne of other web services, and before 'widgets' I had to run around to all these different sites to get to my data and services. Tabbed browsing made this a little easier but still, I was opening pages left, right and centre.
Widgets, as a recap, allow you to take a chunk of a website's functionality and present it anywhere you like - letting the data and service come to you rather than the other way around. The graphical interface of your Flickr or YouTube widget has no dependency on Flickr or YouTube, as the widget developer (and you) only want the data from these services. You can embed widgets in your Web 2.0 profiles and your custom start pages, or embed them in web pages you code yourself. They are modular little Lego chunks of functionality that allow us to construct our own dashboards and environments.
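Under the hood, a widget is essentially a small client that polls a service's feed and renders a fragment of UI. Here's a minimal sketch of that idea in Python - the feed URL is a made-up placeholder, not any real service's endpoint:

```python
# Minimal sketch of what a widget does behind the scenes: poll a feed,
# pull out the interesting bits, and render a small chunk of UI (HTML here).
# The feed URL is a made-up placeholder, not a real endpoint.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "http://example.com/photos/recent.rss"  # hypothetical feed

def render_widget(feed_url, max_items=5):
    with urllib.request.urlopen(feed_url) as response:
        tree = ET.parse(response)
    items = tree.findall(".//item")[:max_items]
    rows = "".join(
        "<li><a href='{0}'>{1}</a></li>".format(
            item.findtext("link", ""), item.findtext("title", ""))
        for item in items)
    return "<div class='widget'><ul>{0}</ul></div>".format(rows)

print(render_widget(FEED_URL))
```

The point is the direction of travel: the data comes to whatever page hosts this fragment, rather than you going to the service.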
Widgets are overground now and are used by bloggers, social network users, auction sites and owners of personal web sites. They exist on home page sites such as iGoogle, Netvibes, or Pageflakes.
Have a look at how I can easily aggregate widgets on one page to help with all aspects of my travel planning. A widget here and a widget there allow me to take the best of each web service and commoditise them to suit myself.
For the most part widgets are sandboxed and operate in ignorance of one another and mostly they manipulate data in the web cloud - it's just safer that way.
Recently there has been a rightful hoo-ha about mobile widgets i.e. widgets running on a mobile phone. For the most part web services on the mobile phone are consumed using a mobile browser or a dedicated application. The mobile browsers such as Safari, Opera Mini or the Nokia browsers are conceptually derived from their web counterparts and for the most part mobile web services via a browser, while improving, tend to be cumbersome with the best experience requiring multi-touch devices and continual zooming in and out to navigate pages that were essentially developed for the 'big screen' web.
The same principle that applied on the web also applies to some widget platforms - bring the data to the customer rather than have them run around the web for it themselves.
The first big service in this space was by Nokia with their Widsets offering which was a Java program on your mobile that allowed you to add, view, and configure widgets directly from your mobile. To make life easier Nokia also provided a fixed line service to more easily allow users to manage how the service appeared on their mobile.
The market has recently exploded with widget offerings for mobile, and it is in this field that I find myself caught up - being earnest about widgets, looking at widget solutions, imagining the strategic importance of widgets. This ridiculous-sounding little word now holds great gravity for me. I can no longer think of a 'mobile meal' without thinking "would you like some widgets with that?".
Mobile widget solutions have now moved on from standalone application-style approaches, like WidSets, to richly integrated solutions that harmonise with a user's idle screen (your phone's dashboard). The area is in transition and there are currently no standards for how to do this across multiple handset platforms and phone models.
The fight is on between telecoms companies, handset manufacturers and large service providers.
Nokia has an S60 solution it is developing with the Symbian Foundation, Opera has a widget solution it is rolling out with operators, and handset manufacturers are replacing their 'program menus' with richer widget-style dashboards.
My dad would never use a mobile browser but I do believe he would use a Celtic football club widget alerting him of scores and news and providing him with simple links to video footage right from his mobile phone home screen. The simplicity and immediacy of web services has changed with these little critters called widgets and if you don't find yourself coming across this innocuous word alongside some market superlatives in 2009 then I, for one, would be very surprised.
Sunday, April 13, 2008
North Bound
Blogged with MessageDance
Thursday, April 10, 2008
The Prediction Model
I'm enjoying this overlap developing into a moderate obsession, and I am trying to steer my thinking on all things computing in a more 'biologic' fashion. I've always been a strong believer that people involved in one discipline can offer fresh insights on other sciences, and that a good set of 'first principles' can work well cross-domain. This cross-pollination was the grease that helped the machine of the Industrial Revolution into being, and obliquely it's also the reason I give for sporting sideburns like some
This post is inspired by Jeff Hawkins, who is working on models of the brain and attempting to derive an overarching theory of it - something that, despite the reams of data we have on the brain, we are as yet unable to articulate. His talk was on the use of a Prediction Model as the primary approach to developing a theory of the brain, and he got my mind racing.
After graduating from Cornell in June 1979, Hawkins read a special issue of Scientific American on the brain in which Francis Crick lamented the lack of a grand theory explaining how the brain functions. Initially, he attempted to start a new department on the subject at his employer Intel, but was refused. He also unsuccessfully attempted to join the MIT AI Lab. He eventually decided he would try to find success in the computer industry and then use it to support his serious work on brains, as described in his book On Intelligence.
Jeff thinks that the reason we still haven't managed to define intelligence well is that we don't have this overarching theory of the brain - or, more accurately, of intelligence. He postulates that the brain isn't like a powerful computer processor; instead, it's more like a memory system that records everything we experience and helps us predict, intelligently, what will happen next.
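To make that memory-not-processor idea concrete, here's a toy sketch of my own (it bears no resemblance to Hawkins' actual Hierarchical Temporal Memory work): a "brain" that never computes an answer, it just recalls what usually came next.

```python
# Toy illustration of prediction-by-memory: store what followed each
# context, then "predict" by recalling the most common continuation.
# My own toy, not Hawkins' Hierarchical Temporal Memory.
from collections import Counter, defaultdict

class SequenceMemory:
    def __init__(self, context_len=3):
        self.context_len = context_len
        self.memory = defaultdict(Counter)

    def observe(self, sequence):
        """Record which symbol followed each context window."""
        for i in range(len(sequence) - self.context_len):
            context = tuple(sequence[i:i + self.context_len])
            self.memory[context][sequence[i + self.context_len]] += 1

    def predict(self, context):
        """Recall, not compute: return the most common continuation."""
        continuations = self.memory.get(tuple(context[-self.context_len:]))
        if not continuations:
            return None  # never seen this before: no prediction, just surprise
        return continuations.most_common(1)[0][0]

m = SequenceMemory()
m.observe("the cat sat on the mat the cat sat on the sofa".split())
print(m.predict("cat sat on".split()))  # -> 'the'
```

Crude as it is, it captures the flavour: prediction falls out of memory, and novelty (no stored continuation) is what surprise looks like.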
Things like these stop me sleeping at night and last Sunday I leaned over to my girlfriend at
I slipped out of bed and knocked up the notes below. They are presented here un-edited and what you see is the first pass brain dump of some of my thoughts and concepts surrounding a Prediction Model (It's probably best to click on one and open up the set in Flickr and view from there).
If you are involved in this area at all I would love to hear from you, as I intend to delve deeper. Physics has a lot to add to this area, with work in quantum theory and calculations surrounding the boundaries of event horizons for black holes all being relevant to a model of the brain and prediction.
Wednesday, March 19, 2008
Photosynth and how the 'collective image memory' is harvested
- votes and ratings
- comments and blogs
- tags and bookmarks
We can put this data on Google Maps, providing strong links between place and time, and invent applications that use this data to create new environments. We don't even need to use the common map metaphor to see our data: IBM's wonderful tool 'Many Eyes' allows us to analyse data in interactive graphs and visualisations, and data can be processed as simple XML, allowing for automated feeds of information and graphic representation.
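That XML-in, graphics-out flow is easy to sketch. The XML layout below is invented for illustration - it isn't Many Eyes' actual feed format:

```python
# Sketch of the XML-in, graphics-out flow: parse a tiny dataset and chart
# it. The XML layout is invented for illustration - it is not Many Eyes'
# actual feed format.
import xml.etree.ElementTree as ET
import matplotlib.pyplot as plt

XML_DATA = """
<dataset>
  <point label="votes" value="120"/>
  <point label="comments" value="45"/>
  <point label="tags" value="230"/>
</dataset>
"""

root = ET.fromstring(XML_DATA)
labels = [p.get("label") for p in root.findall("point")]
values = [int(p.get("value")) for p in root.findall("point")]

plt.bar(labels, values)
plt.title("Automated feed, automated chart")
plt.show()
```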
And then there is some next-level image-onomy, or whatever new paradigm term we need to invent for what Photosynth ushers in: a technology acquired by Microsoft and originally developed by Blaise Aguera y Arcas.
It allows a feed of photos to build up a map of the Earth and its places, not just using flyover images from aeroplanes or satellite data but using our own photographs and even illustrations. Photosynth uses public images, and it doesn't matter whether these photos are taken with a £10 disposable camera or a posh SLR - it can stitch them together and produce a never-ending tapestry that allows you to move around geographic areas and locations with ease.
With Photosynth you can:
- Walk or fly through a scene to see photos from any angle.
- Seamlessly zoom in or out of a photo whether it's megapixels or gigapixels in size.
- See where pictures were taken in relation to one another.
- Find similar photos to the one you're currently viewing.
- Send a collection - or a particular view of one - to a friend.
Photosynth takes data from everyone - from the collective memory of what the world looks like. A model emerges of the entire Earth as our own photos get tagged with other people's metadata and the mesh of linking becomes tighter and stronger. The network effect continually enriches the space and easily provides cross-user and cross-model experiences and information.
This is the real semantic web, or 'Web3.0', along with the Social Graph developing through the use of people networks. These inferences are taking on a life of their own, and one can only wonder at what Web5.0 might be.
There's a great demo hosted by TED where Blaise runs through the application to jaw-dropping effect.
The Photosynth team modestly state: "Our software takes a large collection of photos of a place or an object, analyzes them for similarities, and displays them in a reconstructed three-dimensional space."
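Out of curiosity, that first step - analysing photos for similarities - can be sketched with off-the-shelf tools. Here's a minimal pairwise match score using OpenCV's ORB features; the real Photosynth pipeline is far more sophisticated (and not public), and the filenames below are hypothetical:

```python
# Minimal sketch: score how well two photos "match" via local features.
# Illustrative only - Photosynth's real pipeline (feature matching plus
# full 3D reconstruction) is far more elaborate and is not public.
import cv2

def match_score(path_a, path_b, ratio=0.75):
    """Count good ORB feature matches between two images."""
    orb = cv2.ORB_create(nfeatures=2000)
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    _, desc_a = orb.detectAndCompute(img_a, None)
    _, desc_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(desc_a, desc_b, k=2)
    # Lowe's ratio test: keep matches clearly better than the runner-up.
    return sum(1 for p in pairs
               if len(p) == 2 and p[0].distance < ratio * p[1].distance)

# Hypothetical filenames - substitute two overlapping holiday snaps.
print(match_score("pyramid_1.jpg", "pyramid_2.jpg"))
```

Photos with a high match count are candidates for stitching into the same scene; scale that up across millions of photos and you get the beginnings of the tapestry described above.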
This experience is on the web to try right now but be warned Mac fans - this web experience is PC only for now.
Monday, March 3, 2008
Cheap Ideas for advancing Biologic Computing
How about this simple, relatively cheap project a cross functional team might be able to do at a University...
- Get feeds from the IBM public visualisation tool - specifically pictures of datasets. They can be Social Network activity feeds (or hub/colony'esque data). Here's one of many examples
- Grab the visualisations of them as well as the raw data
- Run some visual pattern matching software to compare these images against bacterium imagery - at varying scales of magnification
- Do the same pattern matching on the numbers
- See if anything interesting pops up in the pattern matching
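As a concrete skeleton of those five steps (every URL, filename and threshold below is a placeholder, and the histogram comparison is just a stand-in for real pattern matching):

```python
# Skeleton of the proposed pipeline: pull visualisation images from a
# public feed, compare them against bacterium imagery, flag similarities.
# Every URL, filename and threshold here is a placeholder.
import urllib.request
import cv2
import numpy as np

FEED_IMAGES = ["http://example.com/manyeyes/viz1.png"]  # hypothetical feed
REFERENCE_IMAGES = ["colony_scan_01.png"]               # local bacterium imagery

def fetch(url):
    """Download an image and decode it to a greyscale array."""
    data = urllib.request.urlopen(url).read()
    return cv2.imdecode(np.frombuffer(data, np.uint8), cv2.IMREAD_GRAYSCALE)

def similarity(img_a, img_b):
    """Crude histogram correlation - a stand-in for real pattern matching."""
    size = (256, 256)
    h_a = cv2.calcHist([cv2.resize(img_a, size)], [0], None, [64], [0, 256])
    h_b = cv2.calcHist([cv2.resize(img_b, size)], [0], None, [64], [0, 256])
    return cv2.compareHist(h_a, h_b, cv2.HISTCMP_CORREL)

for url in FEED_IMAGES:
    viz = fetch(url)
    for ref_path in REFERENCE_IMAGES:
        ref = cv2.imread(ref_path, cv2.IMREAD_GRAYSCALE)
        score = similarity(viz, ref)
        if score > 0.8:  # arbitrary "interesting" threshold
            print("Possible match:", url, ref_path, score)
```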
Developing continually running real-world feeds (APIs/RSS or otherwise) from these types of public systems to visual biology computing resources would be potentially useful.
Their benefit is that they are:
- continually updated, for free, with simple XML descriptions of the data
- a constant public feed, allowing large elements of such a project to be automated (bar the human analysis of 'matches' found by the system)
- non-proprietary in nature, and will win out, in the end, for generating useful patterns versus custom, expensive data capture
If you are up to stuff like this then I'd love to know about it. Mail me.
Recommended Reading: "Genesis Machines" by Martyn Amos
Thursday, February 28, 2008
Nokia Morph - new nanotech concept video of future devices
Find out more: Morph
Wednesday, February 27, 2008
Stanford's Make3D - 3D flythroughs from a single 2D image
The online service takes a 2D image and creates a 3D-esque fly-around model that includes depth and a range of views. Photos can be uploaded directly or pulled into the site from Flickr - however, the service requires that you rate at least five images before you can pull them in from Flickr. You can jump in and just use the upload-from-hard-drive option though.
Click through to this link to see an example of it in action from a photo I took in Egypt at the pyramids:
Note: it doesn't work with Intel Mac and Shockwave, but there is a workaround of installing a VRML viewer for Linux, which will be given to you as an option.
The tech lowdown on the algorithm, from the Stanford News Service in January:
…the algorithm breaks the image up into tiny planes called “superpixels,” which are within the image and have very uniform color, brightness and other attributes. By looking at a superpixel in concert with its neighbors, analyzing changes such as gradations of texture, the algorithm makes a judgment about how far it is from the viewer and what its orientation in space is. Unlike some previous algorithms, the Stanford one can account for planes at any angle, not just horizontal or vertical. This allows it to create models for scenes that have planes at many orientations, such as the curved branches of trees or the slopes of mountains.
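Superpixels are easy to play with yourself. Here's a short sketch using scikit-image's SLIC implementation - a different algorithm from Stanford's, but it gives a feel for what those tiny uniform regions look like:

```python
# Sketch: break an image into "superpixels" - small regions of uniform
# colour and texture - using scikit-image's SLIC. This is a different
# algorithm from Stanford's; it's just a hands-on feel for the idea.
import matplotlib.pyplot as plt
from skimage import data, segmentation

image = data.astronaut()  # any RGB image will do
labels = segmentation.slic(image, n_segments=300, compactness=10)

plt.imshow(segmentation.mark_boundaries(image, labels))
plt.title("~300 superpixels")
plt.axis("off")
plt.show()
```

The Stanford step that this sketch doesn't do is the clever part: inferring each superpixel's depth and 3D orientation from its texture and its neighbours.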
An excellent diagram on the process can be found here.
This is one of the first web services of this ilk out of the starting gate. Microsoft have had Photosynth for a while but instead of using one image to derive the model it meshes together multiple images.
Monday, February 25, 2008
Don't Click It
This is a great little project that allows you to navigate and do your web business without ever having to click on any bit of the virtual screen. Although a little migraine-inducing, as the screens change constantly, it is actually quite simple to use.
Contextual actions for such an interface (normally provided by right mouse click) will have to be programmed in from the beginning as part of the UI. A pretty big plus is the removal of tendon damage from all the inane clicking we do.
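Hover-instead-of-click is simple to prototype: treat "the pointer dwells on a control for N milliseconds" as the activation event. A toy sketch in Python/Tkinter (my own illustration, not the Don't Click It site's code):

```python
# Toy "don't click it" control: hovering over the label for 800 ms
# activates it; moving away in time cancels. My own sketch, not the
# site's code.
import tkinter as tk

DWELL_MS = 800

root = tk.Tk()
label = tk.Label(root, text="Hover here (no clicking!)", padx=40, pady=20)
label.pack()
pending = None

def activate():
    label.config(text="Activated by dwell!")

def on_enter(event):
    global pending
    pending = root.after(DWELL_MS, activate)  # fire after the dwell time

def on_leave(event):
    global pending
    if pending is not None:
        root.after_cancel(pending)  # moved away in time: cancel
        pending = None

label.bind("<Enter>", on_enter)
label.bind("<Leave>", on_leave)
root.mainloop()
```

The dwell time is the key design knob: too short and everything fires accidentally, too long and the interface feels treacly.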
Another aspect of this type of interface is that with a simple projector and a flat surface (i.e. a desk) you could even use your finger OR a stick to move around the virtual space and options.
I dig (single g) it.
Monday, February 18, 2008
Speaking Freely
spoken through SpinVox
What was actually said:
"We are currently studying User Generated Content and we're not obsessed with trying to become the next Facebook"
Saturday, February 16, 2008
Speaking Freely
spoken through SpinVox
What was actually said
"The majority of computing is working collectively towards Virtual Reality"
Thursday, February 14, 2008
Speaking Freely
I'm reading this first blog over the spin Vox Service. This service knocks me out and I christen this blog off the coff(?) IT stuff.
@We set ourselves free and avoid the Darwinian ending of big thumbs.
spoken through SpinVox
What was actually said
"Speak your blog through SpinVox now. I'm reading this first blog over the SpinVox service. This service knocks me out and I christen this blog Off The Cuff IT Stuff". We set ourselves free and avoid the Darwinian ending of big thumbs"