I caught wind of the Leap Motion spatial interface accessory last week. For one reason or another, I left it aside until this morning, when I could look at it and better relate to the context of what they are doing. Having seen the video, I have to say that not only am I impressed, but I think it's the kind of accessory that could (finally) change the design of many computing devices – or at the very least, improve the ways in which people relate to computing devices and information in kiosk-led settings.
What Is the Leap?
Well, the Leap is basically a device, the size of a USB memory key, that creates a 4D (they say 3D, but it's actually using four dimensions) field in front of a computing device to enable manipulation of on-screen objects. It's not self-powered; much like the larger Microsoft Kinect, it connects to a computer through the USB port and requires some kind of software/service utility to be running (it doesn't seem to use the conventional hardware drivers that other accessories like mice and keyboards would use – which is both good and bad).
That's really it. And it looks cool and futuristic because it is. Much like the Kinect, and the Wii before it, it taps into the innate understanding we have of space; the idea of manipulating that space through some other tool we have to learn is actually quite self-defeating. Any time we can take those basic goal-behaviors of reaching, tasting, and listening, and use them to navigate and learn contexts, we do a lot better at retaining their value (hence my feelings about a spatial interface for the Bible and other tomes of digital content). The Leap has made the technology much smaller and more accessible ($70, available this winter), so there's a good chance that the input paradigm of computing can be changed for the better in many situations.
Gestural Input Interfaces
I think it was Christian Lindholm (the man who led the development of the phone UI on Nokia phones that's pretty much the standard interface for every phone-based interaction on a mobile today) whom I first saw make the point: when the input mechanism changes, behaviors and technologies change. I've been tracking this idea of using cameras and IR (infrared light) to monitor, record, and display motion, and there's really something to it – besides just waving your hands.
I remember going to CES and visiting the Opera booth when they released the Opera web browser for the Wii. There was something very personal (and tiring) about using the wand to navigate the web. It actually felt like navigating a web. And aside from the issue with typing (that keyboard was hard to learn how to use), it made a real impression on my idea of screens and spaces. Later, watching the Wii sell by the bucketload, and then the Kinect do the same, I've had the clear impression that gesture-based input interfaces do have a place, if the efficiency of motion can be solved.
The Missed Mobile Opportunity
The Leap leads me to something that I'm very much thinking will be my next accessory for my mobile. These days, one of the ways I do presentations is to connect my mobile to a projector (it can do both composite video and HDMI output), and then run the Zeemote software to enable a Bluetooth remote that can be used to control the screen. I love doing that. For one, it's an impressive display of what can be done with a mobile, but it also lets me use my hands for talking without looking like I'm controlling the presentation with a clicker or another mobile. I see the Leap quite possibly replacing the Zeemote – especially because my Nokia N8 has the ability to use accessories like mice and keyboards when they're plugged into it (as well as wirelessly).
Nokia missed things here, though. I don't know exactly when it was, but I remember seeing a video of a student at MIT who hacked the Nokia N95 (a device that came out in Fall 2006) into a similar gesture-tracking system. He was basically able to use the N95, while it too was connected to a larger screen (it had the same composite video output my N8 has), and control things without touching the device. Amazing! I have been waiting for ages to see that mature, since the output tech was already there, and thought it would be integrated into Nokia's Big Screen (beta) software. However, it never was. There was also Plug-and-Touch, but that wasn't taken advantage of either. In a very real sense, Nokia missed an opportunity, a *huge* one, to be five years ahead of where we are now in looking at the Leap. Wow…
It was, I think, 2006 when I made the move to a Nokia smartphone from my Treo. Back then, I was enthralled with the idea of pushing computing out of the desktop/laptop space and out of the PDA paradigm (organizer + email + tasks). Since then, I've explored and done a whole lot, even to the point of changing devices and changing workflows, and now I'm not changing much of anything until devices and services can catch up with my imagination. I see the Leap as a step in that direction and could very well see myself on the list to get one – provided it supports my N8 and whatever future mobile devices (not necessarily tablets) I carry. This is the kind of paradigm shift that would change things for the better, at least with respect to taking ideas out of my head and manipulating a digital space much as one does clay.
And possibly (and more importantly), the Leap could bring to the front of computing the lack of attention on accessibility in applications for those who aren't given to efficient visual or motor skills/behaviors… a group of people who, if age remains time's bouncer, eventually becomes all of us tooling around on these computers.