I talk enough about mobile computing sometimes to make my own head spin. And for one reason or another, I keep coming back to some salient points about its context. There’s never been anything like it: a constantly shifting matrix of technologies and changes. As for where it’s going, I’ll align myself with recent Microsoft ads and Nokia’s comments – it is time for heads-down computing to go away.
I described heads-down computing in a previous post. Basically, it’s a user experience based on immersion. Being immersed isn’t a bad thing, except when it’s not respecting your context. That’s when that type of usage paradigm causes a problem. Pointing to people so immersed in their mobile devices that they miss the life around them speaks to a greater ill in computing – it is so immersive that we’ve lost some of the ability to live with it as an augment to life’s experiences.
What does it mean, then, for computing to augment life’s experiences? Well, it has to keep you connected (as it does), empowered (as it should), protected (in our dreams), and informed (with relevant information, not a broadcast).
For example, you want to share your child’s photos, but not have to manually send a message every time saying that you have a few to send. You’d rather that those you are connected to are already endowed with permission to see pictures of your child as you take them in a certain time/physical space, and that they’ll automatically be notified that those photos are available on whatever computing device they are using. They don’t have to go heads-down to dismiss a prompt, because in this life of screens, they only see what’s permitted in that space on the dashboard.
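In rough terms, that sharing model is a permission-filtered publish/subscribe: contacts are granted permission ahead of time, and new photos flow to them with no prompt in the way. Here’s a minimal sketch of the idea – all the names (`Contact`, `PhotoStream`, the `"family-photos"` tag) are hypothetical, not drawn from any real platform’s API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of permission-based auto-sharing.
# Contacts endowed with the right permission are notified
# automatically when a photo is published -- no manual message,
# no prompt for the recipient to dismiss.

@dataclass
class Contact:
    name: str
    permissions: set = field(default_factory=set)  # e.g. {"family-photos"}

@dataclass
class PhotoStream:
    tag: str  # permission this stream requires, e.g. "family-photos"
    subscribers: list = field(default_factory=list)
    notified: list = field(default_factory=list)

    def publish(self, photo: str):
        # Only contacts already holding the permission see anything;
        # everyone else's "space on the dashboard" stays untouched.
        for contact in self.subscribers:
            if self.tag in contact.permissions:
                self.notified.append((contact.name, photo))

grandma = Contact("Grandma", {"family-photos"})
coworker = Contact("Coworker")  # no family-photos permission

stream = PhotoStream("family-photos", subscribers=[grandma, coworker])
stream.publish("beach_day.jpg")

print(stream.notified)  # only Grandma is notified
```

The point of the sketch is the direction of the filtering: the sharer grants permission once, and from then on the system routes content heads-up, rather than interrupting both parties each time.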
So do we want this heads-up mentality? I’d argue that not only do we want it, but in some respects we need it. A summary of how we view our mobile world can be put in terms of the interfaces that we use:
At the moment, there are two main patterns for smartphone interfaces and user experiences…
There’s the pattern used by iOS. You get screens full of apps and a physical home key. It’s a very beautiful, elegant and simple pattern. It’s almost like navigating through a house. You’re at the front door and you can go into the dining room. If you want to go to the living room, you go back to the front door and then straight into there.
The second pattern is that used by both Symbian and Android – multiple, personalisable homescreens. The user fills these out with their own preference of widgets. Doing that is so simple and organic that they end up being able to use the whole phone from their homescreens. This content can take many different forms, such as shortcuts to apps and live information widgets.
Marko Ahtisaari, SVP of Design at Nokia, regards the space in which we interact with our mobiles – and therefore filter the world around us – as too limiting, and believes there is room for something different. I agree with him – and my experiences with the Nokia Bots and Nokia Situations beta software on my mobile confirm the path they are going down.
So then what does different look like? If we are going to say that the model of poke/swipe/prompt isn’t good enough, what is? I think we get some insight into Nokia and Microsoft’s thinking in this video published a while back by the (now quite different) Symbian Foundation:
I’ve got a good idea where they are going. Here’s another piece of that idea. There’s a large missing slate of context and intelligence in our mobile (and I’d argue all of our) computing devices that isn’t being solved in the app or services economies as they are presently constructed. To see computing differently, our mobiles need to be smarter than they currently are.
Talking like this is both exciting and scary at the same time. Yet, if we are going to do more than just crow about higher specs, shinier screens, and so on, we might as well see something change that actually gets people to look one another in the eye and do things differently.
Pick your head up, you might miss something 😉