I just finished reading Tomi Ahonen’s exposition on the iPhone 4 and have to say that I largely agree. Well, I actually said much the same, just not with as much in the way of metrics and statistics. I merely pointed out that the tech has been done before.
But, as I sat composing the comment that I left for that post, I asked (aloud) a question that I think needs to be addressed by manufacturers and developers – and mobile bloggers for that matter – with respect to where devices, services, and people go from here. And I mean this purely in the sense of the user experience (UX) that we have in doing anything on a mobile device. It all starts with purpose and the user interface.
The “Point and Do” UI
First off, I will say that I am largely a hobbyist when it comes to understanding user interfaces (UIs) and the overall user experience of mobile devices alongside connected services. Yes, I try a lot of new things, and am commonly called a geek for how I choose to live on the bleeding edge, but I can say that more often than not, I stumble onto usage patterns that seem to be both right and challenging.
The UI fact that I’ll talk about first is the “point and do” methodology that all of our computing devices seem to provoke (I’m using this word “provoke” on purpose). We point at the screen with a finger, voice prompt, mouse, keyboard, or button, and then an action happens. This action can be something simple like going to a menu option, or more complex, like triggering other automated actions such as polling an email service and receiving the alert on the device.
Thing is, this mode of using a device is very task-oriented. We pick up the device to provoke it to give us information that is relevant. Even when it’s information that’s pushed at us, like a text message or the receiving end of a phone call, we still have this UI behavior that says we must point out to the device its intended next action (dismiss, reply, answer, edit, delete, etc.).
This is great and all, but when are we going to get past this point? Really, isn’t mobile innovation moving fast enough that we can start demanding that our devices’ UIs behave differently?
Recommend and Adapt
I’ve written many times before about the Nokia Bots application that I use on my primary mobile (an N97). This application does something that no other mobile application has ever done on my device before: it learns my behaviors and tendencies in how I respond to setting alarms and device profiles (sound and notification preferences), and then either changes the device state automatically, or asks me if a common (to me) action should occur.
For example, I was sitting in a meeting with a friend yesterday. The meeting started 15 minutes before it was scheduled, and therefore I had my mobile propped up facing me (the N97 has a tilting screen) while using the iPad. As we talked, my friend asked me about MMM and my passions behind mobile. As she did so, my device’s screen powered on (from its idle, screen-off state) and a widget on the screen asked if I wanted to change the device into the “Meeting” profile, since it “read” that my calendar said I was in a meeting.
Note: when I say “asked,” I mean that it presented an icon with a clock that had a yellow question mark over it, and the word “meeting” with a question mark after it. I only needed to notice that the device was on to know that it was “asking” me to respond to it.
What happens is that this application learns how I manage my device for things such as sleep modes, wake alarms, and meeting alarms, and then recommends actions that it assumes are best based on how I’ve used the device in the recent past.
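To make that concrete, here’s a rough sketch of the kind of logic I imagine is at work. To be clear, this is purely illustrative Python of my own; I have no insight into how Nokia Bots is actually built, and the class name, context labels, and thresholds below are all my own invention. The idea is simple: tally what the user does in a given context, suggest the habitual action once a pattern shows up, and only act automatically once the habit is unmistakable.

```python
from collections import Counter, defaultdict

# Hypothetical sketch of a behavior-learning profile helper.
# Nothing here comes from Nokia Bots itself; the names and the
# thresholds are my own guesses at the general idea.

SUGGEST_AFTER = 3   # ask the user once a habit looks consistent
AUTO_AFTER = 10     # act silently once the habit is well established

class ProfileLearner:
    def __init__(self):
        # Maps an observed context (e.g. "calendar:meeting") to a
        # tally of what the user did in that context.
        self.history = defaultdict(Counter)

    def observe(self, context, action):
        """Record what the user actually did in a given context."""
        self.history[context][action] += 1

    def recommend(self, context):
        """Return (action, mode), where mode is 'auto', 'ask', or None."""
        if not self.history[context]:
            return None, None
        action, count = self.history[context].most_common(1)[0]
        if count >= AUTO_AFTER:
            return action, "auto"   # just do it; the habit is clear
        if count >= SUGGEST_AFTER:
            return action, "ask"    # show the little question-mark prompt
        return None, None

# Example: after a few meetings where I switched profiles myself...
learner = ProfileLearner()
for _ in range(3):
    learner.observe("calendar:meeting", "profile:Meeting")

print(learner.recommend("calendar:meeting"))
# -> ('profile:Meeting', 'ask'), i.e., the widget asking "Meeting?"
```

The two thresholds are the whole trick: the device earns the right to act on my behalf gradually, first by asking, then by doing.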
For all the hoopla that was the iPhone 4 announcement (and the swarm of media coverage since), I was really expecting to see Apple introduce some of this same behavioral UI into the new iPhone. And given some of the hardware specifications (sensors, multiple cameras, higher resolution screen, etc.), I thought it would make all kinds of sense to not just announce that, but push such a “learning” UI into the mainstream consciousness of mobile use.
That didn’t happen. What was served was the same old UI, with a few added tap-and-hold functions for additional actions (something my old Palm IIIxe and its Hackmaster program did years ago). Then, on the side of the analysts, there was no acknowledgement that such a vaunted (simplified, copied, and smooth-as-butter) UI didn’t show any major progression. It was like the people who should have been loudest about what wasn’t there – or innovative – were themselves blinded.
And so, as I read Tomi’s analysis, agreeing with most, if not all, of it, I really started to wonder: can we recognize and recommend innovation when we see it, or do we just point at the shiny and do nothing more until directed to?
You see, these mobile devices – which, no matter where we are in the world, are more powerful than the computers of a generation or two ago – should have us demanding more, such as responsive UIs. I can see that Nokia (and Google, Samsung, and a few others) are making attempts to go in that direction. But they, too, lean toward adding too much shiny and not enough applied learning to the equation.
If we are going to point to mobile devices as the better way of doing things, shouldn’t we at least recommend to the masses that there’s a better way to apply what their devices can learn besides downloading an app? Maybe that moment comes when our user interfaces move from prompts waiting to be initiated to adapting to what we need, when we need it.