I’m sitting here, thinking about what could be done to improve the relationship I have with the applications and services on my mobile devices, and a thought hit me: I don’t use the same apps in every context. What if the user interface took advantage not only of what I wanted to do (having the app), but also of where I was and when I needed to use it?
A Brief Me-story
For a number of months, perhaps the better part of the past year, I’ve been sticking with a Symbian device (the Nokia N8, and before that the N97) because of two beta applications: Nokia Bots and Situations. Bots is a simple AI application that used Symbian’s homescreen widgets to surface contacts, apps, and even some system messages (alarm and battery) as they were needed. Though the project is no longer being developed, one of its last updates added a feature to the contacts widget: it could show the most recently used contacts when you were home, versus the most frequently used ones when you were away (defined by the app as “work”).
Situations was similar in that it let the mobile device adjust to its environment, except that you had to program the application’s responses rather than have it learn and adjust as Bots did. Because it wasn’t limited by learning, Situations offered more options to tweak, and to date it’s been the application on my N8 that I use for most of this automated activity. I specifically like that I can skip charging my mobile at night, because it’s programmed to go into power-saving mode every night, regardless of the state of the battery.
Both of these applications have complements on the Android, WebOS, and BlackBerry platforms, though I don’t think this kind of context- or sensor-driven approach gets enough play. Yes, Motorola/Verizon has done Smart Actions on some of their Android RAZR devices, but aside from a few commercials (I don’t watch much TV), it isn’t something I hear about in online discussions about the platform, or about how people would rather have devices and services adapt to them.
A Contextual User Interface (UI)
That brings me to this idea of something that’s a bit more than just a screen of apps (iOS, Symbian, BlackBerry) or a set of lightly connected screens/tiles (widgets on Android/Symbian, Windows Phone’s Live Tiles). What if the user interface for mobile devices looked more like a map? Instead of pages of apps, with perhaps a short list of them in a widget somewhere more accessible, you’d have collections of apps/services/actions, each tied to a locale you define, and the device would adjust itself based on what it has learned or what you’ve programmed.
Example:
You purchase the mobile device, and upon startup it asks whether you want the “normal” interface or the “smart” interface. You choose the smart interface, and it takes you through a short wizard to define some settings. First, it asks if you would like to define areas (home, work, school, car/in-motion, etc.). Then it shows you a list of the applications on the device by default and asks you to define the areas in which you’d use each one. Then it asks if you want to use GPS, Wi-Fi, and cell ID to define those areas, or you can click to “skip this step and let the device learn.”
After the apps and areas are chosen, the device asks if you would like to change how wireless connectivity is used in those areas: for example, in your “home” area, does Bluetooth need to be on, and in your car/in-motion area, does Wi-Fi? You also get options such as “automatically put the device on silent when a calendar event is happening” and “track battery usage and give optimal charging messages.”
Once you’ve finished this, you are presented with the homescreen of your device, along with a tool-tip instruction to press and hold the menu button to see running apps or to change the area. Perhaps that exploded view is an actual map, showing something like a folder icon full of apps for each area of your locale.
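If I were to sketch the kind of data a wizard like that might leave behind, it could look something like the following. This is just my own illustration of the idea, not any platform’s actual API; the names (Area, AreaProfile, and so on) are made up for the example.

```kotlin
// A minimal sketch of what the setup wizard above might produce.
// All names here are illustrative assumptions, not a real platform API.

enum class Area { HOME, WORK, SCHOOL, IN_MOTION, UNKNOWN }

data class AreaProfile(
    val area: Area,
    val apps: List<String>,          // apps surfaced on the homescreen for this area
    val bluetoothOn: Boolean,        // radio settings chosen in the wizard
    val wifiOn: Boolean,
    val silentDuringCalendarEvents: Boolean = false
)

// The profiles a user might end up with after the wizard.
val profiles = listOf(
    AreaProfile(Area.HOME, listOf("Messaging", "Media Player", "Browser"), bluetoothOn = true, wifiOn = true),
    AreaProfile(Area.WORK, listOf("Email", "Calendar", "Documents"), bluetoothOn = false, wifiOn = true,
        silentDuringCalendarEvents = true),
    AreaProfile(Area.IN_MOTION, listOf("Maps", "Music"), bluetoothOn = true, wifiOn = false)
)

// Given whatever the device has inferred from GPS/Wi-Fi/cell ID, pick the profile to apply.
fun profileFor(detected: Area): AreaProfile =
    profiles.firstOrNull { it.area == detected }
        ?: AreaProfile(Area.UNKNOWN, apps = emptyList(), bluetoothOn = false, wifiOn = true)

fun main() {
    val current = profileFor(Area.WORK)
    println("Surfacing apps: ${current.apps}, Wi-Fi: ${current.wifiOn}, Bluetooth: ${current.bluetoothOn}")
}
```

The point isn’t the code itself; it’s that a handful of small, user-defined profiles like these would be enough for the homescreen map to reshuffle itself as you move through your day.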
Why?
It feels like a lot of complexity now that I’ve written it out, but I’m imagining the difference in how I’ve used my mobile since installing an app like Bots or Situations. There’s rarely a time that I look at my mobile and the right app or service isn’t already ready to go. The times when it isn’t are usually because I couldn’t do something like put an event into my calendar and then assign apps to fire off while that event is underway (for example, it would make all kinds of sense for the camera app to start running in the background when I’m at an event where I’d like to be taking pics).
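To make the calendar example a little more concrete, here is a rough sketch of how such a rule might be evaluated. The event tags and app names are invented for illustration; nothing here reflects how any existing calendar or launcher actually works.

```kotlin
import java.time.LocalDateTime

// Illustrative only: a toy rule that maps calendar events to apps worth warming up
// while the event is underway. Tags and app names are assumptions for the example.

data class CalendarEvent(val title: String, val start: LocalDateTime, val end: LocalDateTime, val tags: Set<String>)

// Which app we'd want ready for which kind of event.
val appForTag = mapOf(
    "concert" to "Camera",
    "meeting" to "Notes",
    "travel" to "Maps"
)

fun appsToPrepare(events: List<CalendarEvent>, now: LocalDateTime): List<String> =
    events
        .filter { now >= it.start && now <= it.end }   // only events currently underway
        .flatMap { it.tags }
        .mapNotNull { appForTag[it] }
        .distinct()

fun main() {
    val now = LocalDateTime.of(2012, 5, 1, 20, 0)
    val events = listOf(
        CalendarEvent("Gig downtown", now.minusHours(1), now.plusHours(2), setOf("concert"))
    )
    println("Apps to warm up: ${appsToPrepare(events, now)}")   // [Camera]
}
```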
The thing is, I know this would be useful, but it can’t be something like Nokia Situations, where so much depends on a lot of programming up front. The UI has to have some means of capturing the owner’s actions and activity, and then, after a point, offering smart suggestions based on them. That’s where the intelligence of Bots came in, and what made me smile. It wasn’t just that it was ready to work as I needed; over time it kept adapting the mobile to my needs, and the mobile became much more of a wand connecting me to what mattered.
My iPad doesn’t have an application or service like that. I’ve got to go into it, navigate to the app, and then do whatever it is that I’d like to do. It doesn’t do something like realize that I’ve typed 500 words and then put a message in the status bar asking, “Would you like to continue typing with your previously paired Bluetooth keyboard?” Something like that would be a catalyst to being more productive (for me), but it would also show me that some sense of adaptability was built into the UI, that the designer knew that not every preference of theirs would be the same as mine.
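Even that keyboard prompt is just a simple rule underneath. Here’s a toy version of it, purely to show how small the logic is; the 500-word threshold and the notion of a “previously paired keyboard” are assumptions pulled straight from the paragraph above, not anything iOS actually exposes.

```kotlin
// Toy sketch of the "you've typed a lot, want your keyboard?" prompt described above.

data class TypingSession(val wordsTyped: Int, val pairedKeyboardAvailable: Boolean)

fun suggestionFor(session: TypingSession, threshold: Int = 500): String? =
    if (session.wordsTyped >= threshold && session.pairedKeyboardAvailable)
        "Would you like to continue typing with your previously paired Bluetooth keyboard?"
    else null   // no prompt if the threshold isn't met or no keyboard has been paired

fun main() {
    println(suggestionFor(TypingSession(wordsTyped = 612, pairedKeyboardAvailable = true)))
}
```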
There’s a quote, often attributed to Arthur C. Clarke, that sufficiently advanced technology is indistinguishable from magic. I don’t know that anything in mobile today qualifies as magic unless you are well acquainted with how all of this is put together. The things people deal with on the surface (pushing a button to open an app, whooo-weeeee) could be more magical if the sensors, the environment, and the user were taken into consideration from the get-go, before third-party apps and services entered the fray.
You’re right on target here. I too wish that my phone and iPad would change the interface based on the context of what I was doing, or what I had set up as home/work/etc. Why doesn’t the mute button get bigger or move somewhere else on the screen when it’s in use?