I wasn’t slow to catch onto the tablet thing; I had been using one years ago (2005–2006ish) for web design work. It made for a nice distraction to my day to deal not only with the websites themselves, but with how sites could remain usable and accessible on a touch-enabled platform. One of the lessons I learned then is that you have to make a choice between buttons and gesture actions when designing interfaces. Having both options is confusing (depending on the UI you are starting with). As I’ve gotten back into using a tablet (the iPad) that has a bit more of one than the other (one button, several gestures), I’m finding that I am once again looking at this question of buttons and gestures, and wondering where the tablet market will go in this respect.
Take, for example, the new Motorola Xoom (Engadget Review, Chicago Sun-Times Review, Scoble’s Review). It’s a tablet which runs the Google Android operating system, and one of the key changes in this iteration of Android (v3, or Honeycomb) is that it doesn’t require physical buttons as a main interface element. There is instead a bar at the bottom of the screen with (soft) buttons which change (or are added to) depending on the application you are in. Neat, right? Well, sort of. You’ve got this 10in screen, and you’re still relying on this behavior of pushing buttons to get from one action to another. I kind of wish that this metaphor would leave tablets entirely.
You see, when I think of new interfaces being driven by a big piece of glass in front of me, I don’t think of wanting to push buttons. As a matter of fact, I am darn near repulsed by them unless I am typing. Buttons mean “do this action because I command you to,” but I feel that gestures could accomplish the same thing more easily and more smoothly than buttons have ingrained in us that they should.
For example, take a look at some of the gesture research that Apple is doing with a developer release of iOS 4.3. There’s definitely more muscle action happening across the entire hand in order to navigate, but you know what else is happening? The user is blending themselves with the interface. Buttons (at least how I am looking at them here) might allow you to do an action, but you never feel that the action is a part of you connecting with the device. With gestures, there’s this attachment to content and (you could argue) to the device that makes getting in, around, and through content much more logical.
I know from some past reading and research into gestures that it is actually hard to create gestures that not only make sense, but also work reliably and are easy to learn. There are also some actions that just make sense now that we’ve grown accustomed to using a larger screen. Some examples of gestures that every tablet should support out of the box (IMO):
- Put five fingers on the screen and close them together to go back to the home screen
- Swipe two fingers left or right to swap between the last used or currently running applications
- Swipe two fingers up or down to move between windows/tabs in the currently running application
- Press and hold the right or left bottom corner of the screen for a settings/option menu for the currently running application
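To show how little machinery these four gestures actually require, here’s a rough sketch of how an OS might map a simplified touch summary to them. Everything here is invented for illustration (the function name, the thresholds, the input shape); real platforms like iOS and Android expose higher-level gesture recognizers rather than anything this bare.

```python
# Hypothetical classifier for the four gestures proposed above.
# Inputs are a deliberately simplified summary of a touch sequence,
# not real platform touch events.

def classify_gesture(touch_count, dx, dy, corner_hold=False):
    """Map a touch summary to one of the proposed gestures.

    touch_count: number of fingers on the screen
    dx, dy:      average finger movement in pixels (negative dx = left,
                 negative dy = up); for the five-finger gesture, treat
                 dx/dy as the change in finger spread (negative = closing)
    corner_hold: True if a single finger pressed and held in a bottom
                 corner of the screen
    """
    if corner_hold and touch_count == 1:
        return "app settings menu"       # press-and-hold in a bottom corner
    if touch_count == 5 and dx < 0 and dy < 0:
        return "go to home screen"       # five fingers closing together
    if touch_count == 2:
        if abs(dx) > abs(dy):
            return "switch application"  # two fingers left/right
        return "switch window or tab"    # two fingers up/down
    return None                          # no proposed gesture matched
```

The point of the sketch is just that the ambiguity lives in the thresholds and the input cleanup, not in the gesture vocabulary itself, which is small and learnable.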
Those aren’t impossible to learn, and I’d even argue that they are already understood and just need to be demonstrated, since there’s already a basis of gestures in play thanks to the work folks like Apple, Nokia, and others have done in that space.
To that end, when I’m looking at the Xoom and other newer tablets, I’m looking at whether they are really a tablet in the sense of making a screen that I want to interact with. Or, are they just presenting to me information that I have command of, but can never really associate myself with?
I would argue that the latter (button-driven) will be the case with many of these tablets, mainly because developers have not thought about the association with screens and data that we naturally want to have, versus the one they think we have.
Image via Engadget.