Scribbling and Sketching Ideas on Screens and UIs

I spend a good amount of time on my iPad during the work-week.* Mostly that's consuming content and doing some light writing for here and MMM. And except for a few instances where Mobile Safari is a pain in the butt, I tend to get along very well with the iPad for a few creation-based tasks. The problem, though, is that the iPad, like quite a number of mobile devices today, isn't really designed around the idea of creating, and that becomes a hindrance when you have ideas and projects to get through.

For anything beyond simple and quick messages on the iPad or my mobile, I start looking for either a keyboard (wireless preferred) or software that keeps the data in manageable chunks on the device, making good use of the available screen size and input mechanisms.

One app on my iPad that respects this paradigm very well is Tactilus. Tactilus is a sketching and writing application for the iPad, designed so that most of the user interface and input elements get out of the way while you sketch or draw. I really dig that the tools panel can be moved around the screen whenever you need it out of the way, and that there are several gestures for making better use of the constrained space you have to work with on the iPad.
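To make that concrete, here's a rough sketch of what a movable tools panel boils down to in code – a hypothetical web-style canvas app of my own, not Tactilus's actual implementation – since dragging a palette out of the way is little more than tracking pointer deltas:

```typescript
// Hypothetical movable tool palette (not Tactilus's real code): the panel
// follows the pointer while dragged so it can be shoved out of the way of
// whatever you're sketching underneath it. Assumes the panel element is
// styled with position: absolute.
const palette = document.getElementById("tool-palette") as HTMLElement;

let dragOffsetX = 0;
let dragOffsetY = 0;

palette.addEventListener("pointerdown", (e: PointerEvent) => {
  // Remember where inside the panel the drag started.
  const rect = palette.getBoundingClientRect();
  dragOffsetX = e.clientX - rect.left;
  dragOffsetY = e.clientY - rect.top;
  // Capture the pointer so a fast finger that slips off the panel
  // mid-drag keeps moving it anyway.
  palette.setPointerCapture(e.pointerId);
});

palette.addEventListener("pointermove", (e: PointerEvent) => {
  if (!palette.hasPointerCapture(e.pointerId)) return;
  palette.style.left = `${e.clientX - dragOffsetX}px`;
  palette.style.top = `${e.clientY - dragOffsetY}px`;
});

palette.addEventListener("pointerup", (e: PointerEvent) => {
  palette.releasePointerCapture(e.pointerId);
});
```

The pointer capture is the important bit on a touch screen: the panel keeps tracking the finger even when a quick drag momentarily leaves its bounds.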

What an approach like this speaks to, though, is the major area that many tablets have flat-out failed to address: creating is sometimes more important than consuming content.

When you think about it, this makes a lot of sense. You pick up certain types of computing devices with the intent to create something, and the last thing you want is the device's physical or software limitations slowing you down – you'd rather your abilities be the limiting factor.

And yet physical and software limitations do exist, and most of us don't see or do much about them. There is some work being done, though, such as the Manual Deskterity project at Microsoft that I was reminded of today. Manual Deskterity (info and video at istartedsomething) seeks to take what we've already learned over time with screen, touch, and pen-based interfaces, and then create more intelligent (and sometimes more intuitive) means of creating and interacting with content.

One of my favorite concepts of the past few years has been the MS Courier project. The MS Courier was a dual-screened, touch- and pen-based tablet device that was reportedly under development at Microsoft and nearly made it to market. It paired a heavily customized version of the Windows operating system with the ability to take both pen and touch input, opening up some very impressive possibilities. If you stopped someone on the street and asked them about the future of paper and notebooks, you'd hear a description that looks a lot like this (YouTube video).

Granted, it's a bit of a shame that the Courier won't come to pass. But I think there's a lot we can learn and apply with respect to how we build applications – ones that aren't tablet- or touch-friendly merely because their panels slide, poke, and glide. We need to capture movements that speak to the creative process of the moment.

For example, in a word processing application, the key activity is writing – so menus and formatting bars need to stay hidden until the user's typing cadence (on-screen or accessory keyboard) has paused past a certain threshold.
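Here's a minimal sketch of that cadence rule – the element name and the 1.5-second threshold are assumptions of mine, not pulled from any shipping word processor:

```typescript
// Sketch: hide the formatting chrome while the user is actively typing,
// and bring it back once their cadence has paused past a threshold.
const IDLE_THRESHOLD_MS = 1500; // assumption: ~1.5s of quiet feels "stopped"

const toolbar = document.getElementById("formatting-bar") as HTMLElement;
let idleTimer: number | undefined;

document.addEventListener("keydown", () => {
  toolbar.hidden = true; // writing is the point, so get out of the way

  // Restart the idle countdown on every keystroke; the toolbar only
  // reappears once keystrokes stop arriving for the full threshold.
  window.clearTimeout(idleTimer);
  idleTimer = window.setTimeout(() => {
    toolbar.hidden = false;
  }, IDLE_THRESHOLD_MS);
});
```

The trick is that every keystroke restarts the countdown, so the chrome can only reappear during a genuine pause rather than flickering back between words.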

In applications such as Bible readers, where there's a healthy mix of consumption and writing (depending on context), input shouldn't be restricted to the points furthest from your hands. Menus pinned to the top of the screen make little sense – they're out of reach and they obscure content – and should instead be reachable from any of the thumb-touch points in the bottom third of the device. When a keyboard appears, it should slide those menu panels up on top of the keyboard, not off screen or into another panel entirely.
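In a web view, that keyboard-riding panel might sketch out like this – using the browser's visualViewport API as a stand-in for whatever keyboard notifications a native app would get, with element names of my own invention:

```typescript
// Sketch: a thumb-reach menu pinned near the bottom of the screen.
// When the on-screen keyboard appears, slide the panel up to sit on top
// of it rather than letting it fall off screen. Assumes the panel is
// styled with position: absolute.
const menuPanel = document.getElementById("thumb-menu") as HTMLElement;

function repositionMenu(): void {
  const vv = window.visualViewport;
  if (!vv) return; // older browsers: leave the panel where CSS put it

  // The visual viewport shrinks when the keyboard is up; anchoring the
  // panel to its bottom edge keeps it riding on top of the keyboard.
  const bottomOfVisible = vv.offsetTop + vv.height;
  menuPanel.style.top = `${bottomOfVisible - menuPanel.offsetHeight}px`;
}

window.visualViewport?.addEventListener("resize", repositionMenu);
window.visualViewport?.addEventListener("scroll", repositionMenu);
repositionMenu();
```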

If I'm dealing with web-based systems such as a CRM or a SharePoint-like application, my web interface has to be stripped of every bit of chrome possible before it's built back up into a usable experience. Similar to what we see in the iPad version of Google Reader, you want the controls simplified and in a common location, with additional screens opening up settings and fine-tuning options. Reusing the same interface layer on a tablet as on a desktop/workstation instantly biases decisions toward sinking money into an application that doesn't need to exist.
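A sketch of that strip-then-build-up idea, using the coarse-pointer media query as a rough proxy for "finger-driven tablet" – the class names and IDs here are hypothetical:

```typescript
// Sketch: start from a chrome-free baseline and only build the interface
// back up to what the input method can actually use. "pointer: coarse"
// matches devices whose primary input is a finger rather than a mouse.
const onTablet = window.matchMedia("(pointer: coarse)").matches;

document.body.classList.toggle("simplified-chrome", onTablet);

if (onTablet) {
  // Collapse secondary controls into a single, predictable entry point
  // (à la Google Reader on the iPad) instead of scattering toolbars.
  document.querySelectorAll<HTMLElement>(".secondary-toolbar").forEach((bar) => {
    bar.hidden = true;
  });
  document.getElementById("settings-drawer-button")?.removeAttribute("hidden");
}
```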

So what makes sense when designing an application that is, more or less, built for consuming content? I'm still working on that answer – at least one that I know how to articulate and demonstrate. In the meantime, I've been doing a lot more on my iPad (when the software allows) toward composing and creating content, plus more sketching and drawing, since those are deeper-than-writing tasks. If I can nail down what works there, and then figure out how a pen or voice input method works better or in a more optimized fashion, then maybe I'll see a bit more of that future I opine about 😉

*Even though I essentially work for myself, I still carry on the idea of a work-week, putting in 5.5-6 days of work with Saturday kept as a spiritual and mental Sabbath.