Nokia BigScreen with MS Kinect-Like Gesture Tracking

Scenario: I am sitting at home with a Nokia N8 connected to an HD TV. I decide that broadcast programming is no longer good enough, so I switch the TV's input from the local broadcast to the mobile, expecting to pass the time browsing the web.

After connecting to the TV, I have to walk back to my seat. Obviously, without a Bluetooth keyboard, the N8 has no way to receive my input for navigating the device. I would have to stand next to the TV, entering commands on the device (opening applications, typing, etc.) and then rushing back to my seat to enjoy it. Every time I wanted to change the view, I would have to repeat the process.

What if Nokia’s BigScreen application, or by extension the device itself, once it sensed that (a) you had connected the TV-Out or HDMI cable and (b) you had moved some distance from the device, would turn on the camera and track macro-movements, giving you some measure of navigating the device while it is connected to that screen?
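A minimal sketch of that activation logic might look something like the Python below. To be clear, this is purely illustrative: the class, sensor checks, and distance threshold are my own assumptions, not part of BigScreen or any Nokia API.

```python
# Hypothetical sketch only -- names, sensors, and threshold are assumptions.

USER_DISTANCE_THRESHOLD_M = 1.5  # assumed distance before gesture mode kicks in


class GestureModeController:
    def __init__(self, display, camera, proximity_sensor):
        self.display = display              # HDMI / TV-Out connection state
        self.camera = camera                # camera used for motion tracking
        self.proximity = proximity_sensor   # rough estimate of user distance

    def update(self):
        """Enable camera-based gesture tracking only when both conditions hold:
        (a) an external screen is attached, and (b) the user has stepped away."""
        external_screen = (self.display.hdmi_connected
                           or self.display.tv_out_connected)
        user_far_away = (self.proximity.estimated_distance_m()
                         > USER_DISTANCE_THRESHOLD_M)

        if external_screen and user_far_away:
            self.camera.start_gesture_tracking()
        else:
            self.camera.stop_gesture_tracking()
```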

This idea draws on the work done with the following concepts/beta products:

  • Nokia BigScreen – an HD user interface activated when the device is connected via the HDMI cable; pictures, videos, and music are accessible through this application; it also supports some Bluetooth accessories for input/control and turns the mobile into a remote control for panning the interface
  • Nokia Plug and Touch – a demoed but never released concept where the camera of a mobile device, when pointed at a display surface such as a TV, would track the touch motions initiated toward that surface, essentially turning a non-touchscreen surface into one

So, you have this experience where the mobile needs to drive the entertainment experience, yet it has to be physically attached to the display. Therefore, positioning the mobile device so that its camera points toward you would give you the ability to perform simple interactions that would be translated into inputs.

Some examples of interactions (a rough sketch of how these might map to actions follows the list):

  • Point a finger and make a touch-tap-like motion to bring up a pointer, then select an item on the screen
  • Hold five fingers up towards the display to open up a menu that would allow you to select system settings, application settings, or switch to another application
  • Hold five fingers up towards the display (palm facing the display), then close your hand to close the application, turn the system off/disconnect the device safely, or share the screen with another device in the room that can connect to the mobile via Bluetooth or WiFi
  • Smile to unlock (facial recognition) for restricted content, websites, applications, etc.
  • Wave left-right to pan through a menu (album covers, pictures, menu options, etc.)
  • There are other gestures, like a hand clap, yes/no nodding, or pointing toward and then away from the display, that I haven’t assigned actions to but which seem natural for this kind of interface
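One rough way to picture the list above is as a simple mapping from recognized gestures to UI actions. Again, this is only a sketch: the gesture names, recognizer, and handler functions are hypothetical, not anything BigScreen or a shipping Nokia device exposes.

```python
# Illustrative only: gesture names and handler functions are hypothetical.

def open_pointer_and_select(target):
    print("Pointer shown, selecting", target)

def open_system_menu():
    print("System/application menu opened")

def close_or_handoff():
    print("Closing app, disconnecting safely, or sharing screen to another device")

def unlock_restricted_content():
    print("Smile recognized, restricted content unlocked")

def pan_menu(direction):
    print("Panning menu", direction)

# Map each camera-recognized gesture to the action sketched in the list above.
GESTURE_ACTIONS = {
    "finger_tap": lambda: open_pointer_and_select("focused item"),
    "open_hand": open_system_menu,
    "open_hand_then_fist": close_or_handoff,
    "smile": unlock_restricted_content,
    "wave_left": lambda: pan_menu("left"),
    "wave_right": lambda: pan_menu("right"),
}

def handle_gesture(gesture_name):
    """Dispatch a recognized gesture to its UI action, ignoring unknown ones."""
    action = GESTURE_ACTIONS.get(gesture_name)
    if action:
        action()

# Example: the (hypothetical) recognizer reports a wave to the left.
handle_gesture("wave_left")
```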

And so you have this mobile device, physically connected to a larger screen, which needs to have that entertainment-distance (10-foot experience) interaction model attached to it. We are already at the point where it’s accepted that motion, gestures, and the entire body can be a control mechanism for computing-led experiences, but it’s still a situation of being an accessory, not something the mobile device can contextually shift into. This idea asks the mobile to adjust to our context of use, and then allow us to interact with it as if it is part of the space we relate to.

Posted in: Mobile