Aiming to expand its Amnesia Connect application, Amnesia-Razorfish is exploring new platforms for content sharing and manipulation.
What should interaction look like on a wall-mounted display, where the user has complete freedom of movement? Motion sensors were unexplored territory for Amnesia, and the traditional methods of interaction didn't apply.
A wall screen shouldn't be a mere extension of a desktop; early smartphone developers made that mistake when designing UIs for touch-based handhelds. A user in front of a wall screen usually stands and works within the area her hands can reach, that is, from the waist up. She also moves in a three-dimensional space where depth exists, and that reality has to be translated into the two-dimensional space of the screen.
Our approach was therefore to model the interaction of a user with a physical object while standing in front of a vertical surface; we used the simile of delivering a presentation or a lecture at a blackboard, or of manipulating magnets on a fridge.
In this concept we decided to go beyond what had been developed so far. The current state of the art with Kinect, when it comes to manipulating objects, is to set a hovering timeout of about 2-3 seconds over the object, showing a progress bar around the icon representing the user's hand. We felt this was not a natural interaction: in three-dimensional space people "grab" objects by closing their hands and release them by opening them; they "push" buttons rather than holding a hand over them until they activate; and when somebody wants to "stretch" a material, they grab it by two edges and open their arms.
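The grab/release idea reduces to a small state machine over a per-frame open/closed hand signal. The sketch below is illustrative only (the actual implementation used the MS Kinect SDK); the `min_frames` debounce parameter is a hypothetical name standing in for the kind of tunable values discussed later.

```python
class GrabTracker:
    """Turn a noisy per-frame hand-open signal into grab/release events.

    min_frames is a debounce: the new hand state must persist for that
    many consecutive frames before an event fires (illustrative parameter,
    not from the original project).
    """
    def __init__(self, min_frames=3):
        self.min_frames = min_frames
        self.grabbing = False
        self.streak = 0

    def update(self, hand_open):
        # A closed hand while idle counts toward a "grab";
        # an open hand while grabbing counts toward a "release".
        candidate = (not hand_open and not self.grabbing) or \
                    (hand_open and self.grabbing)
        self.streak = self.streak + 1 if candidate else 0
        if self.streak >= self.min_frames:
            self.grabbing = not self.grabbing
            self.streak = 0
            return "grab" if self.grabbing else "release"
        return None
```

Once the tracker reports "grab", the object under the hand cursor can be attached to it and dragged until "release" fires, mirroring how people pick up fridge magnets.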
Distinguishing an open hand from a fist with current Kinect technology is very tricky. The device emits a matrix of infrared beams; the reflected beams are sensed by an infrared camera, and this information is used to build a depth map of the room. These beams spread out in space, so the further an object is from the camera, the less resolution is retrieved. While it is fairly easy to determine that a hand is open up to a distance of about 150 cm, it is very difficult at longer distances, since the depth image returned for the hand looks almost the same either way: a blurry mesh of dark pixels.
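A back-of-envelope calculation shows why resolution falls off. Assuming the published Kinect v1 figures of a roughly 57-degree horizontal field of view and a 320-pixel-wide depth image, and a hand span of about 15 cm (my assumption), the number of depth pixels across the hand halves every time the distance doubles:

```python
import math

# Published Kinect v1 depth-camera figures (assumed here):
FOV_DEG = 57.0        # horizontal field of view
DEPTH_WIDTH_PX = 320  # depth image width

def hand_width_px(distance_cm, hand_cm=15.0):
    """Approximate depth pixels spanned by a hand at a given distance."""
    scene_width_cm = 2 * distance_cm * math.tan(math.radians(FOV_DEG / 2))
    return hand_cm * DEPTH_WIDTH_PX / scene_width_cm

# At ~150 cm the hand spans roughly 30 depth pixels; at 300 cm only
# about half that, too few to resolve individual fingers reliably.
```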
Since we didn't need enough accuracy to distinguish fingertips, just open-hand recognition, I could be a little rough and take approaches based on the overall shape of the depth image returned for the hand.
My approach was to run horizontal and vertical raster scan lines looking for light pixels in the portion of the image corresponding to the hand area; if the number of gaps found exceeded a threshold, I considered the hand open. To improve performance, I scanned only one image out of every few frames. All the values were parametrized, and a dashboard was created to fine-tune the detection engine for different environments.
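The scan-line idea can be sketched as follows, assuming the hand region has already been cropped and binarised into a mask (1 = hand pixel, 0 = light/background pixel). The `stride` and `threshold` names stand in for the parametrized values mentioned above and their defaults are purely illustrative:

```python
def count_gaps(line):
    """Count runs of background pixels flanked by hand pixels on one scan line."""
    gaps, in_gap, seen_hand = 0, False, False
    for px in line:
        if px:                    # hand pixel
            if in_gap:
                gaps += 1         # a gap between fingers just closed
                in_gap = False
            seen_hand = True
        elif seen_hand:           # light pixel after a hand pixel
            in_gap = True
    return gaps

def is_hand_open(mask, stride=2, threshold=4):
    """Scan every `stride`-th row and column; open if total gaps pass `threshold`.

    A spread hand shows gaps between fingers on many scan lines;
    a fist produces a nearly solid blob with few or no gaps.
    """
    total = 0
    for r in range(0, len(mask), stride):
        total += count_gaps(mask[r])
    for c in range(0, len(mask[0]), stride):
        total += count_gaps([mask[r][c] for r in range(len(mask))])
    return total >= threshold
```

The frame skipping mentioned above works the same way one level up: the detector simply runs on one depth frame out of every N, with N exposed on the tuning dashboard.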
Teaming up with Amnesia's Interaction Designer, Stefanie Elsholz, we defined the proper body gestures and UI for a wall-based installation. I then designed and developed a prototype and testing environment for three different kinds of gestures; my work also included researching existing methods and technologies and assessing their current feasibility.
I finally developed a high-fidelity prototype to showcase the concept's potential and created a gesture library using the MS Kinect SDK.
The concept is applied in the Amnesia Connected Room project and Razorfish's Connected Retail Experience platform.