A pseudo-transparent screen from Microsoft and Mitsubishi lets people enter data from both sides of a handheld device.
Transparent touch: Top: An illustration of a future multisurface, multitouch device. Bottom: A portable-device application that rotates, translates, and zooms in on a digital map. Superimposed on the map is an image of the user’s fingers, which are touching the back of the device. Credit: Microsoft
Researchers at Microsoft and Mitsubishi are developing a new touch-screen system that lets people type text, click hyperlinks, and navigate maps from both the front and back of a portable device. A semitransparent image of the fingers touching the back of the device is superimposed on the front so that users can see what they're touching.
Touch screens, popularized by gadgets such as PDAs and Apple's iPhone, are proving to be more versatile input devices than keypads. But the more people touch their screens, says Patrick Baudisch, a Microsoft researcher involved in the touch-screen project, the more content they cover up. "Touch has certain promise but certain problems," he says. "The smaller the touch screen gets, the bigger your fingers are in proportion ... Multitouch multiplies the promise and multiplies the problems. You can have a whole hand over your PDA screen, and that's a no-go."
The current prototype, which illustrates a concept that the researchers call LucidTouch, is "hacked together" from existing products, says Daniel Wigdor, a researcher at Mitsubishi Electric Research Lab and a PhD candidate at the University of Toronto. The team started with a seven-inch, commercial, single-input touch screen. To the back of the screen, they glued a touch pad capable of detecting multiple inputs. "This allowed us to have a screen on the front and a gesture pad [on the back] that could have multiple points," says Wigdor. "But what that didn't give us was the ability to see the hands." So, he says, the researchers added a boom with a Web camera to the back of the gadget.
The image from the Web camera and the touch information from the gesture pad are processed by software running on a desktop computer, to which the prototype is tethered. The software subtracts the background from the camera image of the hands, Wigdor explains, and mirrors it so that the superimposed image lines up with the position of the user's actual hands. Pointers are also drawn at the fingertips so that a user can precisely select on-screen targets that might be smaller than her finger. A paper describing the research will be presented in October at the User Interface Software and Technology (UIST) symposium in Rhode Island.
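The researchers haven't published their code, but the pipeline Wigdor describes (background subtraction, a horizontal mirror, semitransparent blending, and fingertip pointers) maps onto a few standard image operations. Below is a minimal sketch using OpenCV and NumPy; the blending constant, the threshold, and the function interface are illustrative assumptions, not the team's implementation.

```python
# A minimal sketch of the LucidTouch-style compositing pipeline described
# above. Assumes OpenCV (cv2) and NumPy; the constants and the function
# interface are illustrative guesses, not the researchers' actual code.
import cv2
import numpy as np

ALPHA = 0.4  # opacity of the superimposed hand image (assumed value)

def composite_frame(ui_frame, camera_frame, background, touch_points):
    """Overlay a mirrored, background-subtracted hand image on the UI.

    ui_frame      -- BGR image of the interface being rendered
    camera_frame  -- BGR frame from the camera behind the device
    background    -- BGR frame captured with no hands present
    touch_points  -- (x, y) fingertip positions from the rear gesture pad
    """
    # 1. Subtract the stored background to isolate the hands.
    diff = cv2.absdiff(camera_frame, background)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)

    # 2. Mirror horizontally so the image lines up with the real hands:
    #    the camera sees the back of the device, the user sees the front.
    hands = cv2.flip(camera_frame, 1)
    mask = cv2.flip(mask, 1)

    # 3. Blend the hand pixels semitransparently over the interface.
    out = ui_frame.copy()
    blended = cv2.addWeighted(ui_frame, 1 - ALPHA, hands, ALPHA, 0)
    out[mask > 0] = blended[mask > 0]

    # 4. Draw precise pointers at each fingertip reported by the pad,
    #    so targets smaller than a finger can still be selected.
    for (x, y) in touch_points:
        cv2.circle(out, (x, y), 4, (0, 0, 255), -1)
    return out
```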
Admittedly, this prototype has several limitations. Most glaringly, it's impractical to attach a boom and camera to the back of a handheld device. In their paper, the researchers suggest a number of approaches for more-compact LucidTouch prototypes. The gesture pad on the back could actually provide an image of the user's fingers as well as touch information, explains Wigdor. The pad uses an array of capacitors, devices that store electrical charge. A finger alters the electrical field around the array, changing each element's capacitance according to the finger's distance from it. This sensing range can be tuned, says Wigdor, so that the pad registers the entire finger, not just the fingertip touching it. Another approach, he says, would be to use an array of tiny, single-pixel light sensors to map the fingers' locations. Or the device could use an array of flashing, infrared-light-emitting diodes; sensors would then detect the light's reflection off a hand, Wigdor explains.
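To make the capacitive approach concrete: such a pad could, in principle, be read out as a low-resolution proximity image, with the "tuning" Wigdor mentions reduced to a threshold on how far above its no-hands baseline each element reads. The sketch below is purely hypothetical; the grid read-out, baseline, and thresholds are invented stand-ins, not any actual Microsoft or Mitsubishi hardware interface.

```python
# Hypothetical sketch of turning a capacitive array into a hand image.
# The baseline values and thresholds are illustrative assumptions; no
# real sensor driver is referenced here.
import numpy as np

def proximity_image(raw_grid, baseline, near_range=1.0):
    """Convert raw capacitance readings into a 0..1 proximity map.

    raw_grid   -- 2D array of capacitance samples from the rear pad
    baseline   -- readings captured with no fingers near the pad
    near_range -- scale factor: how far above baseline counts as contact
    """
    # A finger raises each element's reading in proportion to how close
    # it is; subtracting the baseline leaves only the finger's influence.
    delta = np.clip(raw_grid - baseline, 0.0, None)
    # Normalize so 1.0 means contact and small values mean "hovering."
    return np.clip(delta / near_range, 0.0, 1.0)

def hand_mask(prox, hover_threshold=0.15):
    """Tune the sensing range: a low threshold keeps the whole hovering
    finger visible; a high one keeps only actual touch points."""
    return prox >= hover_threshold
```

In this scheme, the resulting mask could feed the same compositing step sketched earlier, standing in for the camera-derived hand image.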
As touch screens shrink, says Scott Klemmer, a professor of computer science at Stanford University, one of the biggest problems users face is inadvertently covering up content with their fingers. LucidTouch, he says, "distinguishes itself in two ways: first, it provides better feedback about where you are ... and the other distinction is that it's multitouch."
Even with the prototype's cumbersome design, the researchers were able to write applications for it and gather responses from a small group of users. Whether touching the back of the screen proved useful depended on the application. For instance, most users preferred to type on a QWERTY keypad using the front of the screen. But when the keypad was split down the middle, with one half placed vertically along each side of the screen, most preferred to type on the back of the device. Half of the participants preferred using the back of the device for tasks such as dragging objects and navigating maps. The users were also divided on whether the superimposed images of their fingers were helpful: two-thirds preferred the superimposed images when typing and dragging objects, and half preferred them while using the map.
These results suggest that users' preference for LucidTouch and pseudo-transparency depends on the application. Baudisch suspects that one of the first places the technology could appear is portable gaming, where games could be written specifically for it. More important, it could get people thinking differently about the potential of multitouch screens on handhelds.
"I think--zooming out for a moment--what's really exciting about this time is that for so many years, we've seen the dominance of the mouse," says Stanford's Klemmer. "I think that hegemonic situation is now over. What this points to for me is the idea that we're going to see this increased diversity of devices that adapt to different situations."
Source: http://www.technologyreview.com