
MacRumors

Mobile Magazine reports that Norwegian company Elliptic Labs is set to offer demonstrations of its new "touchless gesture" user interface for the iPad at CES 2011 early next month. The interface, which uses ultrasound to generate a field approximately one foot in front of and to either side of a docked iPad, would allow users to control their iPads simply by waving their hands in front of the screen.
We had a chance to catch up with CEO Stian Aldrin, who explained the following: "The idea is that you use touchless gestures to operate primary functions of a docked tablet in situations like when you have wet or greasy hands in the kitchen. In general, tablets are made for being handheld. When it is docked you are often walking or standing further away, and then using a finger on the screen involves a change of modality. Rather than bending down, leaning forward or picking it up, you can use larger movements a little bit further away to do things like volume up or next song without changing modality."
The project began life as a simple concept prototype for generic tablet devices, but has since evolved into an iPad-specific docking station. A demo video of one of the early prototype designs shows how simple gestures could be used to swipe between screens on the tablet, and while the demo shows only limited swiping functionality and a fair amount of lag in responding to gestures, the company has no doubt been working to expand and refine its technology to improve the user experience.

Apple has expressed its own interest in "Kinect-like" touchless gesture and voice control of such diverse devices as appliances and vehicles, recently acquiring the rights to a number of patents and patent applications related to the technology.

Article Link: Kinect-Like Gesture System for iPad to Be Demoed at CES
 
Well, I tried to simulate the experience by swinging my hand up, as if I was increasing the volume. Accidentally, I hit my desk lamp.
 
Interesting, but utterly pointless for such a small device. It's more effort to wave your hands about with a poor response rate than to just move a finger.

The Kinect gesture system on the other hand IS impressive and unique.
 
Demo?

Bad commercial. They used a poor green screen to composite and fake the footage, and it was poorly executed too. If you're going to fake it, at least make the product look like it works well.
 
Wirelessly posted

We need this because
 
Not sure if this is a real demo or a mockup video, but the lag is huge and hopefully the final product won't have that problem. Also, it needs to work with the real iOS interface, no one wants to switch interfaces all the time. It also doesn't look very precise, I doubt you can do anything like select menu items or drag sliders.
 
To all those saying "pointless," remember that early technology rarely resembles the final product.

Who looked at the early multi-touch examples 5 years ago (on large, heavy screens) and envisioned the iPad? Steve did.

So what's this going to turn into? It won't track hands since you need them to hold portable devices. My guess? Face and eye tracking on iPads and iPhones. They already use things like that in fighter jet cockpits. Adding it to an iPad could work.

Of course there will be games that use it, but what practical use could it have? Well, imagine the ability to look at any button in the software and 'click' it by blinking twice. You could keep your fingers on the keyboard while your eyes open and send e-mails. Add in head movement tracking (nods and shakes for yes and no?) and you have a computer that understands you in many more ways than any touch-based computer can.

It'll happen eventually, despite the fact that not everyone can imagine it now.
 
It's our most intuitive, magical, and revolutionary user interface...we like to call it AirGesture©.

-Steve Jobs
 
I have the most crazy idea...
Why not actually TOUCH the device?!?
I know this sounds outlandish, but maybe... :p
 
Maybe in a car... Otherwise, my arm already hurts just from looking at that guy swinging.

That was the first thing that came to my mind. Using the iPad in-car.

I always wanted to have a "carputer", long before the iPad was announced. Lately, I was thinking about getting an iPad and a vehicle-mount for it, but I was a little worried that I would have to hit a "small" interface element just to skip to the next track.

If they can make this gadget work, they'll have my money.
 
Wirelessly posted

I'm thinking Minority Report sans the gloves. Pretty sweet.
 
To all those saying "pointless," remember that early technology rarely resembles the final product.

Who looked at the early multi-touch examples 5 years ago (on large, heavy screens) and envisioned the iPad? Steve did.

So what's this going to turn into? It won't track hands since you need them to hold portable devices. My guess? Face and eye tracking on iPads and iPhones. They already use things like that in fighter jet cockpits. Adding it to an iPad could work.

Of course there will be games that use it, but what practical use could they have? Well, imagine the ability to look at any button in the software and 'click' it by blinking twice. You could keep your fingers on the keyboard while your eyes open and send e-mails. Add in head movement tracking (nods and shakes for yes and no?) and you have a computer that understands you in many more ways than any touch-based computer can.

It'll happen eventually, despite the fact that not everyone can imagine it now.

I agree. Historically, whenever a new technology emerges, it goes through the following steps:

1) Postulated and published as mock-ups in fictional accounts
2) Working system in a laboratory
3) Commercial sale but very expensive
4) Niche / Novelty implementations
5) Reliable enough for mission critical use
6) Mass market consumer use
7) Legacy supported
8) Obsolete

As far as motion recognition technologies...

In the '70s and '80s, the science fiction of that era did Step 1.

In the '90s it went from Step 2 to 3 in the go-go virtual reality days. FYI, almost all of Jaron Lanier's VPL patents have expired by now.

We are now seeing Steps 4 and 5 happening. In a few years, this will get integrated into common commercial devices and we will have the big money Steps 5 and 6.

I'm all for this type of technology as long as it can be turned off and every action can still be performed on the touch screen. What about power use?

Also, since this specific technology uses ultrasound, I'm curious to see how dogs react when they're nearby.
 
Wouldn't the Apple patent for embedding tiny camera elements between the pixels of the display panel itself handle both touch and near-touch gestures with one system?
 
Wirelessly posted

Kinect-like technology seems to be stirring things up a bit...
 