I might just be being lazy here, but this sounds like a pain in the arse!
I like the idea of a 3D virtual desktop environment, but I'd like to interact with it using the minimum of movement and effort, to increase speed and efficiency.
Moving your entire top half just to "look" behind a window, when the same thing could be accomplished with the slightest gesture of a finger (for example), sounds very inefficient (and quite gimmicky) to me.
We've got no choice but to move to look behind things in the real world, but in a virtual world we can take some shortcuts ;)
 
multiple viewers

So then what happens when there is more than one person looking at the display? Does it stop being 3D? Does it revert to "rotate using mouse" mode?
 
None of this is new, and Johnny isn't the only one doing tracking. I hope Apple's patent is rejected due to existing work, but given the patent office's record as of late... I'd imagine Apple would be fine with that too, since they could then incorporate this into their products without strings. It's odd that companies need to apply for stupid patents just so they don't get sued by trolls down the road.

BTW, the person that mentioned bouncing IR light off of retinas is dead on. IBM did this for their BlueEyes project. It's a really good approach to face tracking, as it gives you the eyes, from which you can reconstruct the rest of the face. A very good optimization.

BTW2, there are decades of existing products that use electromagnetic, vision, IR, magnetometer and acoustic technologies for tracking.

BTW3, Apple's patent is attempting to patent fish-tank VR which itself has decades of existing work.

BTW4, IR is one of the most common technologies used for mocap. It's just a lot more expensive (and more accurate) than the Wiimote.

BTW5, the Apple webcam is perfectly capable of doing what the Wiimote does. Just put a cheap IR emitter next to the camera and you've got your own IR tracking system. Most webcams pick up IR light even though the human eye can't see it. In fact, if you have a Wii sensor bar, turn it on and hold it up to the webcam to see the individual IR emitters. Nifty, huh?
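To make that BTW5 idea concrete, here is a minimal sketch (Python with NumPy, my assumption, not anything from the thread) of the blob detection the Wiimote camera does in hardware: threshold the frame for bright IR spots and take a centroid. The frame here is synthetic; a real version would grab frames from the webcam.

```python
import numpy as np

def find_ir_blobs(frame, threshold=200):
    """Return (row, col) centroids of bright regions in a grayscale frame."""
    mask = frame > threshold
    if not mask.any():
        return []
    # Naive single-blob centroid; a real tracker would label connected
    # components to separate the sensor bar's two emitter clusters.
    rows, cols = np.nonzero(mask)
    return [(float(rows.mean()), float(cols.mean()))]

# Synthetic 240x320 frame with one bright "IR spot" centered at (60, 100)
frame = np.zeros((240, 320), dtype=np.uint8)
frame[58:63, 98:103] = 255
print(find_ir_blobs(frame))  # -> [(60.0, 100.0)]
```

With two such centroids (like the sensor bar's two clusters), the apparent separation gives distance and the midpoint gives head position, which is essentially what Johnny Lee's demo computes.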
 
Right! Because all Apple screens always have one single user in front of them, never multiple users. There will never be a situation where several users want to look at the screen together, discussing what they see.

Incidentally, this is why glossy, glassy, glary screens work so great for Apple: there is always a single user, and he can easily adjust his position to minimize glare. (But wait a minute: what if he wants both to duck from glare and to see the 3D object from a different angle?)

</dripping-bitter-sarcasm>


I use 4+ displays with my regular home business Mac. They are for my customers' viewing as well as mine. Very seldom am I centered on any of the screens. Add to this the fact that I make my living writing my programs in Excel & running those programs. Graphs & images have not been part of what I do. I also am not a game player.

As stated above, I would have little or no use for this technology. But for those that do, it would be great to have more choices. My wife ran AutoCAD for 15 or so years & never had any use for 3D. Most CAD & drawing people do not use or need 3D with movement. This is a very narrow use segment. And bringing things to the point of 3D movement happens even less of the time. But it could be just like some other technologies: little need or use comes until someone finds a very useful product or program that really needs those features. Maybe I'll find some use for many of these new technologies. Remember, I use a spreadsheet for over 95% of my work. For those that can remember, it was the spreadsheet that really made the early microcomputers useful. So I'm working 30 years in the past, but I'm doing it on a 3+ year old Intel Mac Pro. As they say, where there is life there is hope.
 
Even easier than moving the user's position relative to the display would be moving a tablet's position relative to the user.
 
So then what happens when there is more than one person looking at the display? Does it stop being 3D? Does it revert to "rotate using mouse" mode?

How about some sort of facial recognition technology that could recognize the operator and respond to only that person? Everyone else in view of the camera would just be background objects.
 
That guy now works for Microsoft and is involved with Project Natal. If they utilise this tech in Natal, it could provide some seriously cool gaming.
 
None of this is new, and Johnny isn't the only one doing tracking. I hope Apple's patent is rejected due to existing work, but given the patent office's record as of late... I'd imagine Apple would be fine with that too, since they could then incorporate this into their products without strings. It's odd that companies need to apply for stupid patents just so they don't get sued by trolls down the road.

BTW2, there are decades of existing products that use electromagnetic, vision, IR, magnetometer and acoustic technologies for tracking.

BTW3, Apple's patent is attempting to patent fish-tank VR which itself has decades of existing work.

The patent is for Apple's specific implementation of the technology, not the concept. Existing work (known as prior art) may or may not have anything to do with it.
 
Holy cow, this would rock my world!

...It's a shame I use a Samsung display!
 
The patent is for Apple's specific implementation of the technology, not the concept. Existing work (known as prior art) may or may not have anything to do with it.

Exactly. And prior art covers what the claims in Apple's patent seem to propose. Just going by the selected quotations from the patent, there does not seem to be anything new. So at least these claims will/should be denied, and other claims, not quoted by MacRumors, might be accepted. We'll see.

My hope is that they are denied as part of common general knowledge, so that Apple can use them and so can other companies. Apple would then have legal ground to defend against any patent troll at that point.
 
hope they don't make this standard

i would just feel like a right idiot trying to look past the screen....
 
Already done on the Mac

Something similar using the iSight camera for head tracking was already implemented for molecular visualization:

http://molviz.cs.toronto.edu/molviz/

(check out the second video on the page, starting at 1:47)

Here, the user's head is tracked and the display updated accordingly. They also implemented an 'acceleration' mode where a small head movement induces a larger movement of the 3D object. This makes it easier to see around objects.
 
Huh. I wonder why APPLE is filing a patent on this. It's really not as though they invented it. If I were Johnny Lee, I'd be PISSED... unless, of course, I already patented it.
 
they are really late in the game here

Agreed and agreed. I really hope we'll stick to more conventional displays, albeit using more advanced techniques.

Back in the '94-'95 time frame I met with several companies that were creating, and getting patents for, 3D display systems... Yes, those patents could be expiring now, but from the looks of what Apple is proposing, they have not done the least bit of homework on the past advances in this area.

The most promising display I saw was a lenticular LCD display (see http://en.wikipedia.org/wiki/Lenticular_printing for a general description of 'lenticular'), where multiple LCDs were integrated into a single surface, with little transparent beads positioned over each LCD pixel cluster. Depending upon where your head was positioned, you'd see a different pixel through each bead, and each bead had something like 16 pixels beneath it, making the display require 16 different 'views' per displayed frame.

That system works for multiple viewers too. Much better tech than the naive concept described in this article...
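For illustration, here is a rough sketch of how a 16-view lenticular panel like the one described above maps a viewer's angle to one of its sub-pixel views. The 40-degree total viewing cone is a made-up number for the example, not a spec from any real product:

```python
def view_index(viewer_angle_deg, num_views=16, cone_deg=40.0):
    """Map the viewer's horizontal angle off screen-normal to one of
    num_views discrete views refracted by each lenticular bead."""
    half = cone_deg / 2.0
    # Clamp to the viewing cone; outside it you just see an edge view.
    a = max(-half, min(half, viewer_angle_deg))
    frac = (a + half) / cone_deg          # 0.0 (far left) .. 1.0 (far right)
    return min(num_views - 1, int(frac * num_views))

print(view_index(0.0))    # straight on -> a middle view (8)
print(view_index(-20.0))  # far left edge of the cone -> view 0
```

Because each eye sits at a slightly different angle, adjacent views land on different eyes, which is how the same panel serves stereo depth to several viewers at once without any tracking.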
 
That would be very fun in Time Machine. ;-)

But seriously, this seems pretty awesome. I imagine floating palettes and inspector windows (you know, the ones you always have to shift around) floating very high, so you can easily peek underneath them to get your work done.
 
That would be very fun in Time Machine. ;-)

But seriously, this seems pretty awesome. I imagine floating palettes and inspector windows (you know, the ones you always have to shift around) floating very high, so you can easily peek underneath them to get your work done.

Yeaaaah, but it could also be REALLLLY annoying, like if you are trying to find something that's hidden just a little bit off the edge of the screen and you have to lean waaaay over to get it. ...Or if someone else is in front of the computer as well and it suddenly shifts to their perspective... egh.
 
Hey Apple, how about using infrared the same way that Johnny does with the Wii... but bounce and detect the infrared from the back of my retina... you know... the dreaded red-eye effect... put it to good use for location tracking... You can get all the info that Johnny did... straight from the noggin of the iPhone user... and the infrared [at a Johnny-be-good wavelength] won't even be noticed by the viewer... but the detector will work on the phone... and use the location, orientation, and distance data... applied to, say, iTouch/iPhone/Tablet/Slate/Ikea Catalog type devices.... :D

:apple:

That would double as an ambient light sensor too:
i.e. the smaller the red eyes, the brighter the user's brain thinks the environment is.

Doesn't glass trap IR (you know, the greenhouse effect)? So that might not work for people who wear glasses.

I would have thought they would get this to work with the camera-pixel screen (patent app from a couple of years back): like an extension of the visual multi-touch systems, where what the computer sees as the user's head shape controls perspective, and what it sees as hands controls the interface.

Now if the computer sees the user's eyes as well, like using red-eye, that would give it greater accuracy.

I do wish they would stop teasing us with cool stuff and bring some of it to market soonish.
 
I think this is cool technology that would be awesome at times and very annoying at others. I would probably want to turn it off for a lot of things.
 
I think this is cool technology that would be awesome at times and very annoying at others. I would probably want to turn it off for a lot of things.

It's really going to depend on the interface and API designers to get it right.
I can imagine, in palette-intensive programs like interface design, graphic design, or CAD, palettes that remain on screen but float high enough above the work that, if I'm looking straight down the middle, all I see is my project; but as I move my visual focus out to the edge of the screen where I expect the palette to be, it floats into place, moving the focus of the application with it.

That will be the tricky bit, and the bit that will make or break this on productive systems instead of just something fun for games: knowing when the user wants the control focus (keyboard and mouse) to move from the working window to, say, an inspector palette.

Get that right and it will feel so natural people won't really need to learn how to use it. Get it wrong and it'll get turned off in 5 minutes.

But minor tweaks could be had with resolution independence. Imagine that when learning a new application, or new features after a major update, I might bring a palette forward so I can read the labels clearly, even if I have to scroll more*; then, as I get used to the features and the layout and where I expect things to be, I can push the palette back, making it more information-dense.

*Now, if the computer can see where I'm looking on screen, maybe I don't need to scroll at all: if I'm looking at the bottom edge, scroll; if my eyes start moving up, I've found the item I want, so stop scrolling.
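That footnote is a simple per-frame heuristic; sketched in Python (the 0.9 edge threshold and the scroll speed are made-up numbers, and gaze y is assumed normalised, 0 = top of screen, 1 = bottom):

```python
def scroll_step(gaze_y, prev_gaze_y, edge=0.9, speed=10):
    """Return how many lines to scroll this frame, given the current
    and previous normalised vertical gaze positions."""
    moving_up = gaze_y < prev_gaze_y
    if gaze_y >= edge and not moving_up:
        return speed   # eyes parked at the bottom edge: keep scrolling
    return 0           # eyes moved up: the user found the item, stop

print(scroll_step(0.95, 0.95))  # -> 10 (keep scrolling)
print(scroll_step(0.85, 0.95))  # -> 0  (eyes moved up, stop)
```

A real implementation would need debouncing, since raw gaze estimates jitter; as written, a single noisy frame would stop or restart the scroll.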
 
It's really going to depend on the interface and API designers to get it right.
I can imagine, in palette-intensive programs like interface design, graphic design, or CAD, palettes that remain on screen but float high enough above the work that, if I'm looking straight down the middle, all I see is my project; but as I move my visual focus out to the edge of the screen where I expect the palette to be, it floats into place, moving the focus of the application with it.

That will be the tricky bit, and the bit that will make or break this on productive systems instead of just something fun for games: knowing when the user wants the control focus (keyboard and mouse) to move from the working window to, say, an inspector palette.

Get that right and it will feel so natural people won't really need to learn how to use it. Get it wrong and it'll get turned off in 5 minutes.

But minor tweaks could be had with resolution independence. Imagine that when learning a new application, or new features after a major update, I might bring a palette forward so I can read the labels clearly, even if I have to scroll more*; then, as I get used to the features and the layout and where I expect things to be, I can push the palette back, making it more information-dense.

*Now, if the computer can see where I'm looking on screen, maybe I don't need to scroll at all: if I'm looking at the bottom edge, scroll; if my eyes start moving up, I've found the item I want, so stop scrolling.

That sounds to me like when you Control-zoom the screen in OS X and the zoom follows the mouse pointer everywhere. That would really annoy me. Just because I look somewhere does NOT mean I want it to scroll there. It would need to read my thoughts to know whether or not that was my intent.

Also, what you are describing sounds like it would require not just head tracking (which is what this is), but also eye tracking. There are often times when we avert our gaze without moving our heads.
 