
bkkcanuck8

macrumors 6502a
Original poster
Sep 2, 2015
664
416
I was thinking about the technology built into Vision Pro and what would be the future of it...

One of the questions was: in VR, if you don't build all sorts of controllers with specialized electronics into each device, how do you deal with occlusion? Thinking about the Mac/iPad and then the Apple TV, I think I have one possible solution that I could see Apple bringing to market. I am not the most hyped on voice control; speaking to no one is something I sometimes find tiring. I am also not a big proponent of adding touch to desktops, as I would find it tiring to reach out and touch a desktop display regularly, and on laptops it would make a mess of the screen.

However, the technology in the Vision Pro would, I think, be useful in other devices if it could see my gestures without requiring me to lift my hand forward. What if the future is that they add it to Macs, Mac minis, and especially Apple TV cameras (Apple just added support for attaching a camera to the Apple TV)? You could even network these sensor arrays (with or without vision, but definitely with LiDAR) wirelessly, so that all devices have an 'eye' on your environment from different directions. When editing video on a Mac, you could make a circle motion with your hand, just by raising your fingers up from the keyboard, and basically roll the video forward, and things like that. If you had multiple devices using these sensor kits, you could make a motion toward whichever device you are talking to. This could also eliminate the Vision Pro's forward-facing blind spots: objects you are holding, or looking down to see your legs for real, which may be blocked from the headset's own cameras. You could also use these sensors for other things like HomeKit.
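To make the circle-to-scrub idea concrete, here is a minimal sketch of how a device might turn a tracked fingertip path into a scrub amount. Everything here is hypothetical (the function name, the point format, the idea of mapping swept angle to frames); it only assumes the sensor kit can deliver a short history of 2D fingertip positions.

```python
import math

def scrub_delta(points):
    """Given a short history of (x, y) fingertip positions circling
    a centroid, return the accumulated angle swept, in radians.
    Positive = counter-clockwise, negative = clockwise. A video
    editor could map this to frames scrubbed forward or back."""
    if len(points) < 3:
        return 0.0
    # Estimate the circle's center as the centroid of the samples.
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    angles = [math.atan2(y - cy, x - cx) for x, y in points]
    total = 0.0
    for a0, a1 in zip(angles, angles[1:]):
        d = a1 - a0
        # Unwrap jumps across the +/- pi boundary.
        if d > math.pi:
            d -= 2 * math.pi
        elif d < -math.pi:
            d += 2 * math.pi
        total += d
    return total
```

One full counter-clockwise loop returns roughly 2π, so a player could treat, say, each π/8 of sweep as one frame of scrubbing.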

The more I think about it, the less I would be surprised to see Apple come out with a sensor kit / camera of its own for the Apple TV in the coming year or two, along with additional sensor kits for placing around the home that communicate wirelessly with one another... plus more support for it on Apple TV, and a revamping of the HomeKit strategy.

There would be additional latency with that, but the headset's own sensors would remain primary for VR/AR mode; the external kits would provide a model of what is happening outside the headset's view, including behind you. And if each sensor kit includes built-in silicon blending compute and sensor input (each kit should be able to provide enough compute power on its own), that latency should be manageable.
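On the latency point, one standard way to hide network delay from a wireless sensor kit is dead reckoning: each kit timestamps its samples and reports a velocity estimate, and the receiving device extrapolates the tracked point forward to the present. A minimal sketch (the function and tuple format are my own illustration, not any Apple API):

```python
def extrapolate(sample_time, position, velocity, now):
    """Dead-reckon a tracked point forward to 'now' so that a few
    tens of milliseconds of wireless latency from a remote sensor
    kit are hidden from the consuming device. position and velocity
    are (x, y, z) tuples; times are in seconds."""
    dt = now - sample_time
    return tuple(p + v * dt for p, v in zip(position, velocity))
```

A hand moving at 1 m/s reported 100 ms ago would be rendered 10 cm ahead of the stale sample, which is usually close enough for coarse gestures even if too rough for fine manipulation.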

It would allow for hand signals like the ones shown for the Vision Pro when using the Apple TV, in addition to voice. Wiring up the sensors around the house would also be a good foundation for a security system: rather than triggering on motion, it would work on object recognition, able to distinguish a human form from cats and dogs. That, in addition to other HomeKit functionality, I think could be feasible, and it would produce additional accessory sales, which would be a good source of revenue.
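The difference between a motion trigger and a recognition trigger is easy to sketch. Assuming the sensor kit's classifier emits labeled detections with confidence scores (the labels and threshold below are placeholders, not a real Apple API), the alarm logic only fires on a person:

```python
def should_alert(detections, armed=True):
    """Decide whether to raise a security alert based on recognized
    object classes rather than raw motion. 'detections' is a list of
    (label, confidence) pairs from some upstream classifier; only a
    reasonably confident 'person' detection triggers, so pets moving
    through the room are ignored."""
    if not armed:
        return False
    return any(label == "person" and conf >= 0.6
               for label, conf in detections)
```

A plain motion sensor would fire on all three of a cat, a dog, or a person; this fires only on the last.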

The advantage of being able to design your own silicon, rather than relying on existing off-the-shelf options, is that it would let Apple do some very interesting things here.
 
In a way, Microsoft already tried this with the Kinect sensor accessory for the Xbox 360. The technology at the time didn't have the resolution for hand tracking, but the camera was positioned below the TV to get a good sensing volume of the living room, and it delivered reliable skeletal tracking of the body. Like the Vision Pro, it leveraged the concept of the body as a controller, and there were a number of quite innovative game experiences for it. But despite a big launch, it wasn't a success.

Full-body tracking has some uses, but it isn't a huge enabler on its own. It will open some doors; how well it does will depend on how it is leveraged in the context of augmented reality.
 