I was thinking about the technology built into the Vision Pro and where it might go in the future...
One of the questions was: in VR, if you don't build all sorts of controllers with specialized electronics into each device, how do you deal with occlusion? While thinking about the Mac/iPad and then the Apple TV, I think I have one possible solution that I could see Apple bringing to market. I am not the most hyped on voice control; speaking to no one I sometimes find tiring. I am also not a big proponent of adding touch to desktops, as I would find it tiring to reach out and touch a desktop display regularly, and on laptops it would make a mess of the screen. However, the technology in the Vision Pro would, I think, be useful in other devices if it could see my gestures without requiring me to lift my hand forward. What if the future is that they add it to Macs, Mac minis, and especially to Apple TV cameras, which they just added support to attach to the Apple TV? You could even network these sensor arrays (with or without vision, but definitely with LiDAR) wirelessly so that all devices have an 'eye' on your environment from different directions. When editing video on a Mac, you could make a circular motion with your hand just by raising your fingers up from the keyboard and roll the video forward, and things like that. If you had multiple devices using these sensor kits, you could gesture toward the device you are talking to. This would also potentially eliminate blind spots in the Vision Pro's forward view, for objects you are holding or for actually seeing your legs when looking down, which may be blocked from the Vision Pro... You could also use these sensors for other things like HomeKit.
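As a rough sketch of how a "circle your hand to scrub the timeline" mapping could work (everything here is hypothetical illustration, not any actual Apple API): take the fingertip positions a sensor reports over time, estimate the signed angle the path winds around its centroid, and convert that angle into frames to scrub.

```python
import math

def swept_angle(points):
    """Total signed angle (radians) a fingertip path sweeps around
    its centroid; positive means counter-clockwise."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    angles = [math.atan2(y - cy, x - cx) for x, y in points]
    total = 0.0
    for a0, a1 in zip(angles, angles[1:]):
        d = a1 - a0
        if d > math.pi:        # unwrap across the -pi/+pi boundary
            d -= 2 * math.pi
        elif d < -math.pi:
            d += 2 * math.pi
        total += d
    return total

def frames_to_scrub(points, frames_per_turn=120):
    """Map the swept angle to a scrub amount.
    frames_per_turn is a made-up tuning constant."""
    return round(swept_angle(points) / (2 * math.pi) * frames_per_turn)

# One full counter-clockwise circle traced by the fingertip:
loop = [(math.cos(i * math.pi / 4), math.sin(i * math.pi / 4)) for i in range(9)]
print(frames_to_scrub(loop))  # one full turn -> 120 frames
```

Tracing the loop in the other direction yields a negative count, so the same gesture naturally scrubs backward as well as forward.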
The more I think about it, I would not be surprised to see Apple come out with a sensor kit / camera of their own for the Apple TV in the coming year or two, with additional sensor kits for placing around the house that communicate wirelessly between them... plus more support for it on the Apple TV, and a revamping of the HomeKit strategy.
There would be additional latency with that - the primary sensors for VR/AR mode would still be the ones on the headset, but the external kits would provide a model of what is happening around you... and if the sensor kits communicate over a networked approach (each with a built-in silicon unit that blends compute and sensor input, so there is enough compute power within each kit), the latency should be manageable.
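One generic way latency like that is typically managed (a sketch of a standard technique, not anything Apple has described): each sensor kit timestamps its readings, and the receiving device extrapolates the last known position forward to "now" using the velocity implied by the most recent samples.

```python
def extrapolate(samples, now):
    """Predict the current position from stale, timestamped readings.
    samples: list of (timestamp_seconds, position) tuples, oldest first."""
    (t0, p0), (t1, p1) = samples[-2], samples[-1]
    velocity = (p1 - p0) / (t1 - t0)   # units per second
    return p1 + velocity * (now - t1)  # dead-reckon forward to `now`

# A hand moving at 0.5 units/s, whose last reading arrived 40 ms ago
# (1-D positions at a 50 Hz sample rate, for simplicity):
readings = [(0.00, 1.00), (0.02, 1.01)]
print(extrapolate(readings, now=0.06))  # roughly 1.03
```

This is the same dead-reckoning idea games use for networked players; it breaks down on sudden direction changes, which is why the headset's own low-latency sensors would stay primary.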
It would allow for hand signals like the ones shown on the Vision Pro when using the Apple TV, in addition to voice. Wiring up the sensors around the house would also be a good foundation for a security system... one that works not on motion but on object recognition: it would be able to identify a human form versus cats and dogs rather than just triggering on movement. That, in addition to other HomeKit functionality, I think could be feasible, and it would produce additional accessory sales that would be a good source of revenue.
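The alerting logic for that kind of system could be as simple as gating on recognition labels rather than motion events. A minimal sketch (the labels and detection format are made up for illustration - not a real HomeKit or Vision API):

```python
# Alert only when object recognition reports a person, so pets wandering
# past the sensors never trip the alarm.
ALERT_LABELS = {"person"}

def should_alert(detections, min_confidence=0.8):
    """detections: list of (label, confidence) pairs from a recognition model."""
    for label, confidence in detections:
        if label in ALERT_LABELS and confidence >= min_confidence:
            return True
    return False

print(should_alert([("cat", 0.97), ("dog", 0.91)]))     # False: pets only
print(should_alert([("person", 0.93), ("cat", 0.88)]))  # True: human form seen
```

A confidence threshold like this is how you would trade false alarms against missed detections; the 0.8 here is an arbitrary placeholder.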
The advantage of being able to design your own silicon, rather than relying on existing silicon options, is that it would provide the ability to do some very interesting things.