visionOS 2: Spatial Personas Can Touch Fingers, High Five, Fist Bump Each Other With Visual and Audio Feedback

Glasses will always be less convenient than just putting a screen on the device you want to use. Any tech product from any company that isn't more convenient and more useful than the alternatives will fail.

The point of wearables is that they're portable devices. But most people don't want or need a computer monitor attached to their face when they're away from their computer. If they're at their computer, they can just use a regular monitor.

You're missing the idea behind spatial computing.

It's about technology escaping screens and devices altogether.

The "computer monitor attached to your face" is just a temporary measure during the technology's infancy, and in that form it will of course never reach the ubiquity of other wearables.

Eventually it will be as unobtrusive as wearing a smartwatch: you'll just have a pair of light glasses on your face, and the world around you will be the device. It will add so many conveniences to your life that you'd never leave home without them.
 
You'll still need a second device for when the glasses are charging, one that will likely be doing most of the processing anyway, and it will need to have a screen - making the glasses redundant for most tasks.

If they're going to be 'lightweight', they'll have terrible battery life. The limitations of physics haven't gone anywhere: you still need lens/diopter thickness and weight to make the screen look like it's far away, and processing power still has to be traded against battery life.
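
For a rough sense of that weight-versus-runtime squeeze, here's a back-of-envelope sketch; every figure in it is an assumption chosen for illustration, not a spec of any shipping or rumoured product:

```python
# Back-of-envelope runtime estimate for "light" AR glasses.
# All numbers below are illustrative assumptions, not specs of any real device.
battery_mass_g = 40             # assume ~40 g of cells fits in a glasses frame
energy_density_wh_per_kg = 270  # roughly typical for lithium-ion cells
power_draw_w = 6                # assumed draw for displays, SoC, sensors, radios

battery_wh = (battery_mass_g / 1000) * energy_density_wh_per_kg
runtime_hours = battery_wh / power_draw_w

print(f"Battery capacity: {battery_wh:.1f} Wh")     # ~10.8 Wh
print(f"Estimated runtime: {runtime_hours:.1f} h")  # ~1.8 hours
```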

There's also the problem of focal distance. If the eyes are converging and focusing on a real-world object in the distance, they cannot simultaneously focus on a virtual object less than an inch from the face; one of those things is going to be in focus and the other is going to be blurred. This is why the Vision Pro (and every other VR headset) has a light seal around the face and video pass-through, rather than transparent screens with real-world see-through, which is technology that's available today.
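
To put a number on that conflict: accommodation demand is usually expressed in diopters, the reciprocal of the focal distance in meters. A minimal sketch with assumed distances, not measurements of any headset:

```python
# Accommodation demand in diopters = 1 / focal distance in meters.
# Distances are illustrative assumptions, not measurements of any device.
def diopters(distance_m: float) -> float:
    return 1.0 / distance_m

real_object = diopters(5.0)   # an object ~5 m away in the room -> 0.2 D
bare_screen = diopters(0.02)  # a screen ~2 cm from the eye     -> 50.0 D

print(f"Real-world object:   {real_object:.1f} D")
print(f"Bare near-eye panel: {bare_screen:.1f} D")
# The eye can't meet both demands at once, which is why headset optics
# re-image the panel at a comfortable virtual distance (adding bulk and weight).
```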

VR/AR headsets with any meaningful functionality are always going to be bulky and heavy (because physics), and they're always going to have very limited battery life, meaning you'll always have to carry another handheld device with you as a backup. You'll also need to carry an input device. People want privacy and don't want to be talking out loud to a headset, and pinching or typing on an invisible floating keyboard doesn't allow for any real productivity. Which, again, means you're carrying around a second device or a keyboard - in which case, just put the screen on that device.

Not everything needs to be a wearable device. Glasses aren't going to be a 'put it in your pocket when you don't want to wear it' kind of device like a phone. They will always be a less convenient, more expensive, limited use-case, unnecessary device - the very things that define 'gimmick'.
 
You really, REALLY aren’t understanding the difference between deciding to do something with an app then pulling your phone out, versus all your chosen apps just being active and functional all the time all around you without you thinking of it.

Both require having devices with you, but it's not what you carry that's different. It's the vast difference in experience: how you use, interact with, and even THINK about the device. Do you not get that?

The difference between in-screen technology and that which just exists passively in the world around you is MASSIVE.

I think you can’t understand this because you lack the imagination to see spatial computing beyond how you see it working right now, which is basically minute-one of day-one of spatial computing’s infancy.
 
I wish Apple would, within its Accessibility features, allow someone like myself with a 'sleepy eye' condition (where one of my eyes is normal and one has a difficult time) to use a headset like this. I cannot use binoculars or any device with two eyepieces. I have tried the PlayStation virtual reality system and I couldn't get it to work at all, so I reluctantly had to return it. I am one of many millions of people around the world with this condition, and I'm sure that Apple could bring in some sort of adaptation for me and all the others.
 