
computermilk

macrumors 6502
Original poster
Jun 10, 2008
Has anyone else been watching the review videos?

One thing I noticed (watch iJustine's) is that she shows her hands a lot.

The way it shows your hands, I think, is by capturing video of your hand and artificially green-screening the world around it, sort of like how you can clip subjects out of photos and videos right now. But it looks really blurry, leaving a blur effect between your fingers and around the hands.
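For reference, here's roughly what that kind of matting looks like with Apple's public Vision framework. This is just a sketch of the concept, not what visionOS actually runs (Apple hasn't said):

```swift
import CoreVideo
import Vision

// A sketch of the "green screen" idea using Apple's public Vision framework.
// VNGeneratePersonSegmentationRequest is the closest shipping analogue for
// matting a person out of camera video; visionOS may do something different.
func handMatte(for frame: CVPixelBuffer) throws -> CVPixelBuffer? {
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .balanced  // .fast trades edge detail for latency;
                                      // soft edges like that would show up as
                                      // blur between the fingers
    request.outputPixelFormat = kCVPixelFormatType_OneComponent8

    let handler = VNImageRequestHandler(cvPixelBuffer: frame, options: [:])
    try handler.perform([request])
    // The result is a low-resolution alpha matte. Compositing passthrough
    // video through a soft, upscaled matte is what produces halos like this.
    return request.results?.first?.pixelBuffer
}
```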

I feel like this is a letdown for me and kills the immersion. :(

Here’s where I find this odd, however: during the Persona creation steps they have you show and record your hands, front and back. Why do they do this? Just for FaceTime calls?

I really feel like they should just 3D-render your hands in front of you, especially since they already have you record what they look like, and use those rendered hands in the world you see.

That way there would be no blur between the fingers or around the hands.
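visionOS already exposes the hand tracking you'd need for this. Here's a sketch using the real HandTrackingProvider API; the speculative part is binding the joints to a rigged hand mesh built from that Persona scan:

```swift
import ARKit
import simd

// Sketch of the suggestion above: read tracked hand joints and drive a
// rendered hand model instead of matting camera video. HandTrackingProvider
// is real visionOS API; mapping the joints onto a scanned Persona-style hand
// mesh is the speculative part.
func trackHands() async throws {
    let session = ARKitSession()
    let hands = HandTrackingProvider()
    try await session.run([hands])  // needs an immersive space and permission

    for await update in hands.anchorUpdates {
        guard let skeleton = update.anchor.handSkeleton else { continue }
        // Every joint gives a transform you could bind to a rigged hand mesh,
        // so there'd be no matte and no blur between the fingers.
        let tip = skeleton.joint(.indexFingerTip)
        let tipInWorld = update.anchor.originFromAnchorTransform
            * tip.anchorFromJointTransform
        _ = tipInWorld  // drive the rendered hand rig from transforms like this
    }
}
```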
 

Attachments

  • IMG_0695.jpeg (550.5 KB)
The screen mirroring is at a lower resolution, yes, but that only strengthens my point: a problem like this will be even more of a problem at a higher resolution.
 
It isn't a "problem" till you try it in usage - the screen is 90hz so the slight offset will likely be barely noticeable. Everyone says the hand occlusion is incredible in actual usage.

Here is a simple test: go to the Apple Vision Pro website on an iPhone, view the headset in AR on your desk, and wave your hand in front of it. It looks amazing, and that is being calculated by an iPhone processor with one camera, in real time. The Vision Pro will be using multiple cameras fed into a dedicated sensor-processing unit.
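That iPhone test is almost certainly ARKit's person segmentation at work. This is the public switch an app flips to get it; presumably AR Quick Look enables the equivalent internally, though Apple doesn't document that:

```swift
import ARKit

// The public ARKit setting behind hand/person occlusion on iPhone. Whether
// AR Quick Look uses exactly this internally is an assumption.
func makeOcclusionConfiguration() -> ARWorldTrackingConfiguration {
    let config = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        // Depth-aware matting lets a real hand pass convincingly in front of
        // virtual objects: the effect described in the desk test above.
        config.frameSemantics.insert(.personSegmentationWithDepth)
    }
    return config
}
```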
 
This is a direct screen mirror of what the person sees, though.
Mostly. The Vision Pro is actually rendering two slightly different streams, one per eye, so things appear 3D to the wearer. Plus everything is getting downscaled. I assume some processing is done to "2D-ify" it quickly for streaming; that might be as simple as picking a single eye, but it could also be a third, separately sampled rendering. It is hard to say how much heavy lifting the R1 does straight to the displays that never gets sent back into the AirPlay stream.
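Purely to illustrate the single-eye guess: the cheapest way to "2D-ify" a stereo render would be to take one eye's frame and downscale it, something like this Core Image sketch (a hypothetical pipeline, nothing confirmed by Apple):

```swift
import CoreImage

// Speculation about the AirPlay mirror: if the headset renders a left/right
// pair, the cheapest "2D-ify" step is to reuse one eye and downscale it.
// Hypothetical pipeline; nothing here is confirmed by Apple.
func mirrorFrame(from leftEye: CIImage, scale: CGFloat) -> CIImage {
    // Reusing the left eye's frame avoids rendering a third view entirely.
    leftEye.transformed(by: CGAffineTransform(scaleX: scale, y: scale))
}
```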

Last thought on it: because of the depth in what's being rendered to the wearer, the hand cropping might be more convincing inside the headset than it seems. I assume the same is true for Personas.

That said, since it is so obvious when watching the captures, it is worth seeing what buyers and reviewers say about it in the coming days. We should know pretty quickly. To be clear, I am also open to the possibility that it is as "bad" as it looks on the stream, but I can also see a few reasons why it might not be.
 