The only thing that could sell me on the Apple Vision Pro would be this. I can't imagine having life-like video to look back on when family members and pets die. Outside of that, I have no use for this at all.
That's all I need it for. Wish I had it while my son was younger.
 
  • Like
Reactions: cocky jeremy
The only thing that could sell me on the Apple Vision Pro would be this. I can't imagine having life-like video to look back on when family members and pets die. Outside of that, I have no use for this at all.
This. Although I could see how the realism of such a thing could take a psychological toll.

It reminds me of the Black Mirror episode "Be Right Back" (S2E1). It's about a woman whose boyfriend passes away, and the technology exists to create a physical 'clone' of him. He looks like the original, but he is just a clone... a machine... a memory... She's not really interacting with her original boyfriend at all, and it slowly drives her insane...

Potentially dangerous stuff, watching life-like 'immersive' videos of people you miss dearly...
 
The 15 Pro has a dedicated chip; I think it's a power supply to run both cameras.

Running two cameras at once can be done with the DoubleTake app. It can capture discrete video from any two of the iPhone's three or four cameras. It is limited to 4K at 30fps; there is not even a 1080p 30fps option... The biggest problem is the lack of any stabilization on either of the selected cameras.

Given the warnings and notifications about stabilization built into the system, my guess is the 1080@30 limitation for Spatial Video has the most to do with stabilization. You need full 6-axis stabilization while keeping the two images perfectly aligned, with no wobbles or shifts in either camera, to avoid causing all kinds of viewing issues.
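
For anyone curious, dual capture like DoubleTake's presumably goes through Apple's AVCaptureMultiCamSession API (that it uses exactly this path is my assumption). A minimal sketch, with device pairing and error handling simplified; note stabilization is set per connection, which is exactly where keeping two streams aligned gets hard:

```swift
import AVFoundation

// Minimal sketch of dual-camera capture via AVCaptureMultiCamSession.
// Error handling is simplified; returns nil if anything is unsupported.
func makeDualCameraSession() -> AVCaptureMultiCamSession? {
    guard AVCaptureMultiCamSession.isMultiCamSupported else { return nil }
    let session = AVCaptureMultiCamSession()
    session.beginConfiguration()
    defer { session.commitConfiguration() }

    // Wide + ultra-wide is the pair Apple uses for spatial video.
    let types: [AVCaptureDevice.DeviceType] = [.builtInWideAngleCamera,
                                               .builtInUltraWideCamera]
    for type in types {
        guard let device = AVCaptureDevice.default(type, for: .video, position: .back),
              let input = try? AVCaptureDeviceInput(device: device),
              session.canAddInput(input) else { return nil }
        session.addInputWithNoConnections(input)

        let output = AVCaptureVideoDataOutput()
        guard session.canAddOutput(output) else { return nil }
        session.addOutputWithNoConnections(output)

        // Wire this camera's video port to its own output.
        guard let port = input.ports(for: .video, sourceDeviceType: type,
                                     sourceDevicePosition: .back).first else { return nil }
        let connection = AVCaptureConnection(inputPorts: [port], output: output)
        // Stabilization is applied per connection; keeping two independently
        // stabilized streams aligned is the hard part described above.
        if connection.isVideoStabilizationSupported {
            connection.preferredVideoStabilizationMode = .standard
        }
        guard session.canAddConnection(connection) else { return nil }
        session.addConnection(connection)
    }
    return session
}
```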


I imagine there are legions of people who want to spend $5,000 to watch 3D movies (videos, in modern parlance). That's what we used to call spatial video back in the day. The only difference is that the special glasses needed were free with your movie ticket. Not so much anymore, I guess.

I have a feeling more general 3D photography and video will drive up adoption and demand for display tech that does not require glasses or a headset, like light field displays.
 
I doubt the issue is about pixels or even bus bandwidth; it's more that there is only so much you can do with the relatively anemic point cloud generated by the LiDAR sensor on these units. (It is an assumption that they are using that data rather than simply doing some kind of differential image processing, but I suspect they are, given the kinds of applications advertised for Vision Pro.) The more pixels in the image, the finer the voxel size and the more polygons necessary... and that sensor is not the most robust one. At 1080p and 30fps there is plenty of time to overscan and process the relatively mismatched point-cloud resolution from the sensor.

Just a guess.
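
For a sense of how anemic that data is: the LiDAR depth map ARKit exposes is only 256x192 per frame, far below even 1080p. A quick sketch of reading it (whether the camera app actually uses this ARKit path is the same assumption as above):

```swift
import ARKit

// Sketch: read the LiDAR depth map ARKit exposes. Its resolution
// (256x192 on current hardware) is tiny next to a 1080p or 4K frame,
// which is the mismatch guessed at above.
final class DepthReader: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        guard ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) else { return }
        let config = ARWorldTrackingConfiguration()
        config.frameSemantics.insert(.sceneDepth)
        session.delegate = self
        session.run(config)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let depth = frame.sceneDepth else { return }
        let map = depth.depthMap  // CVPixelBuffer of Float32 meters
        let w = CVPixelBufferGetWidth(map)   // 256
        let h = CVPixelBufferGetHeight(map)  // 192
        print("LiDAR depth: \(w)x\(h) vs. camera frame \(frame.camera.imageResolution)")
    }
}
```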
Man, only 1080p 30fps? I guess the ultra-wide needs a resolution boost for the 24mm frame to have enough pixels for 4K. Hopefully it gets the bump to 48MP next year and we can get at least 4K 30fps :)
 
Running two cameras at once can be done with the DoubleTake app. It can capture discrete video from any two of the iPhone's three or four cameras. It is limited to 4K at 30fps; there is not even a 1080p 30fps option... The biggest problem is the lack of any stabilization on either of the selected cameras.

Given the warnings and notifications about stabilization built into the system, my guess is the 1080@30 limitation for Spatial Video has the most to do with stabilization. You need full 6-axis stabilization while keeping the two images perfectly aligned, with no wobbles or shifts in either camera, to avoid causing all kinds of viewing issues.

I have a feeling more general 3D photography and video will drive up adoption and demand for display tech that does not require glasses or a headset, like light field displays.

I'm guessing it's related to the LiDAR resolution being what it is, hence especially the reduction in frame rate. It gives more time to overscan the LiDAR sensor inputs. Who knows, just a wild guess.
 
Will there be the ability to convert existing photos, printed or digital, to spatial photos down the road? Imagine old photographs with generative AI able to compute and complete them in 3D space. Just thoughts and ideas. Probably have to wait till 2024/25.
 
This is going to lead to some sad Minority Report / Strange Days scenes where people sit around drinking at night trying to make their kids and exes come back.
Vision Pro: Designed for divorced men with estranged children. 🤣😄


Just kidding. Don't take the joke seriously. I think visionOS will rejuvenate the Mac lineup. :apple::)
 
I doubt the issue is about pixels or even bus bandwidth; it's more that there is only so much you can do with the relatively anemic point cloud generated by the LiDAR sensor on these units. (It is an assumption that they are using that data rather than simply doing some kind of differential image processing, but I suspect they are, given the kinds of applications advertised for Vision Pro.) The more pixels in the image, the finer the voxel size and the more polygons necessary... and that sensor is not the most robust one. At 1080p and 30fps there is plenty of time to overscan and process the relatively mismatched point-cloud resolution from the sensor.

Just a guess.

This is a very good take. I did take some video with my finger over the LiDAR sensor and didn't get an error. You also get the "move camera farther away" warning with the LiDAR sensor covered, similar to Portrait mode.

Another thing I wonder is whether the LiDAR data is baked into the frames or preserved as a separate track that Vision Pro uses to reconstruct a scene. If the latter, imagine the computer-vision training implications.

Also, you make the best guess I've seen so far for why the video is capped at 30fps. On top of needing to stabilize the frames, 30fps helps a lot with judder. I sort of wish you were allowed to underclock the display to 60fps as long as you're seated in a stationary environment. 60 does feel choppy on the Quest 3.
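
One way to answer the baked-in vs. separate-track question would be to enumerate the tracks of a captured .mov with AVFoundation. A quick sketch (what the track layout actually looks like is exactly the open question):

```swift
import AVFoundation

// Sketch: dump every track of a captured spatial video to see whether
// depth/LiDAR data travels as its own track or is baked into the frames.
func fourCC(_ code: FourCharCode) -> String {
    let bytes = [24, 16, 8, 0].map { UInt8((code >> $0) & 0xFF) }
    return String(bytes: bytes, encoding: .macOSRoman) ?? "\(code)"
}

func dumpTracks(of url: URL) async throws {
    let asset = AVURLAsset(url: url)
    for track in try await asset.load(.tracks) {
        let descs = try await track.load(.formatDescriptions)
        let subtypes = descs.map { fourCC(CMFormatDescriptionGetMediaSubType($0)) }
        print("track \(track.trackID): \(track.mediaType.rawValue) \(subtypes)")
    }
}
```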
 
  • Love
Reactions: bmustaf
Has anyone found out if the video file can be used with an Oculus Quest?
[Edit to add link to Apple Document, and observations so far:]

That's what I want to know too; compatibility with other video playback software and VR headsets.
It's an extension of the HEVC codec (in a MOV/MP4 container). Apple has documented the format/extension here:
Apple HEVC Stereo Video

My primary interest in potentially purchasing any Apple spatial-computing device will be related to spatial photos/videos, and I'm not investing in it unless those memories can be moved out and viewed outside the Apple ecosystem...

To that end, I just updated and took a quick sample spatial video using iOS 17.2 beta 2.
So far:
- Photos recognizes it as a spatial video with a little tag but, as expected, does nothing else...
- QuickTime / VLC play it as normal and don't show anything in detailed information or metadata, etc...
- the visionOS SDK beta plays it as a spatial video, but obviously in a 2D window on macOS...
- bino3d ( https://bino3d.org/ ), latest from GitHub, can't find any right-eye data in the file...
- HereSphere (v0.98) on the Quest 2 can't find any right-eye data in the file...
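
Those last two results make sense if, as the Apple doc linked above describes, both eyes are carried as MV-HEVC layers inside a single video track: there's no second track or side-by-side frame for those players to find. A sketch of checking for the stereo layers on macOS 14, assuming the CoreMedia stereo-eye format-description extensions from Apple's spatial-video material (treat the exact key names as my best understanding, not gospel):

```swift
import AVFoundation
import CoreMedia

// Sketch: check whether a .mov advertises left+right stereo eye views
// in its video format description (MV-HEVC spatial video).
// Assumes the stereo-eye extensions added in macOS 14 / iOS 17.
func hasStereoEyes(url: URL) async throws -> Bool {
    let asset = AVURLAsset(url: url)
    guard let track = try await asset.loadTracks(withMediaType: .video).first else {
        return false
    }
    for desc in try await track.load(.formatDescriptions) {
        let left = CMFormatDescriptionGetExtension(
            desc, extensionKey: kCMFormatDescriptionExtension_HasLeftStereoEyeView)
        let right = CMFormatDescriptionGetExtension(
            desc, extensionKey: kCMFormatDescriptionExtension_HasRightStereoEyeView)
        if left != nil, right != nil { return true }
    }
    return false
}
```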
 
That's what I want to know too; compatibility with other video playback software and VR headsets.
I really hope Apple hasn't come up with a closed, proprietary format for this... even if it's a new format, it needs to be open/standards-driven so that other software developers can implement decoders/viewers for it.

My primary interest in potentially purchasing any Apple spatial-computing device will be related to spatial photos/videos, and I'm not investing in it unless those memories can be moved out and viewed outside the Apple ecosystem...

To that end, I just updated and took a quick sample spatial video using iOS 17.2 beta 2, and I'll do some investigating on macOS with:
- Photos / Quicktime / VLC
- the VisionOS SDK beta
- bino3d ( https://bino3d.org/ )
- HereSphere on the Quest 2

...and see what we have.
Canon released a stereoscopic lens for VR content. Hoping Apple will support stuff like this for higher-quality content.


 
  • Like
Reactions: marshy
God, the constant dilemma of whether to capture the moment in 1080p spatial video or plain 4K 60.

Can't we have an option to capture the current best of both modes? I don't care if there's a long, delayed background-processing step after the capture.
 
God, the constant dilemma of whether to capture the moment in 1080p spatial video or plain 4K 60.

Can't we have an option to capture the current best of both modes? I don't care if there's a long, delayed background-processing step after the capture.
I think it's due to one lens needing to crop in to match the zoom of the other lens to create the stereoscopic view. I wouldn't mind an option to capture the spatial video while also saving the separate source videos, like how HDR saved the image stack along with the combined shot when that feature started.
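
If that's right, the numbers roughly work out. A back-of-the-envelope sketch (my assumed figures: 13mm-equivalent ultra-wide, 24mm-equivalent main, ~4032px-wide 12MP sensor):

```swift
// Back-of-the-envelope: crop the ultra-wide to match the main camera's
// field of view. Focal equivalents and sensor width are assumptions.
let ultraWideEquivMM = 13.0   // ultra-wide focal length equivalent
let mainEquivMM = 24.0        // main camera focal length equivalent
let ultraWideWidthPx = 4032.0 // width of a 12MP (4:3) sensor

let cropFactor = mainEquivMM / ultraWideEquivMM   // ~1.85x
let usableWidthPx = ultraWideWidthPx / cropFactor // ~2184px
print("~\(Int(usableWidthPx))px usable width after crop")
// ~2180px clears 1920 (1080p) but falls well short of 3840 (4K),
// which fits both the 1080p cap and the hope for a 48MP ultra-wide.
```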
 
  • Like
Reactions: spaxxedout
I think it's due to one lens needing to crop in to match the zoom of the other lens to create the stereoscopic view. I wouldn't mind an option to capture the spatial video while also saving the separate source videos, like how HDR saved the image stack along with the combined shot when that feature started.

Great suggestion! Or maybe the telephoto lens could be running at full resolution at the same time.

I do think they will adopt a traffic light pattern eventually to get more IPD out of the lenses.
 
  • Like
Reactions: verniesgarden