Yep, I’m thinking LiDAR is playing a huge role in all of this. I was excited for the added hardware when it was first announced, but it seems as if it’s been under the radar ever since. (Pun intended. 😉)

Maybe. But then it seems weird that they wouldn't mention it; they could have made it sound much more interesting and made the Pro stand out even more. LiDAR feels a bit underused anyway.
 
They've gotta be using LiDAR for spatial video, right?

Two barely separated cameras don't give you a lot to work with, but when your phone can "see" the 3D positioning of everything in front of you, that seems like all the data you need for 3D video.
Sounds cool, but I'm not sure how that would work or how good it would look. You need the two separated cameras to capture two unique perspectives, and the LiDAR is very low-res, so what could it add?
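For what it's worth, even a very low-res depth map becomes useful once you upsample it to the camera's resolution, so every pixel gets at least a rough depth value. A toy Python sketch of just that step (the 256x192 resolution and everything else here is my assumption, not Apple's pipeline):

import numpy as np
import cv2

# Pretend the LiDAR gives a coarse 256x192 depth map in metres
lidar_depth = np.random.uniform(0.5, 5.0, (192, 256)).astype(np.float32)

# The video frame is much higher resolution, e.g. 1920x1080
frame_w, frame_h = 1920, 1080

# Upsample the depth to frame resolution; bicubic keeps it smooth.
# A real pipeline would use an edge-aware, RGB-guided upsampler.
dense_depth = cv2.resize(lidar_depth, (frame_w, frame_h), interpolation=cv2.INTER_CUBIC)

print(dense_depth.shape)  # (1080, 1920): one rough depth value per pixel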
 
This is a low-key big deal. Imagine when our phones can sync location & motion using UWB, so multiple people with iPhone Pros recording in spatial create a crowdsourced spatial video from multiple perspectives! 🤯
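Pure speculation, but if the phones ever shared a common time base (via UWB ranging or otherwise), the first stitching step could be as dumb as pairing frames by timestamp. Toy Python sketch; every name and number here is made up:

from bisect import bisect_left

def match_frames(times_a, times_b, tol=0.008):
    """Pair indices of frames captured within tol seconds of each other."""
    pairs = []
    for i, ta in enumerate(times_a):
        j = bisect_left(times_b, ta)
        for k in (j - 1, j):
            if 0 <= k < len(times_b) and abs(times_b[k] - ta) <= tol:
                pairs.append((i, k))
                break
    return pairs

# Two 30 fps recordings whose clocks differ by 4 ms
a = [n / 30 for n in range(300)]
b = [n / 30 + 0.004 for n in range(300)]
print(len(match_frames(a, b)))  # ~300 matched frame pairs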
 
As a Quest 2 owner, one of the main issues of VR is that there is barely anything to watch. Making the iPhone 15 Pro a camera for spatial video is a brilliant strategy. I can see why people would say it's stupid, but having stuff to watch is going to be a big issue for the success of Apple Vision Pro, and the iPhone 15 Pro is going to be a huge influx of content. Is it good content? Well, likely not right away. BUT if even some streamers and YouTubers start to put out stuff… it will be huge for Apple Vision, and also for Meta, let's be real.

All in all, Apple is doing everything right in regard to Apple Vision.

IF Apple continues like this, they will own the AR/VR content creation market in short order; there is no mainstream tech company out there with a successful/great 3D camera/editing solution as of now, one where you can buy a camera and get consistently good results. But maybe I am just not aware, and who knows how good the iPhone 15 Pro will actually be for spatial video.

However, I assume it will be OK. It's 2x 12 MP?… I imagine it might be a bit odd since the interpupillary distance is so small… I guess we will see.

Perhaps you can do some magic when capturing to emulate the proper interpupillary distance. It would likely involve a capture preset where you zoom in on the frame? Likely possible, I am just throwing out stuff off the top of my head here… Maybe it will work.
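Back-of-the-envelope on the zoom idea: cropping can't change the physical baseline between the lenses, but it does magnify the on-screen parallax, which may be the effect you'd actually want. Rough Python arithmetic with invented numbers:

# On-screen disparity (pixels) for a point at depth Z:
#   disparity_px = focal_px * baseline_m / Z_m
# A crop/zoom factor c scales focal_px, and therefore the parallax, by c.
focal_px   = 1500.0   # made-up focal length in pixels for the full frame
baseline_m = 0.020    # ~20 mm between the two lenses (a guess)
Z_m        = 2.0      # subject two metres away

full_frame = focal_px * baseline_m / Z_m
cropped    = (2.0 * focal_px) * baseline_m / Z_m   # 2x crop preset

print(full_frame, cropped)   # 15.0 vs 30.0 pixels of parallax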
 
Millions of iPhone 15 Pros will support the recording of spatial video even before the Vision Pro goes on sale. I have only one question: why aren't more people congratulating Tim Cook on this amazing product strategy?

Maybe because there is not enough detail on “How”?

One aspect that has me wondering: what is the frame rate? 4K60 already maxes out the pipeline, and now they want to do it in spatial?
This will be interesting.
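For what it's worth, some very rough pixel-throughput math (my assumptions, not Apple's specs; codecs, HDR, etc. ignored):

def pixels_per_second(w, h, fps, streams=1):
    return w * h * fps * streams

mono_4k60     = pixels_per_second(3840, 2160, 60)             # ~498 Mpx/s
spatial_guess = pixels_per_second(1920, 1080, 30, streams=2)  # ~124 Mpx/s, IF it's
                                                              # "only" two 1080p30 streams
print(mono_4k60, spatial_guess)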
 
I would honestly be more impressed to finally be able to shoot video with both cameras (front and back) at the same time, like Samsung. Or is there at least a third-party app that can do this?
Yes, there is definitely an app that does this. I have used it.
 
Maybe because there is not enough detail on “How”?

One aspect that has me wondering: what is the frame rate? 4K60 already maxes out the pipeline, and now they want to do it in spatial?
This will be interesting.
Apple has a real chance to be very influential on the trajectory of AR/VR aka spatial computing going forward…

Facebook was never a contender, but maybe they can become the "Microsoft" of spatial computing. However, considering they are relying on Android for their OS, they don't really have a chance. Google/Alphabet will eat their lunch and dip into the rich data from AR/VR for their business model.

Considering Apple: they are a 100% vertically integrated company with a multi-year lead in mobile silicon. Apple will have a good stretch of time where they can leverage their ARM advantage to put out crazy AR products, until at least 2027 when TSMC will feel the consequences of not getting the angstrom-era EUV lithography tech. But of course, TSMC is building fabs in the US to avoid that scenario…


Apple Vision is going to be a crazy technology around 2026, when nanolenses should be hitting the market. After that, all bets are off I guess, but Apple is positioning itself very well to succeed.
 
Capturing spatial video from cameras set that close together would look very weird; the rule is that you set your stereoscopic cameras to an interpupillary distance roughly matching your eyes.

But I do believe it is possible to capture a 12 MP frame, and if you zoom in enough it will be the equivalent of capturing with a bigger interpupillary distance. After all, the frame is 12 MP or more, and 4K is about 8 MP. There are enough pixels to achieve the effect, and the iPhone 15 Pro has the compute to even do some processing on the frames to make it look right.
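Quick pixel-budget check on that, with my own rough numbers:

# How much crop headroom is there, really?
sensor_mp = 12.0                  # 12 MP per lens
uhd_mp    = 3840 * 2160 / 1e6     # ~8.3 MP for 4K
fhd_mp    = 1920 * 1080 / 1e6     # ~2.1 MP for 1080p

print(sensor_mp / uhd_mp)   # ~1.4x: barely any room to crop at 4K
print(sensor_mp / fhd_mp)   # ~5.8x: plenty of room at 1080p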
 
They will also be trained to use hand gestures with the new Apple Watch feature. That was my first thought when they presented it.
🤔 Hm, there is also potential for combining watch and camera data to better track this gesture. The Watch could become an input device for the Apple Vision Pro. When you double-tap while the line of sight to your hand is blocked, the watch could still register a click.

QUICK: Patent this idea! 😁
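Half-joking Python sketch of what the fallback logic might look like (obviously not Apple code, just the idea):

# Hypothetical input fusion: trust the headset's hand tracking while it can
# see the hand, fall back to the watch's double-tap detection when it can't.
def resolve_click(hand_visible_to_headset, headset_saw_pinch, watch_detected_double_tap):
    if hand_visible_to_headset:
        return headset_saw_pinch
    return watch_detected_double_tap

print(resolve_click(False, False, True))   # hand occluded -> the watch still registers the click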
 
I think this is more than just stereoscopic photography. In the demo, it looked like you could move around and see people from different angles. It would definitely be better if the cameras were separated more, though, but I'm guessing they can use the two cameras to get textures from both sides of an object and use the depth sensor to make 3D models of objects in the shot.

Sounds super complicated. I highly doubt Apple would try to 3D-model the objects and then render the scene from that model.

Simulating a ~64 mm ocular distance on a pair of cameras ~20 mm apart sounds challenging. The LiDAR helps map nearby objects and thus might allow simulating the wider ocular distance by shifting close-by objects in the frame, but that poses the challenge of filling in the parts of the image occluded by those objects. Just look around and cover one of your eyes to see how much that 64 mm lets you see "around" objects. Maybe there is a way to do it without generating artifacts bad enough for a human to notice. My guess is that Apple just needs to "stretch" the gaps left by the occlusion of close-by objects and blur those objects enough, simulating an out-of-focus field, to fool the eye.
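That's essentially depth-image-based rendering. A crude Python sketch of the principle, assuming you already have a dense depth map (all numbers invented, and deliberately no z-buffering or hole filling, just to show where the gaps come from):

import numpy as np

def synthesize_view(image, depth, focal_px, extra_baseline_m):
    # Shift each pixel horizontally by its disparity to fake a wider baseline.
    # Gaps left behind by close objects stay at 0 and would need inpainting
    # or "stretching" in a real pipeline.
    h, w, _ = image.shape
    out = np.zeros_like(image)
    disparity = (focal_px * extra_baseline_m / depth).astype(int)  # in pixels
    for y in range(h):
        for x in range(w):
            nx = x + disparity[y, x]
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
    return out

# Toy scene: a near object (1 m) in the left two columns, far wall (10 m) elsewhere
img = np.arange(4 * 6 * 3, dtype=np.uint8).reshape(4, 6, 3)
dep = np.full((4, 6), 10.0)
dep[:, :2] = 1.0
print(synthesize_view(img, dep, focal_px=50.0, extra_baseline_m=0.044)[0])  # note the zeroed gap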
 
I wonder if spacing out the lenses more starting with the 14 Pro series was preparing for this. Was there another reason, other than possibly the flash element becoming larger?
 
Also, why is it not possible to smoothly transition between lenses when filming 4K60?
Yup, another major problem with the iPhones. (Dunno if the 15P has fixed this.) Basically, BEFORE shooting, you have to decide whether zooming via camera switching in hardware (not just digital cropping) is more important for you than fluidity (60p) and accordingly select between 30p and 60p. Sometimes a REALLY hard decision as 30p really makes panning a chore.
 
One thing to consider when it comes to the spacing of the cameras is that with current AI tech we are able to create photorealistic images where the gaps are "filled in". Adobe has been doing this with Photoshop as of late and it works pretty damn well. Then take a look at existing 3D tech, like that in older 3D TVs or in devices like the LumePad, which can convert 2D images into 3D pretty convincingly (with some artifacts). I imagine Apple will not have much of a problem developing something where, after the video is recorded, there is some "processing time" during which the "ocular distance" is adjusted and the gaps are filled in, and then the video is available for viewing.
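In the same spirit, the disocclusion gaps can be patched with plain old inpainting; a minimal OpenCV example of the classical (non-generative) kind, with a fabricated frame and mask:

import numpy as np
import cv2

# Fake "shifted" frame with a black hole where a foreground object used to be
frame = np.full((240, 320, 3), 180, dtype=np.uint8)
frame[80:160, 100:140] = 0

# Mask of the pixels that need filling (non-zero = fill me)
mask = np.zeros((240, 320), dtype=np.uint8)
mask[80:160, 100:140] = 255

# Classical diffusion-based inpainting; a production pipeline would more
# likely use an ML fill, but the job is the same: invent plausible pixels.
filled = cv2.inpaint(frame, mask, 3, cv2.INPAINT_TELEA)
print(filled[120, 120])   # no longer black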
 
For me, this is THE reason I want a 15 Pro. I feel like in 30-40 years, kids will imagine the world used to be 2D, like kids today imagine the world used to be black and white.

My guess as to why this is Pro-only is that the LiDAR (Pro-only) is doing the depth mapping, and the fisheye is doing the surround video.

... though I don't know why it couldn't be done on the last couple of generations of Pro phones, if that's the case. Even from Apple's perspective, giving them no benefit of the doubt, you'd think they would want to get as many phones generating Vision Pro content as possible.

Did they actually say it won't be coming to older phones?
 
Apple has a real chance to be very influential on the trajectory of AR/VR aka spatial computing going forward…

Facebook was never a contender, but maybe they can become the "Microsoft" of spatial computing. However, considering they are relying on Android for their OS, they don't really have a chance. Google/Alphabet will eat their lunch and dip into the rich data from AR/VR for their business model.

Considering Apple: they are a 100% vertically integrated company with a multi-year lead in mobile silicon. Apple will have a good stretch of time where they can leverage their ARM advantage to put out crazy AR products, until at least 2027 when TSMC will feel the consequences of not getting the angstrom-era EUV lithography tech. But of course, TSMC is building fabs in the US to avoid that scenario…


Apple Vision is going to be a crazy technology around 2026, when nanolenses should be hitting the market. After that, all bets are off I guess, but Apple is positioning itself very well to succeed.

Well said. Apple's in this for the long term and now has all the pieces and software to bring AR to the masses in both commercial and consumer spaces.
 
I would honestly be more impressed to finally be able to shoot video with both cameras (front and back) at the same time, like Samsung. Or is there at least a third-party app that can do this?
You already can, since the iPhone 11 Pro, with apps like DoubleTake.
 
Sounds cool, but I'm not sure how that would work or how good it would look. You need the two separated cameras to capture two unique perspectives, and the LiDAR is very low-res, so what could it add?
I was thinking about that as well, but maybe they compute the 3D differently. After all, the two lenses also have different fields of view: the lens that's more tele "compresses" the image more. Maybe there's a way to compute a stereoscopic image from that data?



Also, if you just use two cameras and don't change their angle according to what you are focusing on, the 3D won't be very good and will be rather headache-inducing. That's the same reason 3D movies nowadays are mostly post-conversions: it just works better, and it seems like it's even less effort to rotoscope a whole movie than to film it with 3D cameras. In the words of James Cameron: "It will rip your eyes right out of your head if you try to shoot everything with the two lenses side by side" and "The stereo [i.e. depth] space has to be managed based on how far away the camera is from the subject".

I'd guess they'll do some heavy processing to get 3D from two lenses of different focal lengths at more or less the same point in space, and use the LiDAR for distance measurements to assist with that.
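If I had to guess at the rough shape of that processing: crop the wider lens down to the tele lens's field of view so the two frames match, then use the LiDAR depth to choose the convergence (zero-parallax) shift. Toy Python sketch; every number and function name here is my invention:

import numpy as np

def match_fov(wide_frame, wide_focal_px, tele_focal_px):
    # Centre-crop the wide camera so its field of view matches the tele camera
    # (the crop would then be resampled back up to full resolution).
    h, w, _ = wide_frame.shape
    scale = wide_focal_px / tele_focal_px        # < 1 because the tele sees less
    ch, cw = int(h * scale), int(w * scale)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    return wide_frame[y0:y0 + ch, x0:x0 + cw]

def convergence_shift_px(focal_px, baseline_m, subject_depth_m):
    # Horizontal image translation that puts the subject at zero parallax.
    return focal_px * baseline_m / subject_depth_m

wide = np.zeros((1080, 1920, 3), dtype=np.uint8)
print(match_fov(wide, wide_focal_px=1000.0, tele_focal_px=2000.0).shape)  # (540, 960, 3)
print(convergence_shift_px(2000.0, 0.02, 1.5))  # ~27 px shift for a subject 1.5 m away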
 
I wonder how well the wide angle lens crops down, or if Apple Vision users will be looking at stereoscopic photos and videos like:

o__O
 