
HDFan

Original poster
"But what if up close, moving content was desired? What if making a whole immersive landscape featuring multiple elements like animals, people, and more was the goal? That would require one heck of a camera system to appear true to life. That is, at least, what Canon believes.

"In order to reproduce video for Vision Pro, you need to have at least 100 megapixels,” Yasuhiko Shiomi, Advisory Director and Unit Executive of the Image Communication Business Operations at Canon, says.

“So at the moment, we can’t cater to that level of a requirement. But what I presume what companies who will be providing images for the Vision Pro will be required to have 100-megapixels with 60 frames per second,” Shiomi explains.

For reference, 100 megapixels would equal 14K: nearly double most of the current high-resolution cinema offerings. The only system that fits the bill is Sphere’s Big Sky camera, which is an 18K behemoth that takes 12 people to operate. While that would be enough, it isn’t in mass production and likely isn’t commercially viable, given the cost, to use outside of making content specific to the Sphere. The key point here is availability."
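For a rough sanity check on those numbers, here's a back-of-the-envelope sketch (the 2:1 aspect ratio and 10-bit color are my assumptions, not anything from the article):

```python
import math

def sensor_dims(megapixels: float, aspect: float = 2.0):
    """Approximate pixel width/height for a given megapixel count
    and aspect ratio (width / height)."""
    pixels = megapixels * 1_000_000
    height = math.sqrt(pixels / aspect)
    return round(aspect * height), round(height)

w, h = sensor_dims(100)          # roughly 14142 x 7071, i.e. "14K" wide
# Uncompressed throughput at 60 fps, 10 bits per channel, 3 channels:
gbit_per_s = w * h * 60 * 10 * 3 / 1e9
print(w, h, round(gbit_per_s))   # about 180 Gbit/s of raw pixels
```

At ~180 Gbit/s of raw pixel data before compression, a camera like this is a systems problem, not just a sensor problem.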

 
He was specifically referring to video with high-detail foreground motion. I wonder how accurate his estimate really is for general video use. Clearly Apple has cameras they are using for the immersive content already released, and I don't think there's a camera out there with the specs he cites, so I suspect his estimate is off.
 
Apple may be using custom solutions that are not available to buy.
That's possible, of course, but I doubt it. They don't want to be in the camera business; the AVP is meant to be a content-consuming device. They clearly want others creating content for it, and the more the better, so requiring some super-special camera doesn't make logical sense. I just doubt the claim made by a Canon exec who has probably barely had time to even use an AVP. I highly doubt Apple has built some super-special camera just for their videos; it's more likely the Canon exec doesn't have enough knowledge to understand what's really needed. @gerald.d has a pretty good breakdown above. Given their past work with Blackmagic, I wouldn't be surprised at all if a modified version of the URSA 12K is what was used and that we'll see camera/lens combos available quite soon.
 
I’m of course fully aware of exactly how this will end. All fine by me. Your refusal to engage says it all.
 
Please explain how you interpret "high detail foreground motion", and how it relates to video capture.

FWIW, Apple's existing content is almost certainly created with a multiple sensor/lens set-up. There is nothing extraordinary about the video that has been captured (I personally was creating 12K monoscopic 360 degree video back in 2013).

What is challenging right now, is to create it with a single camera.
It's not me, that's taken right from the article you linked:

"But what if up close, moving content was desired? What if making a whole immersive landscape featuring multiple elements like animals, people, and more was the goal? That would require one heck of a camera system to appear true to life. That is, at least, what Canon believes.

“In order to reproduce video for Vision Pro, you need to have at least 100 megapixels,” Yasuhiko Shiomi, Advisory Director and Unit Executive of the Image Communication Business Operations at Canon, says.

“So at the moment, we can’t cater to that level of a requirement. But what I presume what companies who will be providing images for the Vision Pro will be required to have 100-megapixels with 60 frames per second,” Shiomi explains."
 
Please explain how you interpret "high detail foreground motion", and how it relates to video capture.

FWIW, Apple's existing content is almost certainly created with a multiple sensor/lens set-up. There is nothing extraordinary about the video that has been captured (I personally was creating 12K monoscopic 360 degree video back in 2013).

What is challenging right now, is to create it with a single camera.
And yes, I am 100% in agreement with you. The Canon exec's comment is a bit odd. You absolutely do not need some special camera with that resolution, plus all of the lenses you would have to build for it. It's much more cost-effective to use a dual-camera/dual-lens rig built from off-the-shelf cameras and lenses. So like you said, nothing really special on the camera side.
 
It's not me, that's taken right from the article you linked:

"But what if up close, moving content was desired? What if making a whole immersive landscape featuring multiple elements like animals, people, and more was the goal? That would require one heck of a camera system to appear true to life. That is, at least, what Canon believes.

“In order to reproduce video for Vision Pro, you need to have at least 100 megapixels,” Yasuhiko Shiomi, Advisory Director and Unit Executive of the Image Communication Business Operations at Canon, says.

“So at the moment, we can’t cater to that level of a requirement. But what I presume what companies who will be providing images for the Vision Pro will be required to have 100-megapixels with 60 frames per second,” Shiomi explains."
Yes I get that - I read the article closely.

I'm pretty sure I know exactly what he is alluding to with that statement (and took it into account when I responded earlier). I'm simply asking you what you think he means by it.
 
Some resolution gets lost because the AVP pixels don’t correspond 1:1 to the camera pixels. Instead the AVP applies some transformation to the original image to make it correspond to the visual 3D setup of the AVP, and probably also to individual parameters like the IPD. This has the effect of blurring the original picture, if it had roughly the same resolution. In addition, the size of the original pixels after the transformation is not uniform, and may become larger than an AVP pixel depending on the position on the AVP panels.

I don’t know if that’s what Canon is alluding to, but it seems clear to me that, for maximum fidelity, 3D material needs a significantly higher nominal recording resolution than the nominal resolution of the VR headset panels.

There is a similar effect when mirroring a Mac screen in the AVP: even when the Mac resolution is in the same ballpark as the screen real estate the mirrored screen takes up on the AVP panels, text appears less sharp than text in the visionOS UI. This is because the latter is rendered directly with the 3D transformation taken into account, whereas the former is first rendered onto the 2D Mac screen and the resulting pixels are then 3D-transformed to appear like a screen in 3D space.
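The resampling loss described above can be shown with a toy example (plain NumPy, numbers purely illustrative): bilinearly resampling a one-pixel checkerboard, the finest detail a pixel grid can carry, through just a half-pixel shift destroys its contrast entirely. That is why source material wants noticeably more resolution than the destination panel.

```python
import numpy as np

# A 1-pixel checkerboard: the highest spatial frequency a pixel grid can hold.
src = (np.indices((64, 64)).sum(axis=0) % 2).astype(float)

def shift_bilinear(img, dx):
    """Resample img by a sub-pixel horizontal shift dx using bilinear
    weights -- a stand-in for the reprojection a headset applies."""
    return (1 - dx) * img[:, :-1] + dx * img[:, 1:]

print(src.min(), src.max())           # 0.0 1.0  (full contrast)
shifted = shift_bilinear(src, 0.5)
print(shifted.min(), shifted.max())   # 0.5 0.5  (contrast gone)
```

Real reprojection is a full 3D warp rather than a uniform shift, so the blur varies across the panel, but the mechanism is the same.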
 
Whatever Canon specifically meant by this comment, I'm glad to see it, because 3D immersive cameras are in short supply. :)
I think what he was really getting at is that you won't see an inexpensive single-camera solution to this problem. At least that's my best guess, because it is true. But there are other solutions using multiple cameras. None of these will be inexpensive either, but this is also premium content that requires a lot of technological sophistication in capture and editing, at least if you want it to look awesome.

I suspect it might be just iPhone Pros and then high-end systems that do this, because it's a super-niche market for a camera maker to target: not just Apple, but the whole VR market right now is just too niche. Meanwhile, the iPhone and some Android phones can do this, and pretty well. And I should clarify: those won't be quite fully "immersive" but rather the pared-down version you can see now.
 
The phrase which for me defines the audience he is referencing is:

But what I presume what companies who will be providing images for the Vision Pro will be required to have 100-megapixels with 60 frames per second,” Shiomi explains.

I.e., he's not talking about a general-use camera, but a camera that would be used by companies with unlimited budgets to produce the highest-quality images for their VP applications. Companies which could afford multiple Blackmagic Design URSA Mini Pro 12Ks (>$7K each) and cinema lenses or lens kits that could reach $30K for each of the two cameras.
 
Canon has a current solution for 180 VR video: a custom stereo lens coupled with the Canon R5 C, a 45-megapixel full-frame camera. I've seen raw sample video produced from that camera on my Pimax Crystal, and the video was not even close to being sharp; it was HD quality at best. I could see it needing 100 megapixels. Remember, this guy is talking about 180-degree immersive side-by-side video. What Apple records with its Vision Pro cameras is 3D cinema style, like a movie you'd download, not immersive like 3D porn.
 
I've seen raw sample video produced from that camera on my Pimax Crystal, the video was not even close to being sharp, it was HD quality at best.

Was this the video you were watching? Watching it in 4K resolution on my Mac the quality seems excellent.

 
Was this the video you were watching? Watching it in 4K resolution on my Mac the quality seems excellent.

Not that one, but similar. At first it does look OK on a 4K monitor, but if you look at it in an immersive environment on a high-resolution display, you can tell it's pretty low-res. It comes down to pixels per degree: these 180-degree videos would need 34 pixels per degree to match the Vision Pro. The Canon R5 C's 45-megapixel sensor records a width of 8192 pixels; divide that by 2 for the stereo pair and you get 4096 pixels per eye, and 4096 divided by 180 degrees of coverage is about 22 pixels per degree.
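That arithmetic is easy to wrap in a small helper (the 34 PPD Vision Pro figure and the 8192-pixel sensor width are the numbers from the post above, not official specs):

```python
def pixels_per_degree(sensor_width_px: int, fov_deg: float, eyes: int = 2) -> float:
    """Angular resolution of side-by-side stereo capture: the sensor width
    is split between the two eyes, then spread across the field of view."""
    return sensor_width_px / eyes / fov_deg

print(round(pixels_per_degree(8192, 180), 1))   # 22.8 -- the R5 C dual-fisheye case
# Sensor width needed to hit 34 PPD over 180 degrees, side by side:
print(34 * 180 * 2)                             # 12240 pixels wide
```

A 12,240-pixel-wide stereo pair at 180 degrees is right in the ballpark of the "100 megapixels" figure Canon quoted.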
 
Apple may be using custom solutions that are not available to buy.

I wouldn't be surprised at all if a modified version of the Ursa12K is what was used and that we'll see camera/lens combos available quite soon.

We may have an answer to the question of what camera Apple used to shoot their spatial videos.

"In 2020, Apple acquired NextVR,..."

"NextVR had spent over a decade building and perfecting VR 180 camera technology and production pipelines for broadcast-quality video. The NextVR YouTube channel is still live and provides amazing examples of what became possible with their technology "

NextVR 3d camera.png

From the best VP analysis I've seen:


Tried the NextVR iPad app but gives me the error "gyroscope not supported"
 
This camera has been seen in the wild; here at an NBA game, and it's purported to be what Apple is using.

And it fits: if you watch the Alicia Keys immersive video, you'll see a set of white speaker boxes placed in various locations around the room. In those boxes you can see cutouts for the two lenses, along with three smaller holes around them.

So, whatever it is, I think this is the camera they are using. At least for live video stuff.

58633-119514-nba-camera-12-xl.jpg
 
We may have an answer to the question of what camera Apple used to shoot their spatial videos.

"In 2020, Apple acquired NextVR,..."

"NextVR had spent over a decade building and perfecting VR 180 camera technology and production pipelines for broadcast-quality video. The NextVR YouTube channel is still live and provides amazing examples of what became possible with their technology "

View attachment 2359765

From the best VP analysis I've seen:


Tried the NextVR iPad app but gives me the error "gyroscope not supported"
Don’t those look like two RED cameras synced together?
 