The usual Apple rumor community has had nothing to say about Neural Radiance Fields (NeRFs), with the exception of Twitter, where an increasing number of people seem to believe that NeRFs will be a key feature of the upcoming Apple headset and, eventually, the rest of the Apple ecosystem.
Most people don't even know what NeRFs are - there isn't even a Wikipedia article for them yet! A NeRF is a new concept in AI and computer vision that uses machine learning to represent a scene without traditional 3D models. It went from a resource-intensive academic research project in 2020 to an efficient near-realtime technology (Nvidia's Instant NeRF) just within the last year. People are thinking up all sorts of use cases, but the most popular is view synthesis. This would be a step up from the photogrammetry you currently see in Object Capture - the technology can take a relatively small number of input images and generate a compact ML model that encodes the scene. Unlike photogrammetry, it can accurately capture reflections, transparency, volumetric effects like fog or smoke, and even lens flare.
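To make the idea concrete, here is a minimal sketch of how a radiance field gets turned into an image. Everything below is illustrative, not any Apple or Nvidia API: a toy hard-coded function stands in for the trained network, and the color seen along one camera ray is accumulated by marching through the volume, which is the basic NeRF rendering rule.

```swift
import Foundation
import simd

// A radiance field is just a function from (3D position, view direction) to
// (color, density). A real NeRF learns this function with a small neural
// network; this toy "fog sphere" stands in for the trained model. The view
// direction is what lets a real NeRF reproduce reflections and other
// view-dependent effects; the toy ignores it.
struct RadianceSample {
    var color: SIMD3<Float>   // RGB
    var density: Float        // how opaque this point in space is
}

func queryField(position: SIMD3<Float>, direction: SIMD3<Float>) -> RadianceSample {
    // Toy scene: a soft orange sphere of radius 1 centered at the origin.
    let insideness = max(0, 1 - length(position))
    return RadianceSample(color: SIMD3<Float>(1.0, 0.6, 0.2), density: 4 * insideness)
}

// Volume rendering: march along a camera ray, accumulating each sample's color
// weighted by how much light still survives to reach it (its transmittance).
func renderRay(origin: SIMD3<Float>, direction: SIMD3<Float>,
               near: Float = 0, far: Float = 4, steps: Int = 64) -> SIMD3<Float> {
    let dt = (far - near) / Float(steps)
    var transmittance: Float = 1
    var color = SIMD3<Float>(repeating: 0)
    for i in 0..<steps {
        let t = near + (Float(i) + 0.5) * dt
        let sample = queryField(position: origin + t * direction, direction: direction)
        let alpha = 1 - exp(-sample.density * dt)
        color += transmittance * alpha * sample.color
        transmittance *= 1 - alpha
    }
    return color
}

// One pixel's worth of output: a ray looking from z = -3 toward the sphere.
let pixel = renderRay(origin: SIMD3<Float>(0, 0, -3), direction: SIMD3<Float>(0, 0, 1))
print(pixel)
```

Rendering a full image is just this loop repeated for every pixel's ray, which is why fast NeRF implementations lean so heavily on the GPU.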
The speculation about NeRF features in the Apple headset partly stems from the unique capabilities of Apple Silicon, whose GPUs and Neural Engine are powerful enough to make NeRF features practical. We know that Apple is interested, since they have published several papers about radiance fields on their Machine Learning Research site. The potential here is huge - users of a headset will be able to easily capture scenes or objects and share them with others, who can explore them in MR with photorealistic quality. Apple could stream live sports events or concerts in VR, where the user could position themselves wherever they want in 3D space, with spatial audio to complete the experience. A new standard datatype to encapsulate this information would allow NeRFs to be integrated with headset apps from day one, and open up all sorts of applications that no one has even thought of yet.
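As a purely hypothetical illustration of what such a standard datatype might look like (nothing like this exists in any Apple SDK today, and every name below is invented), a shareable capture could be little more than the trained network weights plus some metadata, analogous to how USDZ packages meshes for Object Capture:

```swift
import Foundation

// Hypothetical container for a captured radiance field - a sketch only,
// not a real Apple format or API.
struct RadianceFieldAsset: Codable {
    var formatVersion: Int
    var networkWeights: Data          // the compact ML model that encodes the scene
    var boundingBox: [Float]          // min/max corners of the captured volume, 6 floats
    var captureDate: Date
    var spatialAudioTrackURL: URL?    // optional companion audio, as speculated above
}

// Sharing a capture could then be as simple as encoding the asset and sending it.
let asset = RadianceFieldAsset(formatVersion: 1,
                               networkWeights: Data(),
                               boundingBox: [-1, -1, -1, 1, 1, 1],
                               captureDate: Date(),
                               spatialAudioTrackURL: nil)
let encoded = try? JSONEncoder().encode(asset)
```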