There is an opportunity for RealityOS to be a Machine Learning platform. With a headset you have visual, audio, and spatial inputs, and all of them can be enhanced with ML to provide a novel AR/MR experience.
It doesn't have to be all consumption, either. Creating and sharing NeRFs (or whatever Apple ends up calling them) is the simplest case. Apple also has no/low-code systems like SwiftUI and Create ML. It should be possible to create AI tools right within RealityOS, either by training object recognition through Object Capture + NeRFs, or with audio or spatial data. Someone working in this system could create an ML-driven solution and then share it with colleagues and friends to help them with various tasks.
Imagine being a researcher counting migratory birds in the field. You could take some recordings of your subject and train a model to recognize them. Then you carry the Apple headset with you, and the system alerts you on your Apple Watch when it hears the audio you're interested in. You put on the headset, and it uses the mic array to display on-screen indicators showing the direction of the sound. Using the cameras' zoom, you could get a better look; maybe even drop in a pre-trained image-enhancement model. You could then log the results using the data service on your iPhone and put the headset away.
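On today's Apple platforms, the training step of that bird-call scenario already maps onto Create ML's `MLSoundClassifier` on macOS. Here is a rough sketch of what it could look like; the folder paths are hypothetical, and it assumes recordings are sorted into one sub-directory per label:

```swift
import CreateML
import Foundation

// Hypothetical layout: train/ contains one sub-folder per label,
// e.g. train/warbler/*.wav and train/background/*.wav
let trainingDir = URL(fileURLWithPath: "/Users/me/BirdCalls/train")

// Train a sound classifier on the labeled recordings.
let classifier = try MLSoundClassifier(
    trainingData: .labeledDirectories(at: trainingDir)
)
print("training error:", classifier.trainingMetrics.classificationError)

// Export a Core ML model an on-device app could load for live detection.
try classifier.write(to: URL(fileURLWithPath: "/Users/me/BirdCalls.mlmodel"))
```

At runtime, an app could pair the exported model with the SoundAnalysis framework's `SNClassifySoundRequest` to classify a live microphone stream; a headset would presumably expose something similar, just closer to the sensors.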
There are tons of uses around the home for this, too.
- My kid makes a sculpture for a school project, and I use the headset to create a virtual model I can share with the grandparents. This can even use Apple's subject-isolation technology to remove the background.
- I want to watch a movie on Apple TV+ with some friends - I can use SharePlay to create a shared viewing experience, while offloading the streaming and decoding to my Apple TV to save on power.
- We get done painting a room, and want to share what it looks like before and after.
- I want to do something on my Mac, but it's in a different part of the house and I'm outside on the porch. I can bring up a virtual screen on the headset, and do desktop-style work anywhere.
If Apple does this right, the eternal question of "What is AR really useful for?" will quickly become "What is AR not useful for?". Apple is the only company in the world that has competence in every field needed to bring a platform like this together. They can do great hardware, and have custom silicon to power it. They have many other devices and services that can be tied into the headset (Mac, iPhone, AirPods, Apple TV, Apple Music, App Store, etc.). They have all the pieces needed for a breakthrough Machine Learning environment. They have mature developer tools and APIs, and a developer community eager to adopt new form factors. They have a world-class ecosystem not just for apps, but for sharing and communication. None of Microsoft, Meta, Google, or NVIDIA is on this level.