It's a good thing ARKit has been seeding the field for years now, along with the stress Apple has placed on adhering to the APIs it provides. It's almost as if there has been a multi-year roadmap building out all the frameworks that will leave pretty much every AR app developed for iOS nearly ready to go for this first minimum viable product.
Apple will rapidly iterate the hardware for two or three generations until the originally intended product comes to fruition. From there, it can accelerate its lead in the category while everyone else is still trying to catch up to Gen 1.
For those of us who have been around since the original iPhone era and watched this arc play out across multiple categories, Apple has a very predictable trajectory when moving into new product categories it eventually dominates.
They're methodical, and they start building out the software side years before the hardware is ever shown to the public. A lot of choices in other Apple products are going to make a whole lot more sense when this thing finally comes to light.
There are breadcrumbs everywhere that individually mean little, but in hindsight we're all going to say it should have been obvious: UWB, Continuity, body mapping in ARKit, the Neural Engine being in just about everything, the peculiar tile-based GPU design they went with, very specific things they've done with Maps in recent years, etc. These choices will be what binds Apple's overall vision for the future together. Whatever this product turns out to be, it's merely the *starting* point that Apple has in mind for their Next Big Thing.