I don't understand why some think this type of product is an immediate failure.
I've probably noted this before, but to me VR, MR, and AR devices are a clear progression of display technology, and then so much more. There have been huge strides in screen quality and resolution, and these next-gen headsets are large steps in the right direction, showing how far we've come.
I still remember old green-screen monitors. The Retina displays we have today are amazing compared to monitors from just 15 years ago. The fact that we can get two 4K displays, one per eye, in a device we can wear is amazing, and this still seems to be just the beginning of where it's all headed. Even if you only considered the ability to replace a desk full of physical screens with virtual displays, while still seeing your workspace around you, that alone would be valuable to many. And then there's the rumor that Apple's headset will carry two of their new processors. Wow! So we're getting virtual displays, as many as we want or need, at any size, anywhere, plus the processing power of at least a MacBook, possibly more, all in a package the size of your hand? How is that not game changing? How is that not impressive?
But all of that is amazing even if you're only thinking two-dimensionally; think beyond two dimensions. Try out the LiDAR on your phone and some of the amazing apps that truly use it to its full potential. For example, not long ago my wife was shopping for furniture and a rug. AR let us size everything perfectly and see those items in our living room before we bought them. When they arrived, they matched our expectations exactly, just as presented and planned. That's an obvious use case, and the extensions of it are obvious too.
For fun, I recently tried out a new game that has a virtual pet. I know, it sounds silly, but I was mostly interested in how well it used LiDAR and AR capabilities. I was blown away. This little virtual character skipped and flew around the entire room, landed on my real desk, walked behind chairs with a cool occlusion effect, and then, when we set a case on the floor, it "saw" the new item in the room, walked over to it, and hopped up on it. In short, the real-world geometry became part of the game: not just a simple plane or bounding wall, but all the features of the space, measured and seamlessly integrated into the interaction and presentation.
Now take that data, that ability, and all of the above, then imagine developers creating new types of apps and interactions, new ways of visualizing information, creating content, producing tools and experiences that are no longer constrained by rectangular screens, that can virtually sit on your desk or against a wall, that can be handled, tossed, scaled, and manipulated beyond the constraints of the real world. It will completely change how software, user interfaces, and experiences are designed. Will we throw our real-world screens and handheld rectangles away? No, but I do believe that mixed-reality interfaces and full-immersion technology will prove superior to those devices in many settings and uses.
Why haven't these ideas come to fruition in first-generation headsets? For the same reason that the old green-screen and 640 x 480 monitors weren't rendering content at 5K with 16 million colors: the whole tech stack wasn't there and wasn't ready.
If all the cool, popular, and useful devices and services Apple has produced over the past decade are any indication of the quality they can deliver in combining them, I think it's a winner out of the gate: the beginning of a whole new evolution, a revolutionary cycle of content, production, media, communication, and apps in general.