I get that you might find it impressive, but in VR-land, fully rendered environments that you walk around inside, with fully rendered creatures, are table stakes. And while everyone is blowing out their voiceboxes in their urgency to say "It's not VR", the fact is it's a VR headset using passthrough video to create an AR experience.
What does seem particularly telling is that the headline demo also happens to be, one might say, suspiciously limited in the amount of 3D it needs to do - a tiny segment of a world and a single moving creature.
I'll be keen to see if we get any demos of proper immersive environments and workspaces, rather than flat computing in a window or in space.
Again, it's showing something none of the demos you mention do: it's targeting your human hand, getting a real-time trajectory prediction of your moving hand, and driving an interaction with it.
It's just a small demo. And it's just a 'taste'.
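For anyone curious what "targeting your hand" looks like at the API level, here's a hedged sketch using visionOS's public ARKit HandTrackingProvider to stream fingertip poses. To be clear, the constant-velocity extrapolation at the end is my own stand-in; I have no idea what the actual demo does for prediction, and a real app would also need hand-tracking permission and an immersive space.

```swift
import ARKit
import Foundation
import simd

// Rough illustration only: stream hand-joint poses from visionOS ARKit and do a
// naive constant-velocity guess about where the fingertip is headed. The demo's
// real prediction logic is unknown; this is just the public starting point.
func trackIndexFingerTip() async throws {
    let session = ARKitSession()
    let hands = HandTrackingProvider()
    try await session.run([hands])   // requires hand-tracking authorization

    var lastPosition: SIMD3<Float>? = nil
    var lastTime: TimeInterval? = nil

    for await update in hands.anchorUpdates {
        let anchor = update.anchor
        guard anchor.chirality == .right,
              let tip = anchor.handSkeleton?.joint(.indexFingerTip),
              tip.isTracked else { continue }

        // World-space pose of the fingertip = anchor pose * joint pose.
        let world = anchor.originFromAnchorTransform * tip.anchorFromJointTransform
        let position = SIMD3<Float>(world.columns.3.x, world.columns.3.y, world.columns.3.z)
        let now = Date().timeIntervalSinceReferenceDate

        if let p = lastPosition, let t = lastTime, now > t {
            let velocity = (position - p) / Float(now - t)
            // Crude stand-in for "trajectory prediction": extrapolate 100 ms ahead.
            let predicted = position + velocity * 0.1
            print("fingertip at \(position), predicted \(predicted)")
        }
        lastPosition = position
        lastTime = now
    }
}
```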
The Environments are super immersive and feel crazy real. They don't let you walk around in them, and it's tough to trick them into letting you. On the moon and in Hawaii, I would reach down to try to touch the rocks. I had someone hold my hand so I wouldn't walk into stuff and kill myself, and I tried to trick the system into letting me go further, but it only lets you move a foot or so and then bleeds out the environment, again, because you'd likely kill yourself.
But if you wear the goggles, it's really clear that it could let you walk through 3D environments, and that they would be shockingly convincing.
The same goes for conjuring 3D objects: you can shove your face right up to one like it's a real object, and the fidelity is shockingly convincing.
No one talks about it, and I forget the app name, but there's one that lets you position a jet engine or an F1 car in front of you. When you put your hand into it, the system detects the collision, your hand moves through it, and it composites it (not perfectly), with a very Star Wars hologram-style flicker confined to the area of the collision. I don't know if that flicker is just a natural byproduct of the algorithm or if they were going for a Star Wars vibe, but it's kind of cool. But I digress: in that app, when you bring the jet engine close to your face and start reading the labels, you will be convinced the thing in front of you is real.
TLDR: I do think the 3D environments, if/when they let you into them, will be scary convincing. The 180 and 360 videos already are. I think you may be putting a bit too much weight on a very light demo that was probably done very early on to test things and give people a feel.
I'm hopeful more development will come soon. For example, a native version of No Man's Sky would be amazing. That game has a UI system where you move the joysticks to position a cursor over a HUD button, then press a button and hold it for a few seconds while a pie-chart cursor fills in; only once it's full does that UI widget actually engage. I guess it's so you don't accidentally mis-click on UI elements. This entire mechanism could be completely obviated with eye tracking and the finger-tap gesture, and it would be WAY faster AND more accurate than the joystick mechanism. I played it via Steam Link on a huge virtual AVP screen, and I kept instinctively wanting to use the AVP eye-tracking mechanism, as it's such a natural replacement. It made me think how cool and powerful eye tracking can be, and that it can actually be a FASTER UI input device, at least for some use cases.
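Just to make the comparison concrete, here's a rough SwiftUI sketch (my own toy, not anything from the game or Apple's sample code): a hold-to-confirm button that mimics the pie-chart fill, next to a plain visionOS Button that the system activates the instant you look at it and tap your fingers.

```swift
import SwiftUI

// Hedged sketch, not how No Man's Sky is actually implemented: a dwell-style
// hold-to-confirm control vs. a plain Button that visionOS drives with
// look + pinch, no timer needed.
struct HoldToConfirmButton: View {
    let title: String
    let holdDuration: Double = 2.0   // seconds you must hold, like the pie fill
    let action: () -> Void

    @State private var progress: Double = 0

    var body: some View {
        Label(title, systemImage: "circle.dashed")
            .padding()
            .background(.thinMaterial, in: Capsule())
            .overlay(
                // Ring that fills while the press is held, like the game's cursor.
                Capsule().trim(from: 0, to: progress).stroke(.blue, lineWidth: 4)
            )
            .onLongPressGesture(minimumDuration: holdDuration) {
                action()                      // only fires after the full hold
            } onPressingChanged: { pressing in
                withAnimation(.linear(duration: pressing ? holdDuration : 0.2)) {
                    progress = pressing ? 1 : 0
                }
            }
    }
}

struct MenuView: View {
    var body: some View {
        VStack(spacing: 24) {
            // Joystick-era pattern: press and hold until the ring fills.
            HoldToConfirmButton(title: "Warp to system") {
                print("warp confirmed after hold")
            }
            // Vision Pro pattern: the system highlights whatever you look at,
            // and a finger tap activates it immediately. No dwell required.
            Button("Warp to system") {
                print("warp confirmed by look + tap")
            }
        }
        .padding()
    }
}
```

The point being: the dwell timer exists mainly to compensate for an imprecise pointer, and gaze targeting removes the need for it entirely.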
This is all nascent. I wouldn't damn it quite this early... and probably wouldn't laud it too much quite yet either.