Consider the following:
I assume you've all seen Apple Photos' AI 3D enhancement by now: it segments the photo into depth layers, then applies parallax as the phone/iPad is tilted, so that foreground items move relative to background items.
It works pretty well, but it requires you to keep tilting the display device; otherwise, how would it know to change the relative positioning of foreground and background?
Now suppose someone (i.e., Apple!) sells a small head-mounted "bead" (hell, this could even be a new feature added to AirPods purely in software!) that detects your head orientation as it changes and reports it by radio. That data gets conveyed to the computing device (phone, iPad, Mac), which in response shifts different window and control layers (and even content within controls) by different amounts. This would be a kind of poor man's 3D display. But "poor man's" doesn't necessarily mean inferior!
You wouldn't get "binocular vision" 3D, but you would get parallax 3D, and that might be good enough for many purposes. It's certainly enough to make the UI literally "pop", which could be genuinely useful for getting a better feel for layering, or for viewing 3D models.
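The core mapping here is simple enough to sketch: head orientation in, per-layer screen offset out, with nearer layers shifting more than deeper ones. Here's a minimal illustration; all names (`Layer`, `parallax_offset`, the `gain` constant, the depth convention) are hypothetical, not any real Apple API.

```python
import math
from dataclasses import dataclass


@dataclass
class Layer:
    name: str
    depth: float  # 0.0 = background at the screen plane, 1.0 = nearest foreground


def parallax_offset(yaw: float, pitch: float, depth: float, gain: float = 40.0):
    """Return an (dx, dy) pixel shift for a layer at the given depth.

    yaw/pitch are head angles in radians (as an imagined head-tracking
    sensor might report them). The shift scales with depth, so foreground
    layers move more than the background as the head turns -- which is
    exactly the parallax cue described above.
    """
    dx = gain * depth * math.sin(yaw)
    dy = gain * depth * math.sin(pitch)
    return (dx, dy)


layers = [Layer("wallpaper", 0.0), Layer("window", 0.5), Layer("dialog", 1.0)]
for layer in layers:
    dx, dy = parallax_offset(yaw=0.2, pitch=0.0, depth=layer.depth)
    print(f"{layer.name}: shift x by {dx:.1f}px")
```

With a fixed gain like this, the background never moves and the frontmost layer sweeps the most, so turning your head slightly would make the dialog glide across the wallpaper just as the Photos effect does with segmented image regions.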
Anyway, anyone who's reading this - take up the idea and run with it!