Regarding the virtual workspace use case, what I said in the comment you quoted stands. They are still using pixels. They may be smaller pixels than the competition's for all I know, but the math on how much of the virtual environment is represented by a pixel (even a small one) 3 inches from your eye still stands. Any illusion of having several high-resolution (>=1440p) virtual monitors is out the window, as I stated in a number of comments.
I think this is the reason Apple is showing floating windows for individual apps (or app tabs, modes, etc.), not a full-screen workspace as you would have on a monitor. On a traditional monitor (4K, for example) you would have multiple such app windows on the workspace 18-24 inches from your face, and everything is super sharp. Rendering such a workspace to appear 18-24 inches in front of you is not likely to be as sharp, even with Apple's higher-res displays.
I know the pitch is now "the entire room is your workspace, not just a monitor," but if you end up with effectively the same amount of functional real estate for applications, because they are rendered rather large in the environment or because you have to pick one "foreground" app to render sharply, I just don't see this as a huge leap forward over a couple of physical monitors.
What you need to do is match the angular resolution of the eye. The eye's angular resolution is about one minute of arc, or about 0.0003 radians. That works out to about 0.0003 meters at 1 meter distance, call it 1/3 of a millimeter at one meter. A screen at one meter would need about 6 pixels per millimeter to show two points separated by 0.0003 radians. I'd argue that no desktop monitor is this good, and we can clearly get away with far less than perfect resolution. My 27" 4K screen has only "half perfect" resolution and I don't see a need to upgrade to 8K.
So I don't think we need to match perfect 20/20 vision.
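To sanity-check the arithmetic above, here's a quick back-of-envelope sketch. It assumes one arcminute of acuity and a Nyquist factor of 2 (pixel pitch half the smallest resolvable separation); both numbers are the ones used in the comment, not anything authoritative.

```python
import math

# One arcminute of visual acuity, in radians (~0.000291, i.e. ~0.0003).
ARCMIN_RAD = math.radians(1 / 60)

def required_px_per_mm(distance_m, nyquist=2):
    """Pixels per millimeter needed to resolve one arcminute at a distance.

    nyquist=2 means pixel pitch must be half the smallest resolvable gap,
    so two points land on separate pixels with a gap pixel between them.
    """
    resolvable_mm = ARCMIN_RAD * distance_m * 1000  # smallest resolvable gap
    return nyquist / resolvable_mm

# At 1 m the resolvable gap is ~0.29 mm, so roughly 6-7 px/mm,
# which is in the same ballpark as the "about 6 pixels per millimeter" above.
print(round(required_px_per_mm(1.0), 1))
print(round(required_px_per_mm(1.0) * 25.4))  # same figure in pixels per inch
```

For reference, that works out to roughly 175 ppi at one meter, which is why a 27" 4K panel (~163 ppi) looks close to "retina" at arm's-length-plus viewing distances.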
The best use of these goggles is going to be "telepresence," where people separated by distance can all meet virtually in the same space and talk about objects, say, the design of a proposed building. The architect presents his 3D model, everyone can see it and even walk around inside the building, see the other people in there with them, and discuss what they're all looking at.
I do construction on a small scale. Today I have to work hard to tell a client she needs to up her budget $15K for better materials, or spend another $6K moving a wall, or maybe another $5K for moving plumbing pipes. Today I need to make a sketch on paper with colored pencils or a computer rendering and use a LOT of words to describe how it will look. It would be so much better to let the client put on the goggles and have us both walk around in the new space.
But I doubt my small projects could support the cost of creating a virtual space. Maybe in time some kind of AI assistant could do that from my sketches and verbal input.