That's a somewhat limited analysis.
You can present information on the screen in two ways (both sketched below):
- either as "context-free" information, essentially floating text (Google Glass), OR
- as information that's tied to the surrounding world in some way (i.e. the AUGMENTED part of AR).
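To make the distinction concrete, here's a minimal Swift sketch of the two modes, assuming an ARSCNView already running a world-tracking session. The class and method names are mine, purely illustrative:

```swift
import ARKit
import SceneKit
import UIKit

// Illustrative only: the two ways to put information "on the lens".
final class OverlayModes {
    let arView: ARSCNView

    init(arView: ARSCNView) {
        self.arView = arView
    }

    // Mode 1, "context-free": floating text pinned to a fixed spot on
    // the display, ignoring the world entirely (the Google Glass model).
    func showContextFreeLabel(_ text: String) {
        let label = UILabel()
        label.text = text
        label.textColor = .white
        label.sizeToFit()
        label.frame.origin = CGPoint(x: 20, y: 40) // fixed screen position
        arView.addSubview(label)
    }

    // Mode 2, world-anchored AR: the same text attached to a point in
    // the real scene, so it stays put as the wearer moves around it.
    func showAnchoredLabel(_ text: String, at worldTransform: simd_float4x4) {
        let anchor = ARAnchor(name: text, transform: worldTransform)
        arView.session.add(anchor: anchor)
        // An ARSCNViewDelegate's renderer(_:didAdd:for:) would attach an
        // SCNText node to this anchor; omitted here for brevity.
    }
}
```

The first mode needs no scene understanding at all; the second is only as good as the device's tracking, which is exactly where the camera comes in.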
We obviously have no idea what Apple has planned.
On the one hand, they have talked up AR for three years now...
On the other hand, it's hard to see how you can do any sort of interesting AR without a camera.
I square the circle by assuming that
- Apple understands that AR is what makes this a party, not just a repeat of Google Glass
- meaning there is a camera in there
- AND SO the people claiming there is no camera are misunderstanding something. For example, they may be interpreting "you cannot record with the glasses" as meaning the same thing as "there is no camera".
In other words, the conceptual differences from Google Glass are
- use of the camera as an input mechanism. Google was mainly "camera as a way to take photos".
- overlaying relevant data on top of what the camera sees, i.e. the sort of extreme AR that's only become possible in the past two years or so via a custom NPU closely tied into the ISP and GPU. (A sketch of this camera-as-input loop follows.)
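Roughly, the camera-as-input loop looks like this. A hedged Swift sketch, not Apple's actual pipeline: it feeds each ARKit frame to a Vision request (which the OS can schedule on the Neural Engine), so what the camera sees drives what gets overlaid. The model and delegate wiring are assumptions:

```swift
import ARKit
import Vision

// Sketch of "camera as input": each frame is analysed and the result
// drives the overlay, rather than the camera merely taking photos.
final class FrameClassifier: NSObject, ARSessionDelegate {
    private let request: VNCoreMLRequest

    // Any VNCoreMLModel would do; classification is just the simplest case.
    init(model: VNCoreMLModel) {
        request = VNCoreMLRequest(model: model) { req, _ in
            guard let top = (req.results as? [VNClassificationObservation])?.first
            else { return }
            // A real pipeline would anchor a label near the recognised object.
            print("Looking at: \(top.identifier) (\(top.confidence))")
        }
        super.init()
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // capturedImage is the raw camera pixel buffer for this frame.
        // A shipping product would throttle this rather than run every frame.
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                            orientation: .right)
        try? handler.perform([request])
    }
}
```

Note the camera never has to save a single photo for this loop to work, which is why "no recording" and "no camera" are different claims.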
- Whether that information is presented "context-free", as you say, or through AR, it's the same principle: putting information on the lens for someone to see.
- I think a built-in camera is pretty much a guaranteed component.
- My point is: can they take that camera, and the AR software that will likely accompany it, and make the result a smooth, enjoyable and natural experience? You said yourself that the kind of extreme AR a product like this would need to be worthwhile is in its infancy.
For Apple glasses, and any product of this type, the experience has to feel like an extension of yourself; the connection between the user and the glasses needs to be seamless. It must understand natural language. For instance, if I'm driving I might say, "Hey Siri, I'm totally lost, I have no clue where I am, can you help me find my way home?" The response back to the user has to arrive in less than 3 seconds, otherwise it's not worth it.
In those couple of seconds the glasses will need to GPS-locate you, report back where you are, and at the same time bring up route information and overlay it onto the lenses with turn-by-turn directions and 3D road mapping. They must also be able to update all of this on the fly, again within 1-2 seconds, if I turn my head and look in a different direction.
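For scale, here's a rough Swift sketch of just the locate-and-route half of that budget, using CoreLocation and MapKit. "home" is a hypothetical saved coordinate, and the 3-second deadline is the figure from above, not an Apple spec:

```swift
import CoreLocation
import MapKit

// Rough latency-budget check: locate the user, request a route home,
// and see whether the round trip fits inside the ~3 s window.
func routeHome(using manager: CLLocationManager,
               home: CLLocationCoordinate2D,
               deadline: TimeInterval = 3.0) {
    let start = Date()
    guard let here = manager.location?.coordinate else { return }

    let request = MKDirections.Request()
    request.source = MKMapItem(placemark: MKPlacemark(coordinate: here))
    request.destination = MKMapItem(placemark: MKPlacemark(coordinate: home))
    request.transportType = .automobile

    MKDirections(request: request).calculate { response, error in
        let elapsed = Date().timeIntervalSince(start)
        guard error == nil, let route = response?.routes.first else { return }
        // Each step would become a turn-by-turn AR overlay on the lens.
        print("Route in \(elapsed)s, \(route.steps.count) steps")
        if elapsed > deadline {
            print("Missed the \(deadline)s budget, the illusion breaks")
        }
    }
}
```

And this doesn't even touch the hard part: the head-turn update is a local rendering problem and can't wait on a network round trip at all.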
Currently with Siri there are pauses and processing time, and sometimes you have to hit the Siri button to respond; it's a robotic back-and-forth exchange. A product like the glasses needs a constant, open, natural dialogue flow. In addition, mobile networks aren't up to scratch to support a product like this. 4G is currently patchy and barely delivers its top speeds; I go into central London with full 4G signal and my iPhone tells me my data isn't working and I can't load anything.
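The "open dialogue" half of that is at least approachable today. Here's a minimal sketch of continuous listening with Apple's Speech framework, assuming permissions and audio-session setup are already handled; the class name is mine:

```swift
import Speech
import AVFoundation

// Sketch of an always-open mic: stream audio into one long-lived
// recognition request and react to partial results mid-sentence,
// instead of a push-to-talk, turn-taking exchange.
final class OpenMicListener {
    private let engine = AVAudioEngine()
    private let recognizer = SFSpeechRecognizer()
    private let request = SFSpeechAudioBufferRecognitionRequest()
    private var task: SFSpeechRecognitionTask?

    func start() throws {
        request.shouldReportPartialResults = true // react before the user finishes

        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        // Pipe microphone buffers straight into the recogniser.
        input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            self.request.append(buffer)
        }

        task = recognizer?.recognitionTask(with: request) { result, _ in
            guard let text = result?.bestTranscription.formattedString else { return }
            print("Heard so far: \(text)")
        }

        engine.prepare()
        try engine.start()
    }
}
```

Recognition itself can even run on-device on recent iPhones, but as you say, anything that needs the network (routing, search) is hostage to coverage.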
Honestly, I think glasses from any company, along with driverless cars, are probably way ahead of their time for a mainstream release. They need at least another 10-20 years for the technology to catch up and make them a truly worthwhile product.
Thanks, but no need to. I'm fully aware of the potential of AR, and I have the systems/hardware engineering background, built over many years, to know what's possible.
Ok, have to wait and see.
Though with Apple's track record of lacklustre products in recent years, I won't hold my breath.