It seems you answered your own question in your post. Being able to be in another place without having to travel is an end in itself, because time is a valuable resource that only increases in value as one gets older (because you have less and less of it).

Zuckerberg's metaverse failed not because he didn't figure out the "what" for people already in the metaverse, but because there is a disconnect between what goes on in the metaverse and reality (as you so aptly put it). Facebook, by contrast, succeeded because what goes on on Facebook can have a real and significant impact on a user's reality.

Apple and Meta are approaching VR from opposite directions, but they're heading towards the same destination. Apple's approach is rooted in, as you said, convenience, and by extension, reality. It tries to create a virtual homologue of things that already exist in reality, e.g., Spatial Personas and Personal Voice. Meta is going about it the way it went about Facebook: create a virtual world → get people into said virtual world → have them stay there. But ultimately, with enough people in the metaverse, it will start to have a real-world impact as people interact, exchange ideas, and make connections. The same goes for Apple. When there are enough homologous traits of you, e.g., face and voice, to essentially duplicate your presence in the virtual world, people will start to yearn for a "metaverse". It might even arise organically.

When I say the "and then what" needs figuring out, I mean that it's more than just answering the question. As a manager, I think of the next steps as:

1. More completely fleshing out the problem statement.
2. Determining use cases/user stories that fit the problem statement.
3. Scoping the components, project milestones, and success metrics for delivery of the solution.

But the precondition to all this is quantifying the size of the addressable market and its projected growth, in more specific terms than the broad ones I've laid out. This is where the rubber meets the road... How do you finance this project and have it pay dividends at each project milestone? What does that look like?

Maybe that doesn't actually look like Vision Pro. Maybe it really begins with taking Task Automation and Siri to another level. Right now, Siri is a virtual assistant, but what if it were instead a virtual extension of one's self? What if the most common things you do in a day could be automated, so that by the time you wake up, your coffee is ready, your playlist for the day is set up, your news and social media feeds are configured to the most relevant things on your mind right now, your pre-read for the 9:30 meeting is on your desktop, annotated with highlights to call your attention to key sections that affect decisions you have to make, and your reminders actually execute their tasks, from paying your water bill to scheduling an Amazon return? How many hours of time would that free up for you?
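To make the idea concrete, here's a toy sketch of what "reminders that execute their tasks" could look like. This is purely illustrative, it uses no real Siri, Shortcuts, or Reminders API, and every name in it (`Task`, `MorningRoutine`, the sample actions) is made up for the example: a routine where each reminder carries an action to run, not just a notification to show.

```python
# Purely illustrative sketch: reminders as executable actions rather
# than passive notifications. No real Siri/Shortcuts/Reminders API is
# used; Task and MorningRoutine are hypothetical names for this example.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Task:
    name: str
    action: Callable[[], str]  # does the work, returns a short status


@dataclass
class MorningRoutine:
    tasks: List[Task] = field(default_factory=list)

    def add(self, name: str, action: Callable[[], str]) -> None:
        self.tasks.append(Task(name, action))

    def run(self) -> List[str]:
        # Execute every task and collect its status, so the user wakes
        # up to a summary of work done instead of a to-do list.
        return [f"{t.name}: {t.action()}" for t in self.tasks]


routine = MorningRoutine()
routine.add("coffee", lambda: "brew started")
routine.add("water bill", lambda: "payment scheduled")
routine.add("meeting pre-read", lambda: "annotated and on desktop")

for line in routine.run():
    print(line)
```

The point of the sketch is the shape of the abstraction: the assistant stops being a question-answerer and becomes a runner of delegated work, with a human-readable receipt for each item.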

Graphical virtualization of the self is far less compelling if it doesn't come with the ability to get the work of living done so you can spend more time actually living, instead of the running dystopian joke about teaching an A.I. to paint for you so you can spend more time working... And then you start to see that the graphical virtualization of the self is completely superfluous here. It does not offer any advantage. Our presence is not just in how we appear, but in what we do. I already don't turn my camera on for 99% of the meetings I'm in.

This is just an example of how the roadmap might actually begin... With capacitance-sensing multitouch, it actually began quietly, on the Magic Mouse... and as people acclimated to the idea in one corner of their lives, and it proved to be of value (by eliminating moving parts that wear down), it made the transition to more sophisticated implementations of multitouch more palatable.

That's the thing Apple does so well that most people don't even realize it... At one point, even Apple didn't realize it. Remember HyperCard? It was the Web before the Web, except John Sculley never understood that.
 
This is creepy af.

I consider myself a nerd, geek, and Apple disciple… but this is an absolute no for me.

I don’t want fake headshots and fake voices of real people. That’s crossed a very creepy line with me.
 
I don’t want to interact with people’s personas; I want to interact with them as they actually are. Especially people I’m close to: children and grandchildren come to mind. I want them to interact with me as well.
What's the point of FaceTime when it's not really you? If I get a haircut, or a scratch on my face from my puppy, it won't show that. It just makes it less human.

I know it's a trade-off, but I'm not understanding the benefit. Especially when the new FaceTime effects in Sonoma seem useful for communicating (like having your monitor display behind you).
You all want this. You just don't know it yet.

When you actually try this for yourself (not this version, but the completely-indistinguishable-from-reality version), you will never want to do a video call again. It will feel as valid as a video call, visually the same, but as a life-sized hologram (and not a see-through blue halo like you see in sci-fi).

This will feel more present, more engaging, and enable greater body language and social cues. It will feel like you are face to face, shoulder to shoulder, with that person rather than looking at a small 2D representation of them like you do on FaceTime.

As Steve Jobs said: "Some people say, 'Give the customers what they want.' But that's not my approach. Our job is to figure out what they're going to want before they do."
 
OK, sure, it will get better over time, but even Dan didn’t think he was talking to a real person when he tried the headset in…
People who have tried Meta's codec avatars have reported that they genuinely believed they were talking to a real person. It's fully convincing.


Not to start a Meta vs. Apple war or anything. This is because Meta's tech is still relegated to the lab, whereas Apple is shipping an earlier version rather than waiting for perfection.
 

It's not that they're "shipping an earlier version"... or that Meta are "waiting for perfection"...

What you see in the lab takes an immense amount of firepower. The Meta Quest 2, with its Qualcomm Snapdragon XR2 (based on the 865), has not even 1/17th the firepower of the M2. The M2 could *probably* squeak out something like this, but your battery would last minutes, or seconds, instead of two hours.

What Apple's doing is shipping with the solution that isn't going to make every single Vision Pro user return their $3500 purchase within 24 hours.

Yes, I will bet you that my voice out of my $50,000 recording studio with my $1,500 Neumann mic sounds better than out of your $800 iPhone. I will also bet you that my setup is completely and absolutely impractical for the typical Apple user, let alone the typical user.
 
Clippy 2024 Edition

 
By all means let’s combine “Spatial Personas” with “Personal Voice” and then our AI-generated images and sounds can just talk to each other over FaceTime long after we’re dead.
I know you’re joking, but I remember hearing on a tech podcast several months back about an AI company working on pretty much that, so you can talk with people long after they’re gone. I think it even works retroactively if you feed it enough data, such as social posts, collected videos, and text message histories. Kinda wild.
 
There was a part of the keynote where a dad was taking a spatial video of his kid's birthday celebration. Who would be like, "Hold on, sweetie, before you blow out those candles, let daddy get his space goggles on so he can take a spatial video of you"? Most humans would just choose to be there in the moment with their child, not watch it through a face computer.
Was I the only person growing up whose dad had a giant VHS camcorder? He would regularly pull it out at birthdays and holidays to record events. I didn’t think he was some awful, disconnected person when he watched me blow out the candles through the viewfinder of the camcorder; I knew he was saving that moment for us to view in the future.

With the glasses, you can be even more present in the moment than my dad was as you don’t have to hold a giant device in your hands and your eyes are visible to everyone around you. Would it be weird and awful if the dad spent every waking moment in the glasses? Yes. But I think people freaking out about that segment have to come back to reality and remember this isn’t very far from what we already do - whether it’s a camcorder, a phone, or some space goggles, we regularly hold devices between us and other people to record special events.
 