What does history tell you about technology and how its physical size changes over time? Generally would you agree it gets smaller and more refined over time?
sure, but the "briefcase" mobile phones you mentioned were literally from DECADES ago

and iPhones actually keep getting bigger, presumably to hold the stupid camera I guess? I'm still holding out hope for another "mini" one day
 
But again, the question I asked (amongst others) was “what problem with current interfaces does this solve?” If the eye tracking and hand gesture system is better, someone should be able to describe exactly how, right? Explain what specific goals can only be accomplished with the spatial computing system?

If this is the “future of computing” it should have at least one significant advantage over the current system.
It depends on your perspective; you could always argue there are no problems to solve with the current interfaces, as far as input goes. It’s been roughly the same for years, until touch came about.

However, most would agree that the interaction layer we focus on with computers is visual. The problem we have visually is screen space, especially on personal devices such as watches and phones.

The AVP addresses this limitation with the concept of any size screen I want, anywhere.

So it’s not a better computer, it’s a different computer.
 
It’s been roughly the same for years, until touch came about.

the only problem touch solved was fitting everything in your hand/pocket

if I'm at my desk I'll always reach for the keyboard rather than the phone

So it’s not a better computer, it’s a different computer.

again, it's not a different computer, it's a different display

edit: not even a different display, but rather a different display mount
 
It depends on your perspective; you could always argue there are no problems to solve with the current interfaces, as far as input goes. It’s been roughly the same for years, until touch came about.

However, most would agree that the interaction layer we focus on with computers is visual. The problem we have visually is screen space, especially on personal devices such as watches and phones.

The AVP addresses this limitation with the concept of any size screen I want, anywhere.

So it’s not a better computer, it’s a different computer.

Current GUIs are all from the same base DNA. The innovation was not having to use machine code or Basic to operate the hardware. THAT was a real and significant advancement. So, again. What is the comparable step forward represented by Vision? A virtual screen isn’t it. Applications that don’t rely on a screen at all seem like the more credible path.
 
Shocker. This is not a product for the masses - it's a product for people with $3500 burning a hole in their pocket because they already own everything else which costs that much.
It is known as having "more... money... than... brains". I simply won't ever own one (at ANY price) until a retina scan is NOT a requirement to using it (assuming I ever could, as my eyes don't work in stereo). I will not give up my biometrics! I don't use TouchID on my iPhone SE. I won't use FaceID, if my next iPhone doesn't have a home button. I simply will not. Require it and I'm gone. Period!
 
It is known as having "more... money... than... brains". I simply won't ever own one (at ANY price) until a retina scan is NOT a requirement to using it (assuming I ever could, as my eyes don't work in stereo). I will not give up my biometrics! I don't use TouchID on my iPhone SE. I won't use FaceID, if my next iPhone doesn't have a home button. I simply will not. Require it and I'm gone. Period!

I’m the exact opposite and it hardly makes me brainless. I won’t go back to a device that can’t quickly scan my face or finger and open up without requiring me to do anything other than look at it or touch it. I don’t have any desire to enter a complex passcode on a regular basis.

If there’s a large enough market somebody somewhere will make a device that meets your requirements even if it’s not Apple.
 
I’m the exact opposite and it hardly makes me brainless. I won’t go back to a device that can’t quickly scan my face or finger and open up without requiring me to do anything other than look at it or touch it. I don’t have any desire to enter a complex passcode on a regular basis.

If there’s a large enough market somebody somewhere will make a device that meets your requirements even if it’s not Apple.

And there you have the two extremes regarding this topic. On one hand the convenience of biometric identification. On the other the very real danger of sharing your biometric data with ANYONE.

Whether or not there’s a market for biometric harvesting isn’t at issue, clearly there is. The only question is whether you personally consent to have your data harvested or not.
 
But again, the question I asked (amongst others) was “what problem with current interfaces does this solve?” If the eye tracking and hand gesture system is better, someone should be able to describe exactly how, right? Explain what specific goals can only be accomplished with the spatial computing system?

If this is the “future of computing” it should have at least one significant advantage over the current system.

I have a hands-free computer with a display the size of my field of view, and which fits neatly into a small box. I find it hard to believe that people are having a hard time envisioning new use cases for this paradigm, and acting like the keyboard+mouse interface is somehow perfect and capable of handling every computing scenario under the sun.
 
I have a hands-free computer with a display the size of my field of view, and which fits neatly into a small box. I find it hard to believe that people are having a hard time envisioning new use cases for this paradigm, and acting like the keyboard+mouse interface is somehow perfect and capable of handling every computing scenario under the sun.

So what use cases do you envision then?
 
So what use cases do you envision then?

Here’s what comes to mind.

1) Gaming - BG3 is available on the Mac, and extrapolating from my own demo involving spatial video, I think playing it with Mac virtual display should look pretty incredible (assuming you have the Mac to support a widescreen setup).

Myself, just the idea of being able to bring the equivalent of a giant monitor along with you that fits in your backpack is a huge draw right there.

2) Content consumption. This gets a dirty name around here for some reason, but sometimes, part of people’s work involves basically that - consuming content. If you lie on your back with your head resting on a pillow, the weight of the AVP should be lessened, allowing you to watch YouTube or TV+ for extended periods of time. It’s basically a miniaturised IMAX theatre for a single person.

There’s also reading and browsing social media (e.g. tasks you normally do on your iPad). I suspect this may end up being more of an iPad killer (or replacement) than a Mac killer, now that I think about it. I know price is a huge deterrent at the moment, and for someone who uses his iPad a fair amount, there does seem to be a lot of overlap between the two.

3) Distraction-free writing for journaling or reflecting. Turn on an environment to tune out everything that is happening around you. Open your journaling app of choice. I think this will appeal to writers.

I also think that it would be cool to mind map with an app like MindNode, or do brain dumps in a task manager like Things. My problem with doing this on my iPad is that the screen is too small, and using a keyboard and mouse on my iMac felt too impersonal and detached from the content.

4) Photos and videos - this was another of my takeaways from the Apple Store: spatial content, panoramas and 3D really make you feel like you were there at the time it was recorded, and there is no better way of experiencing them.

Granted, this is theorycrafting based on a 30-minute demo at the Apple Store (which felt like 5 minutes), so maybe those tasks may no longer seem as fun or immersive once the novelty wears off, but I will say my initial experience was quite positive overall (except the part about not having any lenses that suit me).
 
Here’s what comes to mind.

1) Gaming - BG3 is available on the Mac, and extrapolating from my own demo involving spatial video, I think playing it with Mac virtual display should look pretty incredible (assuming you have the Mac to support a widescreen setup).

Myself, just the idea of being able to bring the equivalent of a giant monitor along with you that fits in your backpack is a huge draw right there.

2) Content consumption. This gets a dirty name around here for some reason, but sometimes, part of people’s work involves basically that - consuming content. If you lie on your back with your head resting on a pillow, the weight of the AVP should be lessened, allowing you to watch YouTube or TV+ for extended periods of time. It’s basically a miniaturised IMAX theatre for a single person.

There’s also reading and browsing social media (e.g. tasks you normally do on your iPad). I suspect this may end up being more of an iPad killer (or replacement) than a Mac killer, now that I think about it. I know price is a huge deterrent at the moment, and for someone who uses his iPad a fair amount, there does seem to be a lot of overlap between the two.

3) Distraction-free writing for journaling or reflecting. Turn on an environment to tune out everything that is happening around you. Open your journaling app of choice. I think this will appeal to writers.

I also think that it would be cool to mind map with an app like MindNode, or do brain dumps in a task manager like Things. My problem with doing this on my iPad is that the screen is too small, and using a keyboard and mouse on my iMac felt too impersonal and detached from the content.

4) Photos and videos - this was another of my takeaways from the Apple Store: spatial content, panoramas and 3D really make you feel like you were there at the time it was recorded, and there is no better way of experiencing them.

Granted, this is theorycrafting based on a 30-minute demo at the Apple Store (which felt like 5 minutes), so maybe those tasks may no longer seem as fun or immersive once the novelty wears off, but I will say my initial experience was quite positive overall (except the part about not having any lenses that suit me).

None of those are new use cases

They are all things that can be done (arguably better) with different tools

……

I literally laughed out loud when you mentioned gaming. The most powerful Apple silicon can’t even keep pace with two-year-old AMD GPUs

Most of Apple’s idea of gaming seems to be porting mobile games to the Mac/Apple TV

Not to mention VR gaming is generally not a great experience, which is why it has never really taken off
 
also think that it would be cool to mind map with an app like MindNode, or do brain dumps in a task manager like Things. My problem with doing this on my iPad is that the screen is too small, and using a keyboard and mouse on my iMac felt too impersonal and detached from the content.

So I can’t imagine a more efficient way of getting information from my brain to an app like Things than typing it out on a physical keyboard. A touch screen is much worse, not because it’s too small but because typing on a screen is inefficient compared to a keyboard

So how are you inputting this into the goggles in a way that improves anything?
 
None of those are new use cases

They are all things that can be done (arguably better) with different tools
Such as?

So I can’t imagine a more efficient way of getting information from my brain to an app like Things than typing it out on a physical keyboard. A touch screen is much worse, not because it’s too small but because typing on a screen is inefficient compared to a keyboard

So how are you inputting this into the goggles in a way that improves anything?

See, that’s the thing. I can acknowledge that maybe for some tasks, the PC wins out in terms of sheer brutal efficiency, and I still love performing them on my iPad regardless.

For example, I record screencasts on my iPad (Notability), edit them in LumaFusion (also iPad), upload them to YouTube (iPad), and post a notification in Google Classroom (iPad).

With note-taking, I have this idea of the reading being open on one side and MindNode on the other, hammering out the points first, then using my hands to move them around as I reposition and link the key ideas together.

Again, is it slower? Perhaps. Is it less efficient? Maybe. Is it more fun? Possibly, compared to being tethered to a desk, especially since I can potentially walk around the house while doing this.

Not everything is about specs and raw numbers.
 

I’m not going to go through each scenario one by one, but I don’t see any where a computer, phone, television screen, or pen and paper isn’t a better tool for the job

And none of those require partitioning yourself off from the outside world
 
Ok

I’m gonna go out on a limb and say that waving your hands around in the air with a headset strapped to your head is not the future.

I think what’s happening here is everybody has just seen so much sci-fi that they think that’s gotta be the reality we are heading towards

I would even wager that whatever might be the future interaction method/device is something we haven’t even seen yet
I'd say the discussion around the iPad suffers from the same problem. There is this sci-fi delusion that the tablet form factor is supposed to somehow take over and dominate all computing, and when it hasn't managed to replace a single Mac in 14 years, people are incensed that it hasn't lived up to their imagined potential.
 
But again, the question I asked (amongst others) was “what problem with current interfaces does this solve?” If the eye tracking and hand gesture system is better, someone should be able to describe exactly how, right? Explain what specific goals can only be accomplished with the spatial computing system?

If this is the “future of computing” it should have at least one significant advantage over the current system.
Eye tracking and hand gestures are most definitely not "better" than anything. They are simply necessary. Just like wearing the giant uncomfortable headset is necessary. All of these are meant to be things that you tolerate in order to enjoy the software experience, which is supposed to be so good that it makes these things acceptable tradeoffs. Unfortunately, the software experience isn't worth any of that.
 
At $6,000 in Australia at its cheapest, more with prescription lenses and accessories, it is just an expensive entertainment device for richer people, and that's not even mentioning the lack of usefulness of the device.
 


Apple's Vision Pro spatial computing headset has yet to sell 100,000 units in the quarter since it launched in the U.S. in February, according to market tracker IDC (via Bloomberg).


The device is projected to see a 75% drop in domestic sales in the current quarter, but the launch of Vision Pro internationally this month is expected to offset that decline.

IDC believes that a more affordable version at roughly half the price of the current $3,500 unit should rekindle interest in 2025, but sales are not expected to rise significantly over the coming year. Apple Vision Pro is set to launch in the United Kingdom, Canada, Australia, France, and Germany on Friday, July 12.

Reviews for the device have been mixed overall. For the most part, users have been impressed with the hardware and the technology introduced by the Vision Pro, but there are questions about the actual function of the device, the intuitiveness of the gesture-based control, the weight and comfort, and VR in general. Users have also been critical of the lack of content for the device.

"The Vision Pro's success, regardless of its price, will ultimately depend on the available content," said Francisco Jeronimo, vice president at IDC. "As Apple expands the product to international markets, it's crucial that local content is also made available."

Apple is expected to produce fewer than 400,000 Vision Pro headsets in 2024 due to the complexity of manufacturing, according to analyst Ming-Chi Kuo. However, Apple is said to be already working on a new version of the ‌Apple Vision Pro‌ for 2025 that will be priced more affordably. IDC's Jeronimo predicts that will more than double sales when it arrives in the latter half of 2025.

Article Link: Apple Vision Pro Unlikely to Hit 500,000 Sales This Year, Says IDC
IDC is a know-nothing. They don't even consider an iPad to be a personal computer.
 
For you. But as the massive demand for tickets to live sports events proves, many if not most sports fans would prefer to attend a live event, not experience it within a plastic isolation helmet.
The even more massive demand for sports on TV shows that people want to be at home.

Also, it could entirely be possible for Vision Pro sports fans to share immersive stadium environments even if they’re across the country…
 
they apparently produced 750,000 units

and to reiterate the point of the person you were responding to, Sony didn't start producing displays for them on January 1

............

I don't know why I'm arguing about supply chains, possibly the most boring subject on earth. Which is why Apple is such a boring company now.
To clarify, who is “they” and what is the source?

If you’re saying Sony, then because it requires 2 units per AVP…what are we going back and forth about here?

375,000 AVP maximum produced using your own numbers. In a thread about how the IDC is claiming 500K units won’t be sold. I’d classify that as a big “no duh”. You can’t sell 125K more units than have been produced.

So why the 8+ pages of hot takes about the product when the entire damn premise of the thread is based on an absolutely dumb thing to “report”?
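For what it’s worth, the panel arithmetic here is easy to sanity-check. The figures below are just the ones floated in this thread (a reported ~750,000 panel order, two micro-OLED displays per headset), not confirmed numbers:

```python
# Back-of-the-envelope check using the (unconfirmed) figures from this thread.
panels_ordered = 750_000    # reported display panel order -- assumption from the thread
panels_per_headset = 2      # each Vision Pro uses two micro-OLED displays

# Maximum headsets that could be built from that panel supply.
max_headsets = panels_ordered // panels_per_headset
print(max_headsets)  # 375000 -- below the 500,000-unit sales bar IDC cites
```

On those assumed numbers, sales could never have reached 500,000 units regardless of demand.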
 
To clarify, who is “they” and what is the source?

If you’re saying Sony, then because it requires 2 units per AVP…what are we going back and forth about here?

375,000 AVP maximum produced using your own numbers. In a thread about how the IDC is claiming 500K units won’t be sold. I’d classify that as a big “no duh”. You can’t sell 125K more units than have been produced.

So why the 8+ pages of hot takes about the product when the entire damn premise of the thread is based on an absolutely dumb thing to “report”?

Apple supposedly originally intended to have 750,000 headsets produced; they apparently later reduced their order to 400,000 headsets

I don't recall where I read it, but it shouldn't be too hard to find if you search for it. It was a couple weeks ago.

It may have been Ars or Engadget or something like that.
 
How can IDC say that Apple is unlikely to hit 500,000 sales, when Apple is unlikely to produce more than 400,000 units this year?! Doesn't make sense!

Well... if they know Apple is not going to produce more than 400,000 units, then it does make sense!
This would be the reason why they are so confident their prediction will happen.
Agree with Iceman. IDC is telling no lies. They just let the reader assume that anything under 500,000 (a number they pulled out of thin air) is a disappointment - when really, no one knows what Apple's sales target was.

For all we know, Apple was shooting for 350,000 sales - with the remaining 50,000 allocated to AppleCare swaps, loaners, demos, etc?

(I'm making all these numbers up, btw.)
 