Honestly, the only feature worth adding is macOS dual boot. So much of what you can already do on your phone is absurdly byzantine and unergonomic on a tiny touchscreen; I have no desire, and no one should have a desire, to do serious video editing or whatever on a, what, 6″ screen. But if the hardware supports it, sell a phone with two Thunderbolt ports that boots macOS when a display and input devices are connected. Make that the pro model, sell it for $1500+ because it replaces your MacBook, offer a super overpriced keyboard/trackpad/display/extra-battery dock that's essentially a MacBook without the compute, and you can capture the income of selling an iPhone plus a MacBook Air with only one A-series chip lol.

In all seriousness, iOS, iPadOS, macOS, and visionOS being strictly separate branches of the same OS just to silo hardware sales is getting absurd. One high-spec SoC that switches which front end of the Apple OS it displays based on the connected I/O is the sane solution, except that we live in hell, where selling more widgets is the only meaningful metric, wastefulness and duplication be damned. I have six devices running Apple silicon that I interact with every day: an Apple TV that basically only exists to be a home hub, a Mac mini that mostly exists to be a content server and media-ingest solution, then one device for each of the four OS flavors. From a practical standpoint there's zero reason I need more than two, or at most three, SoCs for these functions. (The Apple TV and Mac mini should merge, of course, and then the brain of my phone should be able to run all four OSes depending on whether it's docked to a 6″, 11″, 16″, or head-mounted display.)
 
Let me just say that while Apple's software has declined in recent years and has rarely been updated with useful new features, it is still miles better than Microsoft Office.
 
Even the M5 isn't efficient enough to make the ideal Vision Pro a reality. Think about that.
As a nearly daily Vision Pro user, the hardware issue with it isn't that the M2 version is underpowered (I haven't tried the M5); it's that the damn thing is tethered anyway but insists on hanging the compute off the front of your face, along with a bunch of glass and aluminum and a totally frivolous front display. I'd prefer a headset that ran off an A-series chip unless directly tethered to a high-spec Mac. Having used many other headsets, I can say that nothing about putting up a couple of simple floating-window apps with usable pass-through video requires more than phone hardware. And requiring a Pro- or really a Max-level tethered chip when your device is already tethered is no more cumbersome. And that's before we get to the reality that, even 20 months in, 80% of what I do with the Vision Pro is use it as a big screen for my MacBook while traveling, and the other 20% is watching a 3D movie I missed at the theater. And I say that as a stereo-photography expert and content creator: there's no software for visionOS to do serious native content creation, and no I/O that would make that workflow feasible anyway. It's only useful as a mirror or extension of content from a Mac.

In short, the issue isn't that the M5 can't run an "ideal" headset; it's that the concept of a headset running off head-mounted compute is fatally flawed. There will always be a use or need for more compute, and no head-mounted compute device will ever match what can be run on a laptop, desktop, or even a data center or supercomputer, depending on the application. The proper way to design a high-end headset is the best displays and sensors on the market with the absolute minimum of compute for basic functions: untethered pass-through, content consumption, and I/O between the onboard sensors and the main compute platform. This isn't a short-term problem. Even in a theoretical world where a 100 g compute-and-power module with all-day battery runs circles around a modern high-end workstation, there will be an order of magnitude or more compute available off headset, and the higher-end use cases will rely on that compute power.
 
Even the M5 isn't efficient enough to make the ideal Vision Pro a reality. Think about that.
?? More powerful is always better, but the AVP is already very cool from a hardware standpoint. Obviously it's still nascent tech, but IMO the limitations with the AVP are on the software side. The hardware was superb even at v1.
 
I’m not asking for desktop software. I understand your point, but iOS still doesn’t have any apps that can fully utilize all this power. For now, it’s just wasted performance, in my opinion.

Year after year, I don’t really see any difference in iOS fluidity compared to the 14 Pro Max…
This is a bit of a pointless statement. What made the iPhone 3G incredible was its fluidity compared to other products.

The OS is always going to prioritise the interface.

Please share what you're expecting. Describe what an app that fully utilises all this power would look like. Clearly not Office productivity. What does it look like?
 
So why put a MacBook chip in an iPhone if iOS is still so limited? Just for the longevity of the phone?

Because right now, we have a rocket… but it’s stuck on the ground.

What would you have wanted to do with all this power?

Apple could open the door to real “pro” apps — Final Cut, Logic, Xcode… — but for now, nothing is happening.

Maybe they could focus now on pro-only apps to unlock more features and actually make the most of the smartphone’s power.

Extra camera features are nice, sure, but we're still just circling around the same stuff.
What's wrong with "just for the longevity of the phone," though? I still use my M1 iMac and MacBook Air because the speed has held up over time, and I want my phone to be capable of running everything I throw at it for 5 years too.

Software and websites will for sure become more bloated over that time, so if it wasn’t overpowered on release it would just reduce the useful life of the phone 🤷🏼‍♀️

That's not to say I wouldn't love a desktop mode for the iPhone, similar to Samsung DeX. How cool would it be to just cast your phone to a monitor or TV and turn it into a desktop-like experience? But alas, it would cut into Mac sales, so it probably won't happen any time soon. But maybe they'll do it for the Fold, if it gets some iPadOS features?
 
If Apple didn't put a fast chip in their phones, people would complain that they aren't keeping up with Samsung. So they make absolutely sure there's no question as to who has the fastest processor, and people complain that the chip is too fast.

People now understand why Jobs never gave a damn about user feedback, right? If I have one complaint about Apple under Cook, it's that they listen to user feedback...
Well, if Apple truly never listened to user feedback, I can't imagine what they would be selling right now. Maybe the iPhone would still have a flush back and mediocre cameras. Maybe iOS would still run a powerful browser but have no App Store. It's hard to say whether feedback has contributed to the iPhone's growth or hindered it, but that ship sailed a long time ago.
 
As a nearly daily Vision Pro user, the hardware issue with it isn't that the M2 version is underpowered (I haven't tried the M5); it's that the damn thing is tethered anyway but insists on hanging the compute off the front of your face, along with a bunch of glass and aluminum and a totally frivolous front display. I'd prefer a headset that ran off an A-series chip unless directly tethered to a high-spec Mac. Having used many other headsets, I can say that nothing about putting up a couple of simple floating-window apps with usable pass-through video requires more than phone hardware. And requiring a Pro- or really a Max-level tethered chip when your device is already tethered is no more cumbersome. And that's before we get to the reality that, even 20 months in, 80% of what I do with the Vision Pro is use it as a big screen for my MacBook while traveling, and the other 20% is watching a 3D movie I missed at the theater. And I say that as a stereo-photography expert and content creator: there's no software for visionOS to do serious native content creation, and no I/O that would make that workflow feasible anyway. It's only useful as a mirror or extension of content from a Mac.

In short, the issue isn't that the M5 can't run an "ideal" headset; it's that the concept of a headset running off head-mounted compute is fatally flawed. There will always be a use or need for more compute, and no head-mounted compute device will ever match what can be run on a laptop, desktop, or even a data center or supercomputer, depending on the application. The proper way to design a high-end headset is the best displays and sensors on the market with the absolute minimum of compute for basic functions: untethered pass-through, content consumption, and I/O between the onboard sensors and the main compute platform. This isn't a short-term problem. Even in a theoretical world where a 100 g compute-and-power module with all-day battery runs circles around a modern high-end workstation, there will be an order of magnitude or more compute available off headset, and the higher-end use cases will rely on that compute power.
…You’re saying a whole lot after saying you haven’t even tried the M5 Vision Pro.

The M5 unsurprisingly allows substantial upgrades in computation and decoding (especially ray tracing, rendering, media playback, and AI, which very much are a big deal for spatial computing and content).

You also seem to ignore the dedicated spatial-computing hardware Apple has already implemented alongside the main SoC (the R1 chip). Standalone headsets have merit; that's not a fatal flaw, even alongside the advancements in non-standalone headsets and traditional computing.

It's inherently more expensive and challenging to build standalone spatial-computing hardware, and it advances more slowly than other computing hardware, but the headset form factor will remain the important one to advance: high-end standalone spatial-computing advancements will debut, and be maximized, there before trickling down to glasses.

The relationship between spatial-computing hardware and analogous traditional computing devices is the following:
Non-standalone headsets (desktops) -> standalone headsets (laptops/tablets) -> contacts/glasses (phones)
 
?? More powerful is always better, but the AVP is already very cool from a hardware standpoint. Obviously it's still nascent tech, but IMO the limitations with the AVP are on the software side. The hardware was superb even at v1.
Vehemently but respectfully disagree: the M2 was an odd choice to many devs and creative engineers interested in spatial computing, lacking even ray tracing, mesh shading, and hardware-accelerated AV1 video playback.

That's all important for maximizing spatial-computing use cases, stretching an unavoidably limited battery, and more sensibly running the iPad apps that are invaluable in a spatial environment.

Those odd omissions have finally been fixed with the M5, and the odd M2 choice was only bearable for some because the Vision Pro was the only serious option for a prosumer standalone headset.

It added unnecessary cognitive noise to the buy-or-not decision that the V1 didn't need.
 
…You’re saying a whole lot after saying you haven’t even tried the M5 Vision Pro.

The M5 unsurprisingly allows substantial upgrades in computation and decoding (especially ray tracing, rendering, media playback, and AI, which very much are a big deal for spatial computing and content).

You also seem to ignore the dedicated spatial-computing hardware Apple has already implemented alongside the main SoC (the R1 chip). Standalone headsets have merit; that's not a fatal flaw, even alongside the advancements in non-standalone headsets and traditional computing.

It's inherently more expensive and challenging to build standalone spatial-computing hardware, and it advances more slowly than other computing hardware, but the headset form factor will remain the important one to advance: high-end standalone spatial-computing advancements will debut, and be maximized, there before trickling down to glasses.

The relationship between spatial-computing hardware and analogous traditional computing devices is the following:
Non-standalone headsets (desktops) -> standalone headsets (laptops/tablets) -> contacts/glasses (phones)
I don't care what the M5 adds, because my point was and remains that there will never be a world with more compute available in a headset than outside of it, and as long as the Vision Pro is tethered anyway, there's zero reason to put the bulk and weight of the compute on the headset rather than at the end of the tether. That's it.

You seem to be arguing that the AVP is a standalone headset. But the device needs a tether to function; it's a tethered headset with all the compute drawbacks of a standalone one. Just because Apple was so boneheaded that they made the heaviest headset on the market while also keeping it reliant on a wired tether that carries only power, not compute or I/O, doesn't make it a standalone headset. Try using your AVP without the cord; it's literally a paperweight.

I don't want to turn this into an AVP-bashing thread. I use it a lot and am a huge believer in HMDs as the near future of computing. So I'm just going to leave it with this: even in this seriously compromised state, the M2 AVP has more than enough hardware resources to be amazingly useful, but it's limited by truly atrocious software and interface design choices. If it weren't the highest-resolution and most seamless way to extend my Mac's display on the go, I would have walked out of the in-store demo of my pre-ordered unit empty-handed and never considered it again, 100% because of the software. And that holds true even with the latest visionOS.
 