Honestly, the only feature worth adding is macOS dual boot. You can already do so much on your phone, but it's absurdly Byzantine and unergonomic on a tiny touchscreen; I have no desire, and no one should have a desire, to do serious video editing or whatever on a, what, 6” screen. But if the hardware supports it, sell a phone with two Thunderbolt ports that boots macOS when a display and input devices are connected. Make that the pro model, sell it for like $1500+ because it replaces your MacBook, and offer a super overpriced keyboard/trackpad/display/extra-battery dock that's essentially a MacBook without the compute. You capture the income of selling an iPhone plus a MacBook Air all with only one A-series chip lol.

In all seriousness, iOS, iPadOS, macOS, and visionOS being strictly separate branches of the same OS just to silo hardware sales is getting absurd. One high-spec SoC that switches which front end of the Apple OS it displays based on the connected I/O is the sane solution, except that we live in hell where selling more widgets is the only meaningful metric, wastefulness and duplication be damned. I have six devices running Apple silicon that I interact with every day: an Apple TV that basically only exists to be a home hub, a Mac mini that mostly exists to be a content server and media-ingest solution, and then one device for each of the four OS flavors. From a practical standpoint there's zero reason I need more than two or at most three SoCs for these functions. (The TV and Mac mini should merge, of course; then the brain of my phone should be able to run all four OSes depending on whether it's docked to a 6”, 11”, 16”, or head-mounted display.)
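To be clear about how mundane the detection side of this would be, here's a minimal Swift sketch. The FrontEnd enum and the switchFrontEnd hook are hypothetical (the unified shell I'm describing doesn't exist); the display-connect notifications are real UIKit:

```swift
import UIKit

// Hypothetical: which "face" of a unified Apple OS to present.
enum FrontEnd {
    case phone      // iOS-style UI on the built-in panel
    case desktop    // macOS-style UI when display + input devices are attached
}

final class FrontEndSelector {
    // Hypothetical hook into the (nonexistent) unified OS shell.
    var switchFrontEnd: (FrontEnd) -> Void = { mode in
        print("Switching UI front end to \(mode)")
    }
    private var tokens: [NSObjectProtocol] = []

    init() {
        // UIScreen.didConnectNotification is real UIKit; it fires when an
        // external display attaches (newer apps would use scene lifecycle).
        tokens.append(NotificationCenter.default.addObserver(
            forName: UIScreen.didConnectNotification,
            object: nil, queue: .main) { [weak self] _ in
                self?.switchFrontEnd(.desktop)
        })
        tokens.append(NotificationCenter.default.addObserver(
            forName: UIScreen.didDisconnectNotification,
            object: nil, queue: .main) { [weak self] _ in
                self?.switchFrontEnd(.phone)
        })
    }
}
```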
 
Let me just say that while Apple's software has declined in recent years and has rarely been updated with useful new features, it is still miles better than Microsoft Office.
 
Even the M5 is not efficient enough to make the ideal Vision Pro a reality; think about that.
As a nearly daily Vision Pro user, the hardware issue with it isn’t that the M2 version is underpowered (I haven’t tried the M5). It’s that the damn thing is tethered anyway but insists on hanging the compute off the front of your face, along with a bunch of glass and aluminum and a totally frivolous front display. I’d prefer a headset that ran off an A-series chip unless directly tethered to a high-spec Mac. Having used many other headsets, nothing about putting up a couple of simple floating-window apps and usable pass-through video requires more than phone hardware. And requiring a Pro- or really Max-level tethered CPU, when your device is already tethered, is no more cumbersome. That’s before we get to the reality that even 20 months in, 80% of what I do with the Vision Pro is just use it as a big screen for my MacBook while traveling, and the other 20% is watching a 3D movie I missed at the theater. And I say that as a stereo photography expert and content creator: there’s no software for visionOS to do serious native content creation, and no I/O that would make that workflow feasible anyway. It’s only useful as a mirror or extension of content from a Mac.

In short, the issue isn’t that the M5 can’t run an “ideal” headset; it’s that the concept of a headset running off head-mounted compute is fatally flawed. There will always be a use or need for more compute, and no head-mounted compute device will ever match what can be run on a laptop, desktop, or even a data center/supercomputer, depending on the application. The proper way to design a high-end headset is the best displays and sensors on the market with the absolute minimum amount of compute to do basic functions: untethered pass-through, content consumption, and handling I/O from the onboard sensors and the main compute platform. This isn’t a short-term problem. Even in a theoretical world where a 100 g compute-and-power module runs circles around a modern high-end workstation with all-day battery, there will be an order of magnitude or more compute available off-headset, and the higher-end use cases will rely on and use that compute power.
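To make the split concrete, here's a minimal sketch. Every type in it is hypothetical; it just illustrates the "minimum onboard compute, heavy work at the other end of the tether" policy I mean:

```swift
// Hypothetical sketch of the compute split I'm arguing for: the headset
// keeps only pass-through, compositing, and sensor I/O; anything heavy
// goes to whatever sits at the other end of the tether.

enum Workload {
    case passThrough        // camera feed -> displays, latency-critical
    case compositeWindows   // a couple of floating 2D apps
    case sensorIO           // IMU, eye tracking, cameras
    case heavyRender        // ray tracing, content creation, ML
}

struct TetheredHost {       // a Mac, desktop, or datacenter at tether's end
    let computeScore: Int   // arbitrary units; always beats onboard
}

struct Headset {
    let onboardScore = 10           // A-series-class: small on purpose
    var host: TetheredHost? = nil   // nil when untethered

    func place(_ work: Workload) -> String {
        switch work {
        case .passThrough, .compositeWindows, .sensorIO:
            return "onboard"        // must work untethered; phone-class is enough
        case .heavyRender:
            // There will always be more compute off-headset than on it.
            return host != nil ? "tethered host" : "unavailable"
        }
    }
}

let headset = Headset(host: TetheredHost(computeScore: 1_000))
print(headset.place(.passThrough))  // onboard
print(headset.place(.heavyRender))  // tethered host
```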
 
Even the M5 is not efficient enough to make the ideal Vision Pro a reality; think about that.
?? More powerful is always better, but AVP is very cool already from a hardware standpoint. Obviously it is still nascent tech, but IMO the limitations with AVP are on the software side. Hardware was superb even at v1.
 
I’m not asking for desktop software. I understand your point, but iOS still doesn’t have any apps that can fully utilize all this power. For now, it’s just wasted performance, in my opinion.

Year after year, I don’t really see any difference in iOS fluidity compared to the 14 Pro Max…
This is a bit of a pointless statement. What made the iPhone 3G incredible was its fluidity compared to other products.

The OS is always going to prioritise the interface.

Please share what you're expecting. Describe what an app that fully utilises all this power would look like. Clearly not Office productivity. What does it look like?
 
So why put a MacBook chip in an iPhone if iOS is still so limited? Just for the longevity of the phone?

Because right now, we have a rocket… but it’s stuck on the ground.

What would you have wanted to do with all this power?

Apple could open the door to real “pro” apps — Final Cut, Logic, Xcode… — but for now, nothing is happening.

Maybe they could focus now on pro-only apps to unlock more features and actually make the most of the smartphone’s power.

Extra camera features are nice, sure, but we’re still just circling around the same stuff.
What’s wrong with “just for longevity of the phone” though? I still use my M1 iMac and MacBook Air because the speed has held up over time, and I want my phone capable of running everything I throw at it for 5 years too.

Software and websites will for sure become more bloated over that time, so if it wasn’t overpowered on release it would just reduce the useful life of the phone 🤷🏼‍♀️

That’s not to say I wouldn’t love a desktop mode for iPhone though, similar to Samsung DeX. How cool would it be to just cast your phone to a screen or TV and turn it into a desktop-like experience? But alas, it would cut into Mac sales, so it probably won’t happen any time soon. Maybe they’ll do it for the Fold, if it gets some iPadOS features?
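For what it's worth, the "cast to a screen" half already has real plumbing in iOS. Here's a minimal Swift sketch; the external-display scene role and delegate hooks are real UIKit, while DesktopModeViewController is the hypothetical DeX-style part:

```swift
import UIKit

// Hypothetical DeX-style shell; everything else below is real UIKit API.
final class DesktopModeViewController: UIViewController {}

class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     configurationForConnecting session: UISceneSession,
                     options: UIScene.ConnectionOptions) -> UISceneConfiguration {
        if session.role == .windowExternalDisplayNonInteractive {
            // A real monitor/TV just connected (USB-C or AirPlay).
            let config = UISceneConfiguration(name: "External", sessionRole: session.role)
            config.delegateClass = ExternalSceneDelegate.self
            return config
        }
        return UISceneConfiguration(name: "Default", sessionRole: session.role)
    }
}

class ExternalSceneDelegate: NSObject, UIWindowSceneDelegate {
    var window: UIWindow?
    func scene(_ scene: UIScene, willConnectTo session: UISceneSession,
               options connectionOptions: UIScene.ConnectionOptions) {
        guard let windowScene = scene as? UIWindowScene else { return }
        // Hang the desktop-like UI off the external display's scene.
        window = UIWindow(windowScene: windowScene)
        window?.rootViewController = DesktopModeViewController()
        window?.isHidden = false
    }
}
```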
 
If Apple didn't put a fast chip in their phone, people would complain that they aren't keeping up with Samsung. So they make absolutely sure there's no question as to who has the fastest processor, and then people complain that the chip is too fast.

People now understand why Jobs never gave a damn about user feedback, right? If I have one complaint about Apple under Cook, it's that they listen to user feedback...
Well, assuming Apple never listens to user feedback, I can’t imagine what Apple would be selling right now if they literally didn’t pay attention to how users use things. Maybe the iPhone would still be flush, with mediocre cameras. Maybe iOS would still be running a powerful browser but no App Store. It is hard to say whether feedback contributes to iPhone growth or hinders it, but that ship sailed a long time ago.
 
As a nearly daily Vision Pro user, the hardware issue with it isn’t that the M2 version is underpowered (I haven’t tried the M5). It’s that the damn thing is tethered anyway but insists on hanging the compute off the front of your face, along with a bunch of glass and aluminum and a totally frivolous front display. I’d prefer a headset that ran off an A-series chip unless directly tethered to a high-spec Mac. Having used many other headsets, nothing about putting up a couple of simple floating-window apps and usable pass-through video requires more than phone hardware. And requiring a Pro- or really Max-level tethered CPU, when your device is already tethered, is no more cumbersome. That’s before we get to the reality that even 20 months in, 80% of what I do with the Vision Pro is just use it as a big screen for my MacBook while traveling, and the other 20% is watching a 3D movie I missed at the theater. And I say that as a stereo photography expert and content creator: there’s no software for visionOS to do serious native content creation, and no I/O that would make that workflow feasible anyway. It’s only useful as a mirror or extension of content from a Mac.

In short, the issue isn’t that the M5 can’t run an “ideal” headset; it’s that the concept of a headset running off head-mounted compute is fatally flawed. There will always be a use or need for more compute, and no head-mounted compute device will ever match what can be run on a laptop, desktop, or even a data center/supercomputer, depending on the application. The proper way to design a high-end headset is the best displays and sensors on the market with the absolute minimum amount of compute to do basic functions: untethered pass-through, content consumption, and handling I/O from the onboard sensors and the main compute platform. This isn’t a short-term problem. Even in a theoretical world where a 100 g compute-and-power module runs circles around a modern high-end workstation with all-day battery, there will be an order of magnitude or more compute available off-headset, and the higher-end use cases will rely on and use that compute power.
…You’re saying a whole lot after saying you haven’t even tried the M5 Vision Pro.

The M5 unsurprisingly brings substantial upgrades in computation and decoding (especially ray tracing, rendering, media playback, and AI work, all of which very much matter for spatial computing and content).

You also seem to ignore the dedicated hardware for spatial computing that Apple has already implemented alongside the main SoC (the R1 chip). Standalone headsets have merit, not a fatal flaw, alongside the advancement of non-standalone headsets and traditional computing.

It’s inherently expensive and more challenging to build standalone spatial computing hardware, and it advances at a slower rate than other computing hardware, but the headset form factor will consistently be the important one to advance: high-end standalone spatial computing advancements will debut and be maximized there before they trickle down to glasses.

The relationship between spatial computing hardware and its traditional computing analogues is the following:
Non-standalone headsets (desktops) -> standalone headsets (laptops/tablets) -> contacts/glasses (phones)
 
?? More powerful is always better, but AVP is very cool already from a hardware standpoint. Obviously it is still nascent tech, but IMO the limitations with AVP are on the software side. Hardware was superb even at v1.
Vehemently but respectfully disagree: the M2 was an odd choice to many devs and creative engineers interested in spatial computing, since it lacks ray tracing, mesh shading, and hardware-accelerated AV1 video playback.

That’s all important stuff for maximizing spatial computing use cases, stretching an unavoidably limited battery, and more sensibly running the iPad apps that it so invaluably runs spatially.

Such odd omissions have finally been fixed with the M5, and the odd M2 choice was only bearable for some because the Vision Pro was the only serious option for prosumer standalone headsets.

It added unnecessary cognitive noise to the decision of whether to get one, noise a v1 didn’t need.
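For the curious, those gaps are visible from code. Here's a minimal Swift probe using real Metal and VideoToolbox calls; treating GPU family apple9 as the hardware ray-tracing line and the Metal 3 feature set as a mesh-shading proxy is my assumption:

```swift
import Metal
import VideoToolbox

// Probe the capabilities in question. The API calls are real; the mapping of
// features to GPU families in the comments is my reading, not Apple's docs.
if let device = MTLCreateSystemDefaultDevice() {
    // The ray-tracing API itself (can be true even without dedicated hardware):
    print("RT API:", device.supportsRaytracing)
    // Dedicated ray-tracing hardware arrived with the apple9 family (A17 Pro/M3+):
    print("HW ray tracing:", device.supportsFamily(.apple9))
    // Mesh shading: checking the Metal 3 feature set is an approximation.
    print("Metal 3 (mesh shading):", device.supportsFamily(.metal3))
}
// Hardware AV1 decode: absent on M2-class silicon, present on A17 Pro/M3 and newer.
print("AV1 hw decode:", VTIsHardwareDecodeSupported(kCMVideoCodecType_AV1))
```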
 
…You’re saying a whole lot after saying you haven’t even tried the M5 Vision Pro.

The M5 unsurprisingly brings substantial upgrades in computation and decoding (especially ray tracing, rendering, media playback, and AI work, all of which very much matter for spatial computing and content).

You also seem to ignore the dedicated hardware for spatial computing that Apple has already implemented alongside the main SoC (the R1 chip). Standalone headsets have merit, not a fatal flaw, alongside the advancement of non-standalone headsets and traditional computing.

It’s inherently expensive and more challenging to build standalone spatial computing hardware, and it advances at a slower rate than other computing hardware, but the headset form factor will consistently be the important one to advance: high-end standalone spatial computing advancements will debut and be maximized there before they trickle down to glasses.

The relationship between spatial computing hardware and its traditional computing analogues is the following:
Non-standalone headsets (desktops) -> standalone headsets (laptops/tablets) -> contacts/glasses (phones)
I don't care what the M5 adds, because my point was and remains that there will never be a world where there is more compute available in a headset than outside of it, and as long as the Vision Pro is tethered anyway, there's zero reason to put the bulk and weight of the compute on the headset rather than at the end of the tether. That's it.

You seem to be arguing that the AVP is a standalone headset. But the device needs a tether to function; it's a tethered headset with all the compute drawbacks of a standalone one. Just because Apple was so boneheaded that they made the heaviest headset on the market while also keeping it reliant on a wired tether that only carries power, not compute or I/O resources, doesn't make it a standalone headset. Try using your AVP without a cord; it's literally a paperweight.

I don't want to get into an AVP-bashing thread. I use it a lot and am a huge believer in HMDs as the near future of computing. So I'm just going to leave it with this: even in this seriously compromised state, the M2 AVP has more than enough hardware resources to be amazingly useful, but it is limited by truly atrocious software and interface design choices. If it wasn't the highest-resolution and most seamless way to extend my Mac's display on the go, I would have walked out of the in-store demo of my pre-ordered unit empty-handed and would never have considered it again, 100% because of the software. And that holds true even with the latest visionOS.
 
Vehemently but respectfully disagree: the M2 was an odd choice to many devs and creative engineers interested in spatial computing, since it lacks ray tracing, mesh shading, and hardware-accelerated AV1 video playback.

That’s all important stuff for maximizing spatial computing use cases, stretching an unavoidably limited battery, and more sensibly running the iPad apps that it so invaluably runs spatially.

Such odd omissions have finally been fixed with the M5, and the odd M2 choice was only bearable for some because the Vision Pro was the only serious option for prosumer standalone headsets.

It added unnecessary cognitive noise to the decision of whether to get one, noise a v1 didn’t need.
The M2 was a stupid choice; I was certain at the launch announcement that it would actually ship with the M3. But it was, and still is, more powerful than anyone else's standalone headset, before you even account for other headsets not having a co-processor. Apple in many ways has done less with a good laptop SoC plus a dedicated AR/VR chipset than other companies have done with what's essentially a rather high-end phone SoC.

Also, again with calling the AVP standalone. It has a tether! I have to lug around or find a place for a non-headset module and a cord between the two for it to function. Wishing it was standalone does not make it so. Using your tether only for power, instead of allowing for real I/O and off-headset compute, is an unfathomably bad choice on every front, truly the worst of both worlds, but it doesn't somehow make a device with a tether cord connecting its two elements "standalone".
 
One of the downsides of being so consumer-focused is that consumers get bored, which often leads to change for change’s sake, not because you have legitimate improvements to make.
Yet those users still complain when they update and the device falls apart.

So really, what do you want? You want change for the sake of change whilst maintaining 100% performance and battery life? In reality this isn’t possible. iOS has nineteen years behind it; even if you don’t have all nineteen years of experience (I’ve been using iOS since iOS 4), you’ll know this if you have at least 7 or 8.

Which is why I still see people complaining about iOS 26 and how it runs, and how it kills battery life, and that performance is poor even on the latest iPhones, and if I had known that software quality would be this poor with the redesign I wouldn’t have updated, and that I miss iOS 18, and, and, and…

People also complain about how it looks (!!!!!!!)

To that I always reply: you just updated and didn’t even try to get information? You’re posting on an iPhone forum (which means you at least know these forums exist) and you’re saying you didn’t know?

“Apple should have allowed downgrading for longer”. You had a week, just like that year. That information was easily accessible, how are you complaining?

“I don’t like Liquid Glass”. You have two-hour-long videos on YouTube showing you exactly how everything looks, and you know that you can’t go back. If you care enough about this, why didn’t you watch them?

“Battery life is significantly worse”. People who ran the GM told you that. “It isn’t getting significantly better after all of the betas. PSA, it’s still quite poor”. You didn’t see that?

Or like somebody with a significantly older iPad once said, “I thought that since Apple made it eligible it would run well.” Really? You have years of iOS experience and still expect this? Apple has always pushed too far. You have one of the oldest eligible iPads and you are complaining about how it runs. “But I can’t go back.” You knew that!!!

So consumers want change. They’ll know all of the facts or have them accessible. Yet they’ll still pointlessly expect change and perfection. Even when they know. So I really don’t understand.
 
Chips are getting faster, phones overheat more, iOS gets slower, cycle continues.

I would have preferred 0 animations instead of this stuff they use in 26.

I will just leave this video here, where a 17 Pro Max is compared to a 5, each on the iOS version it shipped with. How can the 5 feel faster?? It has a very slow chip by today’s standards. Maybe modern bloat is just way too much?

 
Maybe OP needs to find more use cases than just editing notes?

I do local AI and play the latest 3D games on my phone.
 
Really depends what you're trying to do; having optional control over both is better than not.

As a nearly daily Vision Pro user, the hardware issue with it isn’t that the M2 version is underpowered (I haven’t tried the M5). It’s that the damn thing is tethered anyway but insists on hanging the compute off the front of your face, along with a bunch of glass and aluminum and a totally frivolous front display. I’d prefer a headset that ran off an A-series chip unless directly tethered to a high-spec Mac. Having used many other headsets, nothing about putting up a couple of simple floating-window apps and usable pass-through video requires more than phone hardware. And requiring a Pro- or really Max-level tethered CPU, when your device is already tethered, is no more cumbersome. That’s before we get to the reality that even 20 months in, 80% of what I do with the Vision Pro is just use it as a big screen for my MacBook while traveling, and the other 20% is watching a 3D movie I missed at the theater. And I say that as a stereo photography expert and content creator: there’s no software for visionOS to do serious native content creation, and no I/O that would make that workflow feasible anyway. It’s only useful as a mirror or extension of content from a Mac.

In short, the issue isn’t that the M5 can’t run an “ideal” headset; it’s that the concept of a headset running off head-mounted compute is fatally flawed. There will always be a use or need for more compute, and no head-mounted compute device will ever match what can be run on a laptop, desktop, or even a data center/supercomputer, depending on the application. The proper way to design a high-end headset is the best displays and sensors on the market with the absolute minimum amount of compute to do basic functions: untethered pass-through, content consumption, and handling I/O from the onboard sensors and the main compute platform. This isn’t a short-term problem. Even in a theoretical world where a 100 g compute-and-power module runs circles around a modern high-end workstation with all-day battery, there will be an order of magnitude or more compute available off-headset, and the higher-end use cases will rely on and use that compute power.

Have you ever used a VP?

?? More powerful is always better, but AVP is very cool already from a hardware standpoint. Obviously it is still nascent tech, but IMO the limitations with AVP are on the software side. Hardware was superb even at v1.
Don’t get me wrong, the Vision Pro is currently my favorite Apple product (despite never owning one) because there is so much potential for it to grow. The technologies themselves are impressive and miles ahead in terms of R&D.

But sorry that I was too visionary. I was imagining a headset powered by a chip so fast and efficient that it could run off a small battery stick with long battery life, so no wires. One that runs so cool it doesn’t need a fan to cool itself. So efficient that the whole thing is light, small, and portable.

I know I’m being delusional, but that’s why these technologies still need to grow. I understand this post is mainly about software limitations, but faster hardware also means efficiency that could enable many things.

By the way, to respond to the original post, I’ll probably just use my iPhone as a PDA. I’ve tried to do ‘pro’ things with the iPhone before, but the screen size makes me want to scratch my head. That’s why the Vision has so much potential to evolve into something much better: you are no longer limited by screen size.
 
Others have already mentioned smartphone longevity and general progress in chip architecture, which benefits all Apple products, not only the iPhone. In terms of direct benefits of the super-powered iPhone, hopefully Apple will implement this patent sooner rather than later:

Also, as AR glasses/VR headsets pick up steam, there will likely be a Vision Air, which tethers to an iPhone and can run pro-grade software. This will be a transitory state until chips become powerful/efficient enough to make standalone AR glasses, building on top of the chip advances that make current iPhones so (excessively) powerful.
 
So why put a MacBook chip in an iPhone if iOS is still so limited? Just for the longevity of the phone?

Because right now, we have a rocket… but it’s stuck on the ground.

What would you have wanted to do with all this power?

Apple could open the door to real “pro” apps — Final Cut, Logic, Xcode… — but for now, nothing is happening.

Maybe they could focus now on pro-only apps to unlock more features and actually make the most of the smartphone’s power.

Extra camera features are nice, sure, but we’re still just circling around the same stuff.
Your questions are actually incredibly relevant, especially given that the iPhone Pro and Pro Max range have proven to be such stellar sales performers.
It is about time we saw some “pro-level apps.” I think the issue year to year is that the new base iPhone basically gets a chip similar to the previous year’s Pro chip, so the “pro apps” would essentially need to come to non-Pro iPhones anyway.
 
The M2 was a stupid choice; I was certain at the launch announcement that it would actually ship with the M3. But it was, and still is, more powerful than anyone else's standalone headset, before you even account for other headsets not having a co-processor. Apple in many ways has done less with a good laptop SoC plus a dedicated AR/VR chipset than other companies have done with what's essentially a rather high-end phone SoC.

Also, again with calling the AVP standalone. It has a tether! I have to lug around or find a place for a non-headset module and a cord between the two for it to function. Wishing it was standalone does not make it so. Using your tether only for power, instead of allowing for real I/O and off-headset compute, is an unfathomably bad choice on every front, truly the worst of both worlds, but it doesn't somehow make a device with a tether cord connecting its two elements "standalone".
It’s absolutely standalone: it is independently operational, without another computer, for its core function.

Its having an external battery pack, something other prosumer standalone headsets by no coincidence also do, doesn’t change that.

This differs from non-standalone headsets like the Valve Index, Bigscreen Beyond, and so on, which need to be connected to a laptop or desktop.
 
I’m not asking for desktop software. I understand your point, but iOS still doesn’t have any apps that can fully utilize all this power. For now, it’s just wasted performance, in my opinion.
It's not as bad as the iPad. That has all the ability to run proper desktop apps, but since you can't sideload, Apple insists on taking a 30% tax on all developers through the App Store, so companies like Ableton or Steinberg are unlikely to release pro-level versions of their apps. Steinberg did a very simplified version of Cubase, but it's a long way off the pro apps. Why is Logic the only decent music production app in the App Store? Because Apple makes it, and it's a subscription version. I can use my Ableton license on any Mac or PC, but there's no way to get an iPad version, even though on paper it would probably run pretty well.

Also, the memory management (swap) is horrible on the iPhone. No matter how little processing power it should need, my 15 Pro can't keep an app open in the background without regularly having to reload it and lose its current state. I know this from long bike adventures where I've had Komoot running in the background for route detours while navigating with my Garmin; it keeps losing all the info, and I have to start from scratch and re-route everything every few times I open it.
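For what it's worth, apps can blunt (not cure) this with scene-based state restoration. A minimal sketch: the restoration hooks are real UIKit, while the route payload is a made-up stand-in for what an app like Komoot would persist:

```swift
import UIKit

// Minimal sketch of scene-based state restoration. The hooks are real UIKit;
// "plannedRoute" is a hypothetical stand-in for the state an app would need
// to survive being evicted in the background.
class SceneDelegate: NSObject, UIWindowSceneDelegate {
    var window: UIWindow?
    var plannedRoute: [String] = []   // hypothetical app state

    // Called before the system may discard the scene; persist what matters.
    func stateRestorationActivity(for scene: UIScene) -> NSUserActivity? {
        let activity = NSUserActivity(activityType: "com.example.route")
        activity.addUserInfoEntries(from: ["route": plannedRoute])
        return activity
    }

    func scene(_ scene: UIScene, willConnectTo session: UISceneSession,
               options connectionOptions: UIScene.ConnectionOptions) {
        // If we were evicted, the activity we saved comes back here.
        if let activity = session.stateRestorationActivity,
           let route = activity.userInfo?["route"] as? [String] {
            plannedRoute = route      // resume instead of re-routing from scratch
        }
    }
}
```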
 
It’s absolutely standalone: it is independently operational, without another computer, for its core function.

Its having an external battery pack, something other prosumer standalone headsets by no coincidence also do, doesn’t change that.

This differs from non-standalone headsets like the Valve Index, Bigscreen Beyond, and so on, which need to be connected to a laptop or desktop.
We fundamentally disagree here. There is a tether on the AVP. Just because the idiots who designed the system only use that tether for power, rather than giving it I/O and compute functions that would improve the device in every way, doesn't magically make the headset a standalone device. IT CANNOT FUNCTION without a tethered second module. It is therefore a TETHERED device. There's not really any ambiguity here.
 
I think they use the iPhone as a testing ground for chip design and manufacturing techniques, and probably also get discounts on M-series chips because of the volume of A-series units they order.

Beyond that, I think gaming, photo and video rendering, and ML are the three reasons they keep upping the chipset, because you're right, so much power for notes is… uhhh, weird haha 😆
This makes the most sense.
 
They should put the A19 Pro in a new Apple TV/tvPod Pro and create a Mac app that lets you use your TV as an entry-level Mac.
 
Chips are getting faster, phones overheat more, iOS gets slower, cycle continues.

I would have preferred 0 animations instead of this stuff they use in 26.

I will just leave this video here, where a 17 Pro Max is compared to a 5, each on the iOS version it shipped with. How can the 5 feel faster?? It has a very slow chip by today’s standards. Maybe modern bloat is just way too much?

So if you prefer
We fundamentally disagree here. There is a tether on the AVP. Just because the idiots who designed the system only use that tether for power, rather than giving it I/O and compute functions that would improve the device in every way, doesn't magically make the headset a standalone device. IT CANNOT FUNCTION without a tethered second module. It is therefore a TETHERED device. There's not really any ambiguity here.
Sorry, but I also disagree with your semantics; though it is just semantics. Yes, the AVP is tethered to its battery, but both the AVP and the battery are fully mobile with the user. Earlier in your commentary, though, you talked about running the compute capability over the tether, which implies the desktop-type thing that others have done: connected to mains and not mobile.

The AVP is a v1 tech demo, and it has since inception been mobile, not sessile. That mobile vs. sessile thing is a very big conceptual difference that should not get lost by arguing about "tether," which does not mean jack s*** at this stage of the evolution of the AVP.

We can argue about where the AVP weight should be placed: with the battery or with the headset. Given the current apparent "pro" type market for the AVP I personally think that the weight in the headset is fine. Doctors, the military, construction workers, etc. have proven their ability to quickly adapt to headsets of weights similar to or less than the v1 AVP. The fact that the consumer crowd here may whine for light weight is moot, because the $3,500 AVP will not soon be a hula hoop type consumer device.

My commentary is not meant to imply that lighter weight usages and/or AVP derivatives will not exist, because they will. My point is that the form factor, weight, tethered battery, etc. are all fine for now and that all those other things can evolve from the current starting point.
 
It’s just a silly OP. It’s more expensive to produce a wider range of chips than a narrower one. Of course Apple will move as many of their products as possible to the current range of chips rather than keep producing older chips on older processes. They don’t update their products to current chipsets because it’s great for the end user; they do it because they want to shut down production of old chipsets. Marketing strategy comes after production decisions, not before.
 