The first, a change to the unified memory architecture, focusing on dedicated memory for GPU operations
Isn’t this a downgrade? Why would you want that?
The two rumors I saw on various websites seem to fit the bill. I'm not saying these are viable, just stuff you see on the interwebs.
The first is a change to the unified memory architecture, focusing on dedicated memory for GPU operations, and the second is more dynamic options when buying a Mac, that is, having the ability to choose 30 GPU cores or 40 without needing a new CPU selection. That may not strictly be an architecture change, but it's a change in how Apple does business.
Extremely valid to what? That AI is in a bubble? That we shouldn't use LLMs?
The GPU uplift here will probably mean that the M5 will have Metal scores around 70,000, and if the Pro/Max also get some variant of SoIC, it could mean an even bigger lift in general. Not unthinkable that an M5 Max will land at about 200,000 in Metal.
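For what it's worth, those guesses are internally consistent with a flat ~40% uplift. A quick back-of-the-envelope sketch (the baseline numbers below are just the placeholders that make ~70,000 and ~200,000 line up with +40%, not measured Geekbench results):

```python
# Rough projection of Metal scores from a flat GPU uplift.
# The baselines are hypothetical placeholders chosen so that +40% lands on the
# ~70,000 / ~200,000 figures speculated above; swap in real Geekbench 6 Metal
# scores if you want a serious estimate.

def project(baseline: float, uplift: float = 0.40) -> float:
    """Scale a baseline GPU score by a fractional uplift (0.40 = +40%)."""
    return baseline * (1.0 + uplift)

assumed_baselines = {"M4": 50_000, "M4 Max": 143_000}  # placeholders, not measurements

for chip, score in assumed_baselines.items():
    print(f"{chip}: {score:,} -> ~{project(score):,.0f} at +40%")
```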
I think it is possible that they are working on stacked SoCs where GPU could be on a separate die (but still part of the SoC), allowing more flexibility. I very much doubt we will see this tech this generation however.
I certainly don’t see them abandoning UMA, that wouldn’t make any sense. What would make sense is larger L2 for the GPU.
I think there is a difference between me saying that LLMs are not hype and that they are extremely useful and will continue to get more useful quickly, vs me saying that LLMs are perfect.

I think if you read my entire post you’ll be able to suss out what I meant, but I’ll reiterate: the criticism is valid, particularly around the lack of self-learning and correcting errors. The technology has a fundamental flaw in its design, and token prediction, no matter how much you bolt onto it, cannot solve this intrinsic problem; e.g., if it makes a mistake and I correct it, it will never change all of its future behavior, even with the frankly rudimentary “memory” capabilities in the latest frontier models. I personally believe future world models with some different development paradigms may be able to solve this, but we’ll see.
If an LLM requires hypervigilance to use, why should I use it?
And if you are an experienced dev, be aware that there are studies showing that programmers who use LLMs are slower and more error-prone than programmers who don't. Yes, these studies covered people who love LLMs and swear by them. There's even evidence that heavy reliance on LLMs decreases cognitive ability over time: instead of exercising your own reasoning, you're training yourself to stop thinking and ask the AI to think for you.
AI bubble hype
There's also a ton of ethical and environmental issues with so-called "generative AI", but I bet you're one of the people who would just handwave such concerns away.
There's also a long-term problem here: if you're using LLMs instead of junior devs, where are you getting the next generation of experienced devs to watch over the LLMs?
Up to 40% increased GPU performance bodes well for M5.
iPhone 17 Pro and iPhone Air Benchmarks Reveal Speed of A19 Pro Chip
The first benchmark results for the A19 Pro chip in the iPhone 17 Pro, iPhone 17 Pro Max, and iPhone Air surfaced in the Geekbench 6 database today. (www.macrumors.com)
Yea, it was mentioned here earlier. I wonder how much of the 40% improvement is the cooling upgrade and how much of it is the GPU improvement.
It matters because cooling for Macs will likely stay the same.
Jesus no.
Apple needing additional time to optimize a new cooling system in the M5 MacBook Pro would explain the rumored delay until Q1 2026.
The next new cooling system will be with the redesign, and some have suggested late 2026 with OLED.
The MB Airs could get vapor chamber cooling like the Pro phones. The MB Pros already have vapor chamber cooling (which is all a heat pipe is). The next steps would be things like liquid metal TIM to "increase" thermal conductivity, or maybe coming up with some exotic liquid that can carry more heat load for the pipe to dissipate.

Nah, that's probably more a case of the M4 still being competitive and the iPad getting the M5 first.
The M1 and M4 shipped in November; Q1 is not that far off. Apple could have been working on "optimising a cooling system" for a decade or more. Whether the CPU is out yet is irrelevant, as it's a simple case of "how do we dissipate X watts in Y^2 space," and that's something they can play with using a heating element.
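To make that concrete, here's a minimal first-order sketch of that sizing exercise, a lumped thermal-resistance model (junction temperature ≈ ambient + power × R_theta). The R_theta values are invented placeholders to illustrate the idea, not figures for any real Apple cooling system:

```python
# Minimal lumped thermal-resistance sketch: T_junction = T_ambient + P * R_theta.
# The R_theta values below are invented placeholders for illustration only,
# not measurements of any real MacBook cooling system.

def junction_temp(ambient_c: float, power_w: float, r_theta_c_per_w: float) -> float:
    """First-order steady-state junction temperature estimate."""
    return ambient_c + power_w * r_theta_c_per_w

candidate_coolers = {            # hypothetical cooler options, degC per watt
    "passive heat spreader": 3.0,
    "heat pipe + fan": 1.2,
    "vapor chamber + fan": 0.9,
}

for name, r_theta in candidate_coolers.items():
    t = junction_temp(ambient_c=25.0, power_w=40.0, r_theta_c_per_w=r_theta)
    print(f"{name}: ~{t:.0f} degC at a sustained 40 W")
```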
I might be reading too much into the iPhone 17/A19, but Apple is adding the vapor chamber to the phone for a reason. It's quite possible that the A19 runs warmer than its predecessor, and the implication would be that the M5 is a hotter-running chip that needs better cooling technology. Again, that's a giant assumption that may not have any basis in reality.
The MBA would need something, since it draws more power than the iPhone. A passive HP/VC system is kind of their only choice.
I agree, but I was also thinking more along the lines of the MBP, Mini, and to a lesser extent the Studio, if the M5/M5 Pro/M5 Max are indeed hotter-running chips.
The 16" unit cooling seems fine, the 14" will suffer. I can't speak to the Mac Mini, and yea the Studio has an overbuilt cooling system.I agree, but I was also thinking more along the lines of the MBP, Mini and to a lesser extend Studio - if the M5/M5 Pro/M5 Max are indeed a hotter running chip
The Mini runs warm, particularly the M4 Pro. This was one of the reasons why I returned the M4 Pro Mini and bought an M4 Max Studio.
Here's the truth: Apple had a LARGE lead and blew it. When AMD started making strides on power efficiency, most people here dismissed it saying X86 is a legacy architecture.
The problem is that they STILL make progress, and now they have the AI Max+ 395, which can have up to 128GB of memory. That memory can be used not only for gaming (it runs some games smoothly at 4K!) but also for large language models at an OK speed, with as little as 45W. Sure, it won't run AI as fast as an actual dedicated 3D card, and those language models will still run slower than on an Apple Silicon processor, but since we have Thunderbolt 4 and Windows GPU support, we have the option to connect an external GPU on a laptop (or straight up plug a GPU in if we have a desktop).
And because we ALSO have regular Windows, we have a portable system with all the legacy applications it offers (I dislike Microsoft, but they did something right here).
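For a rough sense of what "an OK speed" means for local LLMs on boxes like these: batch-1 decoding is usually memory-bandwidth bound, so tokens per second is roughly memory bandwidth divided by the bytes read per token (about the size of the quantized weights). A hedged sketch, using approximate published bandwidth figures and an arbitrary example model size:

```python
# Rule-of-thumb ceiling for batch-1 LLM decode speed on unified-memory machines:
# tokens/s ~= memory bandwidth / bytes read per token (~ size of the quantized weights).
# Bandwidth numbers are approximate published specs and the model size is an
# arbitrary example; treat the output as an order-of-magnitude estimate only.

def decode_ceiling_tokens_per_s(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper bound on token rate when decode is purely memory-bandwidth bound."""
    return bandwidth_gb_s / model_size_gb

model_size_gb = 40.0  # e.g. a ~70B-parameter model at ~4-bit quantization (illustrative)

machines = {                          # approximate peak memory bandwidth, GB/s
    "AMD AI Max+ 395 (Strix Halo)": 256,
    "Apple M4 Max": 546,
}

for name, bw in machines.items():
    rate = decode_ceiling_tokens_per_s(bw, model_size_gb)
    print(f"{name}: ~{rate:.0f} tokens/s ceiling for a {model_size_gb:.0f} GB model")
```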
I'm glad AMD is making gains, but they are not nearly caught up. Wait a few months and re-assess Apple after the M5 Max is released; you may be surprised.
Lmao, ok
I think it will take another year or two for Intel or AMD to catch up to Apple Silicon, and that's assuming Apple hits some snag with M6 or M7 and doesn't keep iterating so quickly.
Meanwhile, the inverse is true with graphics and compute: I believe Apple will close the gap(s) significantly and wind up maybe 12-18 months behind Nvidia, which isn't bad given where they were a few years ago.
We'll see.
PC-wise I'd run Linux primarily anyhow, but I do recognize there are a lot of benefits to the CUDA development toolchain. I swore off ROCm after horribly buggy experiences a while ago, and to my mind MLX etc. are already at parity or better for consumer-grade hardware development.
Absolutely, I think AMD deserves credit for pushing here, and I hope they eventually fuse this technology with their desktop architecture. I'm not sure what the plan there is; I haven't seen rumors about it. I don't want to get too far off topic, but something that combined HEDT-type features with that new integrated architecture, for the memory bandwidth improvements and "AI" acceleration, would be really cool.

On paper, it's true that the GPU of a portable, state-of-the-art AMD device is not as powerful as a state-of-the-art Apple Silicon processor.
BUT the gap now is VERY SMALL. You are going to miss that extra GPU power if you do rendering, and there are ways to more than make up for it if you are willing to sacrifice some portability (e.g., Thunderbolt 4 + eGPU, which Apple USED to have as an option).
Frankly, for regular usage, there's nothing an Apple GPU can do anymore that I can't do with my devices. Of course, everyone's use case is different; I'm describing mine here, but I doubt that AMD wouldn't suffice for a lot of video editing and even light VFX/rendering (or an eGPU if you want to do heavier rendering).
By the way, some people report that 10 hours of battery life is easily achievable with those devices. Pretty good for an "outdated" architecture.