Ah, well, the Ultra was just not sufficient for a Mac Pro product. It can't really compete with a discrete workstation box at that performance level. But there was no indication that the Ultra had any systemic performance problems. There was this persistent rumour that the GPU couldn't be properly utilised (stemming from poor Blender scaling), but that is easily explained by how inefficient Blender is at dispatching work on Apple GPUs in the first place. Even on the M1 Max there are huge gaps in the timeline and poor GPU core utilisation.

But frankly, if these M2 Max scores are real, Apple can forget about Mac Pro this year as well. They could pull an M2 Extreme of course, but why would someone buy it?

If only there were a company that made a graphics card, and some way to attach that graphics card to a computer…
 
I have an i7-13700K; running something like Cinebench instantly pushes it to 100°C and it throttles for the duration. This is with a massive dual-tower Noctua heatsink. There is literally no way this processor could have gone into an iMac. Even the Mac Pro heatsink would probably have problems with it.

It's faster only if you ignore the fact that you probably need a 360 mm radiator minimum for the i9. Anything with worse cooling is going to be severely gimped.

You need to reseat that heatsink. Done properly, you won't get anywhere close to 100°C and throttling.
With a 288 W power limit and a 95°C limit set in the BIOS, and a -0.4 core voltage offset, mine doesn't go over 90°C and still scores above 30k in Cinebench.
 
Anyone remember Tim Cook's "We have new Intel Macs in the pipeline" proclamation in 2020? Whatever happened with that? All this talk about M1 -> M2 incremental improvements, N[345] processes and TDP limits suddenly made me think of that ...

Continuous delays on Intel's side most likely... Ice Lake-SP was originally scheduled for 2019 and got released in mid-2021; Sapphire Rapids should have been out in 2021 but probably won't enter the market until late 2023 at best...

It is entirely possible that Apple simply decided that pursuing this wasn't worth the effort.

He was talking about the iMac lineup, which they did update after that, in August 2020.
 
Let's also not forget that Tim Cook is laying off the QA team, so the usability of macOS / iPadOS / iOS is steadily dropping. I often find iOS buggy when swiping down, with half of the screen blanking, and the iOS Calendar is a visual mess. I even considered switching to Google Calendar, because the 100% flat layout was so confusing.
Where did you hear that, other than a comment in a forum like this?
 
Where did you hear that, other than a comment in a forum like this?

I hear all the time on forums like these that Apple is losing talent to their rivals, and people have been complaining that Apple has been letting software quality slip while focusing on hardware. E.g., the settings panel in the new macOS is a step backwards, because it is just a rehash of the iOS Settings, without considering that users are on a desktop.
 
I hear all the time on forums like these that Apple is losing talent to their rivals, and people have been complaining that Apple has been letting software quality slip while focusing on hardware. E.g., the settings panel in the new macOS is a step backwards, because it is just a rehash of the iOS Settings, without considering that users are on a desktop.
It’s one thing to say that Apple is not doing enough testing, but it is entirely different to make a direct claim like “Tim Cook is laying off the QA team”.

Most of the problems in the Settings app are with the design and feature selection. They seem to have just phoned that in. That is a failure of product management and UX, not QA.
 
That thought is misguided even if the PC alternative is indeed much worse.
What's worse? On a PC, you get the performance you paid for. You don't lose backwards compatibility. You have the choice and options to upgrade, and a huge selection of third-party hardware, without fear of compatibility issues.
 
What's worse? On a PC, you get the performance you paid for. You don't lose backwards compatibility. You have the choice and options to upgrade, and a huge selection of third-party hardware, without fear of compatibility issues.
You seem to really value compatibility but many people feel no need to be compatible with something they have no interest in.

I deleted Parallels and threw out what little Windows software I had a few years ago and haven't missed it.
 
That's a pathetic increase of only 11%, after a release cycle of more than a year. This is a worse increment than the Intel Mac upgrades. At least those improved by more than 15%, as far as I remember.
This has been the situation since the slowdown of Moore’s Law in the early 2000s (for single core) and the limits imposed by Amdahl’s Law in the early 2010s. Welcome to the laws of physics.
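For anyone who hasn't run the numbers, here is a minimal sketch of Amdahl's Law, the limit being referred to above; the 90% parallel fraction is just an illustrative assumption:

```python
# Amdahl's Law: overall speedup is capped by the serial fraction of the workload.
def amdahl_speedup(parallel_fraction: float, n_cores: int) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

# A task that is 90% parallelisable tops out below 10x,
# no matter how many cores you add.
for cores in (4, 8, 16, 64, 1024):
    print(f"{cores:>4} cores -> {amdahl_speedup(0.9, cores):.2f}x")
```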
 
Well, in my experience we reached the point of “fast enough” in computers a few years back; the performance of the M1 is just the icing on the cake, meaning there is almost no waiting involved in casual computing. Aside from updating the OS, nearly everything is instant and smooth, and that's how the computer as a tool should be.

You can see this in the increasing lifespans of computers: a computer used to last just a couple of years, and now it lasts a decade. At a 10% IPC increase per yearly generation, it takes a while to build up enough of a performance difference that a switch is worthwhile (rough numbers below). A lot of non-casual computing is moving into the cloud anyway; the whole idea of a workstation is changing into a thin client plus an online environment.
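To put that in perspective, a quick back-of-the-envelope calculation; the flat 10% yearly gain is an assumption taken from the post above:

```python
# Cumulative performance after N yearly generations of ~10% gains.
yearly_gain = 1.10  # assumed 10% per generation
for years in (1, 2, 3, 5, 7, 10):
    print(f"after {years:>2} years: {yearly_gain ** years:.2f}x")
# It takes roughly 7-8 generations just to double single-core performance.
```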
 
You seem to really value compatibility but many people feel no need to be compatible with something they have no interest in.

I deleted Parallels and threw out what little Windows software I had a few years ago and haven't missed it.

But then, most people don't care about computers at all, and would be happy even with a Chromebook.
 
What's the point? If you are committed to the old bottleneck model just get a PC workstation.
I find it questionable to call a system that is superior in terms of performance an "old bottleneck model". Powerful graphics cards should destroy Apple Silicon in terms of absolute performance (unless you are perhaps stuck with the special cases where Apple Silicon can keep up), and they can be swapped out if needed.
 
I find it questionable to call a system that is superior in terms of performance an "old bottleneck model". Powerful graphics cards should destroy Apple Silicon in terms of absolute performance (unless you are perhaps stuck with the special cases where Apple Silicon can keep up), and they can be swapped out if needed.

Because it is a bottleneck model. The dGPU system is fundamentally limited by the throughput of the very narrow system bus, and there is nothing one can do to mitigate this. Your argument is fallacious because it fails to take into account the root causes of why Apple GPUs are currently slower. First, current software assumes the dGPU model and thus leaves out an entire class of software designs that can utilise UMA. Second, Apple simply doesn't make "powerful" GPUs as of now. Even the M1 Ultra GPU uses 80 watts at most, while delivering 20 TFLOPS of FP32 throughput. A third-party dGPU with the same performance uses at least two to three times more power. If Apple actually decides to ship a 300/400 W GPU, it will outperform anything out there by a wide margin.
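To make the efficiency argument concrete, a quick sketch using the figures above; the 200 W dGPU number and the assumption that performance scales linearly with power are illustrative, not measurements:

```python
# Performance-per-watt comparison using the numbers quoted in the post.
m1_ultra_tflops, m1_ultra_watts = 20.0, 80.0
dgpu_tflops, dgpu_watts = 20.0, 200.0   # assumed: 2-3x the power for similar FP32

print(f"M1 Ultra GPU:    {m1_ultra_tflops / m1_ultra_watts:.2f} TFLOPS/W")
print(f"comparable dGPU: {dgpu_tflops / dgpu_watts:.2f} TFLOPS/W")

# If (a big if) that efficiency held at a dGPU-class power budget:
for watts in (300, 400):
    print(f"hypothetical {watts} W Apple GPU: ~{watts * m1_ultra_tflops / m1_ultra_watts:.0f} TFLOPS FP32")
```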

The only argument that makes sense to me is hardware modularity. Yes, the Apple Silicon hardware model cannot be modular. That's the price you pay for efficiency and the more flexible software model.

Regardless, I am fairly sure this kind of discussion will become obsolete very soon. Apple is a couple of years ahead of the curve, as usual, at least conceptually (their implementation still leaves much to be desired). The traditional dGPU is bound for extinction. Highly optimised systems like consoles have already abandoned the dGPU model, next-gen data centres won't use dGPUs either, it is only a matter of time until AMD and Intel are pushing integrated high-performance solutions in the laptop market, and desktop computing is already extremely niche.
 
Regardless, I am fairly sure this kind of discussion will become obsolete very soon.

I don't think so.

You're forgetting that as hardware gets smaller, discrete GPUs also get smaller. This means it gets cheaper, easier and more convenient to carry discrete GPUs around to extend hardware capabilities. E.g., you could easily carry two or more discrete GPUs for whatever reason to accelerate tasks (e.g., one to process video streaming, another for gaming).

"But such devices are only a dream!"

Except they're not. ASUS already makes a portable device that is 6% of the size of a conventional GPU, as they themselves claim. It has the power of a GeForce 3080, roughly: https://rog.asus.com/external-graphic-docks/2021-rog-xg-mobile-model/

It's probably not as powerful as a full-blown GPU, but it can boost performance significantly, and it's not a drag to carry.

At this rate, guess who's the only manufacturer who will be out of the party because they want to control user upgradeability?

That's right: Apple.
 
as hardware gets smaller, discrete GPUs also get smaller.

Have you seen the GeForce 4080?

Except they're not. ASUS already makes a portable device that is 6% of the size of a conventional GPU, as they themselves claim. It has the power of a GeForce 3080, roughly: https://rog.asus.com/external-graphic-docks/2021-rog-xg-mobile-model/

No it doesn't. Not even roughly. They say "UP TO GEFORCE RTX™ 3080 LAPTOP GPU". The GeForce RTX 3080 laptop isn't the same as the desktop GPU.
https://rog.asus.com/external-graphic-docks/2021-rog-xg-mobile-model/
 
Have you seen the GeForce 4080?

Of course I have. I addressed that in my previous post, if you read it more carefully.
GPUs are split into two types: high-performance, with higher power draw, and portable, with lower performance.

The GeForce 4080 is bulky because it's a design that tries to be as powerful as possible without caring for power efficiency or portability. But if you are willing to compromise, you can have a very portable, cooler design that is still powerful enough for many users, which is what the ASUS ROG 3080 is.


No it doesn't. Not even roughly. They say "UP TO GEFORCE RTX™ 3080 LAPTOP GPU". The GeForce RTX 3080 laptop isn't the same as the desktop GPU.
https://rog.asus.com/external-graphic-docks/2021-rog-xg-mobile-model/

But I never claimed it was the same! You can't compare a design that doesn't care for power efficiency and size with a power-hungry design. And that also applies to Apple. Not even users here would claim that Apple's GPUs are comparable to full-blown discrete GPUs. What they do is strike a nice balance between power efficiency and speed.

The problem is that Apple gives no options for users to expand on their GPUs. They are assuming what they offer will be enough for you, but sometimes it isn't. And it's nice to have a portable GPU that will give you options to accelerate your workflow.
 
I find Apple Silicon rumors quite boring. They were the most anticipated rumors before the M1 Max, but once we have seen how it works, you only have to multiply each version by 2.

So we have the plain M2, we know its benchmarks, and we can already predict the performance of the M2 Pro/Max/Ultra.

The only thing left to speculate about is "when".

But surprises seem very unlikely.

On the other hand, I find the M2 progression a bit of a disappointment; it shows Apple is only able to achieve better performance through higher power consumption or a smaller fabrication process.
 
Of course I have. I addressed that in my previous post, if you read it more carefully.
GPUs are split into two types: high-performance, with higher power draw, and portable, with lower performance.

Yes, and the lower-performance ones can't compete against Apple's iGPUs.

The GeForce 4080 is bulky because it's a design that tries to be as powerful as possible without caring for power efficiency or portability.

No, it's bulky because NVIDIA had a hard time optimizing their design. We don't have to make excuses for it.

But I never claimed it was the same!

You said "it has the power, roughly". It's not close.

You can't compare a design that doesn't care for power efficiency and size with a power-hungry design. And that also applies to Apple. Not even users here would claim that Apple's GPUs are comparable to full-blown discrete GPUs. What they do is strike a nice balance between power efficiency and speed.

The problem is that Apple gives no options for users to expand on their GPUs. They are assuming what they offer will be enough for you, but sometimes it isn't. And it's nice to have a portable GPU that will give you options to accelerate your workflow.

This is all true.
 
So we do agree that the mobile solution is not as good as the dedicated card.
However, it's still a flexible option, and with USB 4, which will become a more popular standard soon, multiple such GPUs could be combined.

Of course, nothing stops you from simply plugging a bulky, dedicated GPU card into Thunderbolt 3 / Thunderbolt 4 / USB 4. It's all a matter of how much power efficiency / performance you are willing to sacrifice.

Also, do keep in mind that Apple's GPUs are good now. But for how long will Apple be able to keep up? Without offering an expansion option, will they still be so far ahead four years from now?
 
You're forgetting that as hardware gets smaller, discrete GPUs also get smaller. This means it gets cheaper, easier and more convenient to carry discrete GPUs around to extend hardware capabilities. E.g., you could easily carry two or more discrete GPUs for whatever reason to accelerate tasks (e.g., one to process video streaming, another for gaming).

This is not about size or cost. This is about the ability to get the data in and out. Just look at the RTX 4090. This GPU is so big that it runs a real risk of starving its compute units. Compute capability is increasing dramatically, but the data bus connecting the GPU to the rest of the system is not.

Nvidia openly acknowledged this problem years ago. This is why there is a lot of frantic research into novel systems like processing-in-memory, and why Nvidia's upcoming supercomputer systems are hybrid systems featuring large GPU caches, coherent RAM and an ultra-fast CPU/GPU interconnect, not unlike Apple Silicon.
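A rough way to see the imbalance being described; the RTX 4090 and PCIe figures below are approximate public numbers, used purely for illustration:

```python
# Roofline-style arithmetic: how much work the GPU must do per byte that
# crosses the host bus before the bus stops being the limit.
gpu_fp32_tflops = 82.6        # RTX 4090 peak FP32, approx.
pcie4_x16_gbs = 31.5          # PCIe 4.0 x16 one-way bandwidth in GB/s, approx.

flops_per_byte = (gpu_fp32_tflops * 1e12) / (pcie4_x16_gbs * 1e9)
print(f"~{flops_per_byte:.0f} FLOPs needed per byte transferred over PCIe")
# Any workload doing less arithmetic per byte than that is bus-bound,
# no matter how fast the GPU itself is.
```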

Except they're not. ASUS already makes a portable device that is 6% of the size of a conventional GPU, as they themselves claim. It has the power of a GeForce 3080, roughly: https://rog.asus.com/external-graphic-docks/2021-rog-xg-mobile-model/

It's a mobile eGPU. Up to a 150 W mobile 3080. Connected via a dead slow x8 bus. How do you plan to fully utilise the data-processing capability of a GPU if you are constraining yourself to an 8 GB/s transfer link? And then you wonder why I talk about a bottleneck model...
https://rog.asus.com/external-graphic-docks/2021-rog-xg-mobile-model/
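For a sense of scale, here is what that link speed means in practice; the working-set size is a made-up example:

```python
# Time to stream a working set over the eGPU link vs. touching it in place.
link_gb_per_s = 8.0       # PCIe 3.0 x8, roughly 8 GB/s
working_set_gb = 12.0     # hypothetical scene/texture data

print(f"{working_set_gb / link_gb_per_s:.1f} s just to move the data across the link")
# On a unified-memory system the GPU reads the same data in place, with no copy.
```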
 
It's a mobile eGPU. Up to a 150 W mobile 3080. Connected via a dead slow x8 bus. How do you plan to fully utilise the data-processing capability of a GPU if you are constraining yourself to an 8 GB/s transfer link? And then you wonder why I talk about a bottleneck model...
https://rog.asus.com/external-graphic-docks/2021-rog-xg-mobile-model/


You can use compression, as Nvidia is already doing.
Of course, you could also simply use Thunderbolt 4 or wait for USB 4, which transfer up to 40 Gb/s.

And in fact, we're not limited to using compression OR a faster bus. We can simply combine both.

People are not standing still, you know. The industry IS moving forward.
 
Also, do keep in mind that Apple's GPUs are good now. But for how long will Apple be able to keep up?

If anything, Apple's GPU project is just getting started. It wasn't so long ago that they were still licensing GPU designs from Imagination.

I'm not worried about that.
 