For the Mac Pro, why not simply put a 96-core AMD EPYC CPU in it with an RTX 4090 (if Apple can solve its politics with NVIDIA) while retaining user expandability and repairability? Since the Mac Pro has usually supported dual CPUs, Apple could even put 192 AMD cores in it.

Does Apple really believe the M2 Extreme would beat a 192-core AMD CPU and an RTX 4090? Heck, you could probably put multiple RTX 4090s in the Mac Pro (if Apple solves its politics with NVIDIA).

For laptops, I get it. ARM offers nice battery life, but a Mac Pro doesn't run on a battery.
So, so many reasons.

1. Control over the ecosystem — no waiting for AMD or Nvidia to release updates or new products; Apple can work to its own timelines
2. Efficiency on Apple's side — fewer platforms to support and optimise for, less need for weird hacks like the T2 chip
3. Efficiency on the user's side — running Epycs and 4090s is like having a little oven running in your office. They burn massive amounts of power and spit out tons of heat. They need monster fans and take up huge amounts of space. Most people would rather have something cool and quiet and power efficient.
4. Cost — As expensive as they are, Apple's chips are actually making powerful Macs more affordable. Putting AMD and Nvidia's margins on top of the already high prices is worse for customers.
5. Compatibility — keeping everything on one platform reduces the amount of work for third-party devs to make sure their apps work properly.

Some of these overlap a little, and I'm sure there are more points someone could come up with; these are just off the top of my head.
 

It's nice to have a server that is power efficient, but if you want a monster computer, power efficiency is probably not a priority. And I'm sure control over the ecosystem works both ways too: on a more generalized system, I can always buy a generic part if one manufacturer decides to stop selling theirs. What happens if Apple decides they don't want to sell parts for your server anymore, and they're the only ones who make them?
 
If you want a monster computer that doesn't care about heat/power efficiency, and has great generic part support, the Mac probably isn't the platform you should be looking at in the first place.
 
Since Mac Pro usually supports dual chips, Apple could even put a 192-core AMD CPU in it even.

Apple hasn't shipped a dual-CPU Mac Pro since 2013.

The 192-thread AMD part isn't a "professional content creators" CPU; it's a backend-server, render-farm, database-server, or massive-VM-host CPU. And it can consume up to 700 watts by itself.

There are ARM server CPUs that are on par with high-end Intel Xeon and AMD EPYC server CPUs. ARM wasn't originally designed as a "mobile chip"; it was designed as a power-efficient, general-purpose chip, including for desktops and servers. It just happened to find its niche in mobile use for decades.
 
The OP is not a new member, so that's not the situation here. I think it's a fair question, especially since Intel/AMD (CPUs) and Nvidia/AMD (GPUs) have clear advantages in the desktop segment. The ARM processor that Apple designed is first and foremost a mobile processor. Its roots are in Apple's phone processors, and many of the advantages built into the processor (that make it a great laptop CPU) don't hold as much value in desktop or workstation settings.

Apple hasn't yet released its real desktop chip; so far we've only seen their chip for the small-desktop segment, and it's faster than most others.

The power supply needed to run the chips you mentioned is roughly the size of a Mac Studio.
 
The Mac Pro is not a server, so yeah, no business would. The Mac Pro is a high-end workstation.

The argument that the Mac Pro is not a server is not entirely accurate.

The Mac Pro can be configured with powerful hardware, such as up to 56 cores and 1.5TB of RAM, specifications that are on par with many servers. Additionally, macOS, the operating system that runs on the Mac Pro, is based on UNIX and is capable of running many server-grade applications such as file sharing, web hosting, and virtualization. Many businesses and organizations do use the Mac Pro as a server. For example, it can be used as an Xserve replacement for small and medium businesses, for running software like macOS Server, and for a variety of other purposes.

Furthermore, the Mac Pro is built with a high-end architecture and can handle a wide range of professional-grade tasks that are typically handled by servers, like running multiple virtual machines, hosting high-performance databases, and running complex simulations, among others.

While it's true it is not exclusively marketed as a server, its capabilities and specifications make it more than capable of handling both workstation and server tasks depending on the configuration and software used. So, just because it is not marketed as a server, it doesn't mean it couldn't be used as one.
 
They clearly have a big problem with the ARM Mac Pro, since they haven't updated the Mac Pro in four years (since 2019).
And the rumored one keeps getting pushed back year after year.

It's costing them a lot of pro customers, who are switching to NVIDIA/AMD high-end solutions with no alternative from Apple.
No, the "rumored one" was rumored for late 2022, so a month or so ago ...
 
I've seen those posts. Imagine joining a forum for enthusiasts of computers you don't like, just so you can passive-aggressively p*ss off the enthusiasts. Mind you, it happens with all interests. Pick any popular sport or interest, musician, band etc etc, and 10%+ of the user base on its official forums or SM groups are trolls who don't share enthusiasm about that interest/personality/team etc, and are only there to irritate the remainder. They must lead a sad life if this is their height of entertainment.
Internet graffiti
 

But this is a forum where many different people disagree, right? We're not here to worship Apple or Tim Cook. And if people just worship Apple, that is bad for the company itself.
 
Not to mention, it would not be very economical to incorporate upwards of 16TB of UMA RAM for big-data applications and the scientific research field. Heck, we haven't even seen 256GB of UMA yet. Those workstation users are going to scoff at 128GB of RAM.
16TB of RAM is astronomically expensive no matter how you look at it.

Unified Memory Architecture (UMA) does not limit the amount of memory; how you implement it does. Apple chose to use LPDDR memory, which comes as soldered (SMT) modules, because it allows high-throughput, low-latency performance. There's no technical constraint stopping Apple from designing their AS memory controller to support DDR memory modules, other than taking a hit to memory throughput and access latency. It is a design choice by Apple.

If Apple wants to, they can design an AS Mac Pro with lots of ECC DDR5 memory slots and terabytes of memory capacity, and it will still be UMA. Its performance, on the other hand, may be worse than the Mac Studio's, though. Engineering is always about trade-offs.
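To put rough numbers on that trade-off, here's a quick back-of-the-envelope sketch. The transfer rates and bus widths below are illustrative assumptions (roughly LPDDR5-6400 on a 1024-bit package bus vs. eight channels of DDR5-4800), not official specs for any particular machine:

```python
# Back-of-the-envelope peak-bandwidth comparison: a wide on-package LPDDR bus
# (Apple-style UMA) vs. socketed DDR5 DIMMs. Figures are illustrative
# assumptions, not official specifications.

def peak_bandwidth_gb_s(mega_transfers_per_s: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s = MT/s * bus width in bytes / 1000."""
    return mega_transfers_per_s * (bus_width_bits / 8) / 1000

lpddr = peak_bandwidth_gb_s(6400, 1024)    # LPDDR5-6400, 1024-bit package bus -> ~819 GB/s
ddr5  = peak_bandwidth_gb_s(4800, 8 * 64)  # DDR5-4800, 8 x 64-bit channels    -> ~307 GB/s

print(f"On-package LPDDR5: {lpddr:.0f} GB/s")
print(f"8-channel DDR5:    {ddr5:.0f} GB/s")
```

Under those assumptions the slotted-DIMM route buys you capacity and replaceability, but you give up a big chunk of peak bandwidth, which is exactly the trade-off described above.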
 
It's costing them a lot of pro customers, who are switching to NVIDIA/AMD high-end solutions with no alternative from Apple.
By now? Nah. Apple's been turning away folks who need NVIDIA/AMD and high-end solutions for years. Folks need to get work done, so they're not hanging onto Apple while their competitors are beating them.
 
This isn't referring to the OP specifically, but we're seeing a lot of new members here who are essentially trolls. Usually, they are x86 and Nvidia supremacists who can't stand that Apple Silicon is better than what they can buy in the Windows world. I wonder if the mods will do anything about it here.

Or is it that there are a lot of people who use an Apple Card, iPhone, Apple Watch, and AirPods, and then go use their Windows PC?

Speaking for myself, I believe that while Apple's mobile products are superior, a PC suits everything I need to do all day at work, and my Surface Pro 7 and Xbox are just fine at home. That doesn't make me a supremacist or a troll.

Some people can use one suite of Apple products yet prefer others for other things.

Furthermore, calling for mods to silence, label, and dismiss those with whom you disagree probably isn't a good look.
 
To summarize this thread and many other ones like it: Apple should fire Tim Cook and Johny Srouji, hire AMD CEO Dr. Lisa Su, put AMD EPYC CPUs and Nvidia 4090s in their Macs, and allow Boot Camp with Windows 11. In other words, Apple should throw their people, their silicon, and their OS out the window and become Dell, or do what Dell suggested and shut down the business.

My suggestion to all these posters: let Apple do their thing, and go buy a Dell if you want all that, instead of trying to change Apple. There are already many PC companies just like the one you want Apple to be. Why care about Apple when what you want is already around the corner? ;) 😄
 
ARM started out as a desktop CPU. I don't see any reason why it couldn't be one again.
x86 is a monstrosity with a huge instruction set and much overhead. It's only the price of the ecosystem that caused the world to use x86, even in servers and gaming consoles now.
 
Basically, this thread boils down to the ignorance of solely focusing on raw processing power while ignoring the enormous complexity and resources required to support two different architectures.
...as well as ignoring the fact that Apple Silicon is a lot more than a CPU/GPU. All the hardware accelerators contribute greatly to the speed of the system and should be counted too!
 
x86 is a monstrosity with a huge instruction set and much overhead.
ARM has the same problem with instructions. Apple solved it by removing 32-bit instruction support. If AMD/Intel did the same, x86 would solve most of the instruction problem.
 
If AMD/Intel did the same, x86 would solve most of the instruction problem.
Not quite. The variable-length encoding of x86 is very good for code density, which reduces pressure on the L1I cache, but unlike with fixed-length instructions, it's really hard to take a sequence of bytes and decode all the instructions they contain in parallel, because you don't know where the later instructions start until you've decoded the earlier ones.
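A toy way to see the boundary problem (purely illustrative, using a made-up encoding, not real x86 or ARM machine code): with fixed-length instructions every boundary is known up front, while with variable-length instructions each boundary depends on having looked at the instruction before it.

```python
# Toy illustration of the decode-boundary problem; the "encoding" is made up
# and is not real x86 or ARM machine code.

def split_fixed(code: bytes, width: int = 4) -> list[bytes]:
    # Fixed-length: instruction i starts at i * width, so every slice is
    # independent of the others and could be decoded in parallel.
    return [code[i:i + width] for i in range(0, len(code), width)]

def split_variable(code: bytes) -> list[bytes]:
    # Variable-length: pretend the low two bits of the first byte give the
    # instruction's length (1-4 bytes). You only learn where instruction N+1
    # starts after looking at instruction N, so the scan is inherently serial.
    out, i = [], 0
    while i < len(code):
        length = (code[i] & 0b11) + 1
        out.append(code[i:i + length])
        i += length
    return out

print(split_fixed(bytes(range(8))))                           # two 4-byte "instructions"
print(split_variable(bytes([0x03, 1, 2, 3, 0x00, 0x01, 9])))  # 4-, 1-, and 2-byte "instructions"
```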
 
If we only compare CPU cores, are Apple's CPU cores bigger or smaller than Intel's/AMD's?

According to the links below, Zen 4 and M2 are about the same (core only; with L2 cache and control logic, Zen 4 is smaller). Intel is likely bigger, because of its humongous SIMD units and wider data paths.

 
Not quite. The variable-length encoding of x86 is very good for code density, which reduces pressure on the L1I cache, but unlike with fixed-length instructions, it's really hard to take a sequence of bytes and decode all the instructions they contain in parallel, because you don't know where the later instructions start until you've decoded the earlier ones.

It's hard, but decoding instructions in parallel is what x86 CPUs do. You just need more circuitry (and power) to detect instruction boundaries, which can be done for a block of memory at once. And uop caches help massively here, as they reduce the cost of decoding repeated code fragments.
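As a rough picture of the "more circuitry" approach (again a toy sketch with a made-up length encoding, not how a real x86 predecoder works): you can speculatively compute a candidate length at every byte offset of a block in parallel, and then the serial pass only has to chase the precomputed lengths.

```python
# Toy sketch of "throw circuitry at it": compute a candidate instruction
# length at *every* byte offset (an embarrassingly parallel step), then the
# serial pass only follows the precomputed lengths. A real x86 predecoder is
# far more involved; this just illustrates the idea.

def length_at(code: bytes, i: int) -> int:
    # Made-up encoding: low two bits of the first byte give the length (1-4).
    return (code[i] & 0b11) + 1

def split_with_predecode(code: bytes) -> list[bytes]:
    lengths = [length_at(code, i) for i in range(len(code))]  # parallelizable step
    out, i = [], 0
    while i < len(code):
        out.append(code[i:i + lengths[i]])
        i += lengths[i]
    return out

print(split_with_predecode(bytes([0x03, 1, 2, 3, 0x00, 0x01, 9])))
```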
 
It's hard, but decoding instructions in parallel is what x86 CPUs do. You just need more circuitry (and power) to detect instruction boundaries, which can be done for a block of memory at once. And uop caches help massively here, as they reduce the cost of decoding repeated code fragments.
Sure, there are ways around it; I just wasn't buying the "remove 32-bit from x86 and everything is easy" argument. Also because x86-32 and x86-64 are so very similar; that was the whole point of it: extend to 64 bits in the least disruptive way possible.
 