You know, if previous and ongoing Mac Pro rumors are anything to go by, then an M1 Extreme variant certainly existed, along with a new Mac Pro chassis designed specifically for Apple Silicon (still with slots, but fewer of them, since there's no dGPU). The rumors, including Gurman's, said Apple built actual working prototypes and tested them, but ultimately scrapped it as a real product, very likely because the SoC cost would inevitably have driven the Mac Pro's upfront pricing to laughable levels if they wanted to maintain the same kind of margins.

Over on the Apple Silicon subforum, posts discussing this on and off tend to reach the same conclusion as the above: Apple has once again designed itself into a corner, but this time for the better. Their bottom-to-top approach to Apple Silicon design and scaling means they prioritize the low end, where the volume is and where efficiency matters, which is the strength of going ARM in the first place. The top end simply suffers as a result; Apple tried to make it happen, but it proved harder than it was worth. And it shows in the lack of Mac Pro progress, which is practically a modern Apple tradition by now.
 
Why?

1. Higher cost
2. Higher chip price
3. Limited product line (so far, no Mac Pro-grade chips)
4. Limited GPU performance (they can't just increase GPU cores; they can only combine two chips)
5. Inefficient chip design

If you really think the yield is fine, then you have a serious problem after all. Don't forget that Apple is the only one making and using chips on the latest TSMC node while others are still on 5nm, which provides several advantages.
It is your backwards understanding of yields that is the “problem” here.

Intel’s former CEO Gelsinger addressed this when he said, “Speaking about yield as a % isn't appropriate. Large die will have lower yield, smaller die - high yield percentage. Anyone using % yield as a metric for semiconductor health without defining die size, doesn't understand semiconductor yield. Yields are represented as defect densities.”

The conclusion you’ve drawn from this expert insight appears to be the opposite of what was meant. The point is that defect density, not yield percentage, is the real measure of process health; at a fixed defect density, a larger die naturally yields a lower percentage, so a raw percentage only looks alarming to people who don’t understand it.

In terms of costs, it does not matter if it is a monolithic SoC, an integrated SoIC, or even a discrete CPU/GPU approach, the defect density is the same for a given process node. Combining chiplets into an SoIC doesn’t magically make defects in the silicon go away, it’s just a slightly different way of managing them.
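Gelsinger's point can be sketched with the textbook Poisson yield model. The defect density and die areas below are invented for illustration, not TSMC's actual figures: at the same defect density, a larger die simply has more area for a defect to land in, so its yield percentage drops.

```python
from math import exp

def die_yield(defect_density: float, die_area: float) -> float:
    """Poisson yield model: fraction of dies with zero defects,
    given defects per cm^2 and die area in cm^2."""
    return exp(-defect_density * die_area)

D = 0.1  # hypothetical defect density (defects/cm^2), illustrative only
for area in (1.0, 4.0, 8.0):  # roughly phone-SoC-, Max-, and Ultra-class areas (assumed)
    print(f"{area:.0f} cm^2 die -> {die_yield(D, area):.1%} yield")
# 1 cm^2 die -> 90.5% yield
# 4 cm^2 die -> 67.0% yield
# 8 cm^2 die -> 44.9% yield
```

Same process health (same D), wildly different yield percentages, which is exactly why quoting a raw percentage without stating die size is meaningless.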

Chiplets are the future because they provide semiconductor lithography with an architectural path beyond the reticle limit. It has little to do with reducing costs or increasing yields (that is TSMC’s job, Apple supports that progress, and both gain from it).

Finally, Apple’s approach to silicon architecture is nothing if not “efficient.” You are dead wrong about that. It’s true that Apple is often the first to market with consumer products on the next TSMC node, but that is in pursuit of improving both performance and power efficiency.

AMD is not that far behind. Zen 6 and UDNA will go into production on TSMC N3E, a year or so behind Apple’s A18/M4 generation. At the same time, AMD is expected to lead the way with advanced SoIC packaging this year, with Apple following behind them with M5 Pro/Max. There’s a kind of symbiosis as both of them do what they do best.

Mac Pro is waiting for SoIC to solve the problems that led to Apple not building the M1 Extreme, as @Chancha mentions above. Apple is being patient, and they are not trying to be AMD. They are not trying to be a merchant silicon vendor. They have a product in mind, and they will build it when it can do what they want it to do.
 
One can only hope that SoIC allows Apple to build a desktop/personal workstation specific "chip" that really amps up the number of GPU cores...

Please, Apple, build a fire-breathing monster for 3D/DCC/AI/ML workflows...!

  • Mn Extreme
  • 64-core CPU (48P/16E)
  • 1,024-core GPU
  • 256-core Neural Engine
  • 1.92TB ECC LPDDR6 RAM
  • 4TB/s UMA bandwidth
  • 32TB SSD (Four @ 8TB NAND blades)
And wrap it all up in an all-new Mac Pro Cube chassis...! ;^p
 
The Mac Pro would only be great for me personally if Apple would just accept that discrete GPUs are great (and necessary) for certain fields. If they could build such add-on cards, I'd be all over it, along with pros in video, graphics, AI, and similar fields. The best Ultra chip can't compete with a top Nvidia GPU, unfortunately, and I doubt an "Extreme" chip would either. Even if it did, a year on, a dedicated dGPU would destroy it.
Please, Apple. Do it!
 
Pretty sure Apple’s main demand for the M3 Ultra is Apple Intelligence servers. Limited server capacity prior to the M3 Ultra is a good reason for the gradual AI rollout, and they couldn’t wait for M4U.

Next to M4 Max, the M3 Ultra has no CPU advantage but huge GPU and RAM advantage. Ideal for LLMs.
 
Private Cloud Compute probably runs, or used to run, mainly on ComputeModule13,1 and 13,3; it's possible that they're migrating those to M3 Ultra, though.

Next to M4 Max, the M3 Ultra has no CPU advantage

Not for most workloads, but it's 17% faster than M4 Max on XcodeBenchmark. The RAM doesn't appear to be the main reason; the extra cores must be.
 
Private Cloud Compute probably runs, or used to run, mainly on ComputeModule13,1 and 13,3; it's possible that they're migrating those to M3 Ultra, though.

Not for most workloads, but it's 17% faster than M4 Max on XcodeBenchmark. The RAM doesn't appear to be the main reason; the extra cores must be.
The relevant point is that the M4 Max has an advantage over the M3 Ultra for most workloads, because of its much higher single-thread speed, while staying close to the M3 Ultra under maxed-out sustained loads. For most Xcode work the M4 Max is "good enough" under sustained load and better at everything else.
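That tradeoff can be sketched with a toy Amdahl-style model. All numbers here are invented for illustration, not measurements: a build has a serial portion that only benefits from per-core speed, and a parallel portion that scales with core count.

```python
def wall_time(serial_s: float, parallel_s: float, st_speed: float, cores: int) -> float:
    """Toy model: serial work scales with per-core speed only,
    parallel work scales with cores * per-core speed."""
    return serial_s / st_speed + parallel_s / (cores * st_speed)

# Invented workload: 60 s of serial work, 240 s of perfectly parallel work
# (both measured at speed 1.0 on one core).
faster_cores = wall_time(60, 240, 1.2, 16)   # M4 Max-like: ~20% faster cores, 16 of them
more_cores   = wall_time(60, 240, 1.0, 32)   # M3 Ultra-like: slower cores, twice as many
print(faster_cores, more_cores)  # 62.5 vs 67.5 -> fewer, faster cores win here
```

Drop the serial portion to zero and the core-count advantage wins instead, which is the XcodeBenchmark-style case where the extra cores actually pay off.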
 
"According to one rumor, Apple's A20 chip in next year's ‌iPhone‌ 18 models will switch from the previous InFo (Integrated Fan-Out) packaging to WMCM (Wafer-Level Multi-Chip Module) packaging. WMCM integrates multiple chips within the same package, allowing for the development of more complex chipsets. Components such as the CPU, GPUs, DRAM, and Neural Engine would therefore be more tightly integrated. While we don't know for sure, this could see Apple develop the M6 using the 2nm process while taking advantage of WMCM packaging to make even more powerful versions of its custom processor."

Who said it's not possible? I'm right after all.
 