You really think so? I am not saying you're wrong. I just think it would be needlessly expensive and drawn-out to slowly kill off the Mac Pro by deliberately sabotaging it just to save face for a small minority of Apple's client base. It is my opinion that whoever is making the calls on the Mac Pro is really just that out-of-touch. 😆
Apple knows how many Mac Pro systems they need to make in order to make a return on the investment. They’re also communicating directly with those few users that are in the market for a Mac Pro (or maybe several) in order to ensure that what they produce will meet those customers’ needs at the price those customers want to pay. As a result, the fact that Apple is NOT making and pricing them for the mass market upsets the folks who wish Apple would do so.

They’re not out-of-touch, they invite the expected buyers out to Apple to spend a few days going over their workflows. Apple understands those customers quite well indeed. As soon as the new machines were available for sale, the P.O.’s were cut, sent to Apple, and machines were delivered as quickly as Apple could make them. Might there be a few outside this core group of users that find value in the Mac Pro? Perhaps, but it’s not like Apple plans to make more than a few million of these anyway before they’re EOL’d and the next thing is up for sale.

I expect that as long as Apple can make a return on making them, they’ll keep churning out a few hundred thousand a year, profiting on each sale along the way.
 
You really think so? I am not saying you're wrong. I just think it would be needlessly expensive and drawn-out to slowly kill off the Mac Pro by deliberately sabotaging it just to save face for a small minority of Apple's client base. It is my opinion that whoever is making the calls on the Mac Pro is really just that out-of-touch. 😆
I do unfortunately. I'd love to be wrong, but it seems to me Apple have revealed their true colours over the years.
Despite the Cube being unanimously rejected by the public, we've been offered nothing but variations of it ever since.
In 2000 - here's a Cube - we don't want it - Apple abandons it in less than 6 months.
In 2013 Apple say here's another cube...but now it's a cylinder - we don't want it - Apple abandons it (again), there are no updates at all, but this time it takes 6 years for it to be discontinued.
In 2014 Apple say - The Mac Mini...the popular entry level Mac desktop that was user upgradable...well it's not any more - we soldered the RAM on!
So they've actively pushed this agenda.
It's never been based on what the user wants, it's based upon manipulation.
The Mac mini of today, The Mac Studio and all the current laptops is just more of the same.
In fact Apple's entire line up is now basically unexpandable.
The only one that is (the Mac Pro) starts at a ridiculous price point of £5499!
In fact even now, in 2023, the base model Mac Pro has less storage than the base 2009 Mac Pro model shipped with. In 2009 it shipped with 640GB of (albeit spinning HD) storage and sold for just £1899.
So in the decade it took Apple to reintroduce a tower Mac, they effectively tripled the purchase price and halved the storage.
It's a giant F**k You to many users.
The Mac Pro we waited so patiently for is inaccessible to many previous users because of the starting price.
I believe it's so Apple can (wrongly) claim they offered the 'choice' of a 'Pro tower' and not enough people wanted it, which is really just them manipulating the narrative.
The truth is, if you like using Mac OS or (as I do) use Logic for music, you need a Mac, so Apple have made them all non-upgradable and want you to pay extortionate amounts to increase the spec...unless you pay £5499 for a Mac Pro.
They know it's unethical, they just don't care.
 
I wish Apple would move on from the cheese grater. It wore out its welcome when it first came out 20 years ago.
They need to bring back the mirrored drive door style G4 case! Now THAT was a great looking machine!!!
 
It’s an “M” chip, therefore of course you can’t upgrade memory. Memory is not separate in the way Intel and PCs use it. Hence why you get such insane performance while using fewer watts than a night light.

The only perk of the 2019 case is for the massive quad graphics cards and for people that want to put in hard drive expansions.
 
Does Apple have a chiplet design strategy at all?!

This is eye opening.

“Chiplets” seems like a very smart way of being able to improve “blocks” of an SoC in-line with the most recent technology advancements, without waiting for a years-long complete redesign of the entire die.

I agree that Apple shouldn’t decouple the CPU, NPU, ML, GPU cores from the die unless — UNLESS — the performance and clock speeds of, say, GPU cores are being held back by the much slower clock speeds that CPU cores can only run at.

If Apple could design a GPU-only IC that could run at 10 GHz — and the required I/O memory bus/data bus is sufficiently fast, I say, Go for it! Especially if more and more General Purpose instructions can be performed on a GPU instead of the CPU. (Apple needs to strive much harder to find more and more GPGPU optimizations. Linux is way ahead of Apple in this pursuit.)
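Just to make the GPGPU point concrete, here's a rough sketch (my own toy example, not anything Apple ships; the "scale_add" kernel name and buffer sizes are made up) of what pushing an embarrassingly parallel job onto the Apple GPU through Metal compute looks like in Swift today:

import Metal

// Toy example: an embarrassingly parallel multiply-add run on the Apple GPU.
// The "scale_add" kernel below is made up purely for illustration.
let kernelSource = """
#include <metal_stdlib>
using namespace metal;
kernel void scale_add(device const float *input  [[buffer(0)]],
                      device float       *output [[buffer(1)]],
                      uint id [[thread_position_in_grid]]) {
    output[id] = input[id] * 2.0f + 1.0f;
}
"""

let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!
let library = try! device.makeLibrary(source: kernelSource, options: nil)
let pipeline = try! device.makeComputePipelineState(function: library.makeFunction(name: "scale_add")!)

let input: [Float] = (0..<1_000_000).map(Float.init)
let inBuf = device.makeBuffer(bytes: input, length: input.count * MemoryLayout<Float>.stride, options: [])!
let outBuf = device.makeBuffer(length: input.count * MemoryLayout<Float>.stride, options: [])!

let cmd = queue.makeCommandBuffer()!
let enc = cmd.makeComputeCommandEncoder()!
enc.setComputePipelineState(pipeline)
enc.setBuffer(inBuf, offset: 0, index: 0)
enc.setBuffer(outBuf, offset: 0, index: 1)
// One GPU thread per element; lots of slower cores chew through it in parallel.
enc.dispatchThreads(MTLSize(width: input.count, height: 1, depth: 1),
                    threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
enc.endEncoding()
cmd.commit()
cmd.waitUntilCompleted()

The win only shows up when the work splits into a huge number of independent elements like this; clock speed on any single core matters far less than how many of them you can keep busy.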

But I suspect Apple doesn’t want a situation where an Apple Silicon SoC design changes every six months, much to the confusion/frustration of developers and even Apple’s own OS/SDK engineering teams. It’s already the case that two years after the M1’s release, MacOS software that claims it’s optimized to run on the M1 isn’t reeeeeeally as optimized to run on the M1 as it could be. (Including even Apple’s own core apps like Final Cut Pro.)

Frequent changes to Apple Silicon via regular “chiplet” improvements might present an ever ”moving target” that developers will be demotivated to code specifically for, knowing that a fundamental part of the architecture might change in 6 months.

That’s why the burden on Apple’s own OS software engineers should be so high. If hardware abstraction is strictly adhered to, Apple’s OS (and SDK) software engineers can make changes to the underlying OS — in-line with changes to the underlying Silicon — such that existing codebases can simply inherit Silicon and OS performance improvements automatically, without the need for developers to so much as recompile existing apps.
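To put a concrete (if simplified) face on that abstraction argument: code written against a high-level Apple framework like Accelerate, rather than hand-tuned loops, picks up whatever silicon-specific tuning Apple ships in the dynamically linked framework with each new chip and OS release, without the app being recompiled. A trivial sketch (my example; the array sizes and values mean nothing):

import Accelerate

// Elementwise math expressed through vDSP rather than hand-rolled loops.
// The framework chooses the vectorized path for whatever Mac it runs on,
// so the same binary can get faster as the underlying silicon/OS improves.
let a: [Float] = (0..<1_000_000).map { _ in Float.random(in: 0...1) }
let b: [Float] = (0..<1_000_000).map { _ in Float.random(in: 0...1) }

let sum = vDSP.add(a, b)   // elementwise a + b
let dot = vDSP.dot(a, b)   // dot product
print(sum[0], dot)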

I do realize that this is how it works already; I’m calling for even greater dedication on Apple’s part. Rosetta 2, for example, was Apple’s “Moon Shot” that Microsoft can only DREAM of ever accomplishing.

(As I understand it, Apple Rosetta 2 engineers even found plenty of Intel instructions that needed no translation at all to run on ARM.)

Rosetta 2 might have led to a lot of Apple engineer burnout, but Apple needs to find a way to motivate engineers to be as devoted and dedicated to “impossible” feats like this again.
I do feel that chiplet design is the future. AMD found what seems to be an infinitely scalable method to increasing CPU cores in its offerings. It has to be a huge boon to the engineers to be able to just add more chiplets to the package as opposed to engineering an entire monolithic design every single time.

Apple appears to have some level of scalability with their SOC's -- the Max/Pro/Extreme variants are basically "glued together" M1 SoC's, right? But they're still monolithic designs, which I'd imagine is significantly more expensive and harder to engineer around.
 
I doooonnnnn't know... A proprietary gateway/port/"standard" requiring licensing for GPUs could be under development - even in partnership with NVIDIA, AMD, et al.
NVIDIA even?! I hadn’t heard that hell had frozen over or Apple made peace with Epic Games or Apple got onboard with MoltenVK! (The latter of which Apple should do, btw, if it ever expects Macs to pose even the merest of challenges to PCs vis-à-vis Games. Besides, it’s Khronos Group! Your ole buddies? Your pals!? ‘Member?)

Apple has to maintain the stance that its own graphics hardware is superior to all; support for AMD/NVIDIA GPUs would undercut such messaging. And Apple would probably never agree to pay licensing fees to AMD or NVIDIA (again).

Seriously, though, there is merit to having to support only ONE GPU architecture that you 100% control vs. the ceaseless hassle of supporting every permutation of every AMD and NVIDIA GPU and the dozens of cards from licensees of their GPUs. And deprecating support for whatever you decide being completely in your hands vs. being at the complete mercy of Third Party outside companies, each with their own interests. I’ll bet Apple software engineers are tearing less of their hair out than Microsoft Windows engineers are and every Windows game developer is.
 
I do feel that chiplet design is the future. AMD found what seems to be an infinitely scalable method to increasing CPU cores in its offerings. It has to be a huge boon to the engineers to be able to just add more chiplets to the package as opposed to engineering an entire monolithic design every single time.

Apple appears to have some level of scalability with their SOC's -- the Max/Pro/Extreme variants are basically "glued together" M1 SoC's, right? But they're still monolithic designs, which I'd imagine is significantly more expensive and harder to engineer around.
Re: “which I'd imagine is significantly more expensive and harder to engineer around.”

But it's a single, stable standard that can be relied upon not to change, at least for a while — between monolithic upgrades to M2s, M3s, M4s, etc.

I can see merits to both.
 
Am I willing to fuss around upgrading my computer at 74?
Certainly.
I built my first few windows machines. It isn't rocket science nor is it difficult to upgrade.
Here's to living a couple more decades, mate, so you can enjoy buying several new Macs and the awesome developments in technology over the years. Assuming good health, 74 isn't what it used to be.
 
What a crock. They design the board; they could easily put replaceable features on the board, especially on a PRO product. But what's more important? THEIR PROFIT, not the consumer's benefit. Remember that: it's what's best for THEM, not YOU.
 
Apple appears to have some level of scalability with their SOC's -- the Max/Pro/Extreme variants are basically "glued together" M1 SoC's, right? But they're still monolithic designs, which I'd imagine is significantly more expensive and harder to engineer around.

The M1, M1 Pro, and M1 Max SoCs are all their own individual monolithic designs...

The M1 Pro monolithic design is basically a "cut down" variant on the M1 Max monolithic design, it is not a physically "chopped" M1 Max SoC...

The M1 Max SoC has an UltraFusion connection, designed to be used to "glue together" two M1 Max dies and create an M1 Ultra SoC...

This UltraFusion connection is wasted on M1 Max SoCs that reside in the Mac Studio and the 14"/16" MacBook Pro laptops...

A theoretical Mn Extreme SoC would be four Mn Max dies "glued together" with some sort of 4-way UltraFusion connection...
 
“there are two SSD storage slots for graphics, media, and networking cards.”

Does Gurman mean PCIe slots?
Seems just like the Mac Studio in terms of the 2x SSD storage slots.

From a few experiments by Max Tech and a few others trying to use the Mac Pro’s storage, this could potentially open up expandable internal storage options for the Mac Studio.

This would be nice.
 
Gurman’s Power On email newsletter I received this morning was slightly misquoted here. Not that the original was any clearer, but here it is:

In another disappointment, the new Mac Pro will look identical to the 2019 model. It will also lack one key feature from the Intel version: user-upgradeable RAM. That’s because the memory is tied directly to the M2 Ultra’s motherboard. Still, there are two SSD storage slots and for graphics, media and networking cards.

It really seems like a word is missing between “and” and “for graphics”. Like “and PCIe for graphics” would make so much sense. (I even checked the HTML for some kind of malformed code in that sentence, but found none.)

But still, he said for GRAPHICS. If “graphics” does not mean GPU, what else could it mean? HDMI capture cards? Afterburner ProRes accelerator (already on M2 SoC)? I think the most plausible interpretation of “slots… for graphics… cards” has to be that the Mac Pro will continue to support GPU cards. Right? *

I agree with whoever said pages back that it would really be burying the lede in a confusing sentence if that’s the case. And deconstruct60 has provided tons of articulate reasons why Apple Silicon so far doesn’t seem to have any real support for additional GPUs.

But just to play devil’s advocate. The size of the 2019 Mac Pro case was mostly dictated by the size of the very long and tall MPX module design. Keeping the same case for an Apple Silicon Mac Pro could very well indicate that the MPX Module expansion card design (or something very similar) will live on. MPX is designed to provide hundreds of watts of power, large silent passive cooling systems for big hot chips, and connections to the system’s Thunderbolt bus. All of that power and cooling is really only useful for GPU cards.

Keeping the same case design for Apple Silicon Mac Pro, like the PowerMac G5 that became the Intel Mac DTK and then the first Mac Pro, to me suggests they intend to continue the same philosophy for the product. That it should still be the big heavy duty beast of a machine that can be stuffed full of high power functionality and custom configured for the needs of diverse demanding industries.

A 192GB limit on RAM is definitely disappointing and likely a deal-breaker for some of those use cases. But not all. I agree with deconstruct60 that perhaps keeping an Intel Mac Pro in the lineup alongside an Apple Silicon Mac Pro (like the M1 Mac Mini) will let them say they’ve “completed” the Apple Silicon transition, while continuing to satisfy the demands of some corporate customers who may need more RAM than M2 Ultra can give. Others might be fine with 192GB or less on M2 Ultra, and SSD swap as needed.

This first AS Mac Pro is probably going to have a lot of compromises. It likely won’t be able to support as many lanes of PCIe as the Intel Xeon can. (To be fair, we haven’t seen the M2 Max yet, which will be the foundation for the M2 Ultra presumably.) But to make an attempt at competitive performance I think it’s more likely that M2 Ultra will try to offer some kind of AMD and/or Apple GPU card support on top of its integrated GPU, even if its PCIe bandwidth isn’t quite up to the task this generation.

Selling a big tower computer with decreased GPU performance compared to the 2019 model, and no ability to upgrade the GPU despite having expansion slots and lots of empty space? That just seems absurd.

The Intel Mac Pro with W6900X AMD GPU that they’ll sell you new today in 2023 must continue to be supported into many future generations of MacOS drivers. Is it so unthinkable that they’d also include AMD driver support for Apple Silicon MacOS as well as Intel MacOS?

Admittedly I don’t understand the engineering of the CPU / SoC architecture, and some others here clearly do. If they do continue to offer an Intel Mac Pro along with an M2, then maybe only the Intel model gets the MPX GPUs. Maybe they do hand-wavy graphics benchmarks based on the Intel config and also talk about Apple Silicon benefits in the same breath.

But still. Gurman said graphics. GRAPHICS!

*(Really unlikely he meant M.2 PCIe slots for SSDs and graphics cards, as Apple has never used M.2. They’ve put SSD blades on a variety of proprietary slots but never for any other purpose. They’ve used mini PCI slots for iMac GPUs, AirPort and bluetooth cards a long time ago, but not in many years. And space is hardly an issue in a Mac Pro tower to necessitate such tiny slots for expansion cards.)
 
For multiple potential reasons:

- Cooling issues. Having multiple of these CPU dies combined into one may result in a heat buildup that is exponential to the increase in CPU die surface. So you need a better cooling solution. Take the Mac Studio for comparison, and how large that heatsink is. Now double or quadruple the CPU die size -> you will need some space to install some really effective, badass, quiet cooling.

- Time restrictions: They just didn't have enough time to design a better casing. And out of practicality - and maybe because current users are already using and have saved the space for this form factor - they keep it.
Apple has had 2yrs already to design a new case.

It took Apple what, 1.5 years, to go from the 2013 trash can to the elegant beauty that is the current Mac Pro.

I get the chip was delayed, or canceled due to the pandemic shutdowns affecting TSMC, but case design?! Really?
 

My thoughts on the Mac Pro are this:

Apple's focus is on providing accelerated solutions to workflows that people do frequently on the device they are using.

Apple ticked a lot of classical 'tower PC' boxes with the 2019 Mac Pro, insofar as it offered all the usual things, with the exception of the clever integration of Thunderbolt passthrough and the MPX format for the cards. Other than that, it was, ultimately, a workstation XEON tower, much like the competitors with a nicer chassis.

2022/2023 is a bit different. Apple's shift to Apple Silicon has allowed it to be more feature oriented, hence the media encoders and neural engine, Secure Enclave etc.

I highly doubt that Apple will just 'tick the boxes' again on this one. If anything, I believe that the boxes floating around the traps now in the rumours are test machines.

This is Apple's halo Mac product. It always has been. I believe Apple will continue to innovate and push boundaries in the form of accelerated engines for common tasks in professional situations that call for a tower-like machine.

So, just like Afterburner before it, we may see either additional custom cards that accelerate certain tasks, or a unique chipset with that functionality integrated into the board/chip, like the media encoders on the M1/M2.

I believe Apple will continue to provide a unique professional machine that is both future focused and backwards compatible with PCIe slots. Unsure how many, but I can't see them backtracking too far after the friction they caused in the past by removing slots. Yes it might even ditch banks of RAM in favour of on-die solutions, and I would not be surprised to see a custom GPU card from Apple with a ton more of the upcoming "too hot for A16" GPU cores.

It might use the 'same design' as the current Mac Pro. But that doesn't mean it's the exact same chassis. It could just be the exact same design aesthetic with the cheese grater look, but smaller overall, accounting for slightly less power consumption?

Happy to be wrong about this one. But I just can't see Apple going backwards on this one. They have taken their time to get this right for a reason.

They have the VR/AR industry to take by storm. And thus, need a beast of a machine that can build/design/develop for that new medium.

I would expect Apple to announce the new Mac Pro when they announce the VR headset, as a companion product for developers to use to make content for the headset.
 
Does Apple have a chiplet design strategy at all?!

They pretty much have to. The fab processes are eventually going to force them into one. It is just a matter of how long they want to take to evolve into one.

Lots of folks have chirped about how Apple is going to integrate the cellular modem into the main SoC on the phone side. That actually isn't a particularly good path. Even chiplets (multiple dies in a package) on the phone would be a better path for them to go down.



This is eye opening.

“Chiplets” seems like a very smart way of being able to improve “blocks” of an SoC in-line with the most recent technology advancements, without waiting for a years-long complete redesign of the entire die.

The way Apple is doing things with UltraFusion won't necessarily completely decouple the dies from one another.
It is more akin to hooking the internal mesh/bus/communications of the dies to one another. It is going to be tough to radically redesign one internal mesh and leave the other stuck in the 'stone ages'.

Chiplets can help in that the company is doing fewer designs, so they can spend more time on that lower number of designs. It helps with economics also. AMD is using the same compute chiplet in Ryzen desktop as they are in server. So that is fewer designs and more die reuse, but it isn't necessarily faster. AMD is on a steadier pace than Intel right now, but that is far more because they are not trying to do crazy, too-large, catch-up-in-one-jump technology leaps. (AMD is closer to doing tick/tock now than Intel. It is kind of like Intel threw tick/tock in the toilet and did the opposite.)

Chiplets done the right way might help Apple because they have historically been built to do a minimal number of designs better. Now that they are spread out over watch, phone, iPad, laptops, and a broad range of desktops, they appear to be struggling more. (Some may blame it on Covid-19, but I have my doubts. It contributes, though.)

Chiplets don't lead to maximum perf/watt. If Apple is fanatical about perf/watt then they are likely going to do chiplets badly. They don't have to abandon perf/watt, but they need to back off just a bit to get the desktop lineup to scale well.

The internal blocks of a monolithic die can be made reasonably modular. If they have different clock/sleep/wake zones they are somewhat decoupled. I highly doubt there is some humongous, giant technological slowdown if they do the functional decomposition the right way (chiplet or otherwise).

If the internal team dynamics and communications are horrible.... chiplets likely aren't going to help.

And if they use bleeding-edge 3D packaging to recombine the chiplet dies, that has its own design and production planning/scoping overhead as well.


I agree that Apple shouldn’t decouple the CPU, NPU, ML, GPU cores from the die unless — UNLESS — the performance and clock speeds of, say, GPU cores are being held back by the much slower clock speeds that CPU cores can only run at.

That is about backwards. The GPU is generally going to run at a slower clock than the CPU does. If the clocks of the CPU/GPU/NPU cores can be put to sleep to save power then the clocks are not hyper coupled to one another anyway.

Fair memory bandwidth sharing and thermal overload bleed from one zone to the next are more burdensome issues.

There is a whole set of folks on these forums that have their underwear all in a twist that Apple isn't hyper-focused on taking the single-threaded drag-racing crown away from Intel/AMD. I don't think those folks are going to be happy. The whole memory subsystem based on LPDDR5 isn't really set up to do that. The CPU clusters don't even have access to the entire memory bandwidth. So I'm not sure why you'd try to get a top fuel funny car drag racer out of that.


Apple's bigger problem is keeping the core extremists happy. Some only want 8 CPU cores and 800 GPU cores; others only want 64 CPU cores and just 32 GPU cores. The other thing is the shared L3 cache. Apple is probably more sensitive to that moving off to another chiplet (at least if they want to keep similar perf/watt targets). That is awkward because SRAM isn't scaling. So the core/cache ratio is going to be tougher to keep if they're trying to hit the same cost/price zone.



If Apple could design a GPU-only IC that could run at 10 GHz — and the required I/O memory bus/data bus is sufficiently fast, I say, Go for it! Especially if more and more General Purpose instructions can be performed on a GPU instead of the CPU. (Apple needs to strive much harder to find more and more GPGPU optimizations. Linux is way ahead of Apple in this pursuit.)

How are you going to make the memory bus that fast with LPDDR memory as the basic building block?

GPUs don't run single-threaded hot rod stuff well. They run massively, embarrassingly parallel data problems well. You don't need a few 10 GHz individual cores if you have 100x as many slower cores. That is the point. If you can't chop the problem up into a large number of smaller chunks, then it probably shouldn't be on the GPU in the first place. That is a hammer driving a screw; wrong tool.
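A crude way to picture that "chop it into chunks" test (my own toy sketch, CPU-side, but the same shape of problem is what maps well onto hundreds of slower GPU cores):

import Foundation

// Each chunk writes only to its own slice, so the chunks are fully independent;
// that is the kind of work that scales across many slow cores. If iteration N
// needed the result of iteration N-1, it would belong on a fast CPU core instead.
var pixels = [Float](repeating: 0, count: 1_000_000)
let chunkCount = 100
let chunkSize = pixels.count / chunkCount

pixels.withUnsafeMutableBufferPointer { buf in
    DispatchQueue.concurrentPerform(iterations: chunkCount) { chunk in
        for i in (chunk * chunkSize)..<((chunk + 1) * chunkSize) {
            buf[i] = Float(i).squareRoot() * 0.5   // purely per-element work
        }
    }
}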


But I suspect Apple doesn’t want a situation where an Apple Silicon SoC design changes every six months, much to the confusion/frustration of developers and even Apple’s own OS/SDK engineering teams. It’s already the case that two years after the M1’s release, MacOS software that claims it’s optimized to run on the M1 isn’t reeeeeeally as optimized to run on the M1 as it could be. (Including even Apple’s own core apps like Final Cut Pro.)

That is more so because Metal pushes more responsibility for optimization into the applications than OpenGL does. It is a double-edged sword. Sometimes app developers can squeeze out more performance than a bulky, heavyweight API could. But if you fix the bulky API, that fix is used by many apps, so the fixes roll out faster.

A contributing factor to why Apple probably took AMD/Intel Metal off the table for macOS on M-series is that they don't want developers to have to come up with their own time allocation to spend on AMD fixes versus Apple GPU fixes. If there is only one GPU to roll out fixes for, the allocation is mostly done (at least for the macOS on M-series side of the code. Higher sales of new M-series Macs just make the Intel side less interesting. But Apple can't make everyone go down to exactly zero, unless it is an Apple Silicon-only app.).

If you think chiplets are going to reduce design/test/validate/deploy cycles down to 6 months, I think you are just looking at the tip of the iceberg. Chiplets are not necessarily going to make things move that fast.
This is the "Mythical Man-Month": a baby takes 9 months to gestate, so if you get 9 women you can get a baby out in 1 month. No.

If you take a 100-billion-transistor monolithic chip and chop it into ten 10-billion-transistor chiplets, it isn't necessarily going to go 10x quicker using chiplets. How that is decomposed, and where the replicated portions are, matters.


Frequent changes to Apple Silicon via regular “chiplet” improvements might present an ever ”moving target” that developers will be demotivated to code specifically for, knowing that a fundamental part of the architecture might change in 6 months.

For chiplets it might be more important than for monolithic that you measure twice and cut once. It isn't a mechanism for throwing out designs that are not validated and tested at a more rapid pace, or for frantically mutating opcodes. The external opcodes don't have to change to get performance improvements. CPUs/GPUs don't have to directly execute the exact same opcodes that programmers/compilers see.


I do realize that this is how it works already; I’m calling for even greater dedication on Apple’s part. Rosetta 2, for example, was Apple’s “Moon Shot” that Microsoft can only DREAM of ever accomplishing.

You mean throwing 32-bit apps out the window and telling your user base "tough luck, it is over"? Yeah, Microsoft can't do that. Microsoft spent years and years on an aspect of translation that Apple largely just punted on. Microsoft is going to support removable RAM and socketed CPUs too. Largely because their user base is fundamentally different.


(As I understand it, Apple Rosetta 2 engineers even found plenty of Intel instructions that needed no translation at all to run on ARM.)

You mean like 2 + 2 = 4. Shocker. Basic math operations are not the source of extremely difficult semantic mismatch problems between languages. Storing a standard data value like 1234 at memory location 678910 isn't a huge semantic-gap hurdle either.

There are easy and hard translations in all conversion problems.

If we're talking about exactly the same opcode binary encoding, that is odd, but if they're playing the same trick in the decoder, perhaps not all that odd.



Rosetta 2 might have led to a lot of Apple engineer burnout, but Apple needs to find a way to motivate engineers to be as devoted and dedicated to “impossible” feats like this again.

Since there was a Rosetta 1, tagging Rosetta 2 as "impossible" isn't all that credible. What was true is that Apple was internally out of practice at getting something like that done. Apple actually didn't do Rosetta themselves, and many of the folks that did do 68K -> PPC weren't around. Apple had JIT compile skills internally, but this is a bit different. It wouldn't be surprising if this took many months longer than the initial project plan said it was going to take.
 
Question: is it possible to add an external GPU to the Mac Studio?

Technically you can connect an eGPU, but the lack of drivers dooms it, so no, I don't believe you can.
I’ve been curious: if running Windows on ARM via Parallels or VMware on a Mac Studio, could that OS access an external eGPU over Thunderbolt (3/4, whichever the Studio has)?

A quick look at eGPU.io showed nothing.

Let’s just hope the options based on CPU core count, GPU core count, and Neural Engine core count, like what we see in the M1 Pro, M1 Max, and M1 Ultra, give us good RAM choices.

Maybe a 128GB, 192GB, 256GB, 512GB, and a 1/2 TB option is possible? I mean, we get this on the iPhone Pro models already, so why not?!
 
Seriously?! No upgradable RAM? Remember when the Mac Pro was totally upgradable and AFFORDABLE?
Remember when all the YouTubers were spending $60,000 (or getting it for free) so they could review it? The days of affordability and the Mac Pro are gone.
 
The Mac Studio gets smoked by an RTX 3090. It doesn't matter what Mac Studio you get, it cannot compete with a dedicated graphics card this time around.

Perhaps the new Mac Pros plugged into dedicated graphics cards can outperform a Windows PC. We will see.
And that is probably why the Mac Pro will remain as big as it is. It would require every inch of it to cool a monstrosity like the RTX 4090.

An ITX case build with a 4090 was just done by Optimum Tech:

He claims no cooling issues at all. It's much, much smaller than the current Mac Pro while using a 12th-gen Intel i9-12900K, and not much bigger than a Mac Studio.

I think the Afterburner card will require more cooling, as Apple doesn’t use NVIDIA cards.
 
Seems to be a machine that is not future-proof and will have a rather short useful life.

/*you shouldn't think that users are stupid and don't know what's in the box. I have built all my win/linux PCs myself. All of them worked until the CPU instruction sets were recognized as obsolete and a CPU with an updated version of the instruction set was needed.*/
 
Right right, bear with me here ...

The report says two slots free? I would guess PCIe compatible slots, yeah?

And since these Apple Silicon chips are SoCs and you can't upgrade them, what if the whole Motherboard for the next decade of Mac Pros is actually removable?

You still get PCIe slots for bespoke hardware and users can upgrade the SoC like they would a GPU.

 