Right right, bear with me here ...

The report says two slots free? I would guess PCIe compatible slots, yeah?

And since these Apple Silicon chips are SoCs and you can't upgrade them, what if the whole Motherboard for the next decade of Mac Pros is actually removable?

You still get PCIe slots for bespoke hardware and users can upgrade the SoC like they would a GPU.


Industrial servers/workstations used to be built that way: the CPU card went into a slot like a normal add-in card, but they still had interchangeable CPU and RAM. Example: http://ibus.com/pdf/thornbackray.pdf
 
But is non-ECC RAM an issue? I'm not trying to make a mountain out of a molehill; I really don't know. Apple felt it was needed for the current Mac Pro, but now apparently not?
For most workloads, ECC RAM does absolutely nothing. The only reason the Mac Pro ever came with ECC RAM was because Apple wanted to differentiate their "workstation" line from run-of-the-mill consumer computers by selling them with Xeon processors (even though, again, most professional workloads do not actually benefit from a Xeon).
 
The whole idea that you have to be a special kind of "Hollywood pro" to have the possibility of upgrading your computer over time is so fcked up. Every male kid aged 8 to 15 that I know of has a gaming PC with that feature. Get a basic rig for $1200 at first (i5, 3060), then save up for a new GPU down the line. Already with that basic starter kit, that kid has a more powerful computer for 3D work than 99.99% of Mac users; currently only a Mac Pro with a 6800 Duo beats it, at 10x the price. It is so freaking provocative that Apple just can't seem to release machines for enthusiasts and for the next generation of creatives. When Intel CPUs and AMD GPUs were still the Mac future, Macs were never at such a horrible price disadvantage as they are now. Apple almost needs to provide a miracle to dig themselves out of this deep hole. Can they? A Sapphire Rapids Xeon with 7900 Duo MPX modules would have been a great workstation, and a 27" iMac with a 13th-gen Intel i9 combined with a 6700 XT would have been fine. Those could also have been expanded with eGPUs. Almost in tears thinking of how ruined this is, and for no good reason 🥲
They make far more money on laptops, so they prioritise those and then fudge which components end up in the desktops?
 
I’m pretty certain when John Ternus teased the 2023 Mac Pro he was setting the scene for something pretty epic.

Apart from internal storage, unless there’s some clever architecture going on with MP2023 I can’t see it doing well against a fully loaded Mac Studio.

I thought the whole point of the Mac Pro was decent expansion and upgrades?

Am I missing the point?

My fully loaded Mac Studio Ultra is currently looking like awesome value.
I’m worried that it will not be Ternus who introduces the Apple Silicon Mac Pro, but someone who copies the shifting Tim Cook stage stance with “breathtaking performance.”

I’m hoping I’m wrong!

I want performance in my Mac, not in heels and fashionable pants. She brought good energy, but I want the hardware to bring that!
It’s about hardware and software!
 

Unfortunately, SAAS/cloud services are the future. I don't want it to be that way, but it's true. Pro Tools costs $100/month for an actually useful version. What individual can afford $100/month software? Same with cloud services. If Apple is going to make us use cloud services to offload our work to, what the heck! Maybe I'm misunderstanding @DailySlow but if Apple is going to remove expansion capabilities and rely on CLOUD SERVICES, I'm honestly going to leave the Mac platform.

But where will you go?

Windows ARM is looking to do exactly that! A cloud subscription with your ARM PC to do the work you want.

Looks like Linux PCs may fully win in about 7yrs time.

The days of the user being in control started to die with pop-up ads on app launch.

Tron warned us the MCP would rule, and sadly it's looking to be that way more and more, regardless of the OS. :(
 
They pretty much have to. The fab processes will eventually force them into one. It is just a matter of how long they want to take to evolve into one.

Lots of folks have chirped about how Apple is going to integrate the cellular modem into the main SoC on the phone side. That actually isn't a particularly good path. Even chiplets (multiple dies in a package) on the phone would be a better path for them to go down.





The way Apple is doing things with UltraFusion won't necessarily completely decouple the dies from one another.
It is more akin to hooking the internal mesh/bus/communications of the dies to one another. It is going to be tough to radically redesign one internal mesh and leave the other stuck in the 'stone ages'.

Chiplets can help in that the company is doing fewer designs, so they can spend more time on that smaller number of designs. It helps with economics also. AMD is using the same compute chiplet in desktop Ryzen as in server parts, so that is fewer designs and more die reuse, but it isn't necessarily faster. AMD is on a steadier pace than Intel right now, but that is far more because they are not trying to do crazy, too-large catch-up technology leaps in one jump. (AMD is closer to doing tick/tock now than Intel; it is kind of like Intel threw tick/tock in the toilet and did the opposite.)

Chiplets done the right way might help Apple because they have historically been built to do a minimal number of designs better. Now that they are spread out over watch, phone, iPad, laptops, and a broad range of desktops, they appear to be struggling more (some may blame it on Covid-19, but I have my doubts; it contributes, though).

Chiplets don't lead to maximum perf/watt. If Apple is fanatical about perf/watt, then they are likely going to do chiplets badly. They don't have to abandon perf/watt, but they need to back off just a bit to get the desktop lineup to scale well.

The internal blocks of a monolithic die can be made reasonably modular. If they have different clock/sleep/wake zones, they are somewhat decoupled. I highly doubt there is some humongous technological slowdown if they do the functional decomposition the right way (chiplet or otherwise).

If the internal team dynamics and communications are horrible.... chiplets likely aren't going to help.

And if they use bleeding-edge 3D packaging to recombine the chiplet dies, that has its own design and production planning/scoping overhead as well.




That is about backwards. The GPU is generally going to run at a slower clock than the CPU does. If the clocks of the CPU/GPU/NPU cores can be put to sleep to save power, then the clocks are not hyper-coupled to one another anyway.

Fair memory bandwidth sharing and thermal overload bleed from one zone to the next are more burdensome issues.

There is a whole set of folks on these forums who have their underwear all in a twist that Apple isn't hyper-focused on taking the single-threaded drag-racing crown away from Intel/AMD. I don't think those folks are going to be happy. The whole memory subsystem based on LPDDR5 isn't really set up to do that. The CPU clusters don't even have access to the entire memory bandwidth, so I'm not sure why anyone is trying to get a top-fuel funny car drag racer out of that.


Apple's bigger problem is keeping the core extremists happy: some only want 8 CPU cores and 800 GPU cores, others only want 64 CPU cores and just 32 GPU cores. The other thing is the shared L3 cache. Apple is probably more sensitive to that moving off to another chiplet (at least if they want to keep similar perf/watt targets). That is awkward because SRAM isn't scaling, so the core/cache ratio is going to be tougher to keep if they are trying to hit the same price zone.





How are they going to make the memory bus that fast with LPDDR memory as the basic building block?

GPUs don't run single-threaded hot-rod stuff well. They run massively and embarrassingly parallel data problems well. You don't need a few 10 GHz individual cores if you have 100x as many slower cores; that is the point. If you can't chop the problem up into a large number of smaller chunks, then it probably shouldn't be on the GPU in the first place. That is a hammer driving a screw; wrong tool.
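
Purely to make the "chop it into chunks" point concrete, here's a minimal CPU-side sketch in C (thread count, array size, and the saxpy-style math are all arbitrary illustrations, nothing Apple-specific): the element-wise loop splits into fully independent slices, which is exactly the shape of work thousands of slower GPU cores chew through, while a serial dependency chain does not.

```c
/* Illustrative sketch only: an "embarrassingly parallel" element-wise
 * operation (saxpy-style) split into independent slices. Thread count and
 * array size are arbitrary; this is a CPU-side analogy for the kind of
 * work a GPU handles well, not Apple-specific code. */
#include <pthread.h>
#include <stddef.h>
#include <stdio.h>

#define N (1 << 20)
#define THREADS 8

static float x[N], y[N];

typedef struct { size_t begin, end; } slice_t;

static void *saxpy_slice(void *arg) {
    slice_t *s = (slice_t *)arg;
    for (size_t i = s->begin; i < s->end; i++)
        y[i] = 2.0f * x[i] + 1.0f;   /* no element depends on any other */
    return NULL;
}

int main(void) {
    for (size_t i = 0; i < N; i++) x[i] = (float)i;

    pthread_t tid[THREADS];
    slice_t slices[THREADS];
    size_t chunk = N / THREADS;

    for (int t = 0; t < THREADS; t++) {
        slices[t].begin = (size_t)t * chunk;
        slices[t].end   = (t == THREADS - 1) ? N : (size_t)(t + 1) * chunk;
        pthread_create(&tid[t], NULL, saxpy_slice, &slices[t]);
    }
    for (int t = 0; t < THREADS; t++) pthread_join(tid[t], NULL);

    /* By contrast, a serial dependency chain like acc = f(acc, x[i])
     * cannot be chopped up this way and gains nothing from 100x more,
     * slower cores. */
    printf("y[123] = %f\n", y[123]);
    return 0;
}
```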




That is more because Metal pushes more responsibility for optimization into the applications than OpenGL does. It is a double-edged sword. Sometimes app developers can squeeze out more performance than a bulky, heavyweight API could. But if you fix the bulky API, that fix is used by many apps, so the fixes roll out faster.

A contributing factor to why Apple probably took AMD/Intel Metal off the table for macOS on M-series is that they don't want developers coming up with their own time allocation to split between AMD fixes and Apple GPU fixes. If there is only one GPU to roll fixes out for, the allocation is mostly done (at least for the macOS-on-M-series side of the code; higher sales of new M-series Macs just make the Intel side less interesting, but Apple can't make everyone go down to exactly zero unless it is an Apple-silicon-only app).

If you think chiplets are going to bring design/test/validate/deploy cycles down to 6 months, I think you are just looking at the tip of the iceberg. Chiplets are not necessarily going to make things move that fast.
This is the "Mythical Man-Month": a woman takes 9 months to gestate a baby, so if you get 9 women you can get a baby out in 1 month? No.

If you take a 100-billion-transistor monolithic chip and chop it into ten 10-billion-transistor chiplets, it isn't necessarily going to go 10x quicker using chiplets. How it is decomposed, and where the replicated portions are, matters.




For chiplets it might be even more important than for monolithic dies that you measure twice and cut once. It isn't a mechanism for throwing out designs that are not validated and tested at a more rapid pace, or for frantically mutating opcodes. The external opcodes don't have to change to get performance improvements; the CPU/GPU doesn't have to directly execute the exact same opcodes that programmers/compilers see.




You mean throwing 32-bit apps out the window and telling your user base "tough luck, it is over"? Yeah, Microsoft can't do that. Microsoft spent years and years on an aspect of translation that Apple largely just punted on. Microsoft is going to support removable RAM and socketed CPUs too, largely because their user base is fundamentally different.




You mean like 2 + 2 = 4. Shocker. Basic math operations are not the source of extremely difficult semantic-mismatch problems between languages. "Store standard data value 1234 at memory location 678910" isn't a huge semantic-gap hurdle either.

There are easy and hard translations in all conversion problems.

If we're talking about exactly the same opcode binary encoding, it is odd; but if they're playing the same trick with the decoder, perhaps not all that odd.
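
For what it's worth, here's a toy sketch in C of why the "easy" cases are easy (a made-up two-opcode source/target ISA, absolutely not how Rosetta 2 is implemented): simple arithmetic and plain stores translate nearly one-for-one, and the genuinely hard parts of binary translation live elsewhere (condition flags, memory ordering, self-modifying code).

```c
/* Toy illustration only: a made-up two-opcode "source" and "target" ISA,
 * showing that ops like "2 + 2" or "store 1234 at address 678910" map
 * almost one-for-one. This is NOT how Rosetta 2 works; real binary
 * translation gets hard around flags, memory ordering, and
 * self-modifying code. */
#include <stdint.h>
#include <stdio.h>

typedef enum { SRC_ADD, SRC_STORE } src_op;
typedef enum { DST_ADD, DST_STR } dst_op;

typedef struct { src_op op; uint64_t a, b; } src_insn;
typedef struct { dst_op op; uint64_t a, b; } dst_insn;

/* The easy cases translate with a near-1:1 table lookup. */
static dst_insn translate(src_insn in) {
    dst_insn out = { DST_ADD, in.a, in.b };
    if (in.op == SRC_STORE) out.op = DST_STR;
    return out;
}

int main(void) {
    src_insn add   = { SRC_ADD,   2, 2 };          /* 2 + 2               */
    src_insn store = { SRC_STORE, 1234, 678910 };  /* store 1234 @ 678910 */
    dst_insn t1 = translate(add);
    dst_insn t2 = translate(store);
    printf("ADD -> opcode %d, STORE -> opcode %d\n", (int)t1.op, (int)t2.op);
    return 0;
}
```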





Since there was a Rosetta 1, tagging Rosetta 2 as "impossible" isn't all that credible. What was true was that Apple internally was out of practice at getting something like that done. Apple didn't actually do Rosetta 1 itself, and many of the folks who did the 68K -> PPC work weren't around. Apple had JIT compile skills internally, but this is a bit different. It wouldn't be surprising if this took many months longer than the initial project plan said it would.
Re: “You mean throwing 32 bits apps out the window and telling your user base "tough luck, it is over" ?”

Yes!

Apple — over the decades — is the only company willing to take risks to advance the state of technology for the entire global industry, by sometimes telling its customers "tough luck, it is over." They’ve repeatedly dragged both their customers and their competitors kicking and screaming into the future.

(I’m still mad at Apple that all my 3.5" floppy disks are WORTHLESS! 😡 Obsoleting my 5 1/4 inch floppies was already bad enough! :mad: )

With the original all-USB iMac, for example, Apple said "tough luck, it is over" to people with an investment in printers with Centronics parallel printer ports, or peripherals with serial DIN connectors, etc. (Such gall! Call the cops!)

The fact that Microsoft took several years to make “standard” Windows be Windows NT isn’t admirable and “for the convenience of customers and developers alike,” it’s just slow and incompetent — and risk averse.

Long ago, Apple made every Mac customer pay for an Ethernet port whether they wanted one or not.

Later, when Broadband Internet became a thing, the entire (reasonably recent) Mac base was Broadband ready.

Meanwhile, PC owners were out buying “How To Install An Ethernet Card and Install its Drivers For Dummies” books.

You want your Mac laptop to have detachable batteries? "tough luck, it is over" Optical drives? Telephone jacks?

From the very first iPhone, Apple essentially said, “You want physical keys? Tough luck, it is over" (and possibly “buy a Nokia.”)

You want the freedom to arrange your iOS icons anywhere you want, outside of a grid? "tough luck, it is over"

It‘s the same today. Apple sometimes tells customers “tough luck, it’s over” in order to propel the state of the industry forward. And Apple has grown to become the biggest, most valuable company in history this way — with a fiercely loyal customer base.

Metaphorically speaking, craven PC makers hide under their desks, frantically chewing their nails as they wait to see how Apple’s risks turn out. If Apple’s risks succeed, they emerge from under their desks, confidently adopt what Apple did with the Mac, and sometimes even brag that they invented it.

P.S. Don’t delete the System32 folder on your Windows PC or you’ll be sorry.
 
For most workloads, ECC RAM does absolutely nothing. The only reason the Mac Pro ever came with ECC RAM was because Apple wanted to differentiate their "workstation" line from run-of-the-mill consumer computers by selling them with Xeon processors (even though, again, most professional workloads do not actually benefit from a Xeon).
It is quite normal to have ECC RAM in real workstations, just like in servers.


 
As I said multiple times on this forum, there is no indication the M chips allow for any expandability. I wouldn't be surprised if the "GPU" expandability is severely limited. A lot of people need more than GPUs; they need general PCIe expandability.
 
It strikes me that Gurman continues to not understand the Mac Pro.

The point is that it's supposed to be able to support 1TB of RAM. It's supposed to take full-size PCIe cards. It's supposed to have a sizeable price tag to match.

I’m assuming Gurman is assuming that the Mac Pro will just be using a vanilla M2 Ultra and not anything different. Frankly, I’m still aghast that Apple didn’t use Threadripper in their Mac Pro from 2019. The CPU zone is huge in comparison to the Intel chip they decided to put in there.
 
For multiple potential reasons:

- Cooling issues. Combining multiple of these CPUs into one may result in a heat buildup that grows out of proportion to the increase in CPU die surface, so you need a better cooling solution. Take the Mac Studio for comparison and how large that heatsink is; now double or quadruple the CPU die size -> you will need some space to install some really effective, badass, quiet cooling.

- Time restrictions: They just didn't have enough time to invent a better casing. And out of practicality - and maybe because current users are already using and have saved the space for this form factor - they keep it.
The envelope for the M2 is like 60W, so a few of those still fall lower than any HEDT Intel/AMD CPU before you even consider a GPU. Cooling inside a 2019 Mac Pro case will be no problem.
 
So, let's put this whole thing in perspective.

  • Apple's Mac sales grew 40% last year, in an overall declining PC market. Obviously people are buying them, because they like them, and trust them to get work done.
  • The M-series chips offer plenty of performance. I call a Mac Mini running over 900 tracks in Logic, each with their own instance of Space Designer, pro level.
  • Apple's approach, working forward from the iPhone, seems to be 'the computer as console,' i.e. a smaller set of known hardware that is easy to develop for and debug. I don't see this as a bad thing. If we're talking about 'pro' systems, i.e. systems we use to earn a living, then the simpler and more standardized, the better. The less often I have to be a sysadmin, or trying to juggle weird drivers in Safe Mode, the more time I can devote to actually working.
  • Let's say this rumor is true and the Pro SoC doesn't have expandable RAM. The question we're not asking is: How much RAM do pro users actually need?
  • The M1 Ultra currently tops out at 128GB, which is 1/12th of the potential maximum RAM in an Intel Mac Pro, and yet it's still edging out the biggest Xeon they ship, performance-wise.
  • A lot of pro software has surprisingly light RAM requirements (even supposed heavyweights like AutoCAD, Maya, etc.). Some pro tasks are more bound to the CPU, others to the GPU, others handled by specialized signal processing chips or sub-sections of chips. Other tasks are more dependent on speed and width of data lanes to and from disk storage.
  • Where having huge amounts of RAM helps is when you have to work with many very large image layers (Photoshop, or compositing tools for film); huge datasets in live memory (vs. paging in and out to disk); or tasks like OS virtualization. And even in compositing, that task is highly CPU-bound, too.
  • So "huge RAM requirements" are really more in the realm of scientific or enterprise computing. Is this an area Apple wants/needs to compete in, can be profitable in, or has anything to change the game? I argue no, at least not for now.
  • Will an Apple Silicon Mac Pro have enough memory to run something like, say, Foundry's Nuke VFX compositing software? I think yes. If the current M1 Ultra has 128GB of SoC RAM, it's not a stretch to think that an M2 / M3 version could have 256 / 512 / 1TB. It just won't be expandable, unless the SoC is on some sort of swappable daughtercard.
  • Next: Will it need expandable graphics? Maybe. We've already seen that the AS GPU cores are quite competitive, and could be scaled up. I don't see Apple willingly throwing money to AMD unless they have something that Apple needs. We know Apple is very unwilling to work with Nvidia right now, and has already spent a lot of money acquiring smaller graphics companies. If there's a card in it, it might be a new Apple one, but I'm betting on beefed-up GPU cores.
 
The balancing act for Apple is releasing a Mac Pro that cools itself efficiently and provides enough power not only to compete with Intel/AMD but also to let heavy creative pros get work done at a quick pace, without needing a nuclear fusion reactor to cool the case down. I have seen those Windows/gaming desktop builds with pipes and coolers and massive power draw! I am like, no thanks. It looks cool, mostly the ITX builds, but prices are over the top across the board.
 
Depends upon what "alongside" is supposed to mean. Close-packed LPDDR5 RAM and DDR5 DIMM RAM in a harmonious, homogeneous "Unified Memory" pool? No.

DDR5 DIMMs as a fixed RAM SSD pool that the file system uses as a distinctly separate pool of memory for file caching? Yes. That wouldn't be hard. Go to the Activity Monitor memory tab and look at "Cached Files" usage; freeing 80-95% of that up for app usage isn't small. How APFS does file caching is almost totally transparent to applications, so if the file system moved the file cache to a "really much faster SSD," there would be no real application code changes required.

There are some applications that try to use mmap and/or large explicit memory allocations to implement a RAM SSD inside the application (skip the file system caching and just load vast chunks, or all, of several files into memory). If there were an actual RAM SSD present, perhaps modest changes could be made to those apps to use that mechanism when present.
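
For illustration, the "app rolls its own RAM SSD" pattern mentioned above looks roughly like this minimal C sketch (the file name and sizes are placeholders, and error handling is deliberately minimal): map the whole file and pre-touch it so later reads never wait on storage.

```c
/* Minimal sketch of the "app builds its own RAM cache" pattern described
 * above: map an entire file into memory and pre-touch it so later reads
 * never wait on storage. The path is a placeholder; error handling is
 * deliberately minimal. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    const char *path = "samples.bin";   /* placeholder sample-library file */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Map the whole file read-only. */
    unsigned char *data = mmap(NULL, (size_t)st.st_size, PROT_READ,
                               MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    /* Hint the kernel we'll want all of it, then touch each page so the
     * data is resident in RAM before time-critical access begins. */
    madvise(data, (size_t)st.st_size, MADV_WILLNEED);
    volatile unsigned long warm = 0;
    for (off_t i = 0; i < st.st_size; i += 4096) warm += data[i];

    printf("mapped %lld bytes (touch sum %lu)\n",
           (long long)st.st_size, (unsigned long)warm);
    munmap(data, (size_t)st.st_size);
    close(fd);
    return 0;
}
```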


Some folks will throw out the idea of the LPDDR5 being used only as an L4 cache and the DDR5 being the RAM. I have serious doubts that will work for the GPU subsystem. The vast bulk of the Apple M-series SoC's memory controller design is oriented toward keeping the GPU cores fed, not the CPU cores. So techniques that Intel has tried on their CPU packages are not necessarily going to map over readily to what Apple is doing; the workload that Apple is applying to the memory system is substantively different. How many high-performance GPUs out there have DIMM slots? None. There are some foundational reasons why that is so.


There are ways to present two different types of memory to applications, but that very often means making changes to applications to make it work. Special Mac apps just for the Mac Pro are not a solid foundation to drive Mac Pro adoption; people are expecting the same apps to just work "better" when handed more memory.






If you don't measure, how can you improve it? Non-ECC RAM doesn't even check for errors. If you're not even counting, how can you get into a "more/less" discussion?

If you're trying to store a ginormous truckload of data solely in RAM (> 128GB), then you should probably be more worried that you can't even count the errors. Again, this is why a RAM SSD with an effective internal mechanism to check for bit-rot errors would be more than helpful. APFS isn't going to do it; it punts user-data integrity checking back to the SSD (or HDD). If you're going to do that, then be consistent and do it on the RAM SSD also.

RAM typically fails due to data corruption at a higher rate than the electronics "spontaneously combust" and fail to work at all.
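
As a toy sketch of what "at least count the errors" can look like in software when the RAM itself isn't ECC (purely illustrative, not anything Apple or APFS ships): keep a checksum alongside a RAM-resident buffer and re-verify it before trusting the data. This only detects corruption, where real ECC corrects single-bit errors in hardware.

```c
/* Toy sketch only: software-level integrity checking of a RAM-resident
 * buffer so bit rot can at least be detected and counted when the RAM is
 * not ECC. Real ECC corrects single-bit errors in hardware; this only
 * notices that something changed between checks. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* FNV-1a 64-bit: a simple, well-known hash; fine for detection,
 * not for security. */
static uint64_t fnv1a(const uint8_t *buf, size_t len) {
    uint64_t h = 0xcbf29ce484222325ULL;
    for (size_t i = 0; i < len; i++) {
        h ^= buf[i];
        h *= 0x100000001b3ULL;
    }
    return h;
}

int main(void) {
    size_t len = 64u * 1024u * 1024u;       /* 64 MB stand-in for "huge" */
    uint8_t *buf = malloc(len);
    if (!buf) return 1;
    memset(buf, 0xAB, len);

    uint64_t checksum_at_write = fnv1a(buf, len);

    /* ... time passes; the data just sits in non-ECC RAM ... */

    uint64_t checksum_at_read = fnv1a(buf, len);
    if (checksum_at_read != checksum_at_write)
        fprintf(stderr, "buffer corrupted in RAM (detected, not corrected)\n");
    else
        printf("buffer intact\n");

    free(buf);
    return 0;
}
```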
Thank you for your reply! I’m not gonna pretend to understand most of it.

It would be amazing if you could specify to any application which RAM to use. Use case: I’m a composer and I max out my RAM with sample libraries. Some libraries might not benefit from faster RAM because they are used less frequently. It would be great to tell the sample engine (Kontakt) which RAM to load a patch into.
 
"...and [spaces] for graphics, media, and networking cards."

Exciting. Looking forward to seeing how that is handled and what the performance is like.
Could Nvidia support be back on Mac?
 
So, let's put this whole thing in perspective.

  • Apple's Mac sales grew 40% last year, in an overall declining PC market. Obviously people are buying them, because they like them, and trust them to get work done.
  • The M-series chips offer plenty of performance. I call a Mac Mini running over 900 tracks in Logic, each with their own instance of Space Designer, pro level.
  • Apple's approach, working forward from the iPhone, seems to be 'the computer as console,' i.e. a smaller set of known hardware that is easy to develop for and debug. I don't see this as a bad thing. If we're talking about 'pro' systems, i.e. systems we use to earn a living, then the simpler and more standardized, the better. The less often I have to be a sysadmin, or trying to juggle weird drivers in Safe Mode, the more time I can devote to actually working.
  • Let's say this rumor is true and the Pro SoC doesn't have expandable RAM. The question we're not asking is: How much RAM do pro users actually need?
  • The M1 Ultra currently tops out at 128GB, which is 1/12th of the potential maximum RAM in an Intel Mac Pro, and yet it's still edging out the biggest Xeon they ship, performance-wise.
  • A lot of pro software has surprisingly light RAM requirements (even supposed heavyweights like AutoCAD, Maya, etc.). Some pro tasks are more bound to the CPU, others to the GPU, others handled by specialized signal processing chips or sub-sections of chips. Other tasks are more dependent on speed and width of data lanes to and from disk storage.
  • Where having huge amounts of RAM helps is when you have to work with many very large image layers (Photoshop, or compositing tools for film); huge datasets in live memory (vs. paging in and out to disk); or tasks like OS virtualization. And even in compositing, that task is highly CPU-bound, too.
  • So "huge RAM requirements" are really more in the realm of scientific or enterprise computing. Is this an area Apple wants/needs to compete in, can be profitable in, or has anything to change the game? I argue no, at least not for now.
  • Will an Apple Silicon Mac Pro have enough memory to run something like, say, Foundry's Nuke VFX compositing software? I think yes. If the current M1 Ultra has 128GB of SoC RAM, it's not a stretch to think that an M2 / M3 version could have 256 / 512 / 1TB. It just won't be expandable, unless the SoC is on some sort of swappable daughtercard.
  • Next: Will it need expandable graphics? Maybe. We've already seen that the AS GPU cores are quite competitive, and could be scaled up. I don't see Apple willingly throwing money to AMD unless they have something that Apple needs. We know Apple is very unwilling to work with Nvidia right now, and has already spent a lot of money acquiring smaller graphics companies. If there's a card in it, it might be a new Apple one, but I'm betting on beefed-up GPU cores.
As a learning VFX artist and tech adviser at a media company, I can say RAM is king when it comes to CG and CAD. We’re talking texture streaming and asset streaming. I’m working on a house at the moment whose 3D project is 6GB, which when put into render requires 48GB of RAM. If I’m doing fluid simulations, 128GB is preferred and 1TB is desired: more RAM = more detail in the simulation.

To deviate from the current provision of the Mac Pro and basically offer a Mac Studio in a Mac Pro 2019 case for more money, but with no more RAM, seems dumb on a level that I can’t comprehend.
 
Why don't we wait until something is announced by Apple instead of losing our minds over a rumour that might turn out to be unfounded speculation?
But that is the point of MacRumors... Clickbait titles, and clicks, clicks and even more clicks.... :)

Joking aside, Mac Pro machines were always proper workstations (aside from the 6,1, and even Apple realized that with recalls and going back to a tower design for the 7,1). So, if we get the same chassis, I expect proper upgrade possibilities: RAM, PCIe, PCIe (booster) power connectors, SATA connectors and so on. If not, I will have no interest in it at all, just as I have none in the Mac Studio.
 


The upcoming high-end Apple silicon Mac Pro will feature the same design as the 2019 model, with no user-upgradeable RAM given the all-on-chip architecture of Apple silicon.


In his latest Power On newsletter, Bloomberg's Mark Gurman has revealed that Apple's upcoming Mac Pro, which is the final product to make the transition to Apple silicon, will feature the same design as the current Mac Pro from 2019. Unlike the current Intel-based Mac Pro, the upcoming model will also not feature user-upgradeable RAM.

Gurman has reported that Apple has canceled plans to release a higher-end model of the upcoming Mac Pro with 48 CPU cores and 152 GPU cores given its high cost and likely niche market.

Article Link: Apple Silicon Mac Pro Said to Feature Same Design as 2019 Model, No User-Upgradable RAM
Not sure why they don't just cancel the Mac Pro completely. It's a niche product, but when it was Intel-based it was at least a viable alternative to other high-end workstations, and you could run CPU-intensive applications that are not available for macOS.
 
Selling a big tower computer with decreased GPU performance compared to the 2019 model, and no ability to upgrade the GPU despite having expansion slots and lots of empty space? That just seems absurd.

The Intel Mac Pro with the W6900X AMD GPU that they’ll sell you new today in 2023 must continue to be supported through many future generations of macOS drivers. Is it so unthinkable that they’d also include AMD driver support for Apple Silicon macOS as well as Intel macOS?

There is a software issue that you are not really covering. First, Apple has sold the M-series as a way to run designated native iPhone/iPadOS apps with little to no changes. So what happens when a non-universal app with no code to handle AMD GPU details, and loads of implicit assumptions about the universality of unified memory, hits this AMD GPU? What if the user starts a native iPhone app on a screen connected to an Apple GPU and then, after it is running, drags it to a screen powered by an AMD GPU? What happens? (Does the app refuse to move, violating user interface guidelines, does it implode, or what?)

Second, a standard off-the-shelf AMD card presumes there is a UEFI layer to talk to. There is no such layer at the raw boot level on an Apple Silicon Mac; Apple doesn't do UEFI. This is not like the Intel Mac Pro, where the T2 chip just validated UEFI firmware and handed it to the Intel CPU to execute. UEFI has been banished here. The card isn't going to work in the boot environment; it could possibly be hacked into running after the system boots off the iGPU.
The secure-boot login screen of the Mac, though, requires an Apple iGPU... so if the drive is encrypted, what do you get if you don't have a monitor hooked to the iGPU?


There are workarounds for most of these issues: trap iPhone apps before launch and abort them if an AMD GPU is present and might cause problems; a custom boot ROM just for Apple-only GPU cards (which was more problematic in the past than most folks want to admit; lots of ROM copying without paying, which leads to a corrosive funding structure for the proprietary work over the long term).


Linux and Windows virtual machines on augmented versions of Apple's hypervisor framework do have access to a virtual UEFI implementation and don't have any native phone-app assumptions. If there were IOMMU pass-through to the VM, with the card assigned exclusively to that environment, those would work more easily.

Similarly, if you just used the GPU card as a "compute" GPGPU accelerator, you dodge the need for it to be present at boot and any GUI monitor work.

But there is the expectation that it will work exactly like it did on an old-school UEFI machine because it is a general PC box with slots. It isn't a general PC. The legacy, much bigger ecosystem that Apple Silicon is hooked to is iOS/iPadOS, which has zero third-party display GPUs. It is not all that absurd that the Mac, riding on that baseline infrastructure, would pick up the same constraint. When Macs were riding on Intel, they got lots of ancient BIOS and UEFI quirks from that much larger ecosystem.


The other hidden presumption here is that future discrete GPUs from AMD would also get covered with drivers. (That magically happens by default on the general PC market side, but it is not necessarily going to happen on the Mac side.)
As Apple's iGPUs cover more and more of the performance range of AMD's GPU product range, what is going to be the big motivator to cover the parts that Apple already covers (even if you extend that out to eGPU deployments)?


This first AS Mac Pro is probably going to have a lot of compromises. It likely won’t be able to support as many lanes of PCIe as the Intel Xeon can. (To be fair, we haven’t seen the M2 Max yet, which will be the foundation for the M2 Ultra presumably.)
...

I have doubts the M2 Max in the laptop is going to be the sole basic building block for the M2 Ultra, especially if the Ultra is picking up substantially enhanced PCIe lane provisioning. There's a pretty good chance the PCIe provisioning is off on another chiplet that just won't be present at all in the laptop deployment.




Admittedly I don’t understand the engineering of the CPU / SoC architecture, and some others here clearly do. If they do continue to offer an Intel Mac Pro along with an M2, then maybe only the Intel model gets the MPX GPUs. Maybe they do hand-wavy graphics benchmarks based on the Intel config and also talk about Apple Silicon benefits in the same breath.

Apple wouldn't have to "hand-wave" all that much. The M1 Max and Ultra beat the W5700 in the Mac Pro. They also performed better than the 16-core Mac Pro 2019. If the configurations of the MP 2019 stay 100% still (no new CPUs or GPUs), then the M2 Ultra will still beat those. Those two are the most common CPU/GPU components bought from Apple. If the M2 Ultra gets decently close to the W6800, they could add that to the "beat that" list also.

They probably will just stay away from any notion of being an Nvidia 4000-series or AMD 7900 "killer." That is more ducking the issue than "hand waving."




But still. Gurman said graphics. GRAPHICS!

*(Really unlikely he meant M.2 PCIe slots for SSDs and graphics cards, as Apple has never used M.2. They’ve put SSD blades on a variety of proprietary slots but never for any other purpose. They’ve used mini PCI slots for iMac GPUs, AirPort and bluetooth cards a long time ago, but not in many years. And space is hardly an issue in a Mac Pro tower to necessitate such tiny slots for expansion cards.)

Apple's new native boot environment has support for generic NVMe SSDs, so why should Apple pretend that M.2 devices do not commonly exist? In 2023, the number of user workstation motherboards in the general market with zero M.2 slots is about the same as the number of boards with zero SATA sockets. In 2019, Apple put a SATA socket on the motherboard; 3-4 years later, M.2 is at about the same ubiquity.

This is the one that actually seems more appropriately labeled absurd. The firmware support for this is already present; OS support is present and working. It's fully enabled out of the box if you put an adapter in a PCIe slot, but apparently the end of the world if the connector goes on the logic board directly.

If the new system has a relatively puny PCIe backhaul that is already excessively oversubscribed, then yeah, it would make sense. But not if they do any decent job of providing overall system backhaul on the logic board; this is kind of loopy. APFS is mainly about getting more folks onto SSDs, so blocking the path to more SSD usage makes no sense.
 