Yes, I want to know the technical reason. So far it seemed to me that a computer was made of a CPU, a GPU and RAM, among other things, and that the RAM and GPU could be swapped if they weren't soldered in.
How many mid-level to high-end GPUs have socketed RAM? AMD's, Nvidia's, Intel's.
Same baseline technical reasons. What you are ignoring in your examples is that the soldered VRAM on those GPU cards gets cast aside; the hyper-modularity folks don't have hissy fits over that. The technical needs are:
1. A relatively very wide memory bus. The bus width on a high-end GPU is likely going to be wider than on most mainstream CPU-only packages (see the rough bandwidth sketch after this list).
The bus width on Intel and AMD iGPUs is the same as the mainstream one. But are any of them mid-to-high-end GPUs? Nope. If you cannot get the data to the arithmetic units fast enough, they starve. Hence the wider-than-'normal' buses. There isn't the space for the bulkiness of DIMM buses. (And that new connector for laptops is an even bigger space waster.)
2. Caches are not going to get you 98% hit rates all the time. Caches are effective relative to the percentage of the larger capacity they cover. A 10MB cache on a 1,000,000MB storage drive isn't going to be as comprehensively effective as a 1,000MB cache over the same amount. The lower the percentage covered, the more likely the hit rate will be lower (even more so when throwing multiple concurrent workloads at different ranges of the primary source being cached: you get more mapping conflicts and data being evicted prematurely). See the toy cache sketch after this list.
Joe Blow putting in random RAM capacities means you can't have a predictable hit rate over a range of configurations.
(You don't tend to see high-performance GPUs with an extremely large variance in configurable RAM. It is bad for optimized tuning.)
3. Power. The farther away the memory is, the more power-inefficient it is to get to.
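A rough back-of-the-envelope sketch of point 1, in Python. The bus widths and data rates below are ballpark figures I'm assuming for illustration, not spec-sheet values for any particular SKU; the point is the order-of-magnitude gap that a wide soldered bus creates.

```python
# Peak theoretical memory bandwidth = (bus width in bits / 8) * data rate in GT/s.
# Figures below are illustrative ballpark numbers, not a spec sheet.

def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gt_s: float) -> float:
    """Bytes moved per second across the whole bus, in GB/s."""
    return (bus_width_bits / 8) * data_rate_gt_s

configs = {
    "Dual-channel DDR5-5600 desktop (128-bit DIMMs)":      (128, 5.6),
    "M-series Max-class package (512-bit LPDDR5)":         (512, 6.4),
    "High-end discrete GPU (384-bit GDDR6X, ~21 GT/s)":    (384, 21.0),
}

for name, (width, rate) in configs.items():
    print(f"{name:52s} ~{peak_bandwidth_gb_s(width, rate):7.1f} GB/s")
```

Roughly 90 GB/s versus 400 GB/s versus 1,000 GB/s: the ALUs on anything mid-to-high-end starve on the first number, which is why nobody in that class hangs the memory off socketed DIMMs.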
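And a toy sketch of point 2, assuming a plain LRU cache and uniformly random accesses. Real caches and real access patterns are smarter than this, so treat it as an illustration of the coverage argument, not a model of any actual part: the hit rate roughly tracks the fraction of the working set the cache covers, and concurrent streams hitting different ranges drag it down further.

```python
# Toy LRU cache simulation: hit rate tracks the fraction of the working set
# the cache covers, and concurrent workloads make it worse. Illustrative only.
import random
from collections import OrderedDict

def hit_rate(cache_lines: int, working_set: int, accesses: int = 200_000,
             streams: int = 1) -> float:
    cache = OrderedDict()              # LRU order: most recently used at the end
    hits = 0
    for _ in range(accesses):
        # Each "stream" hammers its own disjoint range of the primary source.
        base = random.randrange(streams) * working_set
        addr = base + random.randrange(working_set)
        if addr in cache:
            hits += 1
            cache.move_to_end(addr)
        else:
            cache[addr] = True
            if len(cache) > cache_lines:
                cache.popitem(last=False)   # evict least recently used
    return hits / accesses

working_set = 100_000   # lines in the primary source being cached
for coverage in (0.001, 0.01, 0.10):
    lines = int(working_set * coverage)
    print(f"coverage {coverage:6.1%}: 1 stream {hit_rate(lines, working_set):.1%}, "
          f"4 streams {hit_rate(lines, working_set, streams=4):.1%}")
```

With random RAM capacities from Joe Blow, that coverage fraction (and therefore the hit rate) becomes unpredictable across configurations, which is exactly the tuning problem described above.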
All of those lead to why you have dense-packaged GDDR VRAM packed around the perimeter of a GPU package on a discrete card. And also why, in the even higher performance zone, you will see things like HBM, where RAM dies are stacked even taller with even wider buses.
The other issue, progressing over the last 5-6 years, is better packaging technology, which is stacking things higher and closer at lower power consumption. Pointing back at the technology limits of the 1980s isn't going to give much insight into what can be done now.
I get that AMD, Intel, and Apple make different types of CPUs. But both AMD and Intel allow you to have whatever RAM and GPU you like. Why not Apple? I suspect it's just "we want to force you to pay 5x price for RAM and we want you to get the M2 Pro Max Ultra just to get an extra GPU core".
Apple isn't mostly doing a CPU. Intel was Apple's largest GPU vendor (not AMD or Nvidia); that is a primary 'dump' target here. Lots of folks have spun this change as if Apple dumping Intel meant that Apple was obsessed with just dumping Intel CPU cores. It was both.
The Mx Max die is a GPU with some CPU/NPU elements sprinkled around it. Apple is primarily building a mid-to-large size GPU, so it really shouldn't be surprising that it looks like the mid-to-large size GPUs that other folks also build.
This usually gets 'twisted' into the claim that if Apple is doing a GPU then they 'have to' come out with some 4090 killer (whatever the largest, most expensive GPU currently out there is... they have to 'kill' that or it is a complete bust). That is nonsense. They don't have to do everything for everybody to be competitive in a targeted area. AMD often got into trouble in the first half of this century trying to 'monkey see, monkey do' everything that Intel was doing.
AMD narrowed their focus and they got better (reusable chiplets for both mainstream desktop and servers; for a while they did the lower half of the GPU range and then did the upper half). Apple did the same when Jobs came back and tossed half the model variations out the window.
Doing everything for everybody isn't necessary. Even in Mac Pro space.
But I think it's a huge, huge shame that there are companies like nVidia who make great GPUs, they innovate in the AI revolution, there are amazing games that make use of raytracing and all kinds of cutting edge technology, there's VR... and Mac users don't even know what the hell any of that is. Apple is like the North Korea of tech. "Our tech is perfectly good, you're not allowed to try anything else."
Just a few years ago we had nVidia GPUs in MacBook Pros. Then for some odd reason it was limited to AMD GPUs.
It is far, far, far from "some odd reason". Nvidia did an 'embrace, extend, extinguish' attempt on OpenCL. They put CUDA in front of Metal once Apple moved on to Metal as the alternative. When there was a huge problem with Nvidia iGPUs, they walked away and left Apple holding the financial bag. Finally, toward the end, Nvidia had code that would 'halt and catch fire' with every new macOS kernel update, throwing drivers out in an approach completely uncoordinated with Apple's plans. And Nvidia relatively publicly blamed all the quirkiness on Apple ("we have drivers but we have zero idea why Apple won't sign them", hoping to get the fanbase to put more heat on Apple for a reprieve). That just put the icing on the cake.
It isn't 'odd'. Nvidia and Apple got into a 'war' over who was more powerful, and Nvidia basically got dropped as a bad partner to Apple. They were bad. Apple contributed to the decline also; they are just as dogmatic about Metal-first as Nvidia is about CUDA-first. But they are Apple's systems and Apple's operating system. And Apple's Metal-first strategy was deeply intertwined with the iPhone business. Apple had zero incentive to let Nvidia do an 'embrace, extend, extinguish' move on Metal. There was no 'new' Mac Pro business Nvidia was going to drive that would offset the iPhone business.
Technically, Nvidia is just a 'subcontractor'. If you are a subcontractor who is always causing drama for the prime contractor... eventually you get dropped. After a while Apple just stopped signing their drivers. That's it.
Some folks in the general PC-parts world start inside out: first pick an Nvidia card and then wrap the rest of the PC parts around that. In that alternative 'world', Nvidia is the prime contractor calling the shots. Apple doesn't work that way at all.
Both sides got much bigger over time in business areas that had almost zero overlap. Apple didn't 'need' Nvidia financially or technically, and vice versa. It was very easy for the 'war' to escalate until both sides 'blew up' their side of the bridge between them.
Nvidia is a pain to work with. Almost no potential 'business partners' wanted them to buy Arm. There have been similar dust-ups with other vendors. Nobody trusts Nvidia except their end users, who buy into the moat that Nvidia digs around their products.
But you could still get external GPUs, like the BlackMagic eGPUs via Thunderbolt. It seemed like things were expanding, that more options were becoming available...
There was a trend toward the end of the Intel Mac era where Apple's boot firmware got closer and closer to UEFI. Early on there were special "Mac boot ROM" requirements for video cards because Apple was mainly interested in EFI (not backward-looking BIOS). That UEFI support crept in mainly because the Intel CPU packages needed it.
Given a free hand as to what the boot firmware was going to be, Apple tossed UEFI out the window. Macs officially boot macOS and that's it. You can do some hackery with Linux, but it has no official technical support coverage.
From the first WWDC in 2020, when Apple Silicon was announced, Apple said GPU driver coverage was Apple GPUs only. That didn't change at all over the next two WWDC sessions ('21 and '22). Maybe it was 'low priority' and would slide to '22, but nothing. The major driver-coverage expansion announced at WWDC '22 was that DriverKit drivers should work on iPadOS on M1/M2 iPads. Crickets chirping on any 3rd-party GPU drivers.
There are over 50 cards that work in a Thunderbolt PCIe expansion box, just not anything that deals with the early boot setup and isn't a generic driver like USB or NVMe or SATA. There is no DriverKit coverage for display GPUs in the framework's object hierarchy (unlike the preceding IOKit framework). Thunderbolt works with Apple Silicon; that isn't the issue.
The big push for the Blackmagic eGPUs was for the laptops, which had limited GPUs. The new Mn Pro and Max are nowhere near as limited. The Mini... again, nowhere near as limited now (especially if you loop in a Studio Ultra).
The iMac Pro was less limited. And the MP 2019 was not particularly limited at all in an "augment through eGPU" sense. Those last two were not what eGPUs were primarily targeting.
and now all Mac users are stuck with integrated graphics made by Apple regardless of how much they spend on their computer,
I suspect that is part of Apple's point in excluding 3rd-party GPUs. They want developers extremely focused on optimizing for Apple GPUs. If there are no other options, there are no distractions. Apple is trying to get rid of the notion that 'iGPU == slow'. That doesn't have to be the case if you don't cripple the memory bus to kowtow to DIMM options.
a company that has no experience making GPUs and has not brought gaming, AI or VR to the Mac in any form.
That is far enough into the delusional zone that it appears to come from an alternative universe. Apple does billions in gaming on Apple GPUs (the high-end PC game market isn't the whole gaming market). No AI? Chuckle; your app exposure is limited. Apple is far more focused on AR than VR and has laid lots of foundation.
Apple's approach is closer to console gaming than perhaps more hard-core Windows PC gamers are comfortable with. It is somewhat of a hybrid approach. Apple is looking to match highly optimized graphics code to a finite set of good hardware: a small enough set that they can optimize very well for all of it, but not so large that they have constantly mutating drivers trying to plug every quirky hole that corner-case-chasing apps dive into.
When Apple does VR, it will have deep links to mobile (on-battery) VR. AI on battery. Gaming on battery. 75+% of Macs sold are mobile-capable; that is where the inertia is. The Mac Pro is a gigantic chasm away from what is driving the vast bulk of Mac sales (it is likely down in the 1%-or-less range). Highly focused, well-optimized code, when run on a plugged-in and larger SoC, will often run faster. (Brute-force code will probably require more expensive brute force to run faster.) They are not likely to completely miss out on all of the mid-to-high-end range of performance coverage, especially if they have higher-quality optimized code on their GPUs than the competition.
They're just using their new CPU as an excuse to lock down their already massively crippled ecosystem even more.
Again, no. Apple is more about creating a system than a subcomponent. If you are looking to Apple just to make incomplete subcomponents, you are kind of shopping for a pork sandwich at a conservative kosher deli.
If the unified memory weren't buying them largely differentiated performance, you'd have a point. It is a trade-off that Apple is making. Don't like the trade-off? Then you don't have to buy from them. Apple isn't trying to make everything for everybody.
Apple does need to add the ability to provide more general PCIe bandwidth to the Ultra (and up) SoCs that will never land in a laptop. Some acceleration that adds more performance in 'scale-out' (in-the-box) workloads is likely necessary, because there is only so much 'horsepower' you can put into a single package at a fixed fab process technology level. These days a supercomputer is a bunch of little computers clustered in a very expensive, custom network.