The problem is, Apple came to the realization that a Mac Pro must have a discrete GPU and it must be Nvidia.
"Must be Nvidia" ? What alternative universe are you living in? In this universe, Nvidia blew up their relationship with Apple. It isn't coming. There is no "must" that Apple needs to do here. It could be a 'nice to have' , but a hard requirement. No. There were no Nvidia MacOS support with the MP 2019 ( other than running off to Windows. And that is also out at the 'raw iron' boot level) .
Same issue with the "must have" display GPU. Pretty high likelihood the baseline GPU will be the Apple iGPU. Whether there are other optional GPUs (e.g., compute GPUs) may be up in the air, but as far as a hard requirement goes, an Apple iGPU is the modern Mac requirement at this point (seamless native running of iOS software, assumptions about Unified Memory that Apple has heavily guided developers into adding to their applications for the last 2.5 years, etc.).
It would probably be useful for the whole Mac lineup if Apple upped their game on the hypervisor and virtualization frameworks and allowed a guest OS to take full responsibility for a 3rd-party GPU (e.g., Nvidia running solely on a Linux guest OS through a direct (or extremely thin) IOMMU static assignment). But that shouldn't be Mac Pro specific. Probably a more common use case than an MBA with an external TB PCIe card enclosure, but not MP exclusive.
However, that is not native driver support inside of macOS proper.
Too many of the tasks that will be performed by a Mac Pro will require CUDA.
Back in the era when Steve Jobs penned his "Thoughts on Flash" there were a decent number of folks running around saying that the iPhone was going to flop if Apple didn't add Flash. Lots and lots of developers were using Flash, and without it the iPhone would never get traction.
Somewhat similar boat here. The tasks don't strictly require it. It is completely possible to do AI/ML without using CUDA. There are some specific pieces of software eyeball deep in proprietary code, but that software isn't really the 'task'.
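To underline that the 'task' is the math, not CUDA, here is a trivially small illustration: fitting a line by gradient descent in plain Python. The data, learning rate, and iteration count are made up for the sketch; the same arithmetic runs on any backend (CPU, Metal, or CUDA) because nothing here is vendor-specific.

```python
# Toy example: the ML 'task' is just math. Fit y = w*x + b to points drawn
# from y = 2x + 1 using gradient descent -- pure Python, no CUDA required.

data = [(x, 2 * x + 1) for x in range(-5, 6)]  # synthetic points on y = 2x + 1

w, b = 0.0, 0.0
lr = 0.01  # learning rate (arbitrary choice for this sketch)

for _ in range(2000):
    # Gradients of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"w ~= {w:.3f}, b ~= {b:.3f}")  # converges toward w=2, b=1
```

The point isn't that anyone trains real models this way; it's that the computation is ordinary arithmetic that any accelerator stack can execute, proprietary CUDA kernels or not.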
In the Flash context Apple pushed open Web standards to fill the gap. It took years, but with the evolution of HTML5/JavaScript, WebAssembly, WebGPU, etc., there were not many whimpers when Flash finally died off. Apple is somewhat in a Metal-versus-CUDA dogfight here, which really isn't an "open standards" fight. Nvidia's 'embrace, extend, extinguish' approach to OpenCL and the large moat Apple dug around the iPhone have put Apple on a more proprietary path in this battle. Similarly, Nvidia is more than willing to nudge Metal into losing so that CUDA can win and dig a bigger moat around their hardware. You have two companies that don't strategically need each other, each going in different directions.
**************************
Going back to the newer chips and marketing, I understand TSMC is planning to flip the next generation of chips upside down and call it -3nm. After that, they will just make the chips bigger and assign a larger negative number to them. I wonder if they could build their new factory in New Zealand?
Another post from some alternative universe. The N3 grouping of fab processes will be followed by N2. The first N3 version, N3B, is going to be design-rule incompatible with the rest of the N3 family (N3E, etc.), but the only "upside down" aspect is that N3E eases off the aggressive shrink (e.g., the SRAM/cache feature size will be the same as N5P, so that part of the chip won't get smaller at all). It isn't that the chips are getting much bigger, but they aren't getting a lot smaller either (unless you specifically leave certain features out). If the die size stays the same but the wafer costs are 20-40% more, then costs get bigger.
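A back-of-the-envelope sketch of why same-size dies on pricier wafers mean higher per-chip costs. All the dollar figures and die areas below are hypothetical round numbers for illustration, not actual TSMC pricing; the model also ignores edge loss and yield, which make the real math worse.

```python
# Rough cost per die: wafer cost divided by how many dies fit on the wafer.
# Assumes dies-per-wafer scales inversely with die area (hypothetical numbers).

def cost_per_die(wafer_cost: float, die_area_mm2: float,
                 wafer_area_mm2: float = 70_000) -> float:
    """Naive cost per die, ignoring edge loss and yield."""
    dies_per_wafer = wafer_area_mm2 / die_area_mm2
    return wafer_cost / dies_per_wafer

# Hypothetical N5 baseline: $10,000 wafer, 150 mm^2 die.
n5 = cost_per_die(10_000, 150)

# N3 at +30% wafer cost with the SAME die size: cost per die rises 30% too.
n3_same_die = cost_per_die(13_000, 150)

# Even shrinking the die ~15% only claws back part of the increase.
n3_smaller_die = cost_per_die(13_000, 150 * 0.85)

print(f"N5: ${n5:.2f}  N3 same die: ${n3_same_die:.2f}  N3 -15% die: ${n3_smaller_die:.2f}")
```

With caches not shrinking, a full 15% die shrink is optimistic, which is exactly the squeeze described above: either eat the cost increase or spend design effort using 'less wafer'.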
Apple has already bloated the A14 -> A15 (the A16 is still significantly above the simple A-series norm even using N4) and the M1 -> M2, M1 Pro -> M2 Pro, M1 Max -> M2 Max by adding more stuff to push performance on a relatively unchanged N5 process.
N3 wafers cost more, so there is some pressure on Apple to use 'less wafer' with slightly smaller dies. There may be enough tension to add "more stuff" to keep the die sizes roughly the same. Since the caches are not shrinking much, there isn't going to be gobs and gobs of 'free space' freed up by going to N3. Some (and it will likely get some more fixed-function blocks), but folks trying to spin a "we are going to get the kitchen sink amount of new stuff" story will likely be disappointed.