The difference will be that now you can plug an eGPU into any TB3 socket on any Mac.
In the future, if there are TB3-no-GPU sockets and TB3-yes-GPU sockets, it will be a mess.
There aren't going to be those. TBv3+ (probably TB4 or better) will carry PCIe.
Whether an external GPU works or not has nothing to do with Thunderbolt. It only has to do with a driver being available for macOS.
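For what it's worth, that "driver available" point is visible from the app side today: Metal only lists devices the OS actually has a working driver loaded for. A quick illustrative Swift sketch (nothing here is specific to the Apple Silicon transition; the isRemovable flag is just how Metal marks eGPUs):

```swift
import Metal

// List every GPU macOS currently has a working driver for.
// If an eGPU's driver isn't loaded, it simply won't appear here,
// no matter which Thunderbolt port it is plugged into.
let devices = MTLCopyAllDevices()

for device in devices {
    let kind = device.isRemovable ? "eGPU (removable)"
             : device.isLowPower  ? "integrated"
             : "built-in discrete"
    print("\(device.name): \(kind)")
}
```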
There is a decent chance that Apple may push 3rd-party GPUs into a System Extension (the new driver/extension) format. The GPU is going to have to work in an IOMMU virtual addressing context, but that probably will be a matter of writing the software/firmware as opposed to some hardware-level thing.
What is going to keep eGPUs significantly in the game is Apple using discrete GPUs in some Mac systems. The larger problem occurs if Apple isn't helping to sponsor any other GPU driver development. If they invest zero in that, there is a big financial hurdle in getting the drivers written (what 3rd-party GPU vendor is going to want to solely bankroll that if the "keeper of the kernel" is 100% disinterested in helping them?).
There is a more than decent chance Apple isn't as hyped up on eGPUs as on their own complete systems. Enabling eGPUs probably will be on the "do in our copious spare time" priority list up until Apple is about to ship a system with a discrete GPU in it. Then that "free" (already paid for) driver work will dribble out to the eGPU ecosystem. Cards on Apple's supported list all have either a direct (or strong chip-implementation) link to a Mac system. eGPUs were never decoupled from Mac system GPU configurations at all. That won't be "new" with Apple Silicon; that is how it has always been, and probably how it is going to be in the future also.
An eGPU is just a specialized external PCIe-connector enclosure. Apple going extremely myopic on hardware implementations isn't going to be a GPU-only problem. If Apple goes 100% disinterested in discrete USB controllers, the vast majority of 3rd-party discrete Ethernet controllers, etc., then TB docks will start losing robustness and functionality too.
BTW, has anybody counted how many different dGPUs are in the current Mac lineup?
If 1/4 of Macs sold now have a dGPU, it shows how small a number of each GPU model is needed.
There are two dimensions. One is time and the other is systems. Having a track record of previous dGPU usage is what brings in cards. That contributing factor may substantively hiccup with Apple Silicon. The Intel systems will keep what they have, but the hiccup going forward is that the Apple Silicon systems won't bring in much new (the Mac Pro corner case may keep things alive until it is converted).
On the other dimension, it really isn't the case that Apple is trying to cover the 3rd-party GPU product families just for completeness' sake. If some Mac needs one, then it gets added. So what is really at issue is how many Macs will "need one" in the Apple Silicon era.
At present the current Apple systems are using AMD 5300 (at different clocks), 5500 (different clocks), 5700 (different clocks), "Vega64" (gen 1), and data-center Vega (Vega 20). Vega64 is somewhat an artifact of a mostly comatose model (iMac Pro 2017... creeping up on 3 years old). So there are about 4 levels now.
The 5300 level is probably at some risk. (The iMac 2020's 5300 doing as well as the older 580X on some tasks shows it is in the competitive toss-up stage, though. Even if Apple 'catches' the 5300 with an iGPU in 1-2 years, that AMD class will likely be in a higher-performance zone.)
Maybe Apple could design just one GPU model and use it in all cases? Marking bad cores off for weaker needs and clocking them accordingly?
That doesn't work economically. The "bad core" defects are not going to generate enough volume to fulfill something that sells at much higher volume than the "bigger core count" model. You'd have to fill low-core orders with dies that have perfectly good cores, in a very substantial ratio to the defective ones.
You can fill a relatively low-volume product with leftovers, but that isn't how to do high-volume products. Making the die to fit is much better: far better wafer utilization (get more dies out of a single wafer). [Doing a bigger die means getting fewer dies off the wafer, which means needing higher wafer throughput to get to the same number of units to sell. That costs more.]
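Purely as a back-of-the-envelope illustration of that wafer-utilization point (the die sizes below are made up, not actual Apple or AMD numbers), using the standard dies-per-wafer approximation:

```swift
import Foundation

// Classic dies-per-wafer approximation (ignores yield and scribe lines):
// dies ≈ π·(d/2)² / A  −  π·d / √(2·A)
// d = wafer diameter (mm), A = die area (mm²).
func diesPerWafer(waferDiameter d: Double, dieArea a: Double) -> Int {
    let grossDies = Double.pi * pow(d / 2, 2) / a
    let edgeLoss  = Double.pi * d / (2 * a).squareRoot()
    return Int(grossDies - edgeLoss)
}

// Hypothetical die areas, for illustration only.
let smallDie = diesPerWafer(waferDiameter: 300, dieArea: 120)  // "right-sized" low-end die
let bigDie   = diesPerWafer(waferDiameter: 300, dieArea: 480)  // "one big binned" die

print("~\(smallDie) small dies vs ~\(bigDie) big dies per 300 mm wafer")
// ~528 vs ~116: quadrupling the area yields fewer than 1/4 the dies once
// edge loss is counted, so filling high-volume low-end orders with big
// dies burns wafer capacity even before defect yield enters the picture.
```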
More likely Apple wants Apple GPU ubiquity (appearing in all Macs) more than it wants to take on the upper 25-35th percentile performance level of the discrete GPU market. The WWDC chart is more a reflection of that likely inevitable situation where the Apple GPU is always there. (So developers should optimize all apps to it being there; no distractions or procrastination on 'would-a could-a should-a' future excuses.)
There may be 3 players in that high-end space by 2022. Apple has a big enough battle to win at the iGPU level. Intel Gen12 is looking more than decent, and AMD isn't failing there either. By early 2022 both will probably have moved their integrated graphics to something better, and so will Nvidia have in the low-power dGPU space for laptops. If AMD and Intel fail at competitive dGPUs that strongly support Metal, then Apple may be squeezed into stepping into that role. But that isn't happening so far, and Apple probably doesn't want to add that to the pile. (A discrete GPU does absolutely nothing for iOS, iPadOS, and every other xxOS variant they have out there, whereas all of those (and likely every Mac) have an iGPU.)