It’s significant enough.
Right now, every Apple Silicon device, from as small as the Watch to as big as the Mac Studio (setting the Mac Pro aside for a moment), shares the same traits: an SoC with various heterogeneous CPU, GPU, and NPU cores, memory packages directly connected, an SSD controller built in if there is one, and so on. Critically, that means
- no additional memory, and
- no additional GPU.
Even the SSD isn't managed through PCIe but through an Apple-specific interface; PCIe is relegated to more specialized purposes such as Thunderbolt.
This strategy serves Apple well: it doesn't just yield fantastically high bandwidth and low latency, it also brings real production benefits. The iPhone's e-cores can scale up to the M series on the iPad and Mac, eventually including the M Ultra, just at a higher clock than on the A series. And they can scale down to the S series on the Watch.
The same goes for the software side. Once macOS no longer supports x86, which isn't far off any more, all of Apple's OSes can assume there's one GPU, that RAM is integrated and Apple-controlled, and so forth.
And then there's the Mac Pro, where some of that is a downside. No upgrading the CPU, GPU, or NPU cores. No upgrading the SSD controller to a faster spec. No adding more memory. No adding an additional GPU. Or two. Or four. And as you point out, yeah, on multiple Macs, perhaps even on iPads (heck, possibly even on the Vision Pro), an eGPU could be beneficial.
But is there a cost to it? Absolutely. Not just in designing such an architecture, but also in maintaining the complexity it implies for many OS releases to come.