There are a lot of flaws in the fab-vs.-fab arguments on this thread.
Part of the argument you see here is the presumption that TSMC 5nm = Intel 5nm. That's just not true.
From an Electronics Weekly piece comparing Intel's nodes with TSMC's as Intel prepares to crank up its process technology under Kickin' Pat Gelsinger (www.electronicsweekly.com):
"We are projecting that Intel’s 7nm node will have an EN value of 4.1nm (intermediate between TSMC 5nm and 3nm nodes), the Intel 5nm node will have an EN value of 2.4nm (intermediate between TSMC 3nm and 2nm nodes),” says Jones, adding “and if Intel stays with a 2x per generation shrink the Intel 3nm node could have an EN value of 1.3nm or slightly better than TSMC’s 1.5nm. This of course presupposes Intel can execute 2x shrinks at a much faster pace than in the past.”"
Simply put: Intel 7nm ≈ TSMC 4.1nm. TSMC is currently testing 3nm, but it's not primetime-ready yet (2H 2022). Tech outlets from Tom's Hardware and AnandTech down to YouTube channels like Gamers Nexus and Linus Tech Tips acknowledge that TSMC's and Intel's processes are handled very differently in terms of transistor density, so the ability to shrink a die while maintaining transistor counts for lower-latency data transfer isn't a 1:1 match between them. Comparing the node names directly is apples to oranges, not apples to apples; the sketch below shows why.
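For a rough sense of why node names and density diverge: if you treat a node name as a linear feature measure, an "equivalent node" (EN) scales as one over the square root of transistor density. Here's a minimal back-of-the-envelope sketch, using purely illustrative density numbers rather than vendor-published figures:

```python
import math

# Back-of-the-envelope: treat a node name as a linear feature size, so an
# "equivalent node" (EN) scales as 1/sqrt(transistor density). The density
# numbers here are illustrative placeholders, not vendor-published figures.
def equivalent_node(density, ref_density=100.0, ref_node=7.0):
    """Estimate an EN in nm relative to a reference node/density pair."""
    return ref_node * math.sqrt(ref_density / density)

# A process twice as dense as the 7nm reference only lands at ~4.9nm EN:
print(round(equivalent_node(200.0), 1))  # -> 4.9
```

Under that assumption, doubling density only shrinks the equivalent node by a factor of about 1/√2, which is exactly how marketing names and effective geometry drift apart between foundries.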
That isn't to take away from the fact that Intel basically dragged its feet from the first-gen Ryzen launch through the current gen, allowing AMD to creep up and surpass it. That said, while AMD holds a lead, it's not some massive, insurmountable lead at this point. Further, x86 as a whole has been watching ARM play catch-up with massive generation-to-generation uplifts that x86 COLLECTIVELY hasn't been able to match. In performance per watt, ARM has an across-the-board advantage. People tend to think of ARM only as "oh, that cute little SoC in my cell phone or tablet..."
And then they fail to realize that the Fugaku supercomputer, running on Fujitsu's ARM-based A64FX CPUs, is currently the fastest supercomputer in the world, and not by some slim margin. They fail to realize that Amazon's Graviton 2 (Neoverse N1-based) socked both Intel and AMD in the jaw against Xeon and Epyc in the enterprise on performance per watt. Even if Epyc is faster at the high end (not by a lot), it's still a small furnace, far less efficient regardless of the performance gains it made this generation. Heat = $, since you have to run cooling on top of the additional power draw (rough numbers in the sketch below); tapping a chip that's nearly as fast with a significantly lower cost of operation is a huge advantage for Amazon, and part of why Microsoft joined the fray, first with Marvell/Cavium and then with announcements that it would build its own ARM CPU families. And with ARM's roadmap now splitting the high end of its CPU designs, the new high end is a two-tier approach that lets ARM maintain its significant performance-per-watt advantage in the enterprise (Neoverse N2) while also pushing for brute-force performance (Neoverse V1).
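To put a number on the "heat = $" point: every watt a chip draws at the socket gets paid for again in cooling, usually folded into a PUE multiplier. A minimal sketch with hypothetical wattages and rates; these are not measured Graviton 2 or Epyc figures:

```python
# Illustrative only: why performance per watt compounds into dollars in a
# datacenter. Every watt at the socket is paid for again in cooling,
# approximated here by a PUE (power usage effectiveness) multiplier.
# Wattages and rates are hypothetical, not measured Graviton 2 or Epyc numbers.
def yearly_power_cost(socket_watts, pue=1.5, usd_per_kwh=0.10):
    """Annual electricity cost for one socket, cooling included."""
    return socket_watts * pue * 8760 / 1000 * usd_per_kwh  # 8760 h/year

hot_chip = yearly_power_cost(225)   # hypothetical 225W x86 part
cool_chip = yearly_power_cost(110)  # hypothetical 110W ARM part
print(f"per-socket yearly delta: ${hot_chip - cool_chip:.0f}")  # -> $151
```

Multiply a delta like that across tens of thousands of sockets and the appeal to Amazon is obvious, even before counting the cost of the silicon itself.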
Those questioning why NVidia wants ARM? Look at it like this: they didn't have enough volume to keep Tegra in development. Its only real wins were the Shield (a beloved but niche product; in the streamer market it's a tiny fraction of a whole dominated by Roku, Fire TV, Apple TV, and Google with Google TV and Chromecast) and the Nintendo Switch built off of it. While NVidia seems committed to building a new SoC for Nintendo for the Switch (2? Pro?), they had no pathway into the mobile market that Qualcomm, Samsung, HiSilicon (Huawei), and Apple dominate in market share. Even with the hit Huawei took from the U.S., it's still a major global player.
Yet watch AMD, which looks poised to take the supercomputing crown by integrating its x86 enterprise CPU dev team with its Radeon division, and consider the revenue that contracts like those provide a company from a single partnership, and it's obvious why NVidia would enter the fray. Integrating ARM's massive enterprise performance gains with NVidia's GPU teams for enterprise and scientific computing gives NVidia an outlet to compete. Beyond that, they already use controller cores from ARM or RISC-V, tailored to their needs, on their GPUs. Their move to become a RISC-V (open-source architecture) partner came from a lack of any outlet to really steer the ship at ARM. Being able to steward ARM as an owner while also keeping the revenue stream from licensees is enticing, and a big part of why NVidia is interested. It also makes it a lot easier for NVidia to get into the enterprise fray alongside supercomputing.