If you're so inclined, read this ...
Who's doing what in next-gen chips, and when they expect to do it. (semiengineering.com)
The problem is that several of those solutions only work on a narrow set of circuit types. The logic to implement a multiplier in a computational core is different from the logic to implement SRAM. Analog-to-digital conversion logic is different still.
The Hot Chips 34 conference that is normally held at Stanford University is in full swing this week, and thanks to the coronavirus pandemic is being held ... (www.nextplatform.com)
Memory cost decreases are flattening out. That has ramifications at the on-chip cache level as well.
That is one reason why aggregating dies onto a package is where "a trillion transistors on a 'chip'" is headed.
If density can only continue to be cranked up on 1/4, 1/5, or 1/8 of the die, then the overall impact on the die as a whole is going to shrink for computational problems that need the whole chip to work in a more balanced fashion.
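A rough way to see that is a minimal Amdahl's-law-style sketch in Python. The die fractions match the ones above; the 2x local gain is a made-up illustrative number, not a process claim:

```python
# Amdahl's-law-style sketch: if only a fraction of the die keeps
# scaling, the whole-chip benefit shrinks. All numbers here are
# illustrative assumptions, not measurements.

def whole_chip_speedup(scaling_fraction: float, local_gain: float) -> float:
    """Overall speedup when only `scaling_fraction` of the work
    benefits from a `local_gain`x improvement (Amdahl's law)."""
    return 1.0 / ((1.0 - scaling_fraction) + scaling_fraction / local_gain)

# Suppose the part of the die that still shrinks gets a 2x improvement:
for fraction in (1 / 4, 1 / 5, 1 / 8):
    print(f"{fraction:.3f} of die improving 2x -> "
          f"{whole_chip_speedup(fraction, 2.0):.2f}x overall")

# 0.250 of die improving 2x -> 1.14x overall
# 0.200 of die improving 2x -> 1.11x overall
# 0.125 of die improving 2x -> 1.07x overall
```

The smaller the slice of the die that still scales, the closer the whole-chip gain sits to 1x, which is the point about balanced workloads.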
It is more likely to open the door for more fixed-function, specialized compute. Certain computational problems will get way faster (3x-5x) while "Dick, Jane, and Spot" extremely general-purpose compute operations will not.
Already covered in another response above. If these super-exotic approaches to going even smaller result in wafer costs that are 3-4x as much as current ones, lots of electronics designs are not going to follow them up. There will be some that pay, but if the costs are spread over fewer and fewer players, things will likely slow down over the long term.
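Back-of-the-envelope on that, with entirely made-up numbers (real foundry pricing and yields are not public): even if a pricier wafer yields somewhat more dies, the cost per good die can still climb steeply.

```python
# Back-of-the-envelope wafer-cost sketch. Every number here is an
# illustrative assumption, not real foundry pricing or yield data.

def cost_per_good_die(wafer_cost: float, dies_per_wafer: int,
                      yield_rate: float) -> float:
    """Wafer cost amortized over the dies that actually work."""
    return wafer_cost / (dies_per_wafer * yield_rate)

# Hypothetical current node: $17k wafer, 600 dies, 80% yield.
today = cost_per_good_die(17_000, 600, 0.80)

# Hypothetical exotic node: 3.5x wafer cost, modestly more dies from
# the partial shrink, and lower early yield.
exotic = cost_per_good_die(17_000 * 3.5, 750, 0.60)

print(f"current node: ${today:,.2f} per good die")
print(f"exotic node:  ${exotic:,.2f} per good die")
# current node: $35.42 per good die
# exotic node:  $132.22 per good die
```

With numbers in that ballpark, only products that can charge for the difference stay on the leading edge, which is how the cost ends up spread over fewer and fewer players.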
There are no 400+mm wafers largely because nobody really wants to spend the money to convert most of the fab infrastructure over to them. If just one of TSMC, Samsung, or Intel balks at going to something way more expensive, then that effort has a decent chance of collapsing. It comes down to just one EUV fab machine maker: ASML. If they don't want to do something that is way too expensive for them... it isn't happening.