I remember back in 1995, while finishing my degree, taking a class on VLSI, where we were talking about how the design rules for chips were going to change significantly when we went below 0.3 µm (300 nm), because propagation delay along wires would be greater than propagation delay through logic gates. Up until then, the assumption was that wiring delay was nil and gate delay was all that mattered. Copper interconnects were still a few years off.

25 years later and we've almost moved the decimal two places.

The physical limit for electrical separation has a lot of factors going into it, but for reference, the gate insulator layer (the silicon dioxide that isolates the transistor gate from the channel) is about 3-5 atoms thick.

Also bear in mind that a transistor is much larger than the process feature size, though the latter drives the former. So "3 nm" transistors may well be spaced 50 nm apart (guessing there). You need room for the wiring, too, which is also driven by feature size.
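A quick back-of-the-envelope check on that "few atoms" figure - ballpark numbers only, assuming a roughly 1.2 nm SiO2 gate oxide and about 0.35 nm per molecular layer:

```python
# Rough sanity check on the "3-5 atoms thick" claim (ballpark values only).
oxide_nm = 1.2    # order-of-magnitude effective gate-oxide thickness
layer_nm = 0.35   # approximate thickness of one SiO2 molecular layer
print(f"~{oxide_nm / layer_nm:.1f} layers")  # ~3.4, i.e. a handful of atoms
```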

Much of my job in the early 2000’s was developing software at AMD to model propagation delay on wires, taking into account slew rate on the output of the driving gate, and the parasitic capacitance and resistance on the wires. Where things got interesting is that, at that time, coupling between wires started to become a problem. So not only did you have to worry about the wire slowing down your critical path, you had to worry about neighboring wires influencing your wire - if you are trying to switch high, and your neighbors are all switching low, it delays things. Worse, if all the wires are switching in the same direction as your wire, you could end up with a hold time violation.
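To make the coupling effect concrete, here is a minimal sketch - invented numbers, not the actual AMD tooling - of the usual trick of scaling the coupling capacitance by a Miller-style factor that depends on what the neighbor is doing:

```python
# Toy illustration of switching-dependent coupling delay (hypothetical values).
# A wire's effective load is its capacitance to ground plus its coupling
# capacitance scaled by a Miller-style factor: quiet neighbor ~1x, neighbor
# switching the opposite way ~2x (slower), neighbor switching the same way
# ~0x (faster - which is where the hold-time trouble comes from).

MILLER = {"quiet": 1.0, "opposite": 2.0, "same": 0.0}

def wire_delay(r_drive, c_ground, c_couple, neighbor="quiet"):
    """Crude lumped-RC delay estimate, in seconds."""
    c_eff = c_ground + MILLER[neighbor] * c_couple
    return 0.69 * r_drive * c_eff  # 0.69*RC ~ time to the 50% crossing

# Hypothetical numbers: 1 kOhm effective driver resistance,
# 20 fF to ground, 15 fF of coupling to each of two neighbors.
for mode in ("quiet", "opposite", "same"):
    d = wire_delay(1e3, 20e-15, 2 * 15e-15, neighbor=mode)
    print(f"{mode:8s}: {d * 1e12:5.1f} ps")
```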

We had a process where you’d lay out all the logic, then run it through a tool that took an hour or more to model the capacitances and resistances, then you’d run that all through another tool that would take many hours to give you timing information.

As a designer I hated this. So I wrote a tool that would allow you to pick a part of the chip you were trying to optimize, and then you could drag gates around on the screen graphically, and as you did that it would tell you the approximate new timing information, within a couple percent of what the other process would predict. It took me months to learn about Asymptotic Waveform Evaluation, etc. Plus I really didn’t know how to code. But when I was done I ended up in charge of AMD’s design methodology team :)
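For the curious: the first moment of that AWE expansion reduces to the classic Elmore delay of the RC tree, which a couple of tree traversals can compute. A toy sketch with made-up values, not the actual tool:

```python
# Toy Elmore delay for an RC tree (the first AWE moment). Each node carries
# the resistance of the branch from its parent, a capacitance to ground, and
# its children. The Elmore delay to a node is the sum, over every resistor on
# the path from the driver, of that resistance times all downstream capacitance.

class Node:
    def __init__(self, name, r, c, children=()):
        self.name, self.r, self.c = name, r, c
        self.children = list(children)

def downstream_cap(node):
    """Total capacitance at and below this node."""
    return node.c + sum(downstream_cap(ch) for ch in node.children)

def elmore(node, upstream=0.0, delays=None):
    """Accumulate R*C products from the driver down to every node."""
    if delays is None:
        delays = {}
    delay = upstream + node.r * downstream_cap(node)
    delays[node.name] = delay
    for ch in node.children:
        elmore(ch, delay, delays)
    return delays

# Hypothetical net: a trunk feeding two sinks (all values invented).
net = Node("trunk", r=100.0, c=10e-15, children=[
    Node("sink_a", r=50.0, c=5e-15),
    Node("sink_b", r=200.0, c=8e-15),
])
for name, d in elmore(net).items():
    print(f"{name:7s}: {d * 1e12:5.2f} ps")
```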
 
3 nm. Wow. It's a great time to be alive. The technological advances seem to know no bounds. Intel has a lot of work ahead of them.
 
Fascinating - apologies if this is a really stupid question - but we're always hearing that each new chip has X billion transistors on it, which represents a doubling over the last 2 years, etc. My question is - how do chip designers physically keep up with the need to "fill the space" in terms of utilising the capacity of a chip? Is it more a matter of taking pre-designed blocks and fitting them together in the same space (obviously a horrible simplification), or do you start again from transistor number 1 and think - right, only 15,999,999,999 to go... Presumably it must be the former!!
 
A roadmap doesn’t mean they’ll ever see the light of day. They could face major production issues down the line.

I don’t think you quite understand how TSMC's roadmaps work.

Not to mention it is not exactly an argument, because nothing in the world is 100% certain.
 
There used to be a 'standard', but now each manufacturer's process is different. It has been proposed to measure density instead, in which case Intel could potentially 'win', as their 10nm process is denser than TSMC's 7nm process.


Exactly. 10nm, 7nm, 5nm is all BS.
The name nominally refers to a feature size and does not correlate to density.
Intel's 10nm is denser than TSMC's 7nm.
But sadly people believe the hype.
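For reference, the ballpark peak logic-density estimates that circulate in the trade press - approximate third-party figures, not official vendor specs - make the same point:

```python
# Rough peak logic density (million transistors per mm^2), as commonly
# estimated by third parties. Approximate figures for illustration only.
density_mtr_mm2 = {
    "Intel 10nm": 100,
    "TSMC 7nm":    91,
    "TSMC 5nm":   170,
}
for process, d in sorted(density_mtr_mm2.items(), key=lambda kv: -kv[1]):
    print(f"{process:10s} ~{d} MTr/mm^2")
```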
 
What happens after 1nm? Do we go negative?

0.8nm.

It is important to note that none of these numbers has anything to do with actual sizes any longer. They used to represent something, but even that is no longer true.

And beyond 1nm is too far off in the future for anyone to have any idea. (Actually they do; they just don't know if it would work in practice.) Quantum tunneling will hit some day, but we have no idea when.

3nm is done. Nothing to worry about there.
2nm is based on something called GAAFET (which was originally scheduled for 3nm but couldn't make it in time - which is partly why most of these numbers are irrelevant: the specs and technical details change over time due to yield and other difficulties).

1.4nm is based on MBCFET (or "nanosheet", to use the generic industry term).

All of this information is based on TSMC investor meeting notes and information released at industry forums or on other reputable websites. I tried googling some links for reference, but unfortunately nothing quite good enough came up. You could look up the keywords above on SemiEngineering if you are interested. (Google is now coming up with all the news and junk results.)

Edit: Oh, and one aspect I forgot to mention: it isn't the technology that is going to slow us down, it is the price / economics. (Or they could shrink the die size to save cost...) Sooner or later the market won't have large enough volume to sustain this leading-edge development on a 2-year cadence. Although right now 3nm and 2nm seem to be relatively safe.
 
Fascinating - apologies if this is a really stupid question - but we're always hearing that each new chip has X billion transistors on it, which represents a doubling over the last 2 years, etc. My question is - how do chip designers physically keep up with the need to "fill the space" in terms of utilising the capacity of a chip? Is it more a matter of taking pre-designed blocks and fitting them together in the same space (obviously a horrible simplification), or do you start again from transistor number 1 and think - right, only 15,999,999,999 to go... Presumably it must be the former!!

A little of both :). We never think in terms of number of transistors. We always worry about die area - we don’t want to (or can’t) go beyond a specific size.

So the architects come up with an architecture - we want X cores, Y cache, etc. etc. The designers then estimate the size of the various sub-blocks. For example, a CPU core would need an integer execution unit (with multiple ALUs), a scheduling unit, a floating point unit, a load/store unit, an instruction decoder, etc. etc. We estimate the area of everything, then try to floorplan it (i.e. determine where the units go, what shape they have, etc.). If it’s too big, that information is given to the architects, who may adjust their demands. We also estimate worst-case timing - the architects may have made it so block A has to talk to block B within a certain amount of time, and there’s no possible way to make that work. So there’s a process of iteration. Once everyone thinks it may work, we start the design process from the bottom up.
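A minimal sketch of what that first-pass area budgeting looks like in spirit - block names and numbers invented for illustration, not from any real design:

```python
# Toy first-pass area budget: estimate each sub-block, compare the total
# against the core-area target, and report the overshoot that gets fed
# back to the architects. All names and numbers are invented.

core_area_budget_mm2 = 12.0

block_estimates_mm2 = {
    "integer_exec": 1.8,   # ALUs + 64-bit multiplier + divider
    "fp_unit":      2.4,
    "load_store":   1.6,
    "scheduler":    1.1,
    "decode":       0.9,
    "l1_caches":    2.0,
    "l2_slice":     3.5,
    "misc_glue":    0.7,
}

total = sum(block_estimates_mm2.values())
print(f"estimated total: {total:.1f} mm^2 vs budget {core_area_budget_mm2:.1f} mm^2")
if total > core_area_budget_mm2:
    over = 100 * (total / core_area_budget_mm2 - 1)
    print(f"over budget by {over:.0f}% - back to the architects it goes")
```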

So, for example, when I owned the integer execution units, I started by looking at the 64-bit multiplier, because that would be the biggest part of each ALU in terms of area. Then I design it to meet the timing, area, and power constraints. Once I get close, I move on to the divide circuitry, or the adders, etc. At no point am I counting transistors.

Once we have everything pretty close, we begin to run verification processes to make sure we got the logic design right - it has to do what the architects said it should do - and to test if it is meeting timing, power, reliability, and other electrical constraints. Then, most of the work is iterating to try and hit all those constraints. And sometimes this means that I have to ask designers of neighboring blocks to make changes to their blocks, etc.

At the companies I worked at, we did it largely by hand, because the automated software is never as good as an experienced physical designer. We had tools to help, many of which I wrote, but we didn’t “synthesize” the chip like NVIDIA and other companies mostly do.
 
There is no "fill the space".
There are always enough new features and more memory you want to put on a chip to make it bigger and "fill the space".
There are physical limits to how big a chip can be, although there are wafer-scale chips.
No matter how dense you make a silicon process, there will always be enough stuff to completely fill a chip.
Much like no matter how fast you make a computer, there will always be software to drag it to its knees.
 
There is no "fill the space".
There are always enough new features and more memory you want to put on a chip to make it bigger and "fill the space".
There are physical limits to how big a chip can be; although there are wafer scale chips.
No matter how dense you make a silicon process, there will always be enough stuff to completely fill a chip.
Much like, no matter how fast you make a computer there will always be software to drag it to its knees.

Yeah, I’ve never had to do a design where there wasn’t an issue where we had TOO MUCH in the first cut, and had to remove stuff to make it fit the area budget (or, sometimes, just to make it fit the reticle!)
 
Me neither.
It's always too big and you go back and remove features, memory or whatever to make it fit and meet timing.
Oh yeah, and that funny thing called the "reticle limit".
 
In reality - we should all be thanking Intel and all their missteps - exactly the justification needed for:
+) Amazon to build out their own ARM-based designs in their data centres
+) The world's fastest supercomputer to be based on ARM
+) Microsoft to be forging ahead with a computationally competitive ARM design
+) ARM IPC to have overtaken x86 (it would have happened eventually due to the ISA)
+) Apple to justify developing their (mobile) ARM ISA APUs - and for these to already outperform many Intel desktop CPUs
+) Qualcomm to return to the ARM world (Nuvia)

If China is prevented (by CFIUS) from purchasing EUV equipment, perhaps what we will see next is that China is the first to move to GAA transistors + 3D chips with hundreds of layers - all designed by ML.

AJ
 
Samsung belongs somewhere in there too.
 

Nuvia doesn’t return Qualcomm to Arm. They’ve been doing Arm all along.
 
+) Qualcomm to return to the ARM world (Nuvia)

Qualcomm is ARM and always has been.
Do you mean Qualcomm's return to full-custom ARM processors, rather than using the ARM program where ARM modifies the core for you?
Qualcomm hasn't done full custom since, I think, the SD820 or so.
I think part of this move is because Nvidia is on the path to purchase ARM.
That does create an interesting issue for ARM customers.
 
I really think 3nm will be the limit. I will be extremely surprised if they can go any smaller.
Just more cores is all that's left.
Not even close. There is a substantial roadmap of advances beyond 3nm, none of which look outrageously difficult.
 