Should Intel actually get their act together, they could be a decent secondary supplier of older SoCs, which would let Apple keep TSMC producing only the cutting edge rather than saddled with older process fab equipment. This would be very lucrative for Intel if they can swallow their pride and realize what their role is now relative to Apple. They are no longer a leader, and it will be incredibly hard for them to regain any cachet as a mover and shaker at this point. Perhaps they can find their niche in the elder statesman role if they lose the fiefdoms and douchebags that plague middle management and product group leadership.
 
I thought the jump would be from 5nm to 3nm from earlier rumors.
Honestly, does it matter what they'll call it? A lower number just points at a different generation these days and has nothing to do with actual nanometers anymore. It's just a marketing term :)


Example of how confusing nm naming can be:
  • TSMC 5nm is way denser than Samsung 5nm.
  • Samsung 5nm is a little denser than TSMC 6nm.
  • TSMC 6nm is about the same density as TSMC 7nm+.
  • Intel 10nm is a little denser than TSMC 7nm.
  • TSMC 7nm is a little denser than Samsung 7nm.
 
Honestly, does it matter what they'll call it? A lower number just points at a different generation these days and has nothing to do with actual nanometers anymore. It's just a marketing term :)


Example of how confusing nm naming can be:
  • TSMC 5nm is way denser than Samsung 5nm.
  • Samsung 5nm is a little denser than TSMC 6nm.
  • TSMC 6nm is about the same density as TSMC 7nm+.
  • Intel 10nm is a little denser than TSMC 7nm.
  • TSMC 7nm is a little denser than Samsung 7nm.

The “name” is intended to correspond roughly to the minimum gate width (or, now, fin width). It was never intended to reflect density, which has to take into account min pitch/spacing rules, as well as min-spacing/width rules on other layers (like metal, poly, etc.)

And “density” doesn’t tell us much useful about the quality of a process, though it is better than min gate width, I guess.
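
For a rough feel of how pitches, not the name, set density, here's a back-of-envelope sketch. The contacted poly pitch (CPP) and minimum metal pitch (MMP) numbers below are widely reported ballpark figures for these two processes, not official specs, and the cell geometry is a toy model:

```python
# Back-of-envelope: logic density tracks layer pitches, not the "Xnm" label.
# CPP = contacted poly pitch, MMP = minimum metal pitch.
# Pitch values are widely reported ballparks, not official figures.

def cell_area_um2(cpp_nm, mmp_nm, tracks=6, gates=4):
    """Approximate footprint of a small standard cell, in square microns."""
    height_nm = tracks * mmp_nm   # cell height set by metal tracks
    width_nm = gates * cpp_nm     # cell width set by gate pitch
    return (height_nm * width_nm) / 1e6

processes = {
    "TSMC '7nm'":   (57, 40),  # (CPP nm, MMP nm)
    "Intel '10nm'": (54, 36),  # the '10nm' comes out denser
}

for name, (cpp, mmp) in processes.items():
    area = cell_area_um2(cpp, mmp)
    print(f"{name}: ~{area:.4f} um^2/cell, ~{1e6 / area:,.0f} cells/mm^2")
```

The "10nm" process wins on both pitches, which is exactly why it lands a little denser than the "7nm" one in the list above.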
 
Might be a silly question.. If a GPU/CPU was manufactured at 4nm, can you do a 4nm CPU + 4nm CPU in a larger area? Would that make it 8nm? I guess the intent is: if I needed a more powerful workstation, a company could use 4nm but add another chip for more power. Make sense?
 
Might be a silly question.. If a GPU/CPU was manufactured at 4nm, can you do a 4nm CPU + 4nm CPU in a larger area? Would that make it 8nm? I guess the intent is: if I needed a more powerful workstation, a company could use 4nm but add another chip for more power. Make sense?
The “4nm” has nothing to do with the size of the chip (the weird phrasing in the original post notwithstanding). It’s a rough measure of the size of part of the smallest transistor that you can make.

Usually, when you go from 7nm to 4nm, or whatever, you actually do not shrink the size of the chip - you add more stuff to the chip to fill up any newly-available space. Chip sizes have been growing while the size of transistors has been shrinking.
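
A quick worked example with approximate, publicly reported figures (die areas are third-party estimates) shows both effects at once, density going up while the die grows anyway:

```python
# Approximate, publicly reported figures; die areas are third-party estimates.
chips = {
    # name: (node, transistors, die area in mm^2)
    "A12": ("N7", 6.9e9,  83.0),
    "A14": ("N5", 11.8e9, 88.0),
    "M1":  ("N5", 16.0e9, 119.0),
}

for name, (node, xtors, area_mm2) in chips.items():
    density = xtors / area_mm2 / 1e6  # million transistors per mm^2
    print(f"{name} ({node}): {xtors / 1e9:.1f}B transistors on "
          f"{area_mm2:.0f} mm^2 -> ~{density:.0f} MTr/mm^2")
# Density rose ~60% from N7 to N5, yet the dies got bigger, not smaller.
```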
 
Should Intel actually get their act together, they could be a decent secondary supplier of older SoCs, which would let Apple keep TSMC producing only the cutting edge rather than saddled with older process fab equipment. This would be very lucrative for Intel if they can swallow their pride and realize what their role is now relative to Apple. They are no longer a leader, and it will be incredibly hard for them to regain any cachet as a mover and shaker at this point. Perhaps they can find their niche in the elder statesman role if they lose the fiefdoms and douchebags that plague middle management and product group leadership.

Except it’s hard to design a chip at node size X so that it can be made by two different fabs with different design rules at that node size. And Apple isn’t going to re-design an older chip to be used by Intel’s fabs.

I’ve designed chips to be made by two different fabs, and you end up creating a “least common denominator” set of design rules that works for both, but which gives up some performance and die area on both.

If Intel could clone the older TSMC design rules, then it would work.
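
To make the "least common denominator" point concrete, here's a toy sketch (rule names and numbers are invented for illustration): a layout that must be legal at both fabs has to meet the stricter of each fab's limits, which is exactly where the area and speed penalty comes from.

```python
# Toy illustration of "least common denominator" design rules.
# Rule names and values are invented for the example.
fab_a = {"min_gate_nm": 20, "min_metal1_width_nm": 32, "min_metal1_space_nm": 32}
fab_b = {"min_gate_nm": 22, "min_metal1_width_nm": 30, "min_metal1_space_nm": 34}

# A layout legal at both fabs must satisfy the stricter (larger) limit per rule.
common = {rule: max(fab_a[rule], fab_b[rule]) for rule in fab_a}
print(common)
# {'min_gate_nm': 22, 'min_metal1_width_nm': 32, 'min_metal1_space_nm': 34}
# Every rule ends up at least as conservative as either fab alone, so the
# portable layout gives up some speed and area at both fabs.
```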
 
While it’s possible you can use the same design, I haven’t seen any evidence of that. If the metal layer thicknesses are different, or the doping of the semiconductor is different, you would have to change your design. Is there some information out there about the 4nm process?
Great question! I heard this directly from a TSMC interview - but when I try to google, one of the best quotes I get is from HEXUS: "N4 (4nm) is described as a 5nm 'family member'. Thus, it is closely related to 5nm, and the 5nm enhanced version (N5P). TSMC's customers will therefore be able to use the existing design infrastructure they are familiar with, without compatibility or redesign concerns."

It also doesn't require new equipment from TSMC compared to existing 5nm, and at one point (2019) they referred to the N4 process as 5nm before later calling it 4nm. I think the original roadmap had N4 launching in 2022 after N5P as a way to get expanded capacity out of 5nm for certain clients. I really thought Apple had zero chance of using 4nm (and would skip right ahead to 3nm, which was supposed to launch sooner) until the M1 launched.

Now it appears as if Apple will use N4 (4nm) for their Mac series, and 3nm for their late 2022 iPhone. Sort of makes sense. They can continue to use older designs on N4 for the M2, etc.

Thoughts? Feel free to correct any errors (you are the expert on this subject, I am not)

Edit: the way I've seen it talked about in technical interviews is that N4 is essentially N5++, unless I've been misled.
 
Also "One of the most important aspects of N4 is that it features N5-compatible design rules, SPICE (simulation program with integrated circuit emphasis) models and IP. To that end, it will be very easy for SoC developers with 5nm designs to adopt TSMC’s 4nm technology and even re-use some of the building blocks they already have."
 
Great question! I heard this directly from a TSMC interview - but when I try to google, one of the best quotes I get is from HEXUS: "N4 (4nm) is described as a 5nm 'family member'. Thus, it is closely related to 5nm, and the 5nm enhanced version (N5P). TSMC's customers will therefore be able to use the existing design infrastructure they are familiar with, without compatibility or redesign concerns."

It also doesn't require new equipment from TSMC compared to existing 5nm, and at one point (2019) they referred to the N4 process as 5nm before later calling it 4nm. I think the original roadmap had N4 launching in 2022 after N5P as a way to get expanded capacity out of 5nm for certain clients. I really thought Apple had zero chance of using 4nm (and would skip right ahead to 3nm, which was supposed to launch sooner) until the M1 launched.

Now it appears as if Apple will use N4 (4nm) for their Mac series, and 3nm for their late 2022 iPhone. Sort of makes sense. They can continue to use older designs on N4 for the M2, etc.

Thoughts? Feel free to correct any errors (you are the expert on this subject, I am not)

Edit: the way I've seen it talked about in technical interviews is that N4 is essentially N5++, unless I've been misled.

That all makes sense, based on the scant available evidence.
 
Also "One of the most important aspects of N4 is that it features N5-compatible design rules, SPICE (simulation program with integrated circuit emphasis) models and IP. To that end, it will be very easy for SoC developers with 5nm designs to adopt TSMC’s 4nm technology and even re-use some of the building blocks they already have."
If the SPICE models are the same, the transistors didn’t change.
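
In other words, a model card is just a named parameter set, so "same models" means "same simulated device" by construction. A toy sketch - using a square-law model with invented parameter values instead of a real FinFET model (BSIM-CMG has hundreds of parameters, but the principle is identical):

```python
# A SPICE .model card is just a parameter set; if N4 reuses N5's models,
# the simulated transistor behavior is identical by construction.
# Square-law model and parameter values are invented for illustration.

def drain_current_sat(vgs, vth, k):
    """Saturation current of a toy square-law MOSFET."""
    return 0.5 * k * max(vgs - vth, 0.0) ** 2

n5_model = {"vth": 0.35, "k": 2e-3}  # invented parameter values
n4_model = dict(n5_model)            # "N5-compatible": same parameters

for vgs in (0.5, 0.7, 0.9):
    i5 = drain_current_sat(vgs, **n5_model)
    i4 = drain_current_sat(vgs, **n4_model)
    assert i5 == i4                  # same model -> same simulated device
    print(f"Vgs={vgs:.1f} V -> Id={i5 * 1e3:.3f} mA (both processes)")
```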
 
4nm. Wow. So the silicon ride should be over by 2030?

What the hell is graphene up to these days?
 
A silicon atom is about 0.3 nanometers across; a feature drawn at 10 nanometers is only around 30 atoms wide. I guess we are reaching the very physical limit, and Apple is light years ahead of Intel in that space. I don’t think we can miniaturise much beyond about 1 nm. Who knows though!
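
Back-of-envelope, taking the "nm" label at face value (which, as others here point out, it no longer really deserves):

```python
# Rough atom-counting, taking the "nm" label at face value.
si_atom_diameter_nm = 0.3  # approximate diameter of a silicon atom

for feature_nm in (10, 5, 3, 1):
    atoms = feature_nm / si_atom_diameter_nm
    print(f"a {feature_nm} nm feature is ~{atoms:.0f} silicon atoms wide")
# ~33, ~17, ~10, and ~3 atoms respectively - not much room left to shrink.
```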
As far as I know, these nm numbers don't really mean anything anymore; they are marketing terms to indicate some advancement in technology. Originally, these numbers were the actual distances between transistors. Later on, they indicated an "equivalent" distance (because the actual distance couldn't get any smaller, so they started doing other tricks such as stacking transistors etc., I think). I believe today they are used as a sort of reverse generation number, where the smaller the number, the newer the generation.

...or they simply hacked into my iCloud, saw the private photos I sent to my wife, and now they are making fun of my size...
 
Intel 7nm is not coming in any reasonable quantities before 2023-2024... and it will probably be roughly comparable to TSMC 5nm at best. I am quite confident that TSMC will keep their process lead at least until 2025 or maybe even later...
Unless China attacks Taiwan and takes over.
 
As far as I know, these nm numbers don't really mean anything anymore; they are marketing terms to indicate some advancement in technology. Originally, these numbers were the actual distances between transistors. Later on, they indicated an "equivalent" distance (because the actual distance couldn't get any smaller, so they started doing other tricks such as stacking transistors etc., I think). I believe today they are used as a sort of reverse generation number, where the smaller the number, the newer the generation.

...or they simply hacked into my iCloud, saw the private photos I sent to my wife, and now they are making fun of my size...

They were never the actual distance between transistors. They were the minimum feature size on the active layer (i.e. the minimum gate width, or the gate length). Now they approximately track the transistor fin dimensions in most cases. The distance between transistors continues to shrink. And transistors are still not stacked, except in certain exotic cases (not mass market CPUs).

These numbers still have meaning, but because they only ever told you one thing (minimum feature size) it was never all that useful except within a single company. An intel process and a TSMC process could have the same minimum feature size, but very different widths on each metal layer, different minimum spacings, etc.
 
Honestly, does it matter what they'll call it? A lower number just points at a different generation these days and has nothing to do with actual nanometers anymore. It's just a marketing term :)


Example of how confusing nm naming can be:
  • TSMC 5nm is way denser than Samsung 5nm.
  • Samsung 5nm is a little denser than TSMC 6nm.
  • TSMC 6nm is about the same density as TSMC 7nm+.
  • Intel 10nm is a little denser than TSMC 7nm.
  • TSMC 7nm is a little denser than Samsung 7nm.

You're confusing transistor count with lithography.
 
Your statement is very misleading. [...] The Apple density for A14 is much lower than expected, and some have claimed this shows a serious flaw in TSMC 5nm.

I assume you are referring to this analysis: https://semiwiki.com/semiconductor-manufacturers/293627-apples-a14-packs-134-million-transistors-mm²-but-falls-short-of-tsmcs-density-claims/ It may be accurate, or it may not; I suppose future products will tell.

I don't think that changes much about what I have written, though. If there is a problem with SRAM scaling, then everyone is affected, and the status quo won't change much.

The A14 (and M1) were clearly rush jobs where THE priority was getting an Apple Silicon Mac chip working (i.e., the changes necessary for x86 emulation support, hypervisor support, and suchlike). Everything else was subservient to this goal. The core design appears to be substantially the same as the A13's, the GPU even more similar.

The A14/M1 are clearly evolutionary designs along the lines of "it's important, so let's not break anything," but I don't know if I would call them "rush jobs." The focus here seemed to be achieving feature parity with desktop machines and fixing some of the A13's peak power usage issues. A list of the important changes, off the top of my head:

- Larger CPU caches
- An extra CPU FP vector unit
- GPU desktop feature parity: barycentrics+primitive id, SIMD reduction intrinsics, new texture clamp mode
- GPU performance enhancements: doubled FP32 throughput per clock (M1 only), faster atomics (more parallelism), faster GPU-driven pipelines, better memory compression, new matrix multiplication intrinsics

All in all, these are some fairly significant changes, especially on the GPU side.
 
I guess we are reaching the very physical limit ..... I don’t think we can miniaturise much beyond about 1 nm
Can you imagine reading this comment 100 years from now? It’ll be pure comedy!
 
I assume you are referring to this analysis: https://semiwiki.com/semiconductor-manufacturers/293627-apples-a14-packs-134-million-transistors-mm²-but-falls-short-of-tsmcs-density-claims/ It may be accurate, or it may not; I suppose future products will tell.

I don't think that changes much about what I have written, though. If there is a problem with SRAM scaling, then everyone is affected, and the status quo won't change much.



The A14/M1 are clearly evolutionary designs along the lines of "it's important, so let's not break anything," but I don't know if I would call them "rush jobs." The focus here seemed to be achieving feature parity with desktop machines and fixing some of the A13's peak power usage issues. A list of the important changes, off the top of my head:

- Larger CPU caches
- An extra CPU FP vector unit
- GPU desktop feature parity: barycentrics+primitive id, SIMD reduction intrinsics, new texture clamp mode
- GPU performance enhancements: doubled FP32 throughput per clock (M1 only), faster atomics (more parallelism), faster GPU-driven pipelines, better memory compression, new matrix multiplication intrinsics

All in all, these are some fairly significant changes, especially on the GPU side.

The entire idea of using density as a benchmark is ridiculous for anything other than something like SRAM - the density of everything else depends heavily on the physical design and microarchitecture. The majority of the design is not regular blocks where transistors stack nice and neatly like Legos. And Apple presumably does things like EVSX’s n-of-m patents (since they bought Intrinsity), which can reduce density further.

As for SRAMs, they have to cope with physics. So the fact that the width/pitch/spacing rules permit a theoretical maximum density means nothing if you have to design an actual product where cross-coupling, leakage, IR drop, electromigration, clock skew, etc. are real effects you need to compensate for. The idea that anyone cares about these “density” figures, or, even worse, expects real-world density to match them, is bizarre to me.
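
As a hedged illustration of the gap between rule-limited and shipping density, even for SRAM: the bitcell area below is a widely reported ballpark for a 5nm-class high-density cell, and the array-efficiency factor is an assumed round number, not a measured one.

```python
# Even SRAM, the most regular structure on a chip, ships below the
# rule-limited maximum because the macro needs decoders, sense amps,
# redundancy, and power routing around the bitcell array.
bitcell_um2 = 0.021      # reported ballpark for a 5nm-class HD bitcell
array_efficiency = 0.7   # assumed fraction of macro area that is bitcells

ideal_mbit_per_mm2 = (1e6 / bitcell_um2) / 1e6   # bitcells packed edge to edge
real_mbit_per_mm2 = ideal_mbit_per_mm2 * array_efficiency

print(f"rule-limited: ~{ideal_mbit_per_mm2:.0f} Mbit/mm^2")
print(f"with array overhead: ~{real_mbit_per_mm2:.0f} Mbit/mm^2")
# And that's before leakage, IR drop, and electromigration margins push
# real designs toward larger, more robust cells.
```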
 
Yet they wanna help Apple build their chips....
And Intel may do fine with that... Their lithography process is fine, and they have demonstrated they are capable of 5nm and potentially smaller. Getting their own chips into 10nm~7nm territory is a different issue. I truly believe the Intel Core CPU design has become so bloated and layered with obfuscation and legacy that their best option at this point is to scrap it entirely and design from the ground up for a 3~5nm process. Not so simple, and not a path they took initially, although they may be on that road now. I’m betting there are legacy structures in the latest Intel CPUs that none of their current engineers even understand, yet if you shrink them, move them, etc., they break functionality.
 
And Intel may do fine with that... Their lithography process is fine, and they have demonstrated they are capable of 5nm and potentially smaller. Getting their own chips into 10nm~7nm territory is a different issue. I truly believe the Intel Core CPU design has become so bloated and layered with obfuscation and legacy that their best option at this point is to scrap it entirely and design from the ground up for a 3~5nm process. Not so simple, and not a path they took initially, although they may be on that road now. I’m betting there are legacy structures in the latest Intel CPUs that none of their current engineers even understand, yet if you shrink them, move them, etc., they break functionality.

When you shrink, you redesign everything. The only time that’s not the case is when you know ahead of time that you are going to shrink a design, so you come up with design rules that scale. That gives up performance at both scales, though, and you still have to redesign all the analog stuff (PLLs, etc.). Intel’s problem is with the fab. They blew the 10nm transition.
 