We're running out of nanometres! Time to engage the pico-chu metres!

 
Why?

Every year they improve by 20% single core.
I know! But my worry is whether this 20% comes from the manufacturing process size. I don't know much about this, but from my newbie POV it seems process size is related to improvements, and if they're already at 3nm, where is the limit? 1nm?…

As I say, I don't know much about this, but I'm wondering whether it could be an impediment to continuing this 20% improvement over the next few years, in the medium term.
 
I'm sorry, did I miss something? Did Apple not switch from PPC to Intel 15 years ago because IBM couldn't make the G6 happen? Have I dropped into the multiverse? (Also, IBM's Power9 has been replaced by the 7nm Power10, a much better chip, but not something you're ever going to see in a laptop any time soon.)
So if Apple isn't using an architecture, it's a failure? I guess all those PowerPC game consoles, workstations, and supercomputers aren't a thing.
 
IBM still designs CPUs, researches semiconductor technologies, designs and sells workstations and mainframes, artificial intelligence, etc.

I thought those mainframes and workstations ran Intel or AMD server CPUs. Do IBM CPUs run Unix or Linux, or whatever is used to run servers? Any reason for corporations to buy IBM CPUs? Just curious.
 
I never said anything was important.

I simply stated that this is the problem with using arbitrary numbers for node sizes: people who don't do any research will just compare 10 > 7 and assume one must be worse because its number is bigger. It's like comparing car engines by looking only at displacement, without any consideration of anything else.
You seem to be awfully worked up about a non-existent problem.
I can assure you that the people who actually pay for the use of these processes DO know what node numbers mean.
Everyone else buys the phone/car/IoT device they want, and they get what they get.

Do you get similarly upset about denier (as a fabric measure)? Or mired in the context of light temperature?
The world is full of inverse units, and we can't get worked up about the fact that most people are morons.
 
A good design will use a range of transistors, from the fastest appropriate ones sitting in critical paths to the lowest-power ones that can still do the job sitting in non-critical paths.
I don’t want to oversimplify it, but is that how chip designers work? Once you speed up one process on the critical path, you’ll soon uncover a new bottleneck/critical path.

Or do they define an overall system performance target and then work to that goal?
 
I don’t want to oversimplify it, but is that how chip designers work? Once you speed up one process on the critical path, you’ll soon uncover a new bottleneck/critical path.

Or do they define an overall system performance target and then work to that goal?

These are not contradictory approaches. We have a target, and we hit that target by attacking critical paths starting at the worst, and working down until we hit the target. That may involve resizing transistors, or relocating them, or changing the metal routes between gates, or even moving entire blocks of logic around. It could also involve reimplementing logic, moving bits of logic to other pipe stages, replicating logic, etc. It’s a very complicated process, and it’s where we spend most of our time.
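To make that loop concrete, here's a toy sketch (my own illustration, not a real EDA flow or anyone's actual methodology): a tiny gate-level DAG, a naive arrival-time calculation, and a loop that keeps "upsizing" the slowest gate on the worst path until an assumed cycle-time target is met. All gate names, delays, and the target are invented for the example.

```python
# Toy static-timing sketch: find the worst path, speed up its slowest gate,
# repeat until the (assumed) cycle-time target is met.

# netlist: gate -> (delay_ns, fan-in gates); "a" and "b" are primary inputs
gates = {
    "a":   (0.10, []),
    "b":   (0.10, []),
    "add": (0.45, ["a", "b"]),
    "mux": (0.25, ["add", "b"]),
    "cmp": (0.40, ["add"]),
    "out": (0.15, ["mux", "cmp"]),
}

def worst_path(netlist):
    """Return (arrival_time, path) for the slowest path through the DAG."""
    arrival, best_prev = {}, {}

    def visit(g):
        if g in arrival:
            return arrival[g]
        delay, fanin = netlist[g]
        t_in, prev = 0.0, None
        for p in fanin:
            tp = visit(p)
            if tp > t_in:
                t_in, prev = tp, p
        arrival[g] = t_in + delay
        best_prev[g] = prev
        return arrival[g]

    end = max(netlist, key=visit)          # gate with the latest arrival time
    path, g = [], end
    while g is not None:
        path.append(g)
        g = best_prev[g]
    return arrival[end], list(reversed(path))

TARGET = 0.90   # ns -- cycle-time goal, assumed for this example
UPSIZE = 0.80   # pretend upsizing a gate cuts its delay by 20%

t, path = worst_path(gates)
while t > TARGET:
    victim = max(path, key=lambda g: gates[g][0])   # slowest gate on the path
    delay, fanin = gates[victim]
    gates[victim] = (delay * UPSIZE, fanin)
    print(f"worst path {'->'.join(path)} = {t:.2f} ns; upsizing {victim}")
    t, path = worst_path(gates)

print(f"met target: {'->'.join(path)} = {t:.2f} ns")
```

In a real flow, "fixing" a path can just as easily mean rerouting metal, restructuring logic, or moving work to another pipe stage, as the post describes, and the analysis is far more involved than a single worst-arrival calculation.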
 
So if Apple isn't using an architecture, it's a failure? I guess all those PowerPC game consoles, workstations, and supercomputers aren't a thing.

You're drifting from the point: IBM's 2nm fabrication process will be late to the party. (I just read my previous response: I came off as a bit of an *******, there. My apologies, should you care to accept them.)
 
I think you may be confusing "ready for production" with a design concept lol
I think it's slightly more than a design concept... I mean they *did* build a 2nm chip in a research facility ;), but you're right in it being many years from "ready for production".

As a comparison, IBM also produced the world's first 7nm chip from this same research facility, and that announcement was 6 years ago(!). That is the same technology that will be available for the first time in their POWER10 systems, with deliveries beginning in the 2nd half of 2021, fabricated by Samsung. POWER9 (the current system) is running on their 14nm technology.
 
Actually you are incorrect.

When you reduce the node size, you can decrease the size of transistors and wires. Doing so allows you to decrease the transistor gate capacitances and the interconnect parasitic capacitances. Decreasing capacitance increases speed and decreases power. First, power is decreased because power is a linear function of capacitance. (It is also a linear function of switching frequency, but more on that in a moment.)

Second, speed is increased because the time it takes to charge or discharge a wire and a transistor gate depends on how much charge has to move, and that charge is a linear function of capacitance. (It is also a linear function of voltage.)
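For reference, the textbook first-order relations behind both points (my addition, not quoted from anyone in the thread) are:

```latex
% Dynamic switching power: activity factor \alpha, switched capacitance C,
% supply voltage V, clock frequency f
P_{\mathrm{dyn}} \approx \alpha \, C \, V^{2} f

% Time to charge or discharge a node through drive current I
t_{\mathrm{switch}} \approx \frac{C \, V}{I}
```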

So you have a lot of choices here. You can keep frequency and voltage the same, and then get a 40-45% power reduction. You can ramp up the clock by 15% and get a 30’ish% power improvement. You can increase the voltage and get more speed at the same power. You can decrease voltage and get a huge power improvement at the same speed. etc. etc.
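As a rough sanity check on those percentages, here's a small sketch plugging an assumed ~40% capacitance reduction (my number for illustration, not TSMC's) into the power relation above:

```python
# Toy numbers only: assume the node shrink cuts switched capacitance by ~40%.
CAP_SCALE = 0.60

def dyn_power(cap, volt, freq, alpha=1.0):
    """First-order dynamic power: P ~ alpha * C * V^2 * f (normalized units)."""
    return alpha * cap * volt**2 * freq

baseline = dyn_power(1.0, 1.0, 1.0)

# Same frequency and voltage: the power drop comes from capacitance alone.
same_fv = dyn_power(CAP_SCALE, 1.0, 1.0)

# Ramp the clock 15% at the same voltage: still a sizeable power win.
faster = dyn_power(CAP_SCALE, 1.0, 1.15)

print(f"same f, same V : {1 - same_fv / baseline:.0%} less power")   # ~40%
print(f"+15% clock     : {1 - faster / baseline:.0%} less power")    # ~31%
```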

While their use of "never" may have been incorrect, this post misstates the original article it quotes, which states "N3 will offer up to 15% speed gain or consume up to 30% less power than N5"; logic literally dictates that "or" does not equal "and".
 
This might be a stupid question but we are at 3 nm now... Once we hit 0 nm isn't that at the Quantum Computing level? I can see Apple doing research into this area! Could we someday have Quantum Computing powered iPhones?
 
This might be a stupid question but we are at 3 nm now... Once we hit 0 nm isn't that at the Quantum Computing level? I can see Apple doing research into this area! Could we someday have Quantum Computing powered iPhones?
If you check where quantum computing is right now, you will see how far off this scenario is…
 
This might be a stupid question but we are at 3 nm now... Once we hit 0 nm isn't that at the Quantum Computing level? I can see Apple doing research into this area! Could we someday have Quantum Computing powered iPhones?
Isn't X-ray lithography the next step after EUV lithography? Sub nm?
 

Sorry for the bump, but who uses these chips, and are they really faster than desktop chips? My understanding is that IBM creates chips for appliances, machinery, and cars, and those have much, much slower chips. I heard jet fighters run on G5 chips, the ones Apple used back in the early 2000s.

If they are really superior to desktop chips, why can't we have them in the desktop?! Couldn't Apple just pop a POWER9 into the Mac Pro?
 
Sorry for the bump, but who uses these chips, and are they really faster than desktop chips? My understanding is that IBM creates chips for appliances, machinery, and cars, and those have much, much slower chips. I heard jet fighters run on G5 chips, the ones Apple used back in the early 2000s.

If they are really superior to desktop chips, why can't we have them in the desktop?! Couldn't Apple just pop a POWER9 into the Mac Pro?
Haha. POWER9 chips are ginormous, and meant for high end servers. They also sometimes build supercomputers from these chips.

I guess in theory they could put it into a Mac Pro, but there wouldn't be any reason for Apple to do this, given that they have their own ginormous chips with dozens of cores in the pipeline already which are much more appropriate for Apple's target market. For example, POWER9 would be terrible at ProRes video editing. Not to mention the fact it's a completely different architecture with a different instruction set.
 
Sorry for the bump, but who uses these chips, and are they really faster than desktop chips? My understanding is that IBM creates chips for appliances, machinery, and cars, and those have much, much slower chips. I heard jet fighters run on G5 chips, the ones Apple used back in the early 2000s.

If they are really superior to desktop chips, why can't we have them in the desktop?! Couldn't Apple just pop a POWER9 into the Mac Pro?
These chips are designed for a very different class of computing tasks from that of the Mac Pro.

Also, Apple charges $10k for 12 x 64GB RAM in the Mac Pro. Imagine how entertaining the reaction would be if customers instead had to pay 12 x $3,197 = $38k! :eek:
 

Haha. POWER9 chips are ginormous, and meant for high end servers. They also sometimes build supercomputers from these chips.

I guess in theory they could put it into a Mac Pro, but there wouldn't be any reason for Apple to do this, given that they have their own ginormous chips with dozens of cores in the pipeline already which are much more appropriate for Apple's target market. For example, POWER9 would be terrible at ProRes video editing. Not to mention the fact it's a completely different architecture with a different instruction set.

What are they made for, and who uses them then? Is there another instruction set than RISC and x86?
 
What are they made for, and who uses them then? Is there another instruction set than RISC and x86?
Like I said, they are made for high end servers. So, some people who want high end servers will use them. Enterprise, large corporations, data centres, etc.

And while I'm not a chip designer, I can tell you that it's probably simpler for people like you and me to just forget about those simplistic "RISC" vs "CISC" categories. Those were general terms from 40 years ago, and they don't really apply directly anymore to modern chips in 2021.
 
What are they made for, and who uses them then? Is there another instruction set than RISC and x86?

POWER CPUs use the POWER instruction set. This is a RISC-style instruction set that's definitely easier to implement than x86, but with a few unfortunate features that ARMv8 (being later) was able to avoid.
(Experts will know what these are -- things like the dedicated CTR and LR registers and multiple condition registers, which all seemed like a good idea in the days of decoupled fetch, but which now just get in the way, use up ISA bits, and complicate decoding.)
But as far as ISA goes, nothing especially unusual there.

The real issue with POWER is that it's addressing a totally different market. In particular, the uncharitable view is that the design exists essentially as legal arbitrage. The types of software run on these machines are often licensed per "CPU", and so IBM has contorted the definition of "CPU" vastly beyond what makes sense. POWER's big thing is that a single CPU is split into essentially two independent halves, each of which is essentially a replica of two further independent quarters. This allows for easy construction of something that can run eight independent threads, but has just enough shared state (in particular one I and D cache, and one fetch unit) to just barely qualify as a single CPU.

This is a great design if you want something that runs eight threads while pretending to be a single CPU. It's a TERRIBLE design if you want any of
- single-threaded performance
- low energy usage OR
- eight-threaded performance (but don't have to pretend to be one CPU).
Just why this is the case is far more detail than we can go into here, but it essentially boils down to lack of coherence. A modern CPU goes fast because it exploits the coherence (ie the repeating patterns) in software execution, whether via caching, via branch predictors, via loop buffers, or anything else. Forcing two threads (let alone eight!) onto a common set of hardware destroys most of this coherence because one thread's patterns are not another thread's patterns.
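To illustrate that last point with a toy model (my own sketch; the cache size, working sets, and access patterns are invented, and real SMT hardware is far more subtle than this): give one thread a cache sized for its working set, then interleave a second thread with a different working set onto the same cache and watch the hit rate collapse.

```python
# Toy LRU cache model: a second thread's access pattern destroys the locality
# the cache was sized for. Purely illustrative numbers.
import random
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()
        self.hits = 0
        self.accesses = 0

    def access(self, addr):
        self.accesses += 1
        if addr in self.lines:
            self.hits += 1
            self.lines.move_to_end(addr)        # mark as most recently used
        else:
            if len(self.lines) >= self.capacity:
                self.lines.popitem(last=False)  # evict least recently used
            self.lines[addr] = True

    def hit_rate(self):
        return self.hits / self.accesses

def stream(base, working_set, n, seed):
    """n accesses drawn uniformly from one thread's private working set."""
    rng = random.Random(seed)
    return [base + rng.randrange(working_set) for _ in range(n)]

CACHE_LINES = 1024   # cache sized with a single thread's working set in mind
WORKING_SET = 900    # distinct lines each thread touches (assumed)

# (a) One thread alone: its working set fits, so almost everything hits.
solo = LRUCache(CACHE_LINES)
for addr in stream(0, WORKING_SET, 40_000, seed=1):
    solo.access(addr)

# (b) Two threads interleaved on the same cache (SMT-style): the combined
# working set no longer fits, so each thread keeps evicting the other's lines.
shared = LRUCache(CACHE_LINES)
for a0, a1 in zip(stream(0, WORKING_SET, 20_000, seed=1),
                  stream(100_000, WORKING_SET, 20_000, seed=2)):
    shared.access(a0)
    shared.access(a1)

print(f"one thread alone   : {solo.hit_rate():.1%} hit rate")
print(f"two threads sharing: {shared.hit_rate():.1%} hit rate")
```

The same kind of interference hits branch predictors, TLBs, and prefetchers, which is why sharing one core between many threads trades single-thread performance for aggregate throughput.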

So bottom line is
- POWER is not a terrible machine. The ISA is OK, and they do very impressive things with large caches that are fairly intelligently shared between multiple CPUs.
- But the target audience is TOTALLY different from Apple, and that massive difference in concerns has led to a design style that would be insane for Apple (or Intel, or anyone who is not IBM) to adopt.
 