Newer isn't always better, at least not right away.
It generally takes some time to make a new technology work well for people's needs.
 
[MOD NOTE]
A number of posts have been removed due to the thread being derailed. Please stay on topic.
 
x86 has been in trouble for the last 5 years. Intel is still mostly serving up Skylake variants, and AMD is now on the same level as Intel. Meanwhile ARM, particularly Apple's A-series chips, is already faster and more power efficient. Intel and AMD are going the Motorola/PowerPC route...
To reach the performance level of the Apple A13 they need to consume about 4x more energy. No wonder a MacBook turns into a vacuum cleaner as soon as you start doing something CPU intensive.

We can expect much faster MacBooks that last longer. Much longer. I'm pretty sure Apple will want to hit the sweet spot between speed and power efficiency. 20 hours of battery life? No problem. 100% faster in multi-core? No problem.

have a look:
perf.jpg
 
Intel performance exceeds ARM performance!! The 5-year-old Intel processor problems have died down (marketing wars)! ARM has no future; it is not a powerful processor compared to Intel!!!:cool::cool::cool::cool: Locked topic!!
 
Intel's performance at heating up the room in winter is by far better than ARM's. True. But not all of us live in an ice cave to enjoy that fact.
Absolute nonsense! Better to admit defeat to Intel! Locked topic!!
P.S. The ARM vector unit does not have the phenomenal performance of Intel's AVX-512 vector unit!!:cool::cool::cool: Tiger Lake processors run cool!! Locked topic!!
 
Intel performance exceeds ARM performance!! The 5-year-old Intel processor problems have died down (marketing wars)! ARM has no future; it is not a powerful processor compared to Intel!!!:cool::cool::cool::cool: Locked topic!!

Where can I compare this to the chip this topic is about, Apple's A series? Please back up your claims.
 
You are wrong again!! Intel processors have the increased power of the blue giant's X86 cores, as well as its IPC!!!:cool::cool::cool::cool::cool: Locked topic

What blue giant? IBM? What are you talking about? They have nothing to do with x86.

And even the A12 has a higher IPC than Intel. So, again, what are you referring to?
 
You are wrong again!! Intel processors have the increased power of the blue giant's X86 cores, as well as its IPC!!!:cool::cool::cool::cool::cool: Locked topic

You obviously cannot accept any facts & evidence that disprove your comments, so really, there's no point in us bothering with you anymore. You don't know what you're talking about. It seems that if you don't believe it, then it's not true, no matter what the evidence.
 
You obviously cannot accept any facts & evidence that disprove your comments, so really, there's no point in us bothering with you anymore. You don't know what you're talking about. It seems that if you don't believe it, then it's not true, no matter what the evidence.
I have already cited all the facts as examples!!! It's not my fault that you can't read facts!!:cool::cool::cool:
 
What blue giant? IBM? What are you talking about? They have nothing to do with x86.

And even the A12 has a higher IPC than Intel. So, again, what are you referring to?
The increased power of x86 cores!! Or high-power x86 cores!!!:cool::cool::cool::cool::cool::cool::cool:
Apple will have to compensate for this huge disadvantage of ARM by raising the clock speed to 6-7 GHz or above!!! But silicon is already at its limit!!! They can hardly do it!!!!:cool::cool::cool::cool::cool::cool::cool::cool::cool::cool::cool::cool::cool:
Intel is also considered a blue giant, not only IBM!!! The conversation is over!!!! :cool::cool::cool::cool::cool::cool::cool:
 
The increased power of x86 cores!! Or high-power x86 cores!!!:cool::cool::cool::cool::cool::cool::cool:
Apple will have to compensate for this huge disadvantage of ARM by raising the clock speed to 6-7 GHz or above!!! But silicon is already at its limit!!! They can hardly do it!!!!:cool::cool::cool::cool::cool::cool::cool::cool::cool::cool::cool::cool::cool:
Intel is also considered a blue giant, not only IBM!!! The conversation is over!!!! :cool::cool::cool::cool::cool::cool::cool:
No, “big blue” is IBM, not Intel.

Apple has increased ARM clock speeds at a much steeper slope than Intel, and there are no indications of a limit for ARM. You have to remember that the clock rate is determined by the critical path, and the x86 critical path is much longer than ARM's. This is why Intel has to do crazy things like double the number of pipe stages compared to Apple, just to get the critical paths short enough. But doing that has major disadvantages - if you guess wrong on a conditional branch, then the penalty for missing is much, much higher because of all those pipeline stages. So Intel has to build massive branch prediction engines, which suck up a lot of power and die area. Which means they can’t use that power and die area for actual computations.
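
To put rough numbers on that penalty, here's a back-of-the-envelope sketch (Python). Every figure in it is hypothetical, chosen only to show the shape of the effect; these are not measurements of any real Intel or Apple core.

Code:
# Toy model: how pipeline depth inflates the cost of a mispredicted branch.
# All inputs below are made-up illustrative values.

def effective_cpi(base_cpi, branch_fraction, mispredict_rate, pipeline_depth):
    """Average cycles per instruction once branch mispredictions are included.
    The flush penalty is modeled as roughly the number of pipeline stages."""
    penalty = pipeline_depth
    return base_cpi + branch_fraction * mispredict_rate * penalty

short_pipe = effective_cpi(base_cpi=0.5, branch_fraction=0.2,
                           mispredict_rate=0.05, pipeline_depth=9)
deep_pipe = effective_cpi(base_cpi=0.5, branch_fraction=0.2,
                          mispredict_rate=0.05, pipeline_depth=20)

print(f"shorter pipeline: {short_pipe:.2f} cycles/instruction")  # ~0.59
print(f"deeper pipeline:  {deep_pipe:.2f} cycles/instruction")   # ~0.70

The deeper pipeline only breaks even if its higher clock (or a much better branch predictor) makes up that difference, which is exactly the power and die-area cost described above.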

And your premise is crazy - Apple’s A12 has higher IPC than Intel cores. Which means it is *Intel*, not Apple, that needs to raise their clock frequencies to keep up.

And if they *do* increase their clock frequency, then they are screwed, because that is the WORST way to increase performance. In a CPU, you have these variables:

C: capacitance (from wires, and from the gates of transistors)
V: power supply voltage (Vdd - Vss)
f: toggle frequency - how many times per second do you charge and discharge the capacitance
P: power consumed - which is also power dissipated as heat

It turns out that P = CfV^2 (some people put a ½ in there, depending on how you define f)

So by doubling the frequency, you double the power used (halving battery life) and you double the heat generated.

But it’s worse than that.

When you increase the clock frequency, that means that each transistor in the critical path must be able to complete its transition (from Vdd to Vss or vice versa) within one clock cycle. Otherwise the chip doesn’t work. But it takes current to do that. Each logic gate must charge (or discharge) its capacitive load (the wire and any connected transistors).

CV = q (where q is the charge required to charge the capacitance C)

so:

V = q/C

This can be written in terms of current like:

dV/dt = I/C

or

I = C dV/dt

To charge and discharge more quickly, you need more current (because the definition of current is the movement of charge). To increase current, you need to increase the voltage. In other words, if you just increase clock speed, the chip will fail unless you also increase voltage.

But if we increase the voltage, the power consumption and heat go up much faster, because V enters the power equation squared (P = CfV^2).

In general, then, only dumb intel engineers think it is a good idea to increase frequency. Instead, to the extent possible, you increase IPC and parallelism, which increases *C*. That way the effect on power consumption is, at most, linear. For example, instead of doing work twice as fast in series, which cycles half the capacitors twice as often, do it in parallel, using double the capacitance, half as fast. This is a net win because the voltage can be lower.
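
A quick numerical sketch of that trade-off (Python). The C, V and f values are made-up, unit-less numbers purely to show how the scaling works; the +20% voltage bump for the doubled clock and the -10% drop for the parallel case are assumptions, not real design points.

Code:
# Dynamic power model from above: P = C * f * V^2 (1/2 factor omitted, as above).
# All values are illustrative, not real chip data.

def dynamic_power(C, f, V):
    return C * f * V ** 2

baseline = dynamic_power(C=1.0, f=1.0, V=1.0)

# Option 1: double the frequency. The faster switching also needs a voltage
# bump (assume +20%) so transistors finish their transitions in time.
doubled_clock = dynamic_power(C=1.0, f=2.0, V=1.2)

# Option 2: double the hardware (capacitance) at the same frequency, and use
# the slack to drop the voltage a little (assume -10%).
doubled_width = dynamic_power(C=2.0, f=1.0, V=0.9)

print(f"baseline:            {baseline:.2f}")       # 1.00
print(f"2x clock, +20% V:    {doubled_clock:.2f}")  # 2.88
print(f"2x parallel, -10% V: {doubled_width:.2f}")  # 1.62

Same nominal throughput either way, but the parallel version burns far less power, which is the point about preferring IPC and parallelism over raw clock speed.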
 
:p:p:p:p:p:p:p:p:p

Ridiculous!!! A long pipeline and high clock speed give a huge performance boost!!! The PowerPC G4 was already inferior to the Pentium 4 in performance!! Even though Apple claimed that a short pipeline lets a lower frequency process data faster!!!!!:cool::cool::cool::cool::cool::cool::cool::cool::cool::cool::cool: Intel engineers cannot be called stupid engineers; they are people who create the best processors in the world!!!
 
:p:p:p:p:p:p:p:p:p

Ridiculous!!! A long pipeline and high clock speed give a huge performance boost!!! The PowerPC G4 was already inferior to the Pentium 4 in performance!! Even though Apple claimed that a short pipeline lets a lower frequency process data faster!!!!!:cool::cool::cool::cool::cool::cool::cool::cool::cool::cool::cool: Intel engineers cannot be called stupid engineers; they are people who create the best processors in the world!!!

everything you just said is completely wrong. It’s like saying black is white. Go read Hennessy and Patterson. You will learn a lot about computer architecture. A long pipeline is ALWAYS a bad thing.
 
everything you just said is completely wrong. It’s like saying black is white. Go read Hennessy and Patterson. You will learn a lot about computer architecture. A long pipeline is ALWAYS a bad thing.
A long pipeline is not bad, because Intel has applied its own RISC architecture!!!:cool::cool::cool::cool::cool::cool: The long pipeline allows a large amount of data to be processed in one clock cycle!! And so that performance doesn't drop, Intel has applied its RISC architecture inside the core!!! Super victory!!!! Intel has managed to combine the processing of large amounts of data with phenomenal performance!!! The conversation is over!!!!:cool::cool::cool::cool::cool::cool::cool::cool::cool::cool:
 
So scores of the (lowest-end) Tiger Lake-U i5-1135G7 have started appearing. The single-core score of ~1350 is basically on par with that of the Apple A13. It achieves that (a year later) at about 13% lower clock, but presumably around two to three times the TDP (as always, it's hard to say with Geekbench numbers how long these scores can be sustained).

Also of note is that this is significantly faster than AMD. Its counterpart, the Ryzen 3 4300U, only scores ~1050. It does come with more cores and handily wins the multi-core score, though.

In conclusion, Apple shouldn't feel particularly threatened by Tiger Lake just yet, but I continue to suspect the worst of the Skylake era is behind us.
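
As a rough sanity check on those numbers: the scores below are the rounded Geekbench single-core figures mentioned above, but the wattages are guesses for illustration only, since the sustained package power for these particular runs isn't published.

Code:
# Points-per-watt back-of-the-envelope. Scores are the rounded single-core
# figures from this post; the wattages are assumptions, not measurements.

def points_per_watt(name, score, watts):
    print(f"{name:14s} {score:5d} pts / {watts:2d} W -> {score / watts:6.1f} pts/W")

points_per_watt("A13", 1350, 6)             # "on par" per the post; ~6 W is a guess
points_per_watt("i5-1135G7", 1350, 15)      # 12-28 W configurable TDP; 15 W assumed
points_per_watt("Ryzen 3 4300U", 1050, 15)  # 15 W nominal TDP

Even with generous assumptions for Intel, the per-core efficiency gap stays large; the caveat, as noted, is how long each chip can actually sustain those scores.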
 
Let's not forget that Intel had so much trouble with x86 chip production below 14 nm that it wasn't until 2018 that they finally succeeded. By contrast, Apple got to 10 nm in 2017 with the A11, then went down to 7 nm in 2018 with the A12, continued with the A13 in 2019, and the planned A14 is a 5 nm process. I'm not sure how the planned AX for 2022 is going to be the claimed 3 nm, because that is below the 5 nm where quantum tunneling happens.
 
Let's not forget that Intel had so much trouble with x86 chip production below 14 nm that it wasn't until 2018 that they finally succeeded. By contrast, Apple got to 10 nm in 2017 with the A11, then went down to 7 nm in 2018 with the A12, continued with the A13 in 2019, and the planned A14 is a 5 nm process. I'm not sure how the planned AX for 2022 is going to be the claimed 3 nm, because that is below the 5 nm where quantum tunneling happens.

Different manufacturers. TSMC's 7nm process (the one Apple currently uses) is actually bigger than Intel's 10nm.
 
Different manufacturers. TSMC's 7nm process (the one Apple currently uses) is actually bigger than Intel's 10nm.
That makes no sense as wikipedia (yes I know but it's all I have to work with) says this:

* In semiconductor fabrication, the International Technology Roadmap for Semiconductors (ITRS) defines the 10 nm process as the MOSFET technology node following the 14 nm node. "10 nm class" denotes chips made using process technologies between 10 and 20 nm. All production "10 nm" processes are based on FinFET (fin field-effect transistor) technology, a type of multi-gate MOSFET technology that is a non-planar evolution of planar silicon CMOS technology.

* In semiconductor manufacturing, the International Technology Roadmap for Semiconductors defines the 7 nm process as the MOSFET technology node following the 10 nm node. It is based on FinFET (fin field-effect transistor) technology, a type of multi-gate MOSFET technology.

IF what is 10 nm and what is 7 nm are both defined by ITRS then logically 10 nm has to be bigger than 7 nm. One of the whole purposes of a standard is that you don't have the Humpty Dumpty "it means what I say it means" nonsense we saw during the old console wars where 16, 32, and 64 bit were effectively useless.

Also, unlike an inch and a cm, a nm is a nm. That is what makes articles like "Intel 10nm isn't bigger than AMD 7nm, you're just measuring wrong" total nonsense, along with the claim that "With many competing technologies and companies involved, and playing by their own rules as to how they define transistor length, the name attached to a process node isn't so much a technical term as it is a marketing one," because there is a freaking standard that defines this.

Now if you can show the standard is borked (like USB 2.0 was. Gads that was a mess) then you'd have a point.
 
Let's not forget that Intel had so much trouble with x86 chip production below 14 nm that it wasn't until 2018 that they finally succeeded. By contrast, Apple got to 10 nm in 2017 with the A11, then went down to 7 nm in 2018 with the A12, continued with the A13 in 2019, and the planned A14 is a 5 nm process. I'm not sure how the planned AX for 2022 is going to be the claimed 3 nm, because that is below the 5 nm where quantum tunneling happens.

There is no quantum tunneling, because the gate width is not 5 nm in a 5 nm process node. That’s not what the node name means.
 
That makes no sense as wikipedia (yes I know but it's all I have to work with) says this:

* In semiconductor fabrication, the International Technology Roadmap for Semiconductors (ITRS) defines the 10 nm process as the MOSFET technology node following the 14 nm node. "10 nm class" denotes chips made using process technologies between 10 and 20 nm. All production "10 nm" processes are based on FinFET (fin field-effect transistor) technology, a type of multi-gate MOSFET technology that is a non-planar evolution of planar silicon CMOS technology.

* In semiconductor manufacturing, the International Technology Roadmap for Semiconductors defines the 7 nm process as the MOSFET technology node following the 10 nm node. It is based on FinFET (fin field-effect transistor) technology, a type of multi-gate MOSFET technology.

IF what is 10 nm and what is 7 nm are both defined by ITRS then logically 10 nm has to be bigger than 7 nm. One of the whole purposes of a standard is that you don't have the Humpty Dumpty "it means what I say it means" nonsense we saw during the old console wars where 16, 32, and 64 bit were effectively useless.

Also, unlike an inch and a cm, a nm is a nm. That is what makes articles like "Intel 10nm isn't bigger than AMD 7nm, you're just measuring wrong" total nonsense, along with the claim that "With many competing technologies and companies involved, and playing by their own rules as to how they define transistor length, the name attached to a process node isn't so much a technical term as it is a marketing one," because there is a freaking standard that defines this.

Now if you can show the standard is borked (like USB 2.0 was. Gads that was a mess) then you'd have a point.

Having looked at the actual design rules (the public version) for the intel 10nm and the TSMC 7nm, these are essentially identical in most respects. The minimum spacing and minimum width are about the same. TSMC 7nm is not bigger than 10nm intel.
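
For reference, these are the figures commonly cited in public process summaries for the two nodes; they are approximate, and quoted transistor density in particular depends heavily on which cell library you count:

Code:
# Approximate, commonly cited public figures for the two nodes being compared.
# Density (MTr/mm^2) varies with cell library; treat all of this as ballpark.

nodes = {
    #                  (contacted gate pitch nm, min. metal pitch nm, ~MTr/mm^2)
    "Intel 10 nm":     (54, 36, 100.8),
    "TSMC 7 nm (N7)":  (57, 40, 91.2),
}

print(f"{'process':16s}{'gate pitch':>12s}{'metal pitch':>13s}{'density':>10s}")
for name, (gate, metal, density) in nodes.items():
    print(f"{name:16s}{gate:>10d}nm{metal:>11d}nm{density:>10.1f}")

In other words, the "7 nm" and "10 nm" labels don't map onto any single physical dimension; the two processes land in the same ballpark.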
 
IF what is 10 nm and what is 7 nm are both defined by ITRS then logically 10 nm has to be bigger than 7 nm. One of the whole purposes of a standard is that you don't have the Humpty Dumpty "it means what I say it means" nonsense we saw during the old console wars where 16, 32, and 64 bit were effectively useless.

Yes, but are they? Genuinely asking. I believe not.

BTW I know what a nm is. The question is not about how the fundamental unit of length is defined but about how it relates to transistor density in fabrication process nomenclature.

From a related wikipedia page:

"The naming of process nodes by different major manufacturers (TSMC, Intel, Samsung, GlobalFoundries) is partially marketing-driven and not directly related to any measurable distance on a chip – for example TSMC's 7 nm node is similar in some key dimensions to Intel's 10 nm node (see transistor density, gate pitch and metal pitch in the following table)."

Having looked at the actual design rules (the public version) for the intel 10nm and the TSMC 7nm, these are essentially identical in most respects. The minimum spacing and minimum width are about the same. TSMC 7nm is not bigger than 10nm intel.

My understanding was that TSMC 7nm was very close to Intel's 10nm albeit a tiny bit bigger, but I got that from regular tech websites so you're probably (definitely) right. I take from your post that TSMC's 7nm isn't significantly smaller than Intel's 10nm either.
 