Intel performance exceeds ARM performance!! 5-years-old Intel processor problems have dropped (marketing wars)! ARM has no future, it is not a powerful processor compared to Intel!!!
x86 has been in trouble for the last 5 years. Intel is still mostly serving Skylake variants, and AMD is now on the same level as Intel. ARM, however, particularly Apple's A-series chips, is already faster and more power efficient. Intel and AMD are going the Motorola/PowerPC route...
To get to the performance level of the Apple A13, they need to consume 4x more energy. No wonder that when you start doing something CPU-intensive, a MacBook turns into a vacuum cleaner...
We may expect much faster MacBooks that last longer. Much longer. I am pretty sure Apple will want to hit the sweet spot between speed and power efficiency. 20 hours of battery life? No problem. 100% faster in multi-core? No problem.
have a look:
View attachment 948300
Intel performance exceeds ARM performance!!
You are wrong again!! Intel processors have the increased power of the blue giant's X86 cores, as well as its IPC!!!
No. Intel performance doesn't even exceed AMD performance. The post two above yours, from @Juraj22, is proof that even iPhones beat Intel performance.
Intel performance exceeds ARM performance!! 5-years-old Intel processor problems have dropped (marketing wars)! ARM has no future, it is not a powerful processor compared to Intel!!!! Locked topic!!
Absolute nonsense! Better admit defeat to Intel! Locked topic!!
Intel's performance at heating up the room in winter is by far better than ARM's. True. But not all of us live in an ice cave to enjoy this fact.
You are wrong again!! Intel processors have the increased power of the blue giant's X86 cores, as well as its IPC!!! Locked topic!
I have already cited all the facts as an example!!! It's not my fault you can't read facts!!
You obviously cannot accept any facts and evidence that disprove your comments, so really, there's no point in us bothering with you anymore. You don't know what you're talking about. It seems that if you don't believe it, then it's not true, no matter what the evidence says.
Increased power of X86 cores!! Or high-power X86 cores!!!
What blue giant? IBM? What are you talking about? They have nothing to do with x86.
And even the A12 has a higher IPC than Intel's cores. So, again, what are you referring to?
No, "Big Blue" is IBM, not Intel.
Increased power of X86 cores!! Or high-power X86 cores!!!
Apple will compensate for this huge disadvantage of ARM by increasing the clock speed to 6-7 gigahertz or above!!! But silicon is already at its limit!!! They can hardly do it!!!!
Intel is also considered a blue giant, not only IBM!!! The conversation is over!!!!!
No, "Big Blue" is IBM, not Intel.
Apple has increased ARM clock speeds at a much steeper slope than Intel, and there are no indications of a limit for ARM. You have to remember that the clock rate is determined by the critical path, and the x86 critical path is much longer than ARM's. This is why Intel has to do crazy things like double the number of pipe stages compared to Apple, just to get the critical paths short enough. But doing that has major disadvantages: if you guess wrong on a conditional branch, then the penalty is much, much higher because of all those clock stages. So Intel has to build massive branch prediction engines, which suck up a lot of power and die area. Which means they can't use that power and die area for actual computations.
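To put toy numbers on that, here is a minimal sketch in Python; every rate and penalty below is invented for illustration, not measured from any real core:

# Toy model: effective CPI = base CPI + mispredict cost, where the
# cost of each mispredict is assumed to scale with pipeline depth,
# since a mispredict flushes the work in flight.
def effective_cpi(base_cpi, branch_fraction, mispredict_rate, pipeline_depth):
    penalty = pipeline_depth  # simplifying assumption: flush costs ~depth cycles
    return base_cpi + branch_fraction * mispredict_rate * penalty

print(effective_cpi(0.5, 0.2, 0.05, 8))   # shorter pipe: ~0.58 cycles/inst
print(effective_cpi(0.5, 0.2, 0.05, 16))  # deeper pipe:  ~0.66 cycles/inst

Same predictor accuracy, but the deeper pipe pays roughly twice the mispredict tax, which is why the deeper design needs a much bigger predictor just to break even.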
And your premise is crazy: Apple's A12 has a higher IPC than Intel's cores. Which means it is *Intel*, not Apple, that needs to raise their clock frequencies to keep up.
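Concretely, delivered performance is roughly IPC times clock speed. A toy comparison (invented numbers, not benchmarks):

# perf ~ IPC * frequency: a higher-IPC core can match a
# higher-clocked one while running much slower (numbers invented)
high_ipc_core = 4.0 * 2.5e9  # wide core at a modest clock
low_ipc_core  = 2.0 * 5.0e9  # narrow core needing double the clock
print(high_ipc_core == low_ipc_core)  # True: same instructions per second

The lower-IPC design is the one stuck chasing frequency.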
And if they *do* increase their clock frequency, then they are screwed, because that is the WORST way to increase performance. In a CPU, you have these variables:
C: capacitance (from wires, and from the gates of transistors)
V: power supply voltage (Vdd - Vss)
f: toggle frequency - how many times per second do you charge and discharge the capacitance
P: power consumed - which is also power dissipated as heat
It turns out that P = CfV^2 (some people put a ½ in there, depending on how you define f)
So by doubling the frequency, you double the power used (halving battery life), and you double the heat generated.
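Plugging toy numbers into that formula (nothing here is from a real chip; it just shows the linear dependence on f):

# Dynamic power P = C * f * V^2 (all values invented for illustration)
C = 1e-9   # farads of switched capacitance
V = 1.0    # volts
f = 3e9    # hertz
print(f"{C * f * V**2:.1f} W")    # 3.0 W
print(f"{C * 2*f * V**2:.1f} W")  # 6.0 W: double the frequency, double the power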
But it’s worse than that.
When you increase the clock frequency, that means that each transistor in the critical path must be able to complete its transition (from Vdd to Vss or vice versa) within one clock cycle. Otherwise the chip doesn't work. But it takes current to do that. Each logic gate must charge (or discharge) its capacitive load (the wire and any connected transistors).
CV = q (where q is the charge required to charge the capacitance C)
so:
V = q/C
This can be written in terms of current like:
dV/dt = I/C
or
I = CdV/dt
To charge and discharge more quickly, you need more current (current is, by definition, the movement of charge: I = dq/dt). And to get more current out of a transistor, you need to increase the voltage driving it. In other words, if you just increase the clock speed, the chip will fail unless you also increase the voltage.
But if we increase the voltage, power consumption and heat go up much faster, since P = CfV^2 grows with the *square* of V.
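A common first-order assumption is that V has to rise roughly in proportion to f, and under that assumption power grows with the cube of frequency. A quick sketch (invented values again):

# If V must scale with f (first-order assumption), P = C*f*V^2 grows ~f^3
C, V0, f0 = 1e-9, 1.0, 3e9
for scale in (1.0, 1.5, 2.0):
    f, V = f0 * scale, V0 * scale
    print(f"{scale}x clock -> {C * f * V**2:.1f} W")  # 3.0, 10.1, 24.0

A 2x clock bump costs roughly 8x the power in this simple model.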
In general, then, only dumb Intel engineers think it is a good idea to increase frequency. Instead, to the extent possible, you increase IPC and parallelism, which increases *C*. That way the effect on power consumption is, at most, linear. For example, instead of doing work twice as fast in series, which cycles half the capacitors twice as often, do it in parallel, using double the capacitance at half the frequency. This is a net win because the voltage can be lower.
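The same toy arithmetic shows the parallelism win; the exact voltage reduction below is an assumption, since how far V can drop depends on the process:

# Same throughput two ways (invented numbers):
C, V, f = 1e-9, 1.0, 3e9
serial   = C * (2*f) * V**2        # one unit clocked twice as fast
parallel = (2*C) * f * (0.8*V)**2  # two units at half the serial clock;
                                   # the relaxed timing lets the voltage drop
print(f"{serial:.1f} W vs {parallel:.2f} W")  # 6.0 W vs 3.84 W

Double the capacitance, but the V^2 term more than pays for it.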
Ridiculous!!! A long pipeline and a high clock speed give a huge performance boost!!! The PowerPC G4 was already inferior to the Pentium 4 in performance!! Even though Apple claimed that a short pipeline allows a lower frequency to process data faster!!!!! Intel engineers cannot be called stupid; they are people who create the best processors in the world!!!
A long pipeline is not bad, because Intel has applied its own RISC architecture!!!
Everything you just said is completely wrong. It's like saying black is white. Go read Hennessy and Patterson. You will learn a lot about computer architecture. A long pipeline is ALWAYS a bad thing.
Let's not forget that Intel has been having problems with x86 chip production below 14 nm; it wasn't until 2018 that they finally succeeded. By contrast, Apple got to 10 nm in 2017 with the A11. Then they went down to 7 nm in 2018 with the A12, which they continued with the A13 in 2019, and the planned A14 is a 5 nm process. I'm not sure how the planned AX for 2022 is going to be the claimed 3 nm, because that is below the 5 nm where quantum tunneling happens.
Different manufacturers. TSMC's 7nm process (the one Apple currently uses) is actually bigger than Intel's 10nm.
That makes no sense, as Wikipedia (yes, I know, but it's all I have to work with) says this:
* In semiconductor fabrication, the International Technology Roadmap for Semiconductors (ITRS) defines the 10 nm process as the MOSFET technology node following the 14 nm node. "10 nm class" denotes chips made using process technologies between 10 and 20 nm. All production "10 nm" processes are based on FinFET (fin field-effect transistor) technology, a type of multi-gate MOSFET technology that is a non-planar evolution of planar silicon CMOS technology.
* In semiconductor manufacturing, the International Technology Roadmap for Semiconductors defines the 7 nm process as the MOSFET technology node following the 10 nm node. It is based on FinFET (fin field-effect transistor) technology, a type of multi-gate MOSFET technology.
If what is 10 nm and what is 7 nm are both defined by the ITRS, then logically 10 nm has to be bigger than 7 nm. One of the whole purposes of a standard is that you don't get the Humpty Dumpty "it means what I say it means" nonsense we saw during the old console wars, where 16, 32, and 64 bit were effectively useless.
Also, unlike an inch and a cm, a nm is a nm. This is what makes articles like "Intel 10nm isn't bigger than AMD 7nm, you're just measuring wrong" total nonsense, along with the claim that "with many competing technologies and companies involved, and playing by their own rules as to how they define transistor length, the name attached to a process node isn't so much a technical term as it is a marketing one," because there is a freaking standard that defines this.
Now, if you can show the standard is borked (like USB 2.0 was; gads, that was a mess), then you'd have a point.
Having looked at the actual design rules (the public version) for the Intel 10nm and TSMC 7nm processes, they are essentially identical in most respects. The minimum spacing and minimum width are about the same. TSMC 7nm is not bigger than Intel 10nm.