Apple's A-series CPUs don't have identical cores, in contrast to Intel's CPUs.
Some are high-performance, some high-efficiency. So you can't just multiply the single-core score by the number of cores.

A macOS first-party app might not even know how to use ARM-based low-power cores.
 
Please correct me if I'm wrong:

If, under the hood, current x86 chips are RISC plus a CISC-to-RISC translator, then they are actually inferior to a pure RISC chip in terms of:

- Performance: a pure RISC chip doesn't have to waste time on the CISC-to-RISC translation. And, maybe more important, the die area used by the translator could be used to make a pure RISC chip bigger, with more cores;

- Energy: the CISC-to-RISC translator consumes energy and produces heat.

CISC has the advantage of producing smaller, more compact assembly code. But that was important when storage was expensive. Now that we have very cheap storage...

Conclusions are right. It's not quite accurate to say x86 is RISC plus a "translator," but close enough.
 
Read again.
My reply concerned the comment about the supposed "engineering ARM superiority"; in terms of engineering, it's a metric!
You don’t seem to be an engineer. I am.
The goal for every engineer is to find the simplest feasible solution for a given problem - not the most complex one.
 
CISC is way more complex than the ARM RISC architecture.

Being complex doesn't mean better, and it's actually worse than the ARM ISA. x86 is quite complex because it needs to convert CISC to RISC constantly, and that's why it consumes a lot of power. This is also why ARM is more power-efficient, and why there's an 80-core ARM CPU that consumes only 210 W.
 
Read again.
My reply concerned the comment about the supposed "engineering ARM superiority"; in terms of engineering, it's a metric!
More complex, if there's no benefit, is inferior in terms of engineering. I absolutely consider the x86 approach inferior in that way, but it worked for them business-wise. Intel has generally been building the fastest desktop CPUs for a long time. That can change, though.
 
If you have not been paying attention, Apple is positioning the iPad to replace the iMac. It seems redundant to release an ARM iMac when we have a capable iPad; the things limiting it are iPadOS and software titles, and that development is cheaper than what is being proposed by many.

Was this reply meant for someone else? Because it doesn’t have anything to do with what I wrote.
 
Um. Sigh. No it doesn't.

It is becoming completely clear that you didn't read any of the data provided: what each individual test in Geekbench is specifically testing ( https://www.geekbench.com/doc/geekbench5-cpu-workloads.pdf , and that is the *proper* usage of the term workload ), not just a bunch of numbers; what Anandtech specifically tested, which even includes a single-thread benchmark; and the actual AWS data on multiple workloads. And you should properly read up on https://en.wikipedia.org/wiki/Amdahl's_law

There is no point in discussing anything if someone is simply refusing to read, when there are actual data listed in the Anandtech article.

I pointed those out in the interest of the community learning something. But I guess that was me assuming, in good faith, that everyone wants to learn. Maybe it wasn't a wise idea to discuss anything technical on MacRumors.

Anyway, I am out.
We read it. They both have poor single-core performance. That keystore test is also very specialized, so it's not useful to look at when considering Mac workloads, while Geekbench is. The article you linked doesn't say anything about single-core performance either, though I think the "medium" test is that, based on the AWS tiers documentation.
 
That's the developer's problem; if they're not interested in porting it, then make the code open source so the community can do so. It seems more selfish for the developer to just let their software die rather than donate it, so its customers are not left out in the cold.
An easier solution, if you are using Xcode: buy an ARM Mac. In Build Settings, select both x86_64 and arm64. Press Command-R and see the app running. Do some testing before you submit your app to the App Store.
 
Being complex doesn't mean better, and it's actually worse than the ARM ISA. x86 is quite complex because it needs to convert CISC to RISC constantly, and that's why it consumes a lot of power. This is also why ARM is more power-efficient, and why there's an 80-core ARM CPU that consumes only 210 W.

The decoder is a really small part of the x86 CPU.
It's actually an advantage, because complex x86 instructions are not literally translated but often reordered in order to reach a good IPC and avoid no-op instructions due to cache access.
 
It's interesting that they have the same single and multi-core scores but a different number of cores. Usually single-core score times number of cores roughly equals multi-core score, but not so on the iPad. Maybe the iPad thermal-throttled even during the test.

It's complicated by multiple factors (see the sketch after this list), including:

  • Intel's Turbo Boost / Thermal Velocity Boost. It can do better than base clock in short bursts. It can do way better in short bursts once it starts temporarily turning cores off. That's very useful, since few workloads scale well to all cores.
  • Apple's "Fusion" setup (similar to ARM's big.LITTLE).
Same performance with lower TDP sounds like a winner. Only thing is the i7 is older by a year, but whatever.

Yeah.

Geekbench still has no official score for the 2020 Air. If we average Jason Snell's and MKBHD's results, we get 1120 for single-core and 2948 for multi-core. Those numbers are more interesting, because the Air's CPU, at apparently around 10W, is closer to what Apple ships in iOS devices, as far as thermals are concerned (an iPhone doesn't have an official TDP, but it's probably around 5W). It also uses Ice Lake / Sunny Cove, unlike any other Mac so far.

So if we take that and pit it against the A13, Apple wins: 1328 is 19% faster, and at multi-core, 3315 is still 12% faster.

And Apple accomplishes that despite about half the TDP.
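
For reference, the percentage gaps work out like this from the averaged Air numbers above (a quick sanity check, nothing more):

```swift
// Percentage gaps computed from the scores quoted above.
let airSingle = 1120.0, airMulti = 2948.0   // averaged 2020 Air results
let a13Single = 1328.0, a13Multi = 3315.0   // A13 results

let singleGap = (a13Single - airSingle) / airSingle * 100  // ≈ 18.6%, i.e. ~19%
let multiGap  = (a13Multi  - airMulti)  / airMulti  * 100  // ≈ 12.4%, i.e. ~12%
```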

But! Apple needs six cores rather than the Air's four. And it runs at a constant 2.6 GHz, whereas Intel uses a 1.1 GHz base clock that boosts up to 3.5 GHz.

That's interesting, because it leaves an open question: what about a Sunny Cove CPU that runs at 2.6 GHz? Can Intel deliver that any time soon, and if so, how will it fare against Apple?

Apple's A-series CPUs don't have identical cores, in contrast to Intel's CPUs.
Some are high-performance, some high-efficiency. So you can't just multiply the single-core score by the number of cores.

Right. But even accounting for that, Apple reaches the same scaling with eight cores that Intel does with four.
 
Well, if you want the $1,299 entry-level two-TB3 model to increase to $1,499, and the $1,799 base four-TB3 model to increase to $1,999, sure.

You are still sticking with this faulty logic? Apple literally just doubled the base storage of the Air while *reducing* the price. They did the same for the Mini while maintaining the same price. According to your logic, this was impossible, despite Apple (and every other manufacturer) doing it over and over and over.
 
Um. Sigh. No it doesn't.

It is becoming completely clear that you didn't read any of the data provided: what each individual test in Geekbench is specifically testing ( https://www.geekbench.com/doc/geekbench5-cpu-workloads.pdf , and that is the *proper* usage of the term workload ), not just a bunch of numbers; what Anandtech specifically tested, which even includes a single-thread benchmark; and the actual AWS data on multiple workloads. And you should properly read up on https://en.wikipedia.org/wiki/Amdahl's_law

There is no point in discussing anything if someone is simply refusing to read, when there are actual data listed in the Anandtech article.

I pointed those out in the interest of the community learning something. But I guess that was me assuming, in good faith, that everyone wants to learn. Maybe it wasn't a wise idea to discuss anything technical on MacRumors.

Anyway, I am out.

It doesn't matter how many times you try to make this claim.

Amazon's CPU is neither good at single-core, nor do they care if it's good at single-core, because that's not the type of workload they've optimized it for. You rent a Graviton2 because you're doing highly-specialized work that benefits from many cores, same as with a Xeon Phi, or an Nvidia Tesla, or Radeon Instinct, or Google TPU. It's not very interesting to use it to draw conclusions for Apple's chips.

As for proper usage of words, I don't know what the hell you're talking about, and also don't care.
Granted. But with a fraction of the thermal envelope.
I guess a definite answer will only be possible the moment we actually see a Mac with an Apple-designed ARM CPU.

Yup.
You are still sticking with this faulty logic? Apple literally just doubled the base storage of the Air while *reducing* the price. They did the same for the Mini while maintaining the same price. According to your logic, this was impossible, despite Apple (and every other manufacturer) doing it over and over and over.

Your examples are the exception rather than the rule.

Apple will likely double the base storage on the next 13-inch MBP, sure. But they rarely do so.
 
For the short term, same. Anyone claiming ARM is faster or something doesn't have anything to support it.
I ran benchmarks (identical source code compiled by me, so no cheating) where an iPhone XR (two fast + four slow cores) consistently beat an iMac (quad-core x86) when using all threads. There was a bit of slowdown, about 10%, after ten minutes due to thermals, but the iPhone XR was still significantly faster.

And the processor in an iPhone XR is designed to run in a tiny device with no cooling. An ARM processor inside a MacBook Pro case could easily have four fast cores (and a few slow ones) running at a higher clock rate.
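
For anyone curious, a minimal sketch of that kind of portable, all-cores benchmark in Swift might look like this (my own illustration with an assumed workload, not the poster's actual code); build the same file for macOS and iOS and compare the elapsed times:

```swift
import Foundation

// Minimal multi-threaded CPU benchmark: the same source compiles for
// x86_64 (Mac) and arm64 (iPhone/iPad), and the wall-clock times can be compared.
// The workload (summing sines) is just an illustration.
func burnCPU(iterations: Int) -> Double {
    var acc = 0.0
    for i in 0..<iterations {
        acc += sin(Double(i))
    }
    return acc
}

let chunks = ProcessInfo.processInfo.activeProcessorCount  // one chunk per core
let start = Date()
DispatchQueue.concurrentPerform(iterations: chunks) { _ in
    _ = burnCPU(iterations: 20_000_000)
}
print("Cores: \(chunks), elapsed: \(Date().timeIntervalSince(start)) s")
```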
 
  • Like
Reactions: firewood and fairuz
The decoder is a really small part of the x86 CPU.
It's actually an advantage, because complex x86 instructions are not literally translated but often reordered in order to reach a good IPC and avoid no-op instructions due to cache access.
I'll try to explain it using a scenario that was quite common in the early 2000s.

Simple solution:
Digital image signal (graphics card) -> digital transmission (DVI) -> digital display (LCD)

"Way more complex" solution:
Digital image signal (graphics card) -> digital/analog conversion -> analog transmission (VGA) -> analog/digital conversion -> digital display (LCD with VGA only input)

Which solution do you think is technically superior?
 
The decoder is a really small part of the x86 CPU.
It's actually an advantage, because complex x86 instructions are not literally translated but often reordered in order to reach a good IPC and avoid no-op instructions due to cache access.

lol, it's not. Complex instructions cause more unnecessary power draw and heat. x86 itself is an old instruction set. What do you expect?
 
The decoder is a really small part of the x86 CPU.
It's actually an advantage, because complex x86 instructions are not literally translated but often reordered in order to reach a good IPC and avoid no-op instructions due to cache access.
Not that small. Around 20% of the core in an x86 machine.

But RISC instructions are also reordered (I know; I designed the reorder circuitry on a couple of them), so there's no advantage to CISC over RISC with respect to that.
 
I'll try to explain it using a scenario that was quite common in the early 2000s.

Simple solution:
Digital image signal (graphics card) -> digital transmission (DVI) -> digital display (LCD)

"Way more complex" solution:
Digital image signal (graphics card) -> digital/analog conversion -> analog transmission (VGA) -> analog/digital conversion -> digital display (LCD with VGA only input)

Which solution do you think is technically superior?

I guess he doesn't know anything about it.
 
OK, but the question is: will Adobe update their apps to use those new CPUs to 100%, or will they just stick with the same crappy code from the ancient past? If the apps are not optimized, then it is useless.
 
OK, but the question is: will Adobe update their apps to use those new CPUs to 100%, or will they just stick with the same crappy code from the ancient past? If the apps are not optimized, then it is useless.

They have to make whole new software just for ARM-based devices. But they already made Photoshop for iPad.
 
They have to make whole new software just for ARM-based devices. But they already made Photoshop for iPad.
I don't think that's correct. As long as they didn't use any low-level coding, they might get away with some minor adjustments and a recompile targeting ARM64, as will most developers.

This is an architecture switch, not an OS switch with entirely different APIs.
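
As a hypothetical sketch of what such a "minor adjustment" can look like (not Adobe's actual code), architecture-specific paths can be fenced off with conditional compilation while everything else simply recompiles:

```swift
// Hypothetical example: only the fenced-off parts need per-architecture attention.
func fastChecksum(_ data: [UInt8]) -> UInt64 {
    #if arch(x86_64)
    // An SSE/AVX-tuned path (e.g. calling C intrinsics) would stay here.
    #elseif arch(arm64)
    // A NEON-tuned path could be added here for ARM Macs and iOS devices.
    #endif
    // Portable Swift fallback that already works on either architecture.
    return data.reduce(0) { ($0 &* 31) &+ UInt64($1) }
}
```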
 
OK, but the question is: will Adobe update their apps to use those new CPUs to 100%, or will they just stick with the same crappy code from the ancient past? If the apps are not optimized, then it is useless.

I think it depends in part on what Microsoft's roadmap is. They've been waffling on Windows on ARM.

They have to make whole new software just for ARM-based devices.

Not really.

Photoshop has architecture-specific code, but not to the same extent as twenty years ago.
 