Your machine isn't useless just because a new model using the same basic architecture comes out. You might not enjoy the performance benefits of a new machine, but you can still take advantage of new software and OS updates as they are released.

When they change the architecture fundamentally, it cuts your machine off from all future software from the moment the transition completes.

This is how it went during the PPC-to-Intel transition. Support for PPC Macs dropped like a stone.
There will be a cutoff date, but it’s not going to be the day that the transition completes. Apple still has unreleased Intel hardware for Christ’s sake!
 
Why bother building Rosetta to work with low-power cores if you don't plan to use them in the released version of Apple Silicon? I'm thinking these low-power cores won't be part of Apple's laptop/desktop chips.

There's a WWDC session online for you to watch in which Apple explains that they will also use low-power cores in Macs with Apple Silicon.
 
1) Down-clocked, running slower than the iPad Pro!
2) Running the benchmark under Rosetta translation
3) Only using 4 of the 8 cores, for some reason
4) Not the chip that will ship in Macs

These benchmarks mean absolutely nothing.
Any ideas why it is down-clocked? The thermals should clearly be better in a desktop enclosure. Is it possible that they had to do it because desktop usage implies the possibility of sustained high loads? For the iPad, on the other hand, they may rely on throttling, so the base clock does not matter as much.
 
Okay, Geekbench likely uses the "install-time" translation, with "dynamic" translation only as the special case. It would be interesting to see the iOS version of Geekbench for comparison on the same machine.
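To make the distinction concrete, here is a toy sketch of the two translation strategies mentioned above. This is an illustration of the concept only, not how Rosetta actually works internally; all names are made up.

```python
# Toy model: "install-time" translation converts every function ahead of
# time; "dynamic" translation converts each function the first time it runs
# (needed for code that cannot exist at install time, e.g. JIT output).

def translate(fn_name: str) -> str:
    # Stand-in for translating one x86-64 function to ARM64.
    return f"arm64:{fn_name}"

class InstallTimeTranslator:
    def __init__(self, functions):
        # All translation cost is paid up front, at install.
        self.cache = {f: translate(f) for f in functions}

    def run(self, fn_name):
        return self.cache[fn_name]  # always a cache hit at runtime

class DynamicTranslator:
    def __init__(self):
        self.cache = {}

    def run(self, fn_name):
        # Translate on first use, then reuse the result.
        if fn_name not in self.cache:
            self.cache[fn_name] = translate(fn_name)
        return self.cache[fn_name]

aot = InstallTimeTranslator(["main", "helper"])
jit = DynamicTranslator()
assert aot.run("main") == jit.run("main") == "arm64:main"
```

A benchmark that is fully translated at install time should pay little runtime penalty beyond the quality of the translated code itself, which is why the install-time path matters for interpreting these scores.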



Do you have info on relative performance of the two types of cores?

I do not. I’m not sure there is clear information on that anywhere, since up until now the only place to try and benchmark such things has been on iOS devices which don’t provide a lot of hooks to control how many and which cores are used.

We do know that Geekbench, running 100 MHz faster on iPad Pros and using all cores, gets multi-core scores about 2x what we are seeing from the DTK.
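Rough arithmetic, with all numbers illustrative (the thread only gives "100 MHz faster" and "about 2x"), suggests how a 4-core ceiling plus a small clock deficit could account for part of that gap, with the remainder plausibly being translation overhead:

```python
# Back-of-envelope (illustrative numbers only): how 4 visible cores plus a
# ~100 MHz clock deficit could explain part of the ~2x multi-core gap
# between the iPad Pro and the DTK, before any Rosetta overhead.

ipad_clock_ghz = 2.49   # assumed A12Z clock in the iPad Pro
dtk_clock_ghz = 2.40    # assumed down-clocked DTK figure (~100 MHz lower)

ipad_cores_perf = 4     # 4 performance cores
ipad_cores_eff = 4      # plus 4 efficiency cores
dtk_cores_used = 4      # Geekbench reportedly saw only 4 cores

# Crude model: score scales with cores * clock, with each efficiency core
# worth some fraction of a performance core (a pure guess here).
efficiency_core_weight = 0.3

ipad_effective = (ipad_cores_perf + ipad_cores_eff * efficiency_core_weight) * ipad_clock_ghz
dtk_effective = dtk_cores_used * dtk_clock_ghz

print(round(ipad_effective / dtk_effective, 2))  # ≈ 1.35
```

Under these assumptions, cores and clock alone account for only about a 1.35x gap, leaving the rest of the ~2x difference to be explained by translation overhead or by the efficiency cores being worth more than guessed here.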
 

There will be entire libraries of software written for macOS that will no longer be accessible because their authors chose not to update them for the new architecture.

Just like there were when the PPC-to-Intel transition happened.

Apple is balkanizing its macOS application library to squeeze a bit more performance out of its machines, performance that most users neither demand nor need.
 
There will be a cutoff date, but it’s not going to be the day that the transition completes. Apple still has unreleased Intel hardware for Christ’s sake!

I think it more depends on how quickly the install base switches over. I suspect many devs will have a hard time ditching Intel until ARM silicon accounts for at least 50% of their user base.
 
Impressive that their x86 translator runs that well on this chip. I wish the commenters had read the title of the article before getting all bowed up over the numbers.
 
I do not. I’m not sure there is clear information on that anywhere, since up until now the only place to try and benchmark such things has been on iOS devices which don’t provide a lot of hooks to control how many and which cores are used.

We do know that Geekbench, running 100 MHz faster on iPad Pros and using all cores, gets multi-core scores about 2x what we are seeing from the DTK.

It could simply be a measurement error.

Geekbench tends to guess hardware specs, and it might be getting misleading/unreliable information through the Rosetta layer.
 
I have to admit I expected less. With more powerful chips and additional optimisations in the final product, Rosetta might actually be quite usable. Of course Geekbench is only a small part of the whole story, but I'm positively surprised.
 
We were told before that using Geekbench for cross-platform comparisons is pointless. Could this be a contributing factor in this case?
 
There will be entire libraries of software written for macOS that will no longer be accessible because their authors chose not to update them for the new architecture.

Just like there were when the PPC-to-Intel transition happened.

Apple is balkanizing its macOS application library to squeeze a bit more performance out of its machines, performance that most users neither demand nor need.

This will be rare for any software that is actively maintained. APIs haven't changed, so for most developers it involves clicking a checkbox and recompiling. If the company is out of business, then yeah, no updates for you.
 
It could simply be a measurement error.

Geekbench tends to guess hardware specs, and it might be getting misleading/unreliable information through the Rosetta layer.
Maybe. Though the number of cores actually used *should* be something Geekbench is able to tell.
 
I think it more depends on how quickly the install base switches over. I suspect many devs will have a hard time ditching Intel until ARM silicon accounts for at least 50% of their user base.
I think it already does. Most people who write for the Mac also write for iOS, and the iOS install base probably outnumbers the Mac's by 100 to 1.
 
If it is a one-time translation, then it is fairly meaningless that the scores are lower than people expected.

Most people are commenting that the figures are very good, all things considered.

On par with a 2016 15" MacBook Pro, a comment above says, or similar to a 2020 MBA in multi-thread. And that's emulated/translated, worst-case (a benchmark) x86-64, running at iPad TDPs (presumably, given the clock is no higher). The OpenCL benchmark leaked in another comment shows the GPU is doing excellently as well.
 
They may or may not. Certainly at least 50%. It depends on the instruction stream: a lot of instructions will run more or less equally fast on a low-power or a high-performance core. A lot won't. And we don't know much about how the little cores behave in an environment where they don't need to be throttled for thermal reasons.


I do not. I’m not sure there is clear information on that anywhere, since up until now the only place to try and benchmark such things has been on iOS devices which don’t provide a lot of hooks to control how many and which cores are used.

Alright, I am just speculating, but over time you'd get either divergence or convergence. In other words, either the cores get better at what they are aimed at, or they converge until a superior design is chosen. There's no need to have two different designs with very similar characteristics.


We do know that Geekbench, running 100 MHz faster on iPad Pros and using all cores, gets multi-core scores about 2x what we are seeing from the DTK.

*The iOS version, so there could still be some overhead from translation. The single-core score is also lower.

Hopefully it's an easily optimisable overhead.
 
Really underwhelming results. It makes me wonder whether this was the right time for Apple to do this, or whether they should have waited a few more years for the silicon team to catch up to Intel, or just gone with AMD.

Quite the opposite: these results are excellent when you consider that it is basically an iPad Pro running a benchmark compiled for a different instruction set! This is on par with the current MacBook Air. The final hardware will undoubtedly be faster.
 
eperm-d995af6e2ef02771 … That really sounds like each unit has its own "model" as a watermark. This person is going to be in trouble …
 
There will be entire libraries of software written for macOS that will no longer be accessible because their authors chose not to update them for the new architecture.

Just like there were when the PPC-to-Intel transition happened.

Apple is balkanizing its macOS application library to squeeze a bit more performance out of its machines, performance that most users neither demand nor need.

It's not just about raw performance. This transition is also going to bring better thermal performance and power efficiency (pretty important for laptops, which are Apple's most popular Macs).

This is something that will benefit the majority of users far more than it will hurt them. The edge-cases (people booting into Windows for a single piece of software or running stuff that sees very infrequent updates or is highly specialised) will obviously have a more difficult transition, but that's the cost of progress and Apple seems to be doing all they can to mitigate that.
 
Alright, I am just speculating, but over time you'd get either divergence or convergence. In other words, either the cores get better at what they are aimed at, or they converge until a superior design is chosen. There's no need to have two different designs with very similar characteristics.




*The iOS version, so there could still be some overhead from translation. The single-core score is also lower.

Hopefully it's an easily optimisable overhead.

For sure the high performance cores are better when you need performance. But, for example, if all you are doing is adding a series of integers, there is likely little difference between the two types of cores. If you want to multiply 2 integers, perhaps the high performance core has a power-sucking Wallace tree that lets you multiply 2 64-bit numbers in 5 cycles, whereas the high-efficiency core uses a design with fewer reduction layers and runs things through multiple times and takes 15 cycles. Putting aside clock rates, there are lots of design choices that you can make to make a particular operation super fast or super efficient. But certain things, like addition, tend to use the same techniques either way.
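The multiplier trade-off described above can be sketched in a few lines. This is a pedagogical model, not real hardware: the cycle counts are hypothetical stand-ins for the "5 cycles vs 15 cycles" style of difference the post describes, and the iterative version here simply charges one cycle per bit.

```python
# Sketch (hypothetical latencies): why integer multiply can differ between
# a performance core and an efficiency core while addition mostly doesn't.
# A shift-and-add multiplier reuses one adder over many cycles; a
# Wallace-tree multiplier reduces all partial products in a few layers.

def shift_add_multiply(a: int, b: int, width: int = 64):
    """Iterative 'efficiency' design: one pass through the adder per bit."""
    result = 0
    cycles = 0
    for i in range(width):
        if (b >> i) & 1:
            result += a << i
        cycles += 1  # one cycle per bit of the multiplier
    return result, cycles

def tree_multiply(a: int, b: int):
    """'Performance' design: all partial products summed in parallel,
    modeled here as a fixed 5-cycle latency (hypothetical figure)."""
    return a * b, 5

prod_slow, slow_cycles = shift_add_multiply(123456789, 987654321)
prod_fast, fast_cycles = tree_multiply(123456789, 987654321)
assert prod_slow == prod_fast  # same answer, very different latency
print(slow_cycles, fast_cycles)  # 64 5
```

Both designs compute the same product; they only differ in how much silicon and power they spend to shorten the latency, which is exactly the kind of choice that separates the two core types.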
 
Yeah, and by the same argument they would make developers melt down sand and make their own silicon.

Your argument is silly.

Well, providing something that's, say, only a third (or even less) of the performance of the real deal would be just as ridiculous. Right?

I am expecting the lowest-end ARM Macs (read: fanless MacBook) to be in the ballpark of the A12Z (iPad Pro) plus a reasonable generational improvement (+20-40%), which is completely sufficient for that end user's use case.
 
We all knew that this would happen. However, we also know that the real Apple Silicon Macs will use a completely different chip, no doubt modified and optimised in ways we don't know about yet. While this is interesting (and I'll read all the news articles that come up about this), it's going to tell us next to nothing about what's coming.

Exactly. And this is definitely why Apple didn't want benchmarks. This is just for developers to get their software ready. The chip we see in the first Mac will be completely different from the A12Z.
 