No video accelerators are engaged in Geekbench. If you want to see how good the video accelerators are, there is now a separate Geekbench Compute benchmark that attempts to measure that.

What benchmark description? The PDF you linked is only a marketing document; it is not sufficient evidence that they don't use the accelerators the CPU normally would.

The benchmark description doesn't say that the graphics / video accelerators are switched off.
Every CPU will, and should, use its accelerators for almost all of the Geekbench tasks.

Intel has special hardware accelerators for graphics, JPEG and MPEG, which is why websites appear so fast on modern CPUs.

ARM MUST use a special DSP for JPEG, even the A10X; otherwise performance drops very badly and battery drain gets much worse.
 
The descriptions clearly state they are not using the GPU. So you are saying they are lying? If they were, then you should produce evidence that they are lying and discredit the whole company. That would make you happy.

Geekbench does have another benchmark that tests GPU workloads - that is Geekbench Compute. Just use your mind: why would they produce a completely separate benchmark if, as you say, the original Geekbench were already testing the GPU on all tasks?
 

1. The document only says the "Camera" sub-test doesn't use the GPU accelerator, not the full benchmark.
2. "Hardware accelerator" doesn't necessarily mean the GPU; it means any special-purpose hardware or DSP handling a task,
which may or may not sit inside the CPU (the GPU itself can also be counted as special hardware inside the SoC).
Even if the benchmark doesn't call the GPU directly (the Camera workload doesn't really need it), the CPU will use every resource it can, and a CPU will normally seek the GPU's help for processing.
 
So a bit of digging shows that when Geekbench crashes on the Mac, it reveals that it's using the Accelerate framework, which, incidentally, is also available on iOS.
https://developer.apple.com/documentation/accelerate

Case closed. The video accelerator IS in use during Geekbench. It's not just the CPU anymore. This would explain why the newer iPad and Kaby Lake chips benchmark higher than the older computers, as all of them have the new video accelerator block that supports HEVC.

Also of note: the Accelerate framework will kick in the GPU for matrix operations, image processing (DSP) and large-integer calculations as well, so yes, the A10X is also benefiting from its GPU here. That would explain why it scores higher than the Mac Pro.

And no, my MacBook 12" is not faster than the Mac Pro. I can run a simple prime-number-generating algorithm on each one to show that single-thread performance on the Mac Pro is leaps and bounds beyond the 12" MacBook.
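For reference, a single-threaded test along those lines is trivial to write. Here is a minimal sketch - my own illustration, not the exact program referred to above - that can be built with `swiftc -O primes.swift` and run on both machines:

```swift
import Foundation

// Count primes below `limit` by naive trial division.
// Deliberately single-threaded and integer/branch heavy,
// so it exercises exactly one core.
func countPrimes(below limit: Int) -> Int {
    var count = 0
    for n in 2..<limit {
        var isPrime = true
        var d = 2
        while d * d <= n {
            if n % d == 0 { isPrime = false; break }
            d += 1
        }
        if isPrime { count += 1 }
    }
    return count
}

let start = Date()
let found = countPrimes(below: 2_000_000)
let seconds = Date().timeIntervalSince(start)
print("Found \(found) primes in \(String(format: "%.2f", seconds)) s")
```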
 
> on desktop, an app may only be able to access anywhere between 10 -> 70% of the device's resources, but not much more.

Not sure what you're suggesting here. The OS is somehow consuming 30-90% of CPU, so an app can only get what's left? That would be a terrible OS; not macOS or any other reasonable OS. If the OS constantly consumed 30%+ of resources we'd have very low battery life!
 
Yes. The problem is that there now exists something called "Turbo Boost" which "boosts the performance" of the CPU. Or at least this is true in the case of all Mac computers at this point in time.

Turbo Boost gives roughly an additional 50% performance. So you can either look at it as maximum Turbo Boost allowing 150% performance, or, if maximum performance (with Turbo) is 100%, then the "without Turbo" baseline is only about 67-70% (1 / 1.5 ≈ 0.67).

But macOS will NOT allow the CPU to reach maximum Turbo Boost, in order to keep CPU temperatures down. This is partly because Apple does not want to ramp up the fan and, in the case of the MacBook, does not want the machine to get so hot that people can't keep it on their laps, even though the CPU itself can withstand much higher temperatures. It is also a response to many cases like this:
http://osxdaily.com/2016/07/13/disable-enable-turbo-boost-mac/

...where people want to disable Turbo Boost just to gain battery life.

Is this temperature thing a real issue? Yes!
https://www.notebookcheck.net/Face-Off-Apple-MacBook-12-Core-m3-Core-m5-and-Core-m7.172046.0.html

The Core m3 cannot reach its maximum Turbo of 2.2 GHz, despite the lower temperatures. The clock will be steady at 1.9 to 2.0 GHz in the Cinebench R15 Single test, which corresponds with a consumption of around 4 watts.

All three processors cannot utilize their maximum potential in the single-core tests

The CPUs only reach about 70°C before they drop in performance, while the maximum these CPUs can take is up to 100°C.

Perhaps saying this applies to all desktops is a stretch, but it is a real problem with the MacBook, and potentially also with the MacBook Pro, since Apple is so fixated on thin-and-light designs along with low temperatures and low fan noise.

Just to note, this is a hard constraint. Apple seems to enforce it at the firmware level, so the limit stands no matter what OS you install (as shown in the article linked above).

Now you know why the m3 MacBook is so slow. It's not because it's not capable, but because Apple is purposefully limiting it.

There are signs that they have lifted this limit a bit with the new 2017 MacBook, though, so we'll see.
 
Case closed. The video accelerator IS in use during Geekbench. It's not just the CPU anymore. This would explain why the newer iPad and Kaby Lake chips benchmark higher than the older computers, as all of them have the new video accelerator block that supports HEVC.

Case not closed
https://stackoverflow.com/questions...ework-vdsp-gather-memory-from-the-gpu-back-to

It does not use the GPU. Apple has OpenCL and Metal for the GPU, and those are benchmarked in Geekbench Compute.

Also, the benchmark description explicitly says that the imaging DSP is not used.

What is used is the SIMD units, as stated in the benchmark description - things like AVX and NEON. That is one reason the newer processors post faster results: better use of the SIMD units.
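For what it's worth, the Accelerate framework mentioned earlier is exactly this kind of CPU-side SIMD: its vDSP routines are dispatched to NEON on ARM and SSE/AVX on Intel, not to the GPU. A tiny illustrative example (my own, not taken from Geekbench):

```swift
import Accelerate

// Element-wise multiply of two vectors with vDSP.
// This runs on the CPU's SIMD units (NEON on ARM, SSE/AVX on Intel),
// not on the GPU.
let a: [Float] = [1, 2, 3, 4]
let b: [Float] = [10, 20, 30, 40]
var result = [Float](repeating: 0, count: a.count)

vDSP_vmul(a, 1, b, 1, &result, 1, vDSP_Length(a.count))
print(result) // [10.0, 40.0, 90.0, 160.0]
```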
 
Why do you keep mentioning the imaging DSP?

The video accelerator is independent of the imaging DSP, which is used only for processing... images (surprising?). I'm talking about video encoding and decoding.

I might have been wrong about the Accelerate framework but the fundamental question remains: what exactly is Geekbench doing that causes those weird benchmark scores?

Let's say I agree with you for a moment and that the Apple A10X chip really is that fast. Why does Geekbench say my MacBook 12" is faster than the Mac Pro? (This is a desktop-OS to desktop-OS, Apple-to-Apple comparison.)
 
I'm unclear what you're trying to say. Turbo Boost offers a trade-off between speed and power consumption (heat). The chip always runs as fast as it can inside its thermal envelope; that is 100% of its resources. Saying the OS only allows 10-70% of this for apps makes little sense. A, say, 2.3 GHz CPU cannot sustain a turbo-boosted 3 GHz without some insane cooling. It can hit the turbo speeds briefly, which for many workloads is a great trade-off.

Something I learned was that the Apple SoCs don't have a Turbo Boost equivalent - or if they do, I couldn't find a reference. I had just assumed they did. Instead they opted for separate slower, lower-power cores (since the A10, I think).
 
Why do you keep mentioning the imaging DSP?

The video accelerator is independent of the imaging DSP, which is used only for processing... images (surprising?). I'm talking about video encoding and decoding.

I might have been wrong about the Accelerate framework but the fundamental question remains: what exactly is Geekbench doing that causes those weird benchmark scores?

Let's say I agree with you for a moment and that the Apple A10X chip really is that fast. Why does Geekbench say my MacBook 12" is faster than the Mac Pro? (This is a desktop-OS to desktop-OS, Apple-to-Apple comparison.)
If you read the Geekbench benchmark descriptions, you would know that video encoding and decoding are not there at all. There are imaging operations, but it explicitly says the imaging DSP is not used.

I briefly mentioned that SIMD instructions are used. The SIMD units are more advanced in the MacBook than in the Mac Pro, i.e. AVX.
 
Apart from the performance discussion, why do you think it would be worth it?
They are selling about 4 million Macs, so maybe 400k of those are MacBooks. Compare that to about 60 million iOS devices.
It doesn't look like Apple would save much money...

And then you have the biggest problem: convincing developers to also support their apps on ARM with those sales numbers. It's the same reason there is so little demand for Windows Mobile.

And honestly, I don't think that Intel, a company focused purely on CPUs, is lazier or dumber at development, or somehow missing the magical knowledge Apple has for its A-series processors...
 
Case not closed
https://stackoverflow.com/questions...ework-vdsp-gather-memory-from-the-gpu-back-to

It does not use the GPU. Apple has OpenCL and Metal for the GPU, and those are benchmarked in Geekbench Compute.

Also, the benchmark description explicitly says that the imaging DSP is not used.

What is used is the SIMD units, as stated in the benchmark description - things like AVX and NEON. That is one reason the newer processors post faster results: better use of the SIMD units.

It just doesn't use the resources of an external GPU; that doesn't mean the CPU won't internally use a DSP or GPU unit. That is transparent to software and cannot be controlled by the benchmark. Even if the Camera test is said not to use GPU resources, that doesn't mean the CPU or the compiler doesn't.

Also, to repeat: the link you provided is just a "MARKETING" document; it doesn't mention any technical detail about the benchmark.

For example, the Camera sub-test in Geekbench has nothing to do with an actual camera; it is just a series of image-processing steps. A real camera pipeline is implemented in hardware and controlled by the system on iOS, macOS, Android and the rest, and is fully accelerated.

macOS does allow maximum Turbo Boost; it's Intel that misleads people by advertising the CPU as "Turbo Boost up to".
The maximum turbo is only reached on a single core, and that rarely happens.

The MacBook's restriction actually comes from the temperature of the battery and the motherboard:
once the battery heats up to around 40°C, or the motherboard reaches about 47°C, the CPU must reduce its speed to avoid overheating the other parts.

But that is not why the MacBook is slow; the MacBook is fast enough when it runs at 2.0 GHz. When it really heats up under GPU+CPU load or while compiling, though, it drops to a very low frequency unless something external (say, a fan) cools it down.

Cinebench better reflects sustained speed, but the Cinebench workload is too simple; it can also be helped by the internal DSP, and the CPU tends to switch off parts it isn't using to save power. If you run a really complex task such as an Xcode compile, the MacBook 12 heats up very fast and draws much more power (as shown in Intel Power Gadget); in that case the m3 will use more than 12 W for the cores, a very heavy workload.
 
1. The memory weighting is not that high, but it can still help a lot; for example, a Memory Copy result of 16 GB/s instead of 12 GB/s can add a few hundred points (it can lift the memory sub-score from about 3500 to 4600).
But that doesn't really prove anything.

2. Most of the tests are actually helped by hardware DSPs in an ARM CPU,
such as JPEG compression/decompression, HTML5 and the Camera image-processing steps...,
and modern CPUs use special circuitry for compression (LZMA).
Also, the SQLite test is highly affected by cache size.
Geekbench describes it as "Quickly Generating Billion-Record Synthetic Databases", but in practice we rarely use SQLite that way, and SQLite is not optimised for that task; SQLite is designed as a lightweight database for storing small amounts of information that are frequently updated and queried later.

The LLVM test is also useless; the code is too small, and there is no linking step to reflect a real build.

Also, I don't see why Dijkstra is used as an integer test; it's all misleading on Geekbench's part.

Maybe Geekbench thinks floating point roughly equals 3D applications, and everything else they call integer.

I really am sorry, but I cannot understand your English at all. Is it being translated from your native language?

Please do not be offended by this; I really would, and do, like to read informed opinion on this site, but having only one language at my disposal can be limiting.

I have no real suggestions as to how to improve my understanding of what you write. That is a shame.

Regards. Sharkey
 
It just doesn't use the resources of an external GPU; that doesn't mean the CPU won't internally use a DSP or GPU unit. That is transparent to software and cannot be controlled by the benchmark. Even if the Camera test is said not to use GPU resources, that doesn't mean the CPU or the compiler doesn't.

Also, to repeat: the link you provided is just a "MARKETING" document; it doesn't mention any technical detail about the benchmark.

For example, the Camera sub-test in Geekbench has nothing to do with an actual camera; it is just a series of image-processing steps. A real camera pipeline is implemented in hardware and controlled by the system on iOS, macOS, Android and the rest, and is fully accelerated.
I didn't understand half of that - but you seem to have a misconception about how to program for the GPU.

If you program for the CPU, it will NOT automatically somehow get help from the GPU or any other DSP. You have to program specifically for the GPU; that's why Apple has the Metal framework for compute, and that's also why there is a separate Geekbench Compute benchmark. Just think before you spout more nonsense.

If you don't understand that - there is no help for you.
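To illustrate the point: getting any work onto the GPU on Apple platforms is an explicit, multi-step affair - you create a device, build a pipeline from a kernel function and encode the dispatch yourself. None of this happens behind an ordinary CPU loop. A rough sketch (the kernel name `add_arrays` is a hypothetical placeholder):

```swift
import Metal

// Explicitly dispatching a compute kernel to the GPU with Metal.
// Nothing like this setup happens implicitly when plain CPU code runs.
guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue(),
      let library = device.makeDefaultLibrary(),
      let kernel = library.makeFunction(name: "add_arrays"), // hypothetical kernel
      let pipeline = try? device.makeComputePipelineState(function: kernel),
      let commandBuffer = queue.makeCommandBuffer(),
      let encoder = commandBuffer.makeComputeCommandEncoder() else {
    fatalError("Metal setup failed")
}

encoder.setComputePipelineState(pipeline)
// ... bind the kernel's input/output buffers here ...
encoder.dispatchThreads(MTLSize(width: 1024, height: 1, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
encoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilCompleted()
```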
I'm unclear what you're trying to say. Turbo Boost offers a trade-off between speed and power consumption (heat). The chip always runs as fast as it can inside its thermal envelope; that is 100% of its resources. Saying the OS only allows 10-70% of this for apps makes little sense. A, say, 2.3 GHz CPU cannot sustain a turbo-boosted 3 GHz without some insane cooling. It can hit the turbo speeds briefly, which for many workloads is a great trade-off.

Something I learned was that the Apple SoCs don't have a Turbo Boost equivalent - or if they do, I couldn't find a reference. I had just assumed they did. Instead they opted for separate slower, lower-power cores (since the A10, I think).

I also called him out on the ridiculous 10-70% figure, and he's been trying to defend it and won't let go. Imagine an OS that hogs up to 90% of your resources for no apparent reason; it would be the worst modern OS ever created.
 

If you don't understand basic modern CPU architecture for Intel and ARM, there is no help for you.
Even Linus Torvalds has expressed the opinion that JPEG work belongs on some kind of GPU and isn't really an integer workload.
He also said the Camera test is confusing.

http://www.realworldtech.com/forum/?threadid=159853&curpostid=159860
 

Did you even read the reply from the Geekbench developer in the very next post? THE GPU IS NOT USED.

You are basically saying the results cannot be true, so let's just try to discredit Geekbench by claiming the benchmark is rubbish and that they are lying about what the benchmark tests, with no evidence whatsoever.
 
I don't think everything is ready yet, but they could do it in the future; it's cheaper, more powerful and less power-hungry.
 
Yeah - performance is not an issue. It is interesting that Ars Technica just reviewed the latest Surface Pro with the top Kaby Lake i7 (with fan), so similar to the 2017 MacBook Pro 13" processor:

https://arstechnica.com/gadgets/2017/06/surface-pro-review-incremental-improvement-isnt-enough/

You can compare with the ipad Pro 10.5 review here:

https://arstechnica.com/apple/2017/...-hardware-patiently-waiting-for-pro-software/

For Kraken, lower is better (it is a time); for all the other tests, higher is better.
Surface Pro first number, iPad Pro 10.5 second number; the winner is marked in parentheses.

GeekBench 4.1 Single Core: 4548 / 3930 (Surface Pro)
GeekBench 4.1 Multi Core: 9228 / 9311 (iPad Pro)
Octane V2: 27280 / 31700 (iPad Pro)
Jetstream 1.1: 231 / 202.5 (Surface Pro)
Kraken 1.1: 970 / 969 (tie)
GfxBench T-Rex (1080p offscreen): 98.8 / 213.5 (iPad Pro)
GfxBench Manhattan (1080p offscreen): 62.2 / 99.6 (iPad Pro)
GfxBench Manhattan 3.1 (1080p offscreen): 49.6 / 74.3 (iPad Pro)
Geekbench 4.1 Compute: 30953 / 27673 (Surface Pro)

So end result is:
Surface Pro: 3
iPad Pro: 5
tie: 1

So the conclusion from a fairly wide range of benchmarks: the A10X's performance is very impressive and rivals the top-end Kaby Lake i7.
 
Definitely impressive. It'll be cool to see what it's like with four cores, although I think increasing single-core performance is more important at this stage. I feel like we might see double Intel's single-core performance before we see anything from Apple with regard to Macs - unless Intel makes a breakthrough.
 
I do not care whether the scores are comparable or not. But when I saw the Affinity demo at WWDC, I was pretty sure that a MacBook Pro cannot do that, at least not with comparable latency and without the fans humming.

Just take YouTube, for example. The iPhone can stream a 4K video without any hiccup, while the MBP's fans need to whirr loudly to stream the same video.

The iPad can load up to 4 parallel video streams on iMovie without a hiccup. My MacBook Air cannot do that.

Also when browsing in Safari, I am not seeing a perceptible difference in speed, and sometimes the iPad for me is faster.

Someone should do like a speed test comparison between launching apps, browsing, playing games etc. on a Mac vs iPad

The above are comparable examples that show the A10X is up to speed. It is no longer true that tablets are slower; being at least on par in performance is a marvel in itself.

The lesson being, benchmarks do not matter. For most people, the iPad is as fast or faster for things that they mostly need to do.
 
I do not care whether the scores are comparable or not. But when I saw the Affinity demo at WWDC, I was pretty sure that a MacBook Pro cannot do that, at least not with comparable latency and without the fans humming.
Definitely not - even Serif is saying that Affinity Photo for iPad can be up to 4x faster than an i7 running Affinity Photo for Mac.

Reason for this - I think they are using the inbuilt imaging DSP of the A10X chip. Standard benchmarks don't show this because they are just CPU benchmarks.

Just take YouTube, for example. The iPhone can stream a 4K video without any hiccup, while the MBP's fans need to whirr loudly to stream the same video.

The iPad can load up to 4 parallel video streams on iMovie without a hiccup. My MacBook Air cannot do that.
iPads can make use of video encoding/decoding DSP.

Also when browsing in Safari, I am not seeing a perceptible difference in speed, and sometimes the iPad for me is faster.

Someone should do like a speed test comparison between launching apps, browsing, playing games etc. on a Mac vs iPad

The above are comparable examples that show the A10X is up to speed. It is no longer true that tablets are slower; being at least on par in performance is a marvel in itself.

The lesson being, benchmarks do not matter. For most people, the iPad is as fast or faster for things that they mostly need to do.
Yes - iPads are plenty fast.

Check out this review, and this sentence in particular:
https://www.laptopmag.com/reviews/tablets/apple-ipad-pro-10-5

"I handed the iPad Pro to our senior video producer, Judi, and she used Adobe Premiere Clip to take three clips shot with a Canon 5D and then edit them together on both the iPad Pro and a 15-inch MacBook Pro. After choosing the same in/out points and applying the same filters and transitions using Adobe Premiere Clip on the iPad Pro and Adobe Premier Pro on the laptop, it took 22 seconds to render and export the clip on the iPad, versus 2.5 minutes on the Mac."

Not an exactly fair comparison, as I believe the iPad was using the video DSP and the MacBook Pro had to make do with just the CPU - but you can see the advantage of the A10X chip having those extra units on board.
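That difference is plausible: an export like that on iOS goes through AVFoundation, which hands the actual H.264/HEVC encoding to the dedicated media engine rather than the CPU cores. A rough sketch of such an export (illustrative only; the input/output paths are placeholders):

```swift
import AVFoundation

// Transcode/export a clip with AVFoundation. On iOS the H.264/HEVC encode
// is performed by the hardware media engine, not by the CPU cores.
let inputURL = URL(fileURLWithPath: "/path/to/input.mov")   // placeholder
let outputURL = URL(fileURLWithPath: "/path/to/output.mp4") // placeholder

let asset = AVAsset(url: inputURL)
guard let export = AVAssetExportSession(asset: asset,
                                        presetName: AVAssetExportPreset1920x1080) else {
    fatalError("Could not create export session")
}
export.outputURL = outputURL
export.outputFileType = .mp4

export.exportAsynchronously {
    print("Export finished with status \(export.status.rawValue)")
}
```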
 
Since the Geekbench 4 mobile and desktop versions run exactly the same benchmark, and macOS lets the benchmark use all available resources while it runs, the results are comparable. I have no idea why you think they are different.

That's not true.
Lots of floating point operations are running at half precision on ARM due to missing FPUs.
Geekbench has been putting ARM at an advantage for years now.
Just take YouTube, for example. The iPhone can stream a 4K video without any hiccup, while the MBP's fans need to whirr loudly to stream the same video.

The iPad can load up to 4 parallel video streams on iMovie without a hiccup. My MacBook Air cannot do that.

Which generations? Video playback is about hardware decoding in the iGPU / media engine, not CPU or GPU performance.
A $150 Android phone with some Cortex-A53 potato chip and hardware decode support could beat a $3000 MacBook Pro that has to decode in software.
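On the Mac side you can actually ask VideoToolbox whether a codec will be decoded in hardware at all; if it answers no, playback falls back to the CPU. A quick illustrative check (assuming macOS 10.13+):

```swift
import CoreMedia
import VideoToolbox

// Ask whether this machine's media engine / iGPU can decode a codec in hardware.
// If not, players have to decode in software on the CPU.
let hevcInHardware = VTIsHardwareDecodeSupported(kCMVideoCodecType_HEVC)
let h264InHardware = VTIsHardwareDecodeSupported(kCMVideoCodecType_H264)

print("HEVC hardware decode: \(hevcInHardware)")
print("H.264 hardware decode: \(h264InHardware)")
```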
Also when browsing in Safari, I am not seeing a perceptible difference in speed, and sometimes the iPad for me is faster.

Mobile pages are often way smaller than their desktop counterparts and not so overblown with scripts.
 
That's not true.
Lots of floating point operations are running at half precision on ARM due to missing FPUs.
Geekbench has been putting ARM at an advantage for years now.
What are you talking about?
There are no missing FPUs in ARM. It is fully IEEE 754 compliant and supports half-, single- and double-precision floating-point arithmetic.

The Geekbench 4 floating-point tests run at the precision they are told to run at on all supported architectures; there is no deviation for ARM.
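As a trivial illustration (modern Swift, my own example, not Geekbench code): double-precision vector math is a first-class citizen on ARM just as it is on x86 - nothing silently drops to half precision.

```swift
import simd

// Double-precision SIMD on ARM: AArch64 NEON executes these lanes in full
// 64-bit IEEE 754, exactly as AVX/SSE would on an Intel Mac.
let x = SIMD4<Double>(1.0, 2.0, 3.0, 4.0)
let y = SIMD4<Double>(0.5, 0.25, 0.125, 0.0625)

let product = x * y       // element-wise multiply
print(product.sum())      // 1.625
```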
 
Remember what Jobs said when switching to Intel from PowerPC? Performance per watt. Although Intel made great strides there a while back, I'm not seeing the same improvements in performance or battery life since Haswell. Meanwhile Apple's Ax silicon keeps pushing performance while maintaining the battery life Apple likes (the legendary 10-hour battery life of the iPads). I'm sure we are still far off, but one probably cannot deny that an all-Apple "Mac," inside and out, is probably in the making (just as Apple had been compiling OS X for Intel since day one of the OS).

Apple also continues to pursue feature parity between OS X and iOS, even down to core technologies like Metal and APFS. It feels like baby steps toward prepping OS X to marry it to the Ax SoC. Obviously we won't just say goodbye to Intel (moving OS X developers to a new binary format won't be as simple as pushing iOS developers), but seeing Apple's obsession with sleek hardware, it's not a question of if, it's when.
 