Moving to the AS MBP and iMac, and regarding their FP performance: While Apple isn't targeting HPC scientific computing, the Mac does have a strong presence among scientists, who use Macs not only for office work, but also for desktop scientific computing, e.g., with apps like Mathematica and Matlab, as well as for prototyping. Apple recognizes this, giving these apps equal prominence in its marketing of the MBP and iMac alongside "creative" apps like Autodesk, Logic Pro, Photoshop, etc. (see screenshot below). Many Mathematica and Matlab calculations are FP, so it will be disappointing if this is a weak area for AS.

I would not at all be surprised if FP lags Intel on these machines (at least traditional, non-saturating FP). Never been a priority for Apple (nor for most CPU designs). It’s also a little tricky to compare, because IEEE FP, which I believe is what ARM supports, is quite different than Intel’s FP. And porting apps from one to the other is only easy if you don’t care about differences in results and precision.
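To make the porting concern concrete, here's a contrived sketch (mine, purely illustrative, not from any particular app): the same arithmetic can print different answers depending on whether intermediates are rounded to 64-bit doubles at every step or held in a wider format such as x87 80-bit extended precision.

```cpp
// Contrived example: the result depends on the precision of intermediates.
// With strict 64-bit IEEE doubles (SSE2, ARM/Apple silicon) this prints 0,
// because a + 1.0 rounds back to a. If the compiler keeps the intermediate
// in x87 80-bit extended precision (e.g. 32-bit x86 with -mfpmath=387),
// a + 1.0 can be held exactly and the program may print 1 instead.
#include <cstdio>

int main() {
    volatile double a = 9007199254740992.0;  // 2^53; the next representable double is 2^53 + 2
    double r = (a + 1.0) - a;                // the exact value is 1, but it may round away
    std::printf("%g\n", r);
    return 0;
}
```

Code that expects bit-identical results, or depends on a particular rounding behaviour, is exactly the sort of thing that needs re-validation when moving between the two.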
 
Thanks for the clarification. If you're saying that, for the types of apps one would generally expect to be used on a MacBook, you expect the SC performance of the first AS MacBook will (at least typically) be faster than *anything* Intel's got, not merely faster than the comparable Intel laptop chips, then there is no inconsistency with using GB as a predictor.

If so, that would be an exciting prediction. It would certainly blow the computer industry away if it comes to pass.

Moving to the AS MBP and iMac, and regarding their FP performance: While Apple isn't targeting HPC scientific computing, the Mac does have a strong presence among scientists, who use Macs not only for office work, but also for desktop scientific computing, e.g., with apps like Mathematica and Matlab, as well as for prototyping. Apple recognizes this, giving these apps equal prominence in its marketing of the MBP and iMac alongside "creative" apps like Autodesk, Logic Pro, Photoshop, etc. (see screenshot below). Many Mathematica/Matlab/prototyping calculations are FP, so it will be disappointing if this is a weak area for AS.

As an interesting historical note: When Apple introduced the dual-2.0 GHz G5 PowerPC at WWDC 2003, Steve Jobs claimed Mathematica was 2.3x faster on the PPC than on the fastest PC chip, a dual 3.06 GHz Xeon. However, when I later ran Mathematica timing tests myself, I found it was typically 20%-40% slower on my G5 than on my plain-vanilla PC (a 2.8 GHz Pentium IV). When I discussed this with a contact at Wolfram (the maker of Mathematica), he said the performance comparison was based on "a pretty specific function, large integer multiplication", where the G5 is faster. I.e., Apple was only able to make the performance claim because it completely cherry-picked the data.


[Screenshot: Apple's MBP/iMac marketing page featuring Mathematica and Matlab alongside the creative apps]
Fortunately, Apple doesn't cherry-pick benchmarks anymore and hasn't for a while. For example, they have been underselling their performance improvements recently. Look at GPU compute: the A14 seems to be around 2x faster than the A12, but they only claim 30%. They probably cherry-picked the worst-case scenario.

Another example is Anandtech's deep dive of the A13, which found GPU performance was 20-60% faster. Apple only claimed 20% faster.
 
I would not at all be surprised if FP lags Intel on these machines (at least traditional, non-saturating FP).
I don't know how this would translate to comparative FP performance for, say, Mathematica or C++, but SPEC06 comparisons have found the A13 doesn't keep up with Intel/AMD desktop chips as well in FP as it does in integer; though the difference was only ~15%.

Quoting from Anandtech: "Last year I’ve noted that the A12 was margins off the best desktop CPU cores. This year, the A13 has essentially matched best that AMD and Intel have to offer – in SPECint2006 at least. In SPECfp2006 the A13 is still roughly 15% behind."


It’s also a little tricky to compare, because IEEE FP, which I believe is what ARM supports, is quite different than Intel’s FP.
Given this, how does SPECfp2006 handle the FP comparison between AS and Intel? [screenshot from Anandtech]

 
I don't know how this would translate to comparative FP performance for, say, Mathematica or C++, but SPEC06 comparisons have found the A13 doesn't keep up with Intel/AMD desktop chips as well in FP as it does in integer; though the difference was only ~15%.

Quoting from Anandtech: "Last year I’ve noted that the A12 was margins off the best desktop CPU cores. This year, the A13 has essentially matched best that AMD and Intel have to offer – in SPECint2006 at least. In SPECfp2006 the A13 is still roughly 15% behind."



Given this, how does SPECfp2006 handle the FP comparison between AS and Intel? [screenshot from Anandtech]

I’m not sure. Not an expert on SPECFP (though we used to use it as a benchmark when developing processors). Best guess is it just uses the maximum precision provided by whatever CPU is being tested. (It gets pretty complicated because many CPUs store intermediate results in a higher precision than final results, etc.)
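For what it's worth, C and C++ do expose how a given target evaluates intermediates; this little probe is a hypothetical example of mine (not anything SPEC itself uses) that reports the evaluation method and the available precisions.

```cpp
// Illustrative probe of how the target handles floating-point intermediates.
// FLT_EVAL_METHOD == 0: float/double expressions are evaluated in their own
//   type (typical for x86-64 with SSE2 and for ARM/Apple silicon).
// FLT_EVAL_METHOD == 2: intermediates are evaluated in long double
//   (typical for 32-bit x86 code using the x87 FPU).
#include <cfloat>
#include <cstdio>

int main() {
    std::printf("FLT_EVAL_METHOD           = %d\n", FLT_EVAL_METHOD);
    std::printf("double mantissa bits      = %d\n", DBL_MANT_DIG);   // 53 for IEEE binary64
    std::printf("long double mantissa bits = %d\n", LDBL_MANT_DIG);  // 64 for x87 extended; 53 on Apple silicon
    return 0;
}
```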
 
Fortunately, Apple doesn't cherry-pick benchmarks anymore and hasn't for a while. For example, they have been underselling their performance improvements recently. Look at GPU compute: the A14 seems to be around 2x faster than the A12, but they only claim 30%. They probably cherry-picked the worst-case scenario.
The comparisons to which you are referring are those Apple is making to its own products. There, I agree, it has been conservative.

I was, by contrast, talking about a comparison Apple made to a competitor's product (more precisely, in that case, to CPUs used in competing products). That's an entirely different ballgame, and there the marketing distortions (which everyone does) continue in full force.

Granted, we haven't seen CPU performance comparisons to non-Apple products from Apple in quite a while. But we have seen comparisons in other areas. Consider, for instance, that (at WWDC 2019) they called their XDR monitor the "world's best pro display", ridiculously comparing it to the $43K Sony BVM-HX310 Professional Master Monitor. That comparison doesn't even pass the laugh test: the XDR doesn't qualify as a mastering display, period, because it doesn't meet the minimum Dolby Vision specifications for a professional HDR mastering monitor.*

Thus, given Apple's track record when it comes to comparison with other manufacturers' products, I would view any such comparisons guardedly.

*I thought this was unfortunate, because they've lately done so much to try to regain their credibility within the pro-creative community, and then they go and shoot themselves in the foot with a risible claim like that.
 
I would not at all be surprised if FP lags Intel on these machines (at least traditional, non-saturating FP). Never been a priority for Apple (nor for most CPU designs). It’s also a little tricky to compare, because IEEE FP, which I believe is what ARM supports, is quite different than Intel’s FP. And porting apps from one to the other is only easy if you don’t care about differences in results and precision.
Intel added 512-bit floating point (AVX-512) to their CPUs, which is nice for benchmark results but quite useless for customers. (If you need it, you should use the GPU.) I wrote a little test program adding a few hundred billion square roots, auto-vectorised by Clang and using all the cores, and it ran twice as fast on an iPhone XR as on a quad-core Mac.
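Not the actual test program, but here's a rough sketch of that kind of benchmark: a sum of square roots split across all cores, with an inner loop simple enough for Clang to auto-vectorise (the iteration count is an arbitrary placeholder, and the floating-point reduction only vectorises with something like -ffast-math).

```cpp
// Rough sketch of a sqrt-summing benchmark; build with e.g.:
//   clang++ -O3 -ffast-math -std=c++17 sqrt_sum.cpp -o sqrt_sum
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    const long long total = 100'000'000'000LL;  // placeholder: "a few hundred billion"
    const unsigned nthreads = std::max(1u, std::thread::hardware_concurrency());

    std::vector<double> partial(nthreads, 0.0);
    std::vector<std::thread> workers;

    for (unsigned t = 0; t < nthreads; ++t) {
        workers.emplace_back([&, t] {
            // Each thread sums a contiguous chunk; the loop body is a plain
            // sqrt-and-accumulate that the vectoriser can turn into SIMD code.
            const long long chunk = total / nthreads;
            const long long begin = t * chunk;
            const long long end   = (t + 1 == nthreads) ? total : begin + chunk;
            double sum = 0.0;
            for (long long i = begin; i < end; ++i)
                sum += std::sqrt(static_cast<double>(i));
            partial[t] = sum;
        });
    }
    for (auto& w : workers) w.join();

    double sum = 0.0;
    for (double p : partial) sum += p;
    std::printf("sum = %.6e\n", sum);
    return 0;
}
```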
 
This puts the iPad Pro in a weird situation: slower in single-core and just slightly faster in multi-core. The Air and the Pro should be updated at the same time if both are always going to get last-generation chips, while the low-cost iPad can be updated at a different time, as it will use previous-generation components.
 
Nope, you still don't understand. Honestly, at this point, this is one of those things that could only be explained by a face-to-face discussion—which is one of the general limitations of forums. The only other thing I could suggest would be to find a friend who is both a native English speaker (or has very high fluency) and has a PhD in the hard sciences with a significant publication record (the rigor of getting a PhD and publishing in those fields provides good training in parsing language), and ask him or her to look over the discussion chain and explain it to you. I'm sorry I can't do any better.

I recognize the above may seem a bit presumptuous, but please remember that you're the one that challenged my understanding of the discussion. I've done my best to explain myself to you, without success.

That’s not presumptuous, just plain ignorant, considering what cmaier has already said.
 
Intel added 512-bit floating point (AVX-512) to their CPUs, which is nice for benchmark results but quite useless for customers. (If you need it, you should use the GPU.) I wrote a little test program adding a few hundred billion square roots, auto-vectorised by Clang and using all the cores, and it ran twice as fast on an iPhone XR as on a quad-core Mac.

Now try matmul with the Intel C++ compiler (icc with the '-qopt-matmul' flag) and you will see.

Also, there is a big overhead in transferring data to the GPU, so if your matrices are too small you're better off computing on the CPU.
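For anyone curious what that flag acts on: as I understand it, it lets the compiler recognise a plain matrix-multiply loop nest and substitute an optimised library routine. A minimal sketch of such a loop nest (names and sizes are mine, purely illustrative):

```cpp
// Minimal sketch of the kind of triple loop a matmul-recognition pass
// is designed to spot and replace with an optimised library call.
#include <cstdio>
#include <vector>

constexpr int N = 1000;  // arbitrary placeholder size

void matmul(const std::vector<double>& A,
            const std::vector<double>& B,
            std::vector<double>& C) {
    // Classic i-j-k loop over row-major N x N matrices: C = A * B.
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j) {
            double acc = 0.0;
            for (int k = 0; k < N; ++k)
                acc += A[i * N + k] * B[k * N + j];
            C[i * N + j] = acc;
        }
}

int main() {
    std::vector<double> A(N * N, 1.0), B(N * N, 2.0), C(N * N, 0.0);
    matmul(A, B, C);
    std::printf("C[0][0] = %g\n", C[0]);  // expect 2000 with these fill values
    return 0;
}
```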
 
Now try matmul with the Intel C++ compiler (icc with the '-qopt-matmul' flag) and you will see.

Also, there is a big overhead in transferring data to the GPU, so if your matrices are too small you're better off computing on the CPU.

Do you actually personally write Fortran code? Seems to me one of the main purposes of Fortran the past twenty years was to make Intel look good in some benchmarks.

And, like @gnasher729 said, if you really need that, are you sure you don't want to do it in the GPU instead?
 
There are some qualifiers here, but yes, expect it to be about twice as fast. But:

  • your MBP has way more thermal headroom. It will sustain its performance for much longer. That iPad will throttle after a while. For lengthy computational tasks, an MBP is simply a better choice.
  • to be fair, that was already a two-year-old architecture by that point. Apple skipped Broadwell and went straight to Skylake (and was also a bit late on that), so 2015 wasn't a great year to get an up-to-date CPU.



The SQ2 appears to be a variant of the 8cx Gen 2 (rolls right off the tongue), which still has 4+4 Kryo 495 cores, from December 2018.

It might do a little better than the SQ1 (which is roughly on the iPhone 7 level), but probably not by much.

Ah, I see... so what's the Geekbench score I should expect from the new iPad Air that is sustainable for a long period of time, and what's the point of getting a higher Geekbench score on an iPad where you can't maintain it?
 
It’s what we’ve come to expect *from Apple*, but nobody else in the industry is achieving 20% year-over-year improvement over and over again.
And yet, no one is choosing Apple as a games platform of choice. Strange that.
 
Ah, I see... so what's the Geekbench score I should expect from the new iPad Air that is sustainable for a long period of time, and what's the point of getting a higher Geekbench score on an iPad where you can't maintain it?
The most I’ve ever seen my iPad Pro throttle is about 20%. Anandtech didn’t see much throttling either. So even if the iPad throttles, the new Air would still score way higher than a vintage MacBook Pro. In fact, if you define throttling as how far the CPU drops below its peak frequency, then even an iPad Pro throttles less than modern Intel MacBook Pros (because of the very aggressive turbo modes in MacBook Pros).

Add some active cooling to the Apple chip and I doubt it will even budge from peak frequency. Anandtech measured the latest Tiger Lake core at about 20 W under load, which is about half the 40 W of the i9-10900K. The A13, in contrast, uses about 4 W, and they have comparable performance.
 
Ah, I see... so what's the Geekbench score I should expect from the new iPad Air that is sustainable for a long period of time

Alas, Geekbench doesn't really do that. It's one of its weak spots, IMO.

and what's the point of getting a higher Geekbench score on an iPad where you can't maintain it?

A lot of use cases involve short bursts. You don't want to, say, wait several minutes for a website to render; you want the CPU to spin up for fractions of a second, render it quickly, then spin down again.

Very few use cases actually go beyond a few minutes. Compiling a large software project, say, or transcoding video. And people who buy a MacBook Pro are more likely to care about those use cases. But even they will often benefit from short bursts.
 
The comparisons to which you are referring are those Apple is making to its own products. There, I agree, it has been conservative.

I was, by contrast, talking about a comparison Apple made to a competitor's product (more precisely, in that case, to CPUs used in competing products). That's an entirely different ballgame, and there the marketing distortions (which everyone does) continue in full force.

Granted, we haven't seen CPU performance comparisons to non-Apple products from Apple in quite a while. But we have seen comparisons in other areas. Consider, for instance, that (at WWDC 2019) they called their XDR monitor the "world's best pro display", ridiculously comparing it to the $43K Sony BVM-HX310 Professional Master Monitor. That comparison doesn't even pass the laugh test: the XDR doesn't qualify as a mastering display, period, because it doesn't meet the minimum Dolby Vision specifications for a professional HDR mastering monitor.*

Thus, given Apple's track record when it comes to comparison with other manufacturers' products, I would view any such comparisons guardedly.

*I thought this was unfortunate, because they've lately done so much to try to regain their credibility within the pro-creative community, and then they go and shoot themselves in the foot with a risible claim like that.

The problem here is your own expectations and what you understand by "best". The world's best pro display is the one that professionals can actually afford to buy. Someone could say that the Tesla Model S is the world's best car, but this does not necessarily mean it is the fastest or most luxurious one, or the one with the most features. The XDR monitor from Apple is the BEST display for the vast majority of professionals, except for a few individuals/companies who actually need or can afford a Sony BVM-HX310. Overall, the XDR monitor is the world's best display for pro users, regardless of the fact that there may be some other monitor out there, with an insane price tag, that provides a better feature set for a specific task, such as true HDR. Something just can't be the "best" if it is virtually inaccessible for ordinary mortals, so I consider Apple's claim absolutely valid in this case.
 
The problem here is your own expectations and what you understand by "best". The world's best pro display is the one that professionals can actually afford to buy. Someone could say that the Tesla Model S is the world's best car, but this does not necessarily mean it is the fastest or most luxurious one, or the one with the most features. The XDR monitor from Apple is the BEST display for the vast majority of professionals, except for a few individuals/companies who actually need or can afford a Sony BVM-HX310. Overall, the XDR monitor is the world's best display for pro users, regardless of the fact that there may be some other monitor out there, with an insane price tag, that provides a better feature set for a specific task, such as true HDR. Something just can't be the "best" if it is virtually inaccessible for ordinary mortals, so I consider Apple's claim absolutely valid in this case.

That’s not entirely true. Many businesses have a minimum threshold of quality in order for a product to be feasible. If your product doesn’t hit the mark, it can be the best thing on this planet and yet still be useless to the target audience. Apple's XDR display is a great example: it’s too expensive for the prosumer, yet not good enough to be used in high-budget movie production, leaving it out there in the wild for people who can afford it for bragging rights. Reference monitors are all about reliability, which is why they are so expensive. Compared to the production costs, those costs are peanuts, which is why the XDR's price tag doesn’t matter. I’m pretty sure Apple was aware of that, and I'm pretty sure Apple didn’t really intend to target that audience in the first place ... this is a typical luxury good for whales.
 
it’s too expensive for the prosumer, yet not good enough to be used in high-budget movie production, leaving it out there in the wild for people who can afford it for bragging rights

I highly doubt Apple's intended target audience are the three podcasters who bought it for bragging rights.

Either they made some poor choices, or you're exaggerating and it's doing just fine in some professional uses.
 
Top performance, yet a 4:3 aspect ratio when it comes to external displays. Will it be fixed in iOS 15, perhaps?
 
As soon as they announced Apple silicon I became defensive of Intel, as I have loved the last 14 years of Intel processors. But now looking at this has me wanting the new MBP, maybe second gen. I have the mid-2019 MacBook Pro, and the thermals and battery life leave a lot to be desired.
 
Do you actually personally write Fortran code? Seems to me one of the main purposes of Fortran the past twenty years was to make Intel look good in some benchmarks.

And, like @gnasher729 said, if you really need that, are you sure you don't want to do it in the GPU instead?

I've played with Intel MKL (got very good perf on that too), not matmul, but I know from my ex-colleague that matmul is crazy fast.

There are a lot of use cases where you're better off computing on the CPU than on the GPU.
If you have to take the product of two 1000×1000 matrices, you will have finished computing it on the CPU before the data even reaches your GPU.

Remember that the data needs to go from the CPU to the GPU, be computed on the GPU, and travel back to the CPU; the overhead is huge.
For computer-vision training it makes sense, as you train in batches of huge matrices, but that is not always your use case.
In HPC, you often need to use the result from a big matrix computation to form the next one, etc.

It's always good to have a CPU able to handle this.
 