That’s not entirely true. Many businesses have a minimum threshold of quality a product has to hit in order to be feasible. If your product doesn’t hit that mark, it can be the best thing on the planet and still be useless to the target audience. Apple's XDR display is a great example: it’s too expensive for the prosumer, yet not good enough to be used in high-budget movie production, leaving it out there for people who can afford it for bragging rights. Reference monitors are all about reliability, which is why they are so expensive. Compared to production costs, those prices are peanuts, which is why the XDR's price tag doesn’t matter. I’m pretty sure Apple was aware of that, and I'm pretty sure Apple didn’t really intend to target that audience in the first place ... this is a typical luxury good for whales
I would disagree. In your view there is no gap between prosumers and the Hollywood high-budget movie production studios, but the truth is that this gap is where the vast majority of smaller companies are positioned: design studios, small producers, advertising agencies, marketing studios, local TV channels, YouTubers, multimedia agencies, etc. The XDR may not be the most optimal choice if you are editing the next blockbuster worth hundreds of millions of dollars, but it is suitable for anything below that level. At a $5k price tag, the XDR is expensive but still affordable for many companies and even freelancers while delivering superb quality. Sony's reference monitor is probably only produced on order and there are a few hundred units at most in the whole world. That's not Apple's audience, and never was.
 
  • Like
Reactions: eulslix
There are a lot of use cases where you are better off computing on the CPU than on the GPU.
If you have to do a dot product of two square 1,000-row matrices, the CPU will have finished computing it before the data even reaches your GPU.

A dot product of two 1,000 x 1,000 matrices is what my ten-year-old iPad 2 does for breakfast. You don't need a fast CPU or a fast anything for that. It's just a million floating point operations. If you want to measure speed, use some real problems.
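
For scale, here is a minimal sketch (plain Swift, naive loop, no Accelerate) of the element-wise dot product being described; the values and timing harness are purely illustrative, not a benchmark:

import Foundation

// Element-wise (Frobenius) dot product of two 1,000 x 1,000 matrices:
// roughly a million multiply-adds, trivial for any modern CPU.
let n = 1_000
let a = [Double](repeating: 1.5, count: n * n)   // illustrative values
let b = [Double](repeating: 2.0, count: n * n)

let start = Date()
var sum = 0.0
for i in 0..<(n * n) {
    sum += a[i] * b[i]
}
let elapsed = Date().timeIntervalSince(start)
print("dot = \(sum), took \(elapsed * 1000) ms")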
 
  • Like
Reactions: Jimmy James
iOS 14 runs pretty well on my 4-year-old A10 iPhone 7 Plus.

I can very occasionally notice a pause which I attribute to RAM. It has 3 GB RAM. SoC speed is perfectly fine though.
Did you do a fresh install or just upgrade?
 
I would disagree. In your view there is no gap between prosumers and the Hollywood high-budget movie production studios, but the truth is that this gap is where the vast majority of smaller companies are positioned: design studios, small producers, advertising agencies, marketing studios, local TV channels, YouTubers, multimedia agencies, etc. The XDR may not be the most optimal choice if you are editing the next blockbuster worth hundreds of millions of dollars, but it is suitable for anything below that level. At a $5k price tag, the XDR is expensive but still affordable for many companies and even freelancers while delivering superb quality. Sony's reference monitor is probably only produced on order and there are a few hundred units at most in the whole world. That's not Apple's audience, and never was.

I had the same thought, though I still wonder whether it’s that easy. I mean, we’re talking about HDR, a technology that most viewers on YouTube don’t care about. Heck, YouTube’s compression algorithm is such a piece of garbage that a switch to H.265 would bring more of an effective advantage than any kind of 4K HDR shenanigans. At least that was my impression when I compared my uploads to hosting the video privately with proper compression (at a smaller bitrate...)

I’m not sure why YouTube refuses to adopt this standard, whether it’s licensing issues or processing issues, but as long as streaming services keep providing ****** codecs that water everything down, there is no point in moving to 4K or HDR. And therefore I don’t see the point of having a "casual" HDR reference monitor, since you’re mastering for an audience that doesn’t even get to enjoy full HD to its full potential
 
I had the same thought, though I still wonder whether it’s that easy. I mean, we’re talking about HDR, a technology that most viewers on YouTube don’t care about. Heck, YouTube’s compression algorithm is such a piece of garbage that a switch to H.265 would bring more of an effective advantage than any kind of 4K HDR shenanigans. At least that was my impression when I compared my uploads to hosting the video privately with proper compression (at a smaller bitrate...)

I’m not sure why YouTube refuses to adopt this standard, whether it’s licensing issues or processing issues, but as long as streaming services keep providing ****** codecs that water everything down, there is no point in moving to 4K or HDR. And therefore I don’t see the point of having a "casual" HDR reference monitor, since you’re mastering for an audience that doesn’t even get to enjoy full HD to its full potential
You must be hiding under a rock, cuz it's common knowledge on geek forums like these that YouTube is in the process of moving from VP9 to AV1.

AV1 is roughly as efficient as h.265 HEVC, with some recent tests claiming AV1 is now actually more efficient than h.265 HEVC.


AV1 is not limited to 4K either. Here's a 1080p AV1 music video:


[attached screenshot]
 
You must be hiding under a rock, cuz it's common knowledge on geek forums like these that YouTube is in the process of moving from VP9 to AV1.

AV1 is roughly as efficient as h.265 HEVC, with some recent tests claiming AV1 is now actually more efficient than h.265 HEVC.


AV1 is not limited to 4K either. Here's a 1080p AV1 music video:


[attached screenshot]

Yeah, the last time I was creating content was 2 years ago, but even recently I hadn’t noticed any improvements. As a viewer, then, I’m very glad to hear that YouTube has finally decided to step up its game; I was literally sick of all the motion/skin/... artefacts.

So yeah, I’ve been living under a rock, but hey, at least we’re finally here. In that case I stand corrected: the XDR might make a lot of sense for aspiring movie creators. It costs less than a quality lens, so there’s that...
 
Yeah, the last time I was creating content was 2 years ago, but even recently I hadn’t noticed any improvements. As a viewer, then, I’m very glad to hear that YouTube has finally decided to step up its game; I was literally sick of all the motion/skin/... artefacts.

So yeah, I’ve been living under a rock, but hey, at least we’re finally here. In that case I stand corrected: the XDR might make a lot of sense for aspiring movie creators. It costs less than a quality lens, so there’s that...
That YouTube AV1 playlist is from two years ago, and some of the videos are even older. :p
 
How was what I said untrue? Every year since 2015, at least, the A-series processors have increased in performance by about 20%.
Since 2015?
About 20%?

Narrowing down the scope of your statement because it's not entirely true for every instance?
 
Since 2015?
About 20%?

Narrowing down the scope of your statement because it's not entirely true for every instance?

I don’t have complete data at hand for prior to 2015, and I say “about” because it depends on what you compare to what (for example, the A12X to A12Z comparison doesn’t work), and often it’s more than 20%.

Here are some Geekbench 5 single-core scores:

A7 - 258
A8 - 312 (22%)
A9 - 530 (70%)
A10 - 725 (37%)
A11 - 905 (25%)
A12 - 1100 (22%)
A13 - 1322 (20%)
A14 - 1583 (20%)

So that goes back to around 2013. Happier now?
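
As a quick sanity check on those numbers, a small sketch (plain Swift) that recomputes the year-over-year gains and the compound average from the scores listed above:

import Foundation

// Year-over-year gains from the Geekbench 5 single-core scores quoted above.
let scores: [(chip: String, score: Double)] = [
    ("A7", 258), ("A8", 312), ("A9", 530), ("A10", 725),
    ("A11", 905), ("A12", 1100), ("A13", 1322), ("A14", 1583)
]
for i in 1..<scores.count {
    let gain = (scores[i].score / scores[i - 1].score - 1) * 100
    print("\(scores[i].chip): +\(Int(gain.rounded()))%")
}
// Compound average from the A9 (the 2015 baseline) to the A14, five generations:
let cagr = pow(scores[7].score / scores[2].score, 1.0 / 5.0) - 1
print("average since 2015: about +\(Int((cagr * 100).rounded()))% per year")

That works out to roughly 24% per year on average since the A9, consistent with "about 20%, often more."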
 
  • Like
Reactions: JulianL
Increasing multi-core perf without also increasing single-core perf is useless for most users.

I'm not convinced about what is useless for most users. Anyway, my statement relates to my expectation that Apple will most likely use most of the additional power headroom to integrate more cores instead of increasing the frequency significantly, which makes the expectation of 50% higher ST performance in the Macs highly unlikely.
 
The SQ2 appears to be a variant of the 8cx Gen 2 (rolls right off the tongue), which still has 4+4 Kryo 495 cores, from December 2018.

It might do a little better than the SQ1 (which is roughly on the iPhone 7 level), but probably not by much.

SQ1 is 2x iPhone 7 performance or roughly A12. If you only look at a single core, it is comparable to A10.

Main difference between SQ1 and SQ2 is higher clocks on the CPUs and a larger GPU.
 
I'm not convinced about what is useless for most users. My statement relates to my expectation that Apple will most likely use most of the additional power headroom to integrate more cores instead of increasing the frequency significantly, which makes the expectation of 50% higher ST performance in the Macs highly unlikely.

Adding cores is comparatively easy, and if Apple wants to do that especially in higher-end Macs, they can. Getting high ST is the hard and important part, and they’re still utterly demolishing Qualcomm at that.
 
Adding cores is comparatively easy, and if Apple wants to do that especially in higher-end Macs, they can. Getting high ST is the hard and important part, and they’re still utterly demolishing Qualcomm at that.

My statement was specifically addressing the expectation that the AS Macs would have 50% higher ST performance than the fastest Intel cores, which would roughly mean 50% higher ST performance than the A14. The upcoming AS Macs will use the A14 core microarchitecture with possibly slightly increased clocks, so a 50% single-core performance increase is not realistic, despite a drastically higher power headroom.
The main reason is that increasing frequency has a non-trivial impact on power, considering you also have to increase Vcc.

I was not discussing what the hard part is, what the important part is, or how any of this relates to Qualcomm.

Speaking of Qualcomm, the relative distance in performance between Qualcomm cores (de facto ARM Cortex-A cores) and Apple cores has been roughly constant over the past few years, if not shrinking.
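
A rough back-of-the-envelope illustration of the frequency/Vcc point, with made-up numbers (the 20% clock bump and 10% voltage bump are assumptions for illustration, not Apple figures): dynamic power scales roughly with C * V^2 * f, so a clock increase that also needs more voltage compounds quickly.

// Illustrative only: P_dynamic ~ C * V^2 * f
let clockGain = 1.20        // hypothetical 20% higher frequency
let voltageGain = 1.10      // hypothetical 10% higher Vcc to sustain that clock
let powerFactor = clockGain * voltageGain * voltageGain
print("roughly \(powerFactor)x the dynamic power")   // ~1.45x power for ~20% more speed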
 
My statement was specifically addressing the expectation that the AS Macs would have 50% higher ST performance than the fastest Intel cores, which would roughly mean 50% higher ST performance than the A14. The upcoming AS Macs will use the A14 core microarchitecture with possibly slightly increased clocks, so a 50% single-core performance increase is not realistic.

I don’t think 50% higher is needed, but for whatever it’s worth, this is already 12% higher.
 
I don’t think 50% higher is needed, but for whatever it’s worth, this is already 12% higher.

I do not think it is needed either, which is why I was surprised by your "useless" statement. I personally would not consider a hypothetical AS Mac with A14/iPad Air single-core performance but much higher MT performance useless.
 
The most I’ve ever seen my iPad Pro throttle is about 20%. Anandtech didn’t see much throttling either. So even if the iPad throttles, the new Air would still be way faster than a vintage MacBook Pro. In fact, if you define throttling as how far the CPU drops below its peak frequency, then even an iPad Pro throttles less than modern Intel MacBook Pros (because of the very aggressive turbo modes in MacBook Pros).

Add some active cooling to the Apple chip and I doubt it will even budge from peak frequency. Anandtech measured that the latest Tiger Lake core uses 20 W under load, which is about half of the 40 W for the i9-10900K. The A13, in contrast, uses about 4 W, and they have comparable performance.

Ahhh... my mind. Are you saying a mobile chip without fans, running on battery, outperforms a full-fledged Intel CPU with fans running off an electric socket, while using 1/10th of the power? My brain can't comprehend this. This is akin to saying a bicycle is faster than a car.

How did you measure when your iPad throttles and by how much?
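
Taking the wattage figures quoted above at face value (about 4 W for the A13, 20 W for Tiger Lake, 40 W cited for the i9, at roughly comparable performance), the perf-per-watt gap is just the ratio of the power numbers; a tiny sketch, using those quoted figures rather than anything measured here:

// Figures from the quoted post; treat them as rough.
let watts = ["A13": 4.0, "Tiger Lake": 20.0, "i9 (as cited)": 40.0]
let baseline = watts["A13"]!
for (chip, w) in watts.sorted(by: { $0.value < $1.value }) {
    print("\(chip): \(w) W -> about \(w / baseline)x the power for similar performance")
}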
 
A dot product of two 1,000 x 1,000 matrices is what my ten-year-old iPad 2 does for breakfast. You don't need a fast CPU or a fast anything for that. It's just a million floating point operations. If you want to measure speed, use some real problems.

Do you know that you have data dependencies between computations?
For most programs, you don't compute one big matrix operation but a lot of inter-dependent small/mid-size computations.
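
A minimal sketch of the kind of dependency chain being described (the sizes and arithmetic are arbitrary placeholders): each step needs the previous step's result, so there is nothing big enough to batch up and ship to a GPU, and the small working set stays hot in the CPU caches.

// Each iteration depends on the previous one having finished,
// so the work cannot be parallelised into one big GPU launch.
var state = [Double](repeating: 1.0, count: 64)   // small working set
for step in 0..<10_000 {
    for i in state.indices {
        state[i] = state[i] * 0.999 + Double(step % 7) * 0.001
    }
}
print(state[0])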
 
I don’t have complete data at hand for prior to 2015, and I say “about” because it depends on what you compare to what (for example, the A12X to A12Z comparison doesn’t work), and often it’s more than 20%.

Here are some Geekbench 5 single-core scores:

A7 - 258
A8 - 312 (22%)
A9 - 530 (70%)
A10 - 725 (37%)
A11 - 905 (25%)
A12 - 1100 (22%)
A13 - 1322 (20%)
A14 - 1583 (20%)

So that goes back to around 2013. Happier now?
That's really interesting, but one observation, actually more of a question, since having seen many of your posts over the years I'm very aware of just how immense the difference in our expertise in this area is, but...

Would it be fair to say that there's a disjunction in that progression because of what Geekbench measures? The disjunction I'm thinking of is the point at which Apple started adding additional functional units into the A-series outside of the processing cores, the biggest example being the Neural Engine. To me that seems to make comparing increases in Geekbench scores slightly unfair to the Apple engineering prowess (it might be understating it) since it completely ignores the utility of all the transistors in the SoC that, rather than being used to possibly eke out a bit more core performance (bigger caches, bigger branch prediction buffers, more of various types of op units within the core, other stuff to better limit the effect of pipeline stalls, etc, etc, etc.), are used to implement those other functional units.

One could argue that the processing cores have always been losing potential transistors to another functional unit, the GPU, but there is no real disjunction in the series progression there because that effect has always been there so is reflected in the Geekbench scores for the first A-series generation onwards.

Do you see any validity in the above observation?
 
I don’t have complete data at hand for prior to 2015, and I say “about” because it depends on what you compare to what (for example, the A12X to A12Z comparison doesn’t work), and often it’s more than 20%.

Here are some Geekbench 5 single-core scores:

A7 - 258
A8 - 312 (22%)
A9 - 530 (70%)
A10 - 725 (37%)
A11 - 905 (25%)
A12 - 1100 (22%)
A13 - 1322 (20%)
A14 - 1583 (20%)

So that goes back to around 2013. Happier now?
So now it's single-core Geekbench 5 scores from 2013 being about 20%, but just ignore the A9, the A10, and the A12X to A12Z.

because it depends on what you compare to what (for example, the A12X to A12Z comparison doesn’t work), and often it’s more than 20%
Of course it does. That's why generalised statements like the ones in your posts above do not work: the data does not fit the generalisation.

Your specific data is correct; I'm not arguing that. My point is: don't make sweeping, broad generalisations, as they are often wrong (including in your case).
 
  • Disagree
Reactions: eulslix
So now it's single-core Geekbench 5 scores from 2013 being about 20%, but just ignore the A9, the A10, and the A12X to A12Z.


Of course it does. That's why generalised statements like the ones in your posts above do not work: the data does not fit the generalisation.

Your specific data is correct; I'm not arguing that. My point is: don't make sweeping, broad generalisations, as they are often wrong (including in your case).

My generalization isn’t “wrong.” The A12X and A12Z are in a line of products that doesn’t update on a yearly cadence, so obviously 20% per year couldn’t apply to them. And if you’d like multi-core scores, you’ll see the data works for those, too (I’m not your personal Google; all this data is out there). And I’m not sure what your point is re the A9 and A10. I never suggested that Apple achieved *only* 20% per year. (And I said 2015, which would have made the A9 the baseline, so the 70% improvement wouldn’t be on the list.)

Being overly pedantic doesn’t advance the conversation.
 
That's really interesting, but one observation, actually more of a question, since having seen many of your posts over the years I'm very aware of just how immense the difference in our expertise in this area is, but...

Would it be fair to say that there's a disjunction in that progression because of what Geekbench measures? The disjunction I'm thinking of is the point at which Apple started adding additional functional units into the A-series outside of the processing cores, the biggest example being the Neural Engine. To me that seems to make comparing increases in Geekbench scores slightly unfair to the Apple engineering prowess (it might be understating it) since it completely ignores the utility of all the transistors in the SoC that, rather than being used to possibly eke out a bit more core performance (bigger caches, bigger branch prediction buffers, more of various types of op units within the core, other stuff to better limit the effect of pipeline stalls, etc, etc, etc.), are used to implement those other functional units.

One could argue that the processing cores have always been losing potential transistors to another functional unit, the GPU, but there is no real disjunction in the series progression there because that effect has always been there so is reflected in the Geekbench scores for the first A-series generation onwards.

Do you see any validity in the above observation?

Well, that’s always the issue with benchmarks: they only test what they test. So if you have hardware on the CPU that can do something very fast, but the benchmark doesn’t exercise it, you don’t get credit for it. And that’s certainly true of Geekbench in this case.

When engineers benchmark their designs (or their potential designs) they rely on intuition, knowledge gained by experience as to where the bottlenecks are in the real world, and benchmarks. As far as benchmarks go, practices vary. But in my experience I’ve never relied on just one benchmark; it’s always a suite of them, ranging from old standards (SPICE, SPECfp/SPECint, etc.) to things like Geekbench, but also things we built ourselves by capturing real data from running real operating systems and software: we can extract critical instruction/data streams, for example everything that happens when you boot Windows, or everything that happens when you recalculate a spreadsheet, and turn those into reproducible benchmarks.

In the end, we don’t know what Apple is actually optimizing for, but I imagine that their designers are more concerned with things like AR and ML than is reflected in what Geekbench tests. Because, as you note, Apple could remove their neural engine and replace it with a few more integer cores, and then they’d have a much higher multi-core score. But in the real world the chip would be worse for a lot of workloads.
 