iOS 14 runs pretty well on my 4 year-old A10 iPhone 7 Plus.
I can very occasionally notice a pause which I attribute to RAM. It has 3 GB RAM. SoC speed is perfectly fine though.
Just in time for the new iOS to slow it down.
That's not entirely true. Many businesses have a minimum threshold of quality a product has to hit to be feasible. If your product doesn't hit the mark, it can be the best thing on the planet and still be useless to the target audience. Apple's XDR display is a great example: it's too expensive for the prosumer, yet not good enough to be used in high-budget movie production, leaving it out there in the wild for people who can afford it for bragging rights. Reference monitors are all about reliability, which is why they are so expensive. Compared to the production costs, those monitor costs are peanuts, which is why the XDR's price tag doesn't matter. I'm pretty sure Apple was aware of that, and I'm pretty sure Apple didn't really intend to target that audience in the first place... this is a typical luxury good for whales.
There are a lot of use cases where you are better off computing on the CPU than on the GPU.
If you have to take the dot product of two 1000 x 1000 square matrices, the CPU will have finished before the data even reaches your GPU.
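As a rough illustration of that point (my own sketch, assuming NumPy on an ordinary laptop CPU; none of this is from the post itself): the whole product finishes in a few milliseconds to a few tens of milliseconds, which is the same order of magnitude as just shipping the inputs to a discrete GPU and back.

```python
# Rough timing sketch: a dense 1000 x 1000 matrix product on the CPU.
# Assumes NumPy with a BLAS backend; exact timings vary by machine.
import time
import numpy as np

a = np.random.rand(1000, 1000)
b = np.random.rand(1000, 1000)

start = time.perf_counter()
c = a @ b  # the full matrix product, about 2e9 floating point operations
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"CPU matrix product: {elapsed_ms:.1f} ms")

# For a discrete GPU you would also copy ~16 MB of float64 inputs over PCIe,
# launch a kernel, and copy the result back; at this problem size that fixed
# overhead can be a large share of the total time.
```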
Did you do a fresh install or just upgrade?
iOS 14 was just a regular upgrade for my iPhone 7 Plus.
I would disagree. In your view there is no gap between prosumers and the Hollywood high-budget movie production studios, but the truth is that this gap is where the vast majority of smaller outfits are positioned: design studios, small producers, advertising agencies, marketing studios, local TV channels, YouTubers, multimedia agencies, and so on. The XDR may not be the optimal choice if you are editing the next blockbuster worth hundreds of millions of dollars, but it is suitable for anything below that level. At a $5k price tag, the XDR is expensive but still affordable for many companies and even freelancers, while delivering superb quality. Sony's reference monitor is probably only produced to order, and there are a few hundred units at most in the whole world. That's not Apple's audience, and it never was.
I had the same thought, though I still wonder whether it's that easy. I mean, we're talking about HDR, a technology most viewers on YouTube don't care about. Heck, YouTube's compression is such a piece of garbage that a switch to H.265 would bring more effective benefit than any kind of 4K HDR shenanigans. At least that was my impression when I compared my uploads to hosting the video privately with proper compression (at a smaller bitrate...).
I'm not sure why YouTube refuses to adopt this standard, whether it's licensing issues or processing costs, but as long as streaming services keep providing ****** codecs that water everything down, there is no point in moving to 4K or HDR. And therefore I don't see the point of having a "casual" HDR reference monitor, since you're mastering for an audience that doesn't even get to enjoy full HD to its full potential.
You must be hiding under a rock, cuz it's common knowledge on geek forums like these that YouTube is in the process of moving from VP9 to AV1.
AV1 is roughly as efficient as H.265/HEVC, with some recent tests claiming AV1 is now actually more efficient.
AV1 Beta Launch Playlist (www.youtube.com): "The first videos to receive YouTube's AV1 transcodes. Support for AV1 in MP4 within Media Source is available in Chrome 70, and Firefox 63 builds newer than ..."
AV1 is not limited to 4K either. Here's a 1080p AV1 music video:
[attached image]
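If anyone wants to check what YouTube actually serves for a particular upload, here is a quick sketch using yt-dlp (my own assumption; it isn't mentioned anywhere in this thread). It just lists the video codecs among the available formats; an av01 entry means an AV1 transcode exists, vp9 means VP9.

```python
# List the video codecs YouTube offers for one video, via yt-dlp
# (pip install yt-dlp). The URL is a placeholder; substitute any video.
from yt_dlp import YoutubeDL

URL = "https://www.youtube.com/watch?v=VIDEO_ID"  # placeholder video ID

with YoutubeDL({"quiet": True}) as ydl:
    info = ydl.extract_info(URL, download=False)

codecs = sorted({
    f["vcodec"].split(".")[0]                     # e.g. "av01.0.08M.08" -> "av01"
    for f in info["formats"]
    if f.get("vcodec") and f["vcodec"] != "none"  # skip audio-only formats
})
print("Video codecs offered:", codecs)  # typically some of: avc1, vp9, av01
```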
Yeah, the last time I was creating content was 2 years ago, but I hadn't noticed any improvement as of late either. As a viewer, in that case, I'm very glad to hear that YouTube has finally decided to step up its game; I was literally sick of all the motion/skin/... artefacts.
So yeah, I've been living under a rock, but hey, at least we're finally here. In that case I stand corrected: the XDR might make a lot of sense for aspiring movie creators. It costs less than a quality lens, so there's that...
That YouTube AV1 playlist is from two years ago, and some of the videos are even older.
How was what I said untrue? Every year since 2015, at least, the A-series processors have increased in performance by about 20%.
Since 2015?
About 20%?
Narrowing down the scope of your statement because it's not entirely true for every instance?
Increasing multi-core perf without also increasing single-core perf is useless for most users.
The SQ2 appears to be a variant of the 8cx Gen 2 (rolls right off the tongue), which still has 4+4 Kryo 495 cores, from December 2018.
It might do a little better than the SQ1 (which is roughly on the iPhone 7 level), but probably not by much.
I'm not convinced about what is useless for most users. My statement relates to my expectation that Apple will most likely spend most of the additional power headroom on integrating more cores rather than on significantly increasing the frequency - which makes the expectation of a 50% increase in ST performance in the Macs highly unlikely.
Adding cores is comparatively easy, and if Apple wants to do that especially in higher-end Macs, they can. Getting high ST is the hard and important part, and they’re still utterly demolishing Qualcomm at that.
My statement was specifically addressing the expectation that the AS Macs would have 50% higher ST performance than the fastest Intel cores - which would roughly mean 50% higher ST performance than the A14. The upcoming AS Macs will use the A14 core microarchitecture with possibly slightly increased clocks - therefore a 50% single-core performance increase is not realistic.
I don’t think 50% higher is needed, but for whatever it’s worth, this is already 12% higher.
The most I've ever seen my iPad Pro throttle is about 20%. AnandTech didn't see much throttling either. So even if the iPad throttled, the new Air would still be way ahead of a vintage MacBook Pro. In fact, if you define throttling as how far the CPU drops below its peak frequency, then even an iPad Pro throttles less than modern Intel MacBook Pros (because of the very aggressive turbo modes in MacBook Pros).
Add some active cooling to the Apple chip and I doubt it will even budge from peak frequency. AnandTech measured that the latest Tiger Lake core uses 20 W under load, which is about half of the 40 W for the i9-10900K. The A13, in contrast, uses about 4 W, and they have comparable performance.
A dot product of two 1,000 x 1,000 matrices is what my ten-year-old iPad 2 does for breakfast. You don't need a fast CPU or a fast anything for that; it's just a million floating-point operations. If you want to measure speed, use some real problems.
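To spell out that arithmetic (reading "dot product" as the element-wise, Frobenius-style inner product rather than a full matrix multiply; NumPy is my own choice here):

```python
# Frobenius (element-wise) inner product of two 1000 x 1000 matrices:
# one multiply and one add per element, i.e. roughly a million of each.
import numpy as np

a = np.random.rand(1000, 1000)
b = np.random.rand(1000, 1000)

value = np.vdot(a, b)  # flattens both arrays and sums the element-wise products
print(value)
```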
I don't have complete data at hand for prior to 2015, and I say "about" because it depends on what you compare to what (for example, the A12X-to-A12Z comparison doesn't work), and often it's more than 20%.
Here are some Geekbench 5 single-core scores:
A7 - 258
A8 - 312 (22%)
A9 - 530 (70%)
A10 - 725 (37%)
A11 - 905 (25%)
A12 - 1100 (22%)
A13 - 1322 (20%)
A14 - 1583 (20%)
So that goes back to around 2013. Happier now?
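For what it's worth, here's a quick sanity check on those jumps, computed from nothing but the scores listed above (Python is my own choice here, and the rounding can differ by a point or so from the percentages in the list):

```python
# Generation-over-generation gains from the Geekbench 5 single-core scores above.
scores = {
    "A7": 258, "A8": 312, "A9": 530, "A10": 725,
    "A11": 905, "A12": 1100, "A13": 1322, "A14": 1583,
}

chips = list(scores)
for prev, curr in zip(chips, chips[1:]):
    gain = scores[curr] / scores[prev] - 1
    print(f"{prev} -> {curr}: {gain:+.0%}")

# Geometric-mean gain per generation across the A7..A14 span.
generations = len(chips) - 1
average = (scores["A14"] / scores["A7"]) ** (1 / generations) - 1
print(f"Average gain per generation: {average:.0%}")
```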
So now it's - single-core Geekbench 5 scores from 2013 being about 20%, but just ignore the A10 and A9 and the A12X to A12Z.
Of course it depends on what you compare to what. That's why generalistic statements like the ones in your above posts do not work: the data does not fit the generalisation.
Your specific data is correct; I'm not arguing that. My point is: don't make sweeping, broad generalisations, because often (as in your case) they are wrong.
That's really interesting, but one observation - actually more of a question, since having seen many of your posts over the years I'm very aware of just how immense the difference in our expertise in this area is - but...
Would it be fair to say that there's a disjunction in that progression because of what Geekbench measures? The disjunction I'm thinking of is the point at which Apple started adding additional functional units into the A-series outside of the processing cores, the biggest example being the Neural Engine. To me that seems to make comparing increases in Geekbench scores slightly unfair to the Apple engineering prowess (it might be understating it) since it completely ignores the utility of all the transistors in the SoC that, rather than being used to possibly eke out a bit more core performance (bigger caches, bigger branch prediction buffers, more of various types of op units within the core, other stuff to better limit the effect of pipeline stalls, etc, etc, etc.), are used to implement those other functional units.
One could argue that the processing cores have always been losing potential transistors to another functional unit, the GPU, but there is no real disjunction in the series progression there because that effect has always been there so is reflected in the Geekbench scores for the first A-series generation onwards.
Do you see any validity in the above observation?