[Attached chart: single_core_scores.png]
 
The way defects work is that they are randomly distributed across area, with clustering of certain types. The bigger the area of a structure the higher the likelihood of a defect in that structure.
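The usual back-of-the-envelope version of this is the Poisson yield model: if defects fall randomly with some average density per unit area, the probability that a die of a given area is defect-free drops exponentially with area. A minimal sketch (the defect density and die areas below are invented for illustration):

```python
import math

def poisson_yield(area_cm2, defect_density_per_cm2):
    """Fraction of dies with zero defects, assuming defects are
    scattered randomly (Poisson) across the wafer."""
    return math.exp(-area_cm2 * defect_density_per_cm2)

# Illustrative only: 0.1 defects/cm^2, dies of increasing size
for area in (0.5, 1.0, 2.0):
    print(f"{area} cm^2 die: {poisson_yield(area, 0.1):.1%} defect-free")
```

Clustered defects are usually handled with a negative-binomial variant, which predicts somewhat higher yields than pure Poisson for the same defect density, since clustering concentrates damage in fewer dies.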
IIRC, Gordon Moore wrote a paper defending his Moore's Law hypothesis by claiming that semiconductor wafer defects were not randomly distributed.
 
IIRC, Gordon Moore wrote a paper defending his Moore's Law hypothesis by claiming that semiconductor wafer defects were not randomly distributed.

I don’t use “random” here to mean stochastic. I use it in the colloquial sense of “unpredictable.” I noted that they are clustered.

Of course it depends on the type of defect, though.
 
🤷‍♂️ The battery life on my 11 Pro Max was significantly better than that of my current 12 Pro Max (both 512GB). My wife has my hand-me-down 11 Pro Max, and I’m plugging in much earlier in the evening than she is.
Could there be a slight chance that you're on the phone more often than she is?
 
Is a 10% increase noticeable? Is this the beginning of minimal performance increases every year? At this rate it will take more than 6 years to double the current chip's performance. That's good: more reasons not to spend money on a new phone every year.
But this makes me wonder about the many features that have not been included in older devices because they supposedly don't have enough performance. For example, what is the excuse for not supporting Cinematic video on the iPhone 12? 10% less CPU, really? Are the many other features supported only because of that 10% of extra CPU power? Do slight multi-core improvements make Cinematic video possible? Hmmm, I think we could have those features on the iPhone 11 and 12. Maybe at a slightly lower resolution, but they could definitely be supported if Apple did not restrict the great new features to the new phone every year and cap the older models. I need a class action!
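For what it's worth, the doubling estimate above can be checked directly: at a compounding +10% per year, doubling takes log(2)/log(1.1) years, a bit over 7. A quick sketch:

```python
import math

rate = 0.10  # assumed 10% single-core gain per generation
years_to_double = math.log(2) / math.log(1 + rate)
print(f"Years to double at +10%/yr: {years_to_double:.1f}")  # ~7.3
```

So "more than 6 years" is about right; closer to seven and a half at this pace.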
The GPU got a huge boost; 55% is a lot. Same with the Neural Engine. All of that is needed for ProRes and Cinematic video.
 
What is the TDP for the A15 Bionic?

EDIT: I will answer my own question. The TDP for the A15 is 8.5 watts. The TDP for the A14 Bionic is 7 watts.

Don't trust cpu-monkey. No one has tested the TDP of these chips yet.
 
There are now more reasons than ever to get the Pro vs. regular. 120Hz. Boosted graphics. All cameras are better. Longer battery life. Not to mention all the same advantages as before (more RAM, stainless build, zoom lens, pro camera formats, etc.). Yet, I still find myself feeling loyal to the Mini. I just don’t want small premium phones to die! Somebody promise me you’ll order a Mini if I get the Pro this time 😂
I promise!
I’ve decided:
iPhone 13 Mini 512GB in light blue (I'll miss the dark blue of my current 12 Mini), purchasing outright!
- New line next week or in 2 weeks with a new carrier, no more prepaid = possibly a second 13 Mini 512GB, or, if there's no more Mini next fall, I'll get one as a spare for longevity ;)

iPad Mini 6 256GB: this will be for my biking adventures, learning to draw, journal entries (personal growth), and travelling this Xmas.

Xmas/January: MBP 14" mid-tier or maxed out, depending on what the design, performance, and connections bring.

How does having 1 additional GPU core (+25%) result in +34% higher Metal score?
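One back-of-the-envelope answer, assuming the two factors simply multiply (a simplification that ignores scaling losses): if the core count accounts for +25%, the remaining gain implies roughly +7% per core from clocks and/or architecture. Back-solving:

```python
# If a +25% core count and some per-core gain multiply to +34%,
# the implied per-core improvement (clock and/or architecture) is:
core_count_gain = 1.25   # 4 -> 5 GPU cores
total_gain = 1.34        # observed Metal score ratio
per_core_gain = total_gain / core_count_gain
print(f"Implied per-core gain: {per_core_gain - 1:.1%}")  # ~7.2%
```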

Well… software. Everyone has been focusing so much on the A-series chips, and even the M-series, yet we're all forgetting those NeXT software gods: Tevanian, Forstall, and the many others who worked at NeXT and came to Apple. They had the genius and foresight to bring the Mach kernel, with such future growth possibilities that it enables, and keeps adjusting to, what these chips can offer, with renowned stability thanks to its POSIX-like nature.

Android has the speed and has advanced, but it isn't reliable or stable in daily use for more than a year or so; you always hear complaints of freezing or slow performance.

Windows (not the server editions) hasn't really improved its performance using the same kernel on x86 over the decades. Maybe Win11 changes all of that.
 
No, but is there any reason to believe it doesn't? I'd bet big it does.

Also, I'm beginning to think at this point that, as long as it has any of the latest Qualcomm modems, the Apple antenna design is more important. Hopefully they made improvements over the 12.

I just want to confirm it, so it would become a known fact rather than all guesswork. Even though it should be the X60, given the additional band support. Then there is the question of Samsung's 5nm foundry.
 
It seems like most people here are missing the whole point. First of all, yes, the CPU cores themselves seem to be close to identical to the A14's, but they are getting better throughput just because of the clock increase. Secondly, the multi-core performance is significantly higher; this is probably a result of the doubling of the system cache. The GPU speed is a lot higher in the Pro models, especially to accommodate ProMotion, but that raw power is also there. The Neural Engine is 40% faster, and it was 200% faster last year, so in two years the Neural Engine has sped up over 280%. And that's used for all kinds of things on the phone. And the ISP and the encode/decode engines are all new as well.

There's a LOT more to the A15 than just the CPU cores. There's the whole SiP design, RAM, cache, coprocessing, node improvements… People saying the two chips are identical, or close to it, is like saying Google doesn't watch what you search. It's asinine and contrary to the facts.
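Side note on the Neural Engine arithmetic above: year-over-year gains compound, so +200% followed by +40% works out to 4.2x, i.e. +320% over two years (so "over 280%" actually undersells it). Taking the quoted figures at face value:

```python
# Compounding the quoted year-over-year Neural Engine gains:
gen_a13_to_a14 = 3.0   # "+200% faster" last year = 3x
gen_a14_to_a15 = 1.4   # "+40% faster" this year = 1.4x
total = gen_a13_to_a14 * gen_a14_to_a15
print(f"Two-year speedup: {total:.1f}x (+{total - 1:.0%})")  # 4.2x (+320%)
```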

Nothing you said discounts what I said.
I was specifically speaking about the CPU cores, which is the focus of the Anandtech article I linked to, and the MacRumors article, which was discussing CPU/GPU performance.
And while I understand your point that there's (much) more to an SoC than just raw CPU/GPU performance, nothing you said really backs up that statement in regard to the A15 vs. A14... Performance going up because of more cache is obvious; more cache generally does that and isn't usually considered a "new design" on its own. Same with adding an extra GPU core, or higher frequencies on the CPU/GPU: of course performance went up. The Neural Engine being up 40% is more interesting, but compared to 200% it definitely appears to be more of an iterative improvement.

None of this is to say it didn't take a fair amount of engineering effort to make these things happen. Increasing frequency, adding cache, and adding an additional GPU core all take engineering effort, or else they would have been done in the previous design. That said, the fact that the new phones are noticeably heavier suggests they may just be trading more weight for more power (and the battery to make it happen).

Anyway, I'm confident that when reputable outlets like Anandtech do deep dives on the A15, it will probably end up looking more like an A14+ than a brand-new SoC, but I'm happy to be wrong (maybe it's the first commercial implementation of ARMv9; doubtful, but who knows).
 
A 50% increase in a year's time is ludicrous. What year is it, 1993?
But how does this reflect in real-world usage? For example, when playing a game, it probably looks the same on the iPhone 10, 12, and 13.

Yeah, I believe the non-Pro chip has a GPU core disabled… probably chip binning to maximize yields/profits.

This is completely evil. It means you actually paid for the same chip, but they won't let you use all of it.
 
A 50% increase in a year's time is ludicrous. What year is it, 1993?
But how does this reflect in real-world usage? For example, when playing a game, it probably looks the same on the iPhone 10, 12, and 13.



This is completely evil. It means you actually paid for the same chip, but they won't let you use all of it.
No it doesn’t.

When you fly on a plane and pay for coach, it’s the same plane as first class and they won’t let you use it.

But in each case you got what you paid for.
 
This is completely evil. It means you actually paid for the same chip, but they won't let you use all of it.

This is how the chip industry has been operating for decades. You know that the GeForce RTX 3080 and 3090 are the same chip, as are Intel's i5 and i7, right?

And how is it evil? Not all chips turn out perfectly. By disabling parts of "worse" chips and selling them cheaper, they can ship more devices and cut costs. Want a perfect 5-core GPU on the non-Pro iPhone? Be prepared to pay well over $1000 and wait for months.
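The economics follow directly from the yield math: if only part of the dies come out with all five GPU cores working, selling the rest as 4-core parts recovers dies that would otherwise be scrapped. A toy model (all numbers below are invented for illustration):

```python
# Toy binning model; yields and die counts are invented for illustration.
dies_per_wafer = 500
p_all_5_cores_good = 0.70   # assumed fraction with a perfect 5-core GPU
p_at_least_4_good = 0.95    # assumed fraction usable as 4-core parts

full_parts = dies_per_wafer * p_all_5_cores_good
binned_parts = dies_per_wafer * (p_at_least_4_good - p_all_5_cores_good)
scrap = dies_per_wafer - full_parts - binned_parts

print(f"5-core: {full_parts:.0f}, 4-core: {binned_parts:.0f}, scrap: {scrap:.0f}")
```

Without binning, every die in the middle bucket would be thrown away instead of sold, which is why essentially every chip vendor does this.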
 
This is completely evil. It means you actually paid for the same chip, but they won't let you use all of it.
So every chip vendor is evil then? Every chip vendor can run their chips faster in the lab than in any system "they" will sell you. And most of the storage you buy has more bits/bytes of storage than "they" let you use. The same goes for all safety systems and a huge fraction of mass-produced consumer products. So, nothing to see here.
 
So every chip vendor is evil then? Every chip vendor can run their chips faster in the lab than in any system "they" will sell you. And most of the storage you buy has more bits/bytes of storage than "they" let you use. The same goes for all safety systems and a huge fraction of mass-produced consumer products. So, nothing to see here.

Yes, we are evil.

But you weren’t supposed to find out.

(One time, when I was designing chips at a place I won’t mention, we considered having programmable features, so that someone using a computer with our chip could, on-the-fly, pay to unlock features/performance, etc. We decided that would be too evil, even for us).
 
Yes, we are evil.

But you weren’t supposed to find out.
One story is about a mainframe vendor: if you paid the monthly rental for a faster computer model, the service tech would come and remove a small hidden jumper. The story is that at one of the national labs, they found out: they removed the jumper themselves, but put it back and reset the counters before any company techs were allowed back through security. Or so the story goes.
 
This is how the chip industry has been operating for decades. You know that the GeForce RTX 3080 and 3090 are the same chip, as are Intel's i5 and i7, right?

And how is it evil? Not all chips turn out perfectly. By disabling parts of "worse" chips and selling them cheaper, they can ship more devices and cut costs. Want a perfect 5-core GPU on the non-Pro iPhone? Be prepared to pay well over $1000 and wait for months.

So every chip vendor is evil then? Every chip vendor can run their chips faster in the lab than in any system "they" will sell you. And most of the storage you buy has more bits/bytes of storage than "they" let you use. The same goes for all safety systems and a huge fraction of mass-produced consumer products. So, nothing to see here.

Yes, we are evil.

But you weren’t supposed to find out.

(One time, when I was designing chips at a place I won’t mention, we considered having programmable features, so that someone using a computer with our chip could, on-the-fly, pay to unlock features/performance, etc. We decided that would be too evil, even for us).

So what you are saying is that they make chips, and those with faulty cores are sold as the lower tier while those that come out perfect are sold as the higher tier?

Or are they all fully functional, but they purposely shut some cores down? That would be similar to a guy buying a 3-bedroom house, but the real estate agent locks two rooms because he paid less, even though he owns the full house.
 
So what you are saying is that they make chips, and those with faulty cores are sold as the lower tier while those that come out perfect are sold as the higher tier?

Or are they all fully functional, but they purposely shut some cores down? That would be similar to a guy buying a 3-bedroom house, but the real estate agent locks two rooms because he paid less, even though he owns the full house.

Both of these things happen.

But there are other metaphors. Like, when you buy a seat for coach on a plane, the airline won’t let you sit in first class even if there is a seat free (unless you pay). Or you buy a Tesla model S, which has the hardware and software to do full self driving, but unless you pay $10,000, the feature remains locked. This kind of stuff happens all the time. Just because the hardware is there, doesn’t mean you are entitled to it.

(By the way, regardless of whether the core is broken or not, the manufacturer blows a fuse on the chip to ensure that it can’t be used, so it’s not like you could somehow enable the extra core or whatever, after you buy it.)
 
I wonder if the lack of comparison to the A14 is because the M1X/M2 (whatever they decide to call it) is coming, and that comparison is going to blow this one out of the water.

If the next Macs are twice as fast as the current ones, a 10% increase in phones across the same generations looks pretty pathetic.
 
I wonder if the lack of comparison to the A14 is because the M1X/M2 (whatever they decide to call it) is coming, and that comparison is going to blow this one out of the water.

If the next Macs are twice as fast as the current ones, a 10% increase in phones across the same generations looks pretty pathetic.

I think you'll see a 14% single-core improvement. Multicore will be much more of an improvement.
 
Pro Mac marketing tends to focus on multicore performance and graphics performance.

If you are talking about the CPU performance difference between the Intel 16" Mac and the upcoming Apple Silicon 16" Mac, they should be in the ballpark of 80-100% (both single and multicore). Maybe higher, depending on what Apple delivers.
 
If you are talking about the CPU performance difference between the Intel 16" Mac and the upcoming Apple Silicon 16" Mac, they should be in the ballpark of 80-100% (both single and multicore). Maybe higher, depending on what Apple delivers.
You mean in synthetic benchmarks like Geekbench, or real-world app performance?

If it's the former, that would mean (for single core) 1088 => ≈2000–2200 (rounding to the nearest 100).

Considering GB is a crude benchmark, I noticed (as I'm sure others have) that the single-core difference between the A14 (1584) and the M1 (1707) can be mostly accounted for by the difference in clock speed (3.0 GHz vs. 3.2 GHz). So I was wondering if it would be the same if they do an A15-based chip in the MBPs, say going from 1750 @ 3.2 GHz for the A15 to 3.5–3.6 GHz in an A15-based MBP (=> 1900–2000).
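The clock-scaling reasoning above can be written out explicitly. The scores and clocks are the Geekbench 5 figures quoted in the post; linear scaling with frequency is the (optimistic) assumption:

```python
def scale_by_clock(score, clock_from_ghz, clock_to_ghz):
    """Naive linear frequency scaling; ignores memory and cache effects."""
    return score * clock_to_ghz / clock_from_ghz

# Sanity check: A14 (1584 @ 3.0 GHz) scaled to the M1's 3.2 GHz
print(f"A14 scaled to 3.2 GHz: {scale_by_clock(1584, 3.0, 3.2):.0f}")  # vs M1's actual 1707

# Hypothetical A15-based MBP: 1750 @ 3.2 GHz pushed to 3.5-3.6 GHz
print(f"3.5 GHz: {scale_by_clock(1750, 3.2, 3.5):.0f}")
print(f"3.6 GHz: {scale_by_clock(1750, 3.2, 3.6):.0f}")
```

The A14-to-M1 check lands at about 1690 vs. the M1's actual 1707, so clock speed explains most but not all of the gap (the M1 also has a bigger cache and faster memory).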
 