IIRC, Gordon Moore wrote a paper defending his Moore's Law hypothesis by claiming that semiconductor wafer defects were not randomly distributed.

The way defects work is that they are randomly distributed across the area, with clustering of certain types. The bigger the area of a structure, the higher the likelihood of a defect in that structure.
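To illustrate the area point, here is a minimal sketch assuming a simple Poisson defect model (the model and the numbers are illustrative assumptions, not figures from this thread): the chance that a die comes out defect-free falls off exponentially with its area.

```python
import math

def poisson_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """Expected fraction of defect-free dies under a Poisson defect model:
    yield = exp(-D * A), where D is defect density and A is die area."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

# Hypothetical defect density purely for illustration (not real fab data):
D = 0.1  # defects per cm^2
for area in (0.5, 1.0, 2.0):  # die area in cm^2
    print(f"area {area:.1f} cm^2 -> expected yield {poisson_yield(D, area):.1%}")
```

Doubling the die area squares the yield fraction in this model, which is why bigger structures are hit harder by defects.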
Could there be a slight chance that you're on the phone more often than her? 🤷‍♂️

The battery life on my 11 Pro Max was significantly better than that of my current 12 Pro Max (both 512GB). My wife has my hand-me-down 11 Pro Max, and I’m plugging in much earlier in the evening than she is.
The GPU got a huge boost, 55% is a lot. Same with the Neural Engine, all needed for ProRes and cinematic video.

Is a 10% increase noticeable? Is this the beginning of small performance increases every year? At this rate it will take more than 6 years to double the current chip's performance. That's good, more reasons to not spend money on a new phone every year.
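For what it's worth, the "more than 6 years" guess checks out if you assume a steady 10% yearly gain (which is of course just an assumption about future generations):

```python
import math

yearly_gain = 0.10  # assumed steady 10% improvement per generation
years_to_double = math.log(2) / math.log(1 + yearly_gain)
print(f"years to double at {yearly_gain:.0%}/year: {years_to_double:.1f}")
# -> about 7.3 years
```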
But this makes me wonder about the many features that have not been included in older devices because they supposedly don't have enough performance. For example, what is the excuse for not supporting cinematic video on the iPhone 12? 10% less CPU? Really? Many other features are supported with that 10% extra CPU power. Do slight multi-core improvements make cinematic video possible? Hmmm, I think we could have those features on the iPhone 11 and 12. Maybe at a slightly lower resolution, but they could definitely be supported if Apple did not restrict the great new features to the newest phone every year and cap the older models. I need a class action!
Don't trust cpu-monkey. No one has tested the TDP of these chips yet.

What is the TDP for the A15 Bionic?
EDIT: I will answer my own question. The TDP for the A15 is 8.5 watts. The TDP for the A14 Bionic is 7 watts.
Apple A15 Bionic - Benchmark, Test and Specs (www.cpu-monkey.com)
I promise!

There are now more reasons than ever to get the Pro vs. regular. 120Hz. Boosted graphics. All cameras are better. Longer battery life. Not to mention all the same advantages as before (more RAM, stainless build, zoom lens, pro camera formats, etc.). Yet, I still find myself feeling loyal to the Mini. I just don’t want small premium phones to die! Somebody promise me you’ll order a Mini if I get the Pro this time 😂
How does having 1 additional GPU core (+25%) result in +34% higher Metal score?
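One possible answer, with purely hypothetical numbers (none of these are confirmed specs): core count alone gives +25%, so the rest of the gap would have to come from a per-core improvement such as clocks or memory bandwidth, and those factors multiply.

```python
# Illustrative only: hypothetical figures showing how a 5th GPU core plus a
# small per-core speedup could compound to roughly a 34% higher Metal score.
cores_old, cores_new = 4, 5
per_core_speedup = 1.07  # assumed ~7% from clocks/bandwidth, not a confirmed spec

scaling = (cores_new / cores_old) * per_core_speedup
print(f"expected scaling: {scaling:.2f}x (~{(scaling - 1) * 100:.0f}% higher score)")
# -> expected scaling: 1.34x (~34% higher score)
```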
No, but is there any reason to believe it doesn’t? I’d bet big it does.

Also, I’m beginning to think at this point that, as long as it has any of the latest Qualcomm modems, the Apple antenna design is more important. Hopefully they made improvements over the 12.
It seems like most people here are missing the whole point. First of all, yes, the CPU cores themselves seem to be close to identical to the A14’s, but they are getting better throughput just because of the clock increase. Secondly, the multi-core performance is significantly higher. This is probably a result of the doubling of the system cache. The GPU speed is a lot higher in the Pro models, especially to accommodate ProMotion, but that raw power is also there. The Neural Engine is 40% faster, on top of being 200% faster last year, so in two years the Neural Engine has sped up over 280%. And that’s used for all kinds of things on the phone. And the ISP and the encode/decode engines are all new as well.
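Just to spell out the compounding behind those Neural Engine figures (taking the posted percentages at face value; they are not verified against Apple's own numbers), year-over-year gains multiply rather than add:

```python
# Year-over-year speedups compound multiplicatively. Percentages are the ones
# quoted in the post above, taken at face value (not verified specs).
gain_last_year = 2.00   # "200% faster" => 3.0x the generation before
gain_this_year = 0.40   # "40% faster"  => 1.4x last year

total = (1 + gain_last_year) * (1 + gain_this_year)
print(f"two-year speedup: {total:.1f}x ({100 * (total - 1):.0f}% faster)")
# -> two-year speedup: 4.2x (320% faster)
```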
There’s a LOT more to the A15 than just the CPU cores. There’s the whole SiP design, RAM, cache, coprocessing, node improvements… People saying the two chips are identical, or close to it, is like saying Google doesn’t watch what you search. It’s asinine and contrary to the facts.
Yeah, I believe the non-pro chip has a GPU core disabled…probably chip binning to maximize yields/profits.
No it doesn’t.

50% increase in a year’s time is ludicrous, what year is it, 1993?
But how does this reflect in a real usage scenario? For example, when playing a game, it probably looks the same on the 10, 12, and 13.
this is completely evil, this means you actually paid for the same chip but they won't let you use it
So every chip vendor is evil then? Every chip vendor can run their same chips faster in the lab than in any system "they" will sell you. And most of the storage you buy has more bits/bytes of storage than "they" let you use. Same for all safety systems and a huge fraction of mass-produced consumer products. So nothing to see here.
One story is about a mainframe vendor where, if you paid the monthly rental for the faster computer model, the service tech would come and remove a small hidden jumper. The story is that at one of the national labs they found out: they removed the jumper, but put it back and reset the counters before any company techs were allowed back through security. Or so the story goes.
This is how the chip industry has been operating for decades. You know that the GeForce RTX 3080 and 3090 are the same chip, as are Intel’s i5 and i7, right?

And how is it evil? Not all chips turn out perfectly. By disabling parts of "worse" chips and selling them cheaper, they can ship more devices and cut costs. Want a perfect 5-core GPU on the non-pro iPhone? Be prepared to pay well over $1000 and wait for months.
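A rough sketch of why binning pays off, using a made-up per-core defect probability (an illustrative assumption, not real yield data): selling dies that have exactly one bad GPU core as 4-core parts recovers a large share of silicon that would otherwise be scrapped.

```python
from math import comb

# Illustrative binning arithmetic with an assumed per-core defect probability.
# None of these numbers are real TSMC/Apple yield figures.
p_core_bad = 0.05   # assumed chance that any one GPU core is defective
n_cores = 5

def prob_exactly_k_bad(k: int) -> float:
    """Binomial probability that exactly k of the n GPU cores are defective."""
    return comb(n_cores, k) * p_core_bad**k * (1 - p_core_bad)**(n_cores - k)

full_5_core = prob_exactly_k_bad(0)       # sellable as the "pro" bin
salvaged_4_core = prob_exactly_k_bad(1)   # one core fused off, sold as the lower bin
print(f"perfect 5-core dies: {full_5_core:.1%}")
print(f"salvageable 4-core:  {salvaged_4_core:.1%}")
print(f"total sellable dies: {full_5_core + salvaged_4_core:.1%}")
```

(This only models the GPU cores; defects elsewhere on the die are ignored for simplicity.)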
Yes, we are evil.
But you weren’t supposed to find out.
(One time, when I was designing chips at a place I won’t mention, we considered having programmable features, so that someone using a computer with our chip could, on-the-fly, pay to unlock features/performance, etc. We decided that would be too evil, even for us).
So what you are saying is that they make chips, and those with faulty cores are sold as the lower tier while those that come out perfect are sold as the higher tier?

Or are they all fully functional, but they purposely shut the cores down? That would be similar to a guy buying a 3-bedroom house, but the real estate agent locked two of the rooms because he paid less, even though he owns the full house.
I wonder if the lack of comparison to the A14 is because the M1X/M2 (whatever they decide to call it) is coming, and that comparison is going to blow this one out of the water.
If the next Macs are twice as fast as the current ones, a 10% increase in phones across the same generations looks pretty pathetic.
Multicore will be much more of an improvement.
Pro Mac marketing tends to focus on multicore performance and graphics performance.
You mean in synthetic benchmarks like Geekbench, or real-world app performance?

If you are talking about the CPU performance difference between the Intel 16" Mac and the upcoming Apple Silicon 16" Mac, they should be in the ballpark of 80-100% (both single and multicore). Maybe higher, depending on what Apple delivers.