That isn't really applicable to Intel and their "efficient" cores. They aren't really tiny and impoverished. Basically these are space-optimized cores with roughly Broadwell (Gen 5) levels of performance (stopping at AVX2), on a process-shrunk node. That makes them relatively efficient, but mainly just smaller.




Apple's aren't worth much. Intel's are substantively more powerful; they are just relatively small compared to the AVX-512-and-the-kitchen-sink large cores.




A P core can "race to sleep" pretty fast if assigned only E-core-level work on a mostly idling SoC. For that kind of lightweight workload it isn't necessarily the "burn power like there's no tomorrow" option.

The problem with the Intel P (performance) cores is that they are bulky. To match the instruction-set coverage between the P and E cores in Alder Lake, the P cores just have "dead" AVX-512 units dangling in there.

Intel is trying to match AMD on core count but isn't on 7nm, so Intel largely isn't trying to shrink the E cores as far as possible, just small enough to make up the gap it is losing by being behind on process density.

For mobile Gen 12 (Alder Lake), Intel is going to cap the P cores at 6 (down from 8 on the desktop). That is partly to save space as well as power. The E cores aren't being cut (they aren't as big a space hog, in addition to drawing less power).




That is mostly dependent upon the Windows 11 scheduler and application workload delegation.
On the high end, desktop Intel is competing against 16-core Ryzen. Stopping at 8 cores is a problem when it comes to trying to match multithreaded performance.

On mobile, Ryzen caps out at 8 cores (16 threads), while Intel is more limited: 6 P cores (and 2 on the low-TDP mobile models). But if they throw 8 E cores at the problem, they should be able to pass AMD, as long as they land at about the same power as mobile Ryzen. Intel was probably also nervous that AMD might switch things up and push mobile onto smaller nodes first (instead of last), which could let it go past 8 cores.

In short, I doubt Intel would be on this path at all if they were still 1-2 years ahead of everyone on process node fabrication skills.

Neither Intel nor AMD is throwing as much transistor budget (and bandwidth) at the iGPU as Apple is. That is another reason why Intel has a bigger budget for E cores.
On some future iteration where Intel wants to throw a substantially bigger iGPU tile/chiplet into the package, I suspect we'll see some shrinkage in E-core count.

An Apple E-core is worth about 1/3 of a P core.
We do not know what a Gracemont core will be worth. We do know:
- Intel claims they'll be worth substantially more than an SMT thread. That seems, uh, unlikely. (The claim is that 4x Tremont can run at 1.8x the throughput of 2 Willow Cove; see the quick arithmetic after this list.)

- Intel made similar unlikely claims for Tremont (worth about an SMT thread) that were demolished by the Lakefield benchmarks, which showed that, rather than being worth about 60% of a large core, they were worth about 30%.
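To spell out the arithmetic behind that skepticism, here is a rough sketch; the numbers are only the figures quoted above, nothing independently measured:

# Rough arithmetic on the claims quoted above (illustrative only).
# Intel's pitch: 4 small cores deliver 1.8x the throughput of 2 large cores.
small_per_large_claimed = 1.8 * 2 / 4      # ~0.9 of a large core each
# Lakefield reality, per the benchmarks mentioned above: ~30% of a large core.
small_per_large_lakefield = 0.30
print(f"claimed:  {small_per_large_claimed:.2f} of a large core per small core")
print(f"measured: {small_per_large_lakefield:.2f} of a large core per small core")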

There is so much bobbing and weaving in the Intel claims (what are the cooling conditions? how much are the large and small cores being throttled by overheating?) that it's impossible to actually understand where the strengths and weaknesses are. One might have expected/hoped that the small cores would not throttle much, even when all running together, which would be a strength and boost throughput relative to the large cores. But we did not see that effect being of much value for Lakefield.

If it were any other company (AMD, ARM, QC) making them, I'd be willing to trust and accept these claims for Gracemont. But Intel has destroyed all its credibility. When we have independent benchmarks of Gracemont, then we can analyze their strategy. Until then, Intel's claims for Gracemont are as worthless as their claims for process dates have been over the past 5+ years.
 
How about the OpenCL compute benchmark score, which is more interesting than CPU? Or is OpenCL still broken on Apple Silicon since Big Sur 11.5? Anyhow, Geekbench is kind of garbage compared to real-world workloads.
Dude, GB5 results have been shown to correlate well with most other results, whether it's browser tests or SPEC.
All you're doing by making claims like this is showing your ignorance.
 
Ummm, math? Even on Geekbench's site, where they don't throw out outliers from people not running the benchmark standalone, the M1 is 7400. If I double that, let's see: 7400 * 2 = 14,800, not 11,542. Am I missing something?
Yes you are.
An E core is worth about 1/3 of a P core. So let's do some ridiculously naive math scaling.
7400 / (4 + 4/3) * (8 + 2/3) ≈ 12,000. Pretty darn close.
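Spelled out as a trivial sketch (the same naive scaling; the only assumption is the 1/3 weighting above):

# Naive scaling of the M1 multicore score to the M1 Pro/Max core mix,
# assuming an E core is worth ~1/3 of a P core.
m1_score = 7400
per_p_core = m1_score / (4 + 4 / 3)    # M1: 4 P + 4 E cores
estimate = per_p_core * (8 + 2 / 3)    # M1 Pro/Max: 8 P + 2 E cores
print(round(estimate))                 # ~12,025, pretty close to the ~11,500 reported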
 
I thought the M1 Max might have had a higher single-core score, but it's almost the same as the M1. Obviously the multi-core score is higher due to 8 performance cores vs 4 on the M1, although it's not double the M1's multi-core score.
Of course it's the same score! It's the same core (Firestorm) on essentially the same process!

Apple will probably run it at the same frequency. In theory they could maybe run it a few percent higher, but they probably prefer not to do that and leave more headroom for all the cores to run at a high frequency without appearing to throttle much.
 
How about the OpenCL compute benchmark score, which is more interesting than CPU? Or is OpenCL still broken on Apple Silicon since Big Sur 11.5? Anyhow, Geekbench is kind of garbage compared to real-world workloads.

geekbench is pretty decent.

and 5xxx amd gpu still crush apple silicon in metal scores. let alone 6xxx gpu

we’ll see if apple decides to improve gpu or if they don’t care cuz no games anyway

this is part of why I will miss macOS running on x86. I'm not a laptop guy so I don't care about heat or fan noise. I use my computer at a desk for Logic Pro, some light Microsoft office stuff for my day job, web browsing/email, screwing around with different operating systems, and when I have some time, a bit of gaming

it was nice while it lasted: for almost 20 years I could build or buy whatever computer I wanted and run more or less any and all operating systems on it. seems like we are regressing back to the powerpc vs intel days. not sure if I'll buy a new Mac in a few years or just run logic on the hack I have now and build a new pc, or go all Mac + console. just don't know. but it was nice to be able to do everything on one machine for so long
 
please correct me if I'm wrong, but isn't the cheapest new MacBook Pro more expensive than the cheapest last Intel MacBook Pro?
True statement...
here we go, the classic "Intel is better" argument: comparing against a full desktop with a higher TDP (comparisons were already shown during the keynote) that costs more, when there is currently no desktop-class M1 to compare against.

wait until Xmas next year, when Apple will have custom silicon for a comparable desktop architecture. Every year the same comparison occurs on every Apple release, but it fails to compare all aspects, just what suits Wintel. Also, Apple doesn't upgrade the chipset every year just because.
Yes, of course the M2 Pro/Max will have improvements over the M1 Pro/Max. However, it was Apple that made the comparison to PCs/laptops in terms of performance.
 
Why would you possibly need something like the M1 Pro/Max in a phone, or even an iPad? There is such a thing as diminishing returns, beyond which there is no perceptible improvement.

The SoCs have far too high a TDP in any case - thermal management and battery life are far more important in mobile devices.
Well, not really. Thermal cooling has been done on Android phones to minimize heat, so it can be done. I expect the A16 to hit M1 performance next year; if not, I'd find 15-20% year-over-year improvements disappointing. So in your case, you don't want M1 performance in phones? Diminishing returns? So 20% improvements in speed are acceptable in smartphones? That's why Apple has an amazing R&D team: to figure out thermal power vs efficiency. If they did it for the M1 Pro/Max, they can do it for the A series. Battery life? Solar technology has been around for years. With thermal cooling and solar technology we can go past the limits of battery and CPU/GPU performance.
 
Well, not really. Thermal cooling has been done on Android phones to minimize heat, so it can be done. I expect the A16 to hit M1 performance next year; if not, I'd find 15-20% year-over-year improvements disappointing. So in your case, you don't want M1 performance in phones? Diminishing returns? So 20% improvements in speed are acceptable in smartphones? That's why Apple has an amazing R&D team: to figure out thermal power vs efficiency. If they did it for the M1 Pro/Max, they can do it for the A series. Battery life? Solar technology has been around for years. With thermal cooling and solar technology we can go past the limits of battery and CPU/GPU performance.

i think the point is what would you need that sort of performance from your phone for?

macrumors is only going to load up as fast as your network will serve it. texts and emails have been “human instantaneous” for years

battery life sure i guess. my 12 mini lasts all day. i suppose if you live in a situation where you can’t charge your phone for days on end that might be helpful
 
once my hackintosh needs replacing i would certainly consider a console if i had enough time for gaming to justify having two machines. i will certainly miss being able to boot nearly any operating system on any consumer computer hardware. sort of the end of a neat era



haven't seen the numbers for these yet but the gpu benchmarks on the m1 were pretty lacklustre. not intel integrated gpu bad, but not great either

the Intel Xe integrated graphics didn't fare any better than the M1; it only shined because of DirectX 12 support in games.

maybe, just maybe, Apple has to invite or even bend over backwards for top-tier game developers, or have their elite coders work with them more closely, which may help entice them to learn more and spend the effort.

only action and time will tell what we'll see going forward. I'd like to avoid running games through Rosetta 2 + Parallels for Win11 as much as possible.
PS: I hate Win11 more than Win8, more than Win7 in its first 2 years, and more than Win95 or Windows Server 2000.
 
The first Geekbench compute result is out:

OpenCL Score: 60,167
That's quite a disappointing score.

It's only slightly better than half of what the RTX 3070 laptop GPU gets.



Basically similar to a 5700 XT, Vega 56, or Vega 64X.

 
the Intel Xe integrated graphics didn't fare any better than the M1; it only shined because of DirectX 12 support in games.

maybe, just maybe, Apple has to invite or even bend over backwards for top-tier game developers, or have their elite coders work with them more closely, which may help entice them to learn more and spend the effort.

only action and time will tell what we'll see going forward. I'd like to avoid running games through Rosetta 2 + Parallels for Win11 as much as possible.
PS: I hate Win11 more than Win8, more than Win7 in its first 2 years, and more than Win95 or Windows Server 2000.

intel integrated graphics aren't the bar though. intel integrated graphics are only a fallback if you don't want/need a gpu. amd and nvidia graphics are the bar. unless apple is going to put amd and/or nvidia graphics into the soc/logic board/whatever you want to call it, they are going to have to massively improve their own gpu if they want to claim it's a full-on powerful modern computer



win 11 doesn’t bother me at all. just seems like win 10 with a linux de skin.

not that i like windows at all. i only boot it if i need to for something
 
i think the point is what would you need that sort of performance from your phone for?

macrumors is only going to load up as fast as your network will serve it. texts and emails have been “human instantaneous” for years

battery life sure i guess. my 12 mini lasts all day. i suppose if you live in a situation where you can’t charge your phone for days on end that might be helpful
Your response sounds like you enjoy paying for the same product every year without performance gains. Yes, I want my phone to be as fast as an M1. Why not? Phones cost over $1k, going upwards of $2k; they'd better be fast. Again, you state there is no difference in loading pages beyond what your network serves. Do you notice a difference between 3G and LTE? So an iPhone 13 is not needed over an iPhone X? Why do you need a faster MBP processor every year?

My son has the iPhone 12 and the battery life is average at best. Go with higher brightness, watch Netflix, use YouTube, and talk on the phone, and you are lucky to get 6 hrs of continuous usage. I have an iPhone 12 Pro Max and my battery health is at 87%; I'm lucky to get 6 hrs of continuous usage.
 
In the meantime, more results have been uploaded to Geekbench, and they are more aligned with Apple's claim of a 70% CPU performance increase. Most of the results are around 12,500 now in multi-core.
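For what it's worth, that lines up with naive arithmetic against the ~7,400 M1 multicore figure quoted earlier in the thread:

# Sanity check of the ~70% multicore uplift claim against the M1 baseline.
m1_multicore = 7400
print(round(m1_multicore * 1.7))   # 12,580 -- in line with the ~12,500 results now showing up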
 

Ummm, math? Even on Geekbench's site, where they don't throw out outliers from people not running the benchmark standalone, the M1 is 7400. If I double that, let's see: 7400 * 2 = 14,800, not 11,542. Am I missing something?
Things don't scale up linearly; there are synchronization overheads: global lock loops, system semaphores, and other overheads that eat into performance.

IOW, there are times when you have to disable certain actions to ensure that two processes are not trying to modify some critical system resource at the same time.
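As a minimal sketch of that effect, Amdahl's-law style (the 5% serialized fraction below is an arbitrary number picked purely for illustration, not a measurement of any of these chips):

# Any serialized work (locks, semaphores, shared-resource protection)
# caps the speedup you get from adding cores.
def speedup(cores, serial_fraction):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for n in (2, 4, 8, 10):
    print(f"{n} cores -> {speedup(n, 0.05):.2f}x")
# With even 5% serialized work, 8 cores buys ~5.9x rather than 8x.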
 
In the meantime, more results have been uploaded to Geekbench, and they are more aligned with Apple's claim of a 70% CPU performance increase. Most of the results are around 12,500 now in multi-core.

So, scores on par with an i9-7940X, which has 14 cores and a TDP of 165W, compared to ~30W for the 10 cores on the M1 Pro/Max.
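In rough performance-per-watt terms, using only the numbers quoted above (TDP is a loose proxy for actual package power, so treat this as illustrative):

# Points-per-watt at roughly equal multicore scores.
score = 12500
print(f"i9-7940X:   {score / 165:.0f} pts/W")   # 14 cores, 165 W TDP -> ~76 pts/W
print(f"M1 Pro/Max: {score / 30:.0f} pts/W")    # 10 cores, ~30 W     -> ~417 pts/W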
 
Yes, starting at $6,999... I don't call that affordable by any means...

The current Mac Pros start at $5,999.

Invest 0.1% of your revenue to buy the tools you use to make money. That's the target market Apple is aiming at when they release a $6,000 machine: the ones making $100,000+ a month doing high-end production.
 
1700 single-core is great, but 11,000 multi-core is nowhere near what's possible on the Mac Pro 2019.

Are these scores (at 24 hz) legit?
 
Invest 0.1% of your revenue to buy the tools you use to make money. That's the target market Apple is aiming at when they release a $6,000 machine: the ones making $100,000+ a month doing high-end production.
That seems to be a favorite practice of those who want to disparage a product: pump it up to stratospheric levels waaayyy beyond what any normal consumer needs and then complain about the price.

Tell me ... do you currently have an 8 TB 7.4GB/sec SSD in your laptop?

Why not?

How about 64 GB of 400GB/sec RAM?

No?

Then why did you feel the need to spec this unit up to those heights, when you didn't do that with the unit you own?
 