"Yes, it is not properly optimized for Apple silicon. But it still is a great platform for pushing devices to their max power."
Actually, that is specifically what Cinebench R23 on Apple silicon doesn't do.
"Actually, that is specifically what Cinebench R23 on Apple silicon doesn't do."
True.
I think the remarks are that Cinebench is heavily optimized for Intel x86-64 and not so much for Apple silicon. I'm not sure why you think those remarks are controversial?
People aren't saying it's horrible, they're saying it's not properly optimized for Apple Silicon, and thus should not be used as a cross-platform comparator. If we compare it to more platform-neutral benchmarks like SPEC or Geekbench, it appears that Cinebench suffers about a 10% AS-specific penalty.
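For what it's worth, here is a minimal sketch of how a figure like that ~10% penalty could be estimated: compare the Apple-silicon/x86 score ratio on Cinebench against the same ratio on a platform-neutral benchmark such as SPEC or Geekbench. The numbers below are hypothetical placeholders, not measured scores.

```python
# Minimal sketch of estimating a benchmark-specific penalty by comparing
# Apple-silicon/x86 score ratios across benchmarks.
# All numbers are hypothetical placeholders, not measured results.

neutral_ratio = 1.00    # AS/x86 ratio on a platform-neutral benchmark (assumed)
cinebench_ratio = 0.90  # AS/x86 ratio on Cinebench R23 (assumed)

# How much lower Cinebench rates the same Apple-silicon part relative to the
# neutral baseline, i.e. the penalty attributable to the benchmark itself.
penalty = 1 - cinebench_ratio / neutral_ratio
print(f"Cinebench-specific penalty: {penalty:.0%}")  # prints "10%"
```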
"I believe @maflynn is directing his comments at me"
There was no one person I was directing it to, just my general observation that when Cinebench is mentioned, there are people who seem to want to come in and poo-poo the idea of it.
"when Cinebench is mentioned, there are people who seem to want to come in and poo-poo the idea of it."
As they should, if Cinebench is mentioned as a cross-platform comparison. Trying to correct misleading information is a good thing and not something to be ridiculed.
"M2 Max is going to have more RAM than 64 gigabytes"
Cool, that would be a welcome change. Thanks for sharing.
"Another source said the single-core performance is 2k+ while the multi-core performance is 24-25k+. Which one should I trust?"
How could the multi-core performance reach up to 24-25k? Are you talking about Geekbench?
"M2 Max is going to have more RAM than 64 gigabytes"
Yes, we already know it will have up to 96GB (4 x 24GB).
"Cue the remarks about Cinebench being a horrible product, 3, 2, 1!"
No, you just don't get it. Cinebench is simply not optimized for Apple silicon, and therefore the results are not meaningful, period. It is one thing to be able to run on Apple silicon; it is a completely different thing to be optimized for it. As an example, one clown on macforums claimed that Apple silicon was not as good at running Handbrake because he refused to use the optimized path via VideoToolbox, since those are hardware encoders/decoders. What this clown didn't factor in is that Intel silicon also has embedded hardware encoders/decoders, which he allowed to be used. So: optimized for Intel, not optimized for Apple silicon. Whoa, that is meaningful. Not, Lakshimash.
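To make that Handbrake point concrete, here is a rough sketch of what a fair versus unfair comparison looks like when driving HandBrakeCLI from Python. The encoder names are my assumption based on recent HandBrake builds (vt_h265 for VideoToolbox on Apple silicon, qsv_h265 for Intel Quick Sync, x265 for pure software); check `HandBrakeCLI --help` on your own machine before relying on them.

```python
# Sketch of the Handbrake comparison described above, driven through
# HandBrakeCLI. Encoder names are assumptions from recent HandBrake builds;
# verify what your build offers with `HandBrakeCLI --help`.
#   x265     - software HEVC encoder (runs on the CPU cores)
#   vt_h265  - Apple VideoToolbox hardware encoder (macOS / Apple silicon)
#   qsv_h265 - Intel Quick Sync hardware encoder (Intel machines)
import subprocess

def encode(src: str, dst: str, encoder: str) -> None:
    """Run a single HandBrakeCLI encode with the given video encoder."""
    subprocess.run(
        ["HandBrakeCLI", "-i", src, "-o", dst, "-e", encoder, "-q", "22"],
        check=True,
    )

# A fair cross-platform test uses the hardware path on both machines, or the
# software path on both -- not hardware on one side and software on the other.
encode("input.mov", "software.mp4", "x265")      # software path, any platform
encode("input.mov", "hardware.mp4", "vt_h265")   # hardware path on Apple silicon
```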
"Yes, we already know it will have up to 96GB (4 x 24GB)."
That's good. I am happy you know that.
"M2 Max will offer 24 GB, 32 GB, 48 GB, 64 GB, and 96 GB RAM options, according to my sources."
I very much doubt that Apple will go with less than the current M1 Max (32GB), but 64GB and 96GB are a given. Perhaps they'll offer a 48GB option.
"M2 Max will offer 24 GB, 32 GB, 48 GB, 64 GB, and 96 GB RAM options, according to my sources."
I wrote 24 GB by mistake. Please don't mind it. 😐
"How could the multi-core performance reach up to 24-25k? Are you talking about Geekbench?"
My stupid brain confused the two. Yes, it is Geekbench.
"anything we don’t know… like, mmh, new SoC features?"
The M2 Max may use TSMC's 3nm technology.
"People aren't saying it's horrible, they're saying it's not properly optimized for Apple Silicon, and thus should not be used as a cross-platform comparator. If we compare it to more platform-neutral benchmarks like SPEC or Geekbench, it appears that Cinebench suffers about a 10% AS-specific penalty."
Agreed, but I'll up the ante. I am perfectly willing to say that Cinebench is a horrible benchmark. It measures one very narrow and mostly irrelevant thing: how fast Intel's Embree raytracing library renders exactly one canned scene (supplied as part of the benchmark).
"Integrated graphics: Apple M2 Max GPU, GPU boost clock: 1398 MHz
Fabrication process: 5 nm
Instruction set: ARMv8
Not sure this is accurate."
This seems plausible: the base M2's GPU also boosts to 1398 MHz, and the M2 Max supposedly uses the same LPDDR5 as the M1 Max. However, if the clock speed really is the same, I'm worried it isn't the ray-tracing GPU from the A16. They probably just copied the clock speed from the M2, since they couldn't have measured it.
"I'd be disappointed. Where's ARMv9 with SVE? Although to be fair, they'd probably use the Sawtooth cores from the A16."
I wouldn't take anything from a site like that at face value. It could all be made up; how could you know?