If I were writing something like Rosetta, I probably wouldn't bother preserving single-core execution and would instead try to optimize by executing across multiple cores where possible as part of my dynamic recompile. Are we sure Rosetta isn't doing that?
Single-core execution? Must be really, really old applications. I'd be concerned if I had an application that didn't support modern (as in 10-year-old) technology and was still running. How about security, do the developers ignore that too? Maybe I'm missing your point.
 
I'm talking about multicore score / number of cores. Multicore is the only measure of relevance here.

EDIT: Like this:

Mac mini (Late 2020)
Apple M1 @ 3.2 GHz (8 cores) 7643

Mac mini (Late 2018)
Intel Core i7-8700B @ 3.2 GHz (6 cores) 5476

7643 / 8 = 955
5476 / 6 = 913

A 5% difference for a machine that is 2 years newer. So what is the hoopla about, again??
No, you will see the real-life performance differences. Hint: they are far better than 5% in anything you will do.
 
No, you will see the real-life performance differences. Hint: they are far better than 5% in anything you will do.
Based on what? Those measly 16 GB will make it much slower in practice for real life performance, that is, encoding video and such.
 
Based on what? Those measly 16 GB will make it much slower in practice for real life performance, that is, encoding video and such.
Based on the Intel Mac mini with 16 GB of RAM vs. the Developer Transition Kit Mac mini with 16 GB of RAM and an A12Z.
You will see. Don't take my word for it; you need to see for yourself. Unified memory is blazing fast, with no need for an external bus or anything else.
But if you know you need 32 or 64 GB of RAM, no wonder Apple kept the Intel Mac mini for that.
 
I'm talking about multicore score / number of cores. Multicore is the only measure of relevance here.

EDIT: Like this:

Mac mini (Late 2020)
Apple M1 @ 3.2 GHz (8 cores) 7643

Mac mini (Late 2018)
Intel Core i7-8700B @ 3.2 GHz (6 cores) 5476

7643 / 8 = 955
5476 / 6 = 913

A 5% difference for a machine that is 2 years newer. So what is the hoopla about, again??

So you know it's only 4 performance cores, right? And they're doing it with less power.

Your Mac mini still works fine, but it's 2020 now; I don't know why you don't find this exciting.
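
For anyone who wants to double-check that division, here's a minimal Swift sketch of the per-core arithmetic from the quoted post (same Geekbench 5 totals; note it counts the 4 efficiency cores as full cores, which is exactly the caveat above):

Code:
import Foundation

// Geekbench 5 multi-core totals from the quoted post, divided by total core count.
let m1PerCore = 7643.0 / 8.0                                // ≈ 955
let i7PerCore = 5476.0 / 6.0                                // ≈ 913
let differencePercent = (m1PerCore / i7PerCore - 1) * 100   // ≈ 4.7%

print(String(format: "M1 per core: %.0f", m1PerCore))
print(String(format: "i7-8700B per core: %.0f", i7PerCore))
print(String(format: "Difference: %.1f%%", differencePercent))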
 
Why do people keep spreading that message? Linux has ARM support and the M1 can easily be supported. Microsoft could choose to support Windows on it. The T2 allows disabling its boot volume signature checks. It’s no more “closed” than the PPC version of Macs were.
So will it be possible to dual-boot Linux?
 
Perhaps I'm going back a bit too far, as I was amazed, as was the whole industry, at the Voodoo 1 add-on PC card for 3D graphics.
I bought a Voodoo 1 and then, following that, a Voodoo 2.
I remember those days as well, although my first GPU was I believe a Rage Pro(?) in a PM 6500 for video capture, which I guess came with a Rage II GPU onboard as well.

Something I am wondering, though: if we take the latest RTX 3080 GPUs from Nvidia, they have 28 billion transistors just for that monster GPU, whilst the whole M1 chip has only around half that for everything, coming in at 16 billion.

Now I know numbers are not everything, but these products seem worlds apart.
They probably seem like worlds apart because they are?

Again, the M1 is a chip with 4 performance cores (plus 4 efficiency cores) and a TDP of 15-20 W, targeted at ultralights and $700 computers at the bottom of Apple's range; the RTX 3080 FE lists for $700 by itself and has a TDP of 320 W. Two entire Mac minis draw less than that at peak rated power.

The M1's GPU cores are competing with Intel's Iris Xe, not a high-end, dedicated, power-hungry GPU targeted at serious gamers. They basically have nothing to do with each other, and while you can certainly run the same benchmark on both, it's not particularly meaningful.

There's a reason the M1 isn't going into the 16" MBP or 27" iMac or iMac Pro--they are different machines with different needs. The M1X or P1 or whatever ends up in them will have a different thermal envelope than the M1, will have a different onboard GPU, and may support an external GPU as well.

We'll have an idea of where Apple is with low-power GPU performance once we see more M1 benchmarks, but no one in their right mind would expect the M1 to be competing with a 320 Watt GPU any more than they'd be expecting it to compete in multi-core performance with a 280 Watt 64-core Threadripper.

Given that I just bought an iMac that has a built-in GPU with 16GB of dedicated graphics RAM, thousands of shading units, and a TDP of 130W, and Apple sells Pro computers with significantly higher performance GPUs than that, there's a reason the M1 isn't going into the higher-end Macs. For the same reason that Apple isn't having the M1 replace a 10+ core CPU (although the performance is impressively close thanks to the staggering single-core performance), or trying to have it satisfy users in the market for computers that can handle 128GB of RAM, it's not directly competing with bigger-iron GPUs.

It's a pretty safe bet that whatever ends up in the high-end Macs will have at least comparable GPU performance. I personally suspect that, at least for the foreseeable future, that's going to involve an outboard NVIDIA or AMD GPU, but Apple has more money in the couch cushions than AMD's entire market cap, so they are certainly capable, in theory, of cranking out a chip or series of chips with an exotic architecture and a vast number of GPU cores if that were the goal.
 
Wow... I haven't been this excited for new MacBooks in years. It will be extremely exciting to try them! :D (At my company we have ordered some MacBook Pro M1 and Mac mini M1 units -- we need test devices for our iOS apps :p )
 
I'm talking about multicore score / number of cores. Multicore is the only measure of relevance here.

EDIT: Like this:

Mac mini (Late 2020)
Apple M1 @ 3.2 GHz (8 cores) 7643

Mac mini (Late 2018)
Intel Core i7-8700B @ 3.2 GHz (6 cores) 5476

7643 / 8 = 955
5476 / 6 = 913

A 5% difference for a machine that is 2 years newer. So what is the hoopla about, again??
I dunno, $400 cheaper for a comparable config is a good reason to start with.

Graphics performance better than an RX 570 or GTX 1050 Ti is another.

The multicore performance is only going to be useful in certain apps that can take advantage of it; single-core is far more meaningful for the vast majority of applications in everyday use, and it gains massively.

And this is only the lower-end quad-core i3 Mini replacement, I'd guess, seeing as that's what they were comparing it to.
 
I depend on Windows XP applications to do my job. It's obviously not my choice, but I would much rather run a virtualized XP than keep a physical XP machine around. Sometimes you have a million-dollar machine that can't easily be replaced and happens to run Windows XP and spit out data in a proprietary format. Life's complicated.

I also use AutoCAD frequently. I'm in the small minority of users who have very specialized software needs that determine my flexibility. I've loved the long run of Intel Macs, but even before that I was running Windows on PPC using Virtual PC (very, very slowly). I hope something like Virtual PC comes along and lets me keep running my Windows XP code.
Time to get an account on Azure and create a Windows VM there. You only pay for it when it is switched on. You can use it with your Mac and your iPad. Or with a browser. Or even Android. From anywhere.

Also, if you depend on Windows XP apps, some people at your company are simply not doing their job.
 
[attached benchmark screenshots]
 
Those benchmarks are clearly rigged...

On single core... OK, OK.

Then where are my other 7 cores?

The M1 is an amazing chip, but do not try to scam people, please.

An amazing chip for a mobile/tablet.
 
Time to get an account on Azure and create a Windows VM there. You only pay for it when it is switched on. You can use it with your Mac and your iPad. Or with a browser. Or even Android. From anywhere.

Also, if you depend on Windows XP apps, some people at your company are simply not doing their job.

You can have a Windows system and just VNC into it when you need to run Windows. Or you can run Windows and VNC into an M1 system.
 
I'm talking about multicore score / number of cores. Multicore is the only measure of relevance here.

EDIT: Like this:

Mac mini (Late 2020)
Apple M1 @ 3.2 GHz (8 cores) 7643

Mac mini (Late 2018)
Intel Core i7-8700B @ 3.2 GHz (6 cores) 5476

7643 / 8 = 955
5476 / 6 = 913

A 5% difference for a machine that is 2 years newer. So what is the hoopla about, again??
Why not compare with Tiger Lake instead of a 2-year-old machine?

That would be the fair comparison.
 
Based on what? Those measly 16 GB will make it much slower in practice for real life performance, that is, encoding video and such.
Memory has little to do with encoding. It appears from your posts that you have a pretty minimal understanding of the computational methods modern computers use for different scenarios. Knowing how integrated memory, TDP, and “GPU” cores are used to optimize common scenarios would be helpful if you want a better appreciation of this.

One needs only look at what's possible on an iPad Pro and then extrapolate to the higher TDPs available on an M1 (then extrapolate further to the TDPs of common desktop and server configurations) to see how big of a deal this is.
 
The fastest Tiger Lake Geekbench 5 result is 1551 single-core / 5473 multi-core.

Update: it's on the first page. The results aren't sorted. This is a fairly high score; most scores are lower.
Well... you are right.

Tiger Lake-H (the 8-core Tiger Lake chip) is not yet on the market; it will release between January and March along with Rocket Lake.

But dev benchmarks have already shown the chip.
 
"Width": what width are you referring to? There is nothing unusual about the execution width. It's, in fact, identical to that used in, say, the Athlon 64 and Opteron. (I know, because I owned the integer execution unit for the first of those designs.)

I thought Kai was the owner of the EX blocks on these projects.
 
Your personal attacks are uncalled for. Feel free to disagree with me, but based on my more than a decade of designing CPUs, I was telling everyone this would happen for a year, and everyone disagreed.

Now that there is proof, they've fallen back to either not believing the proof or complaining about my attitude.

You seem to know your stuff. How would they scale this up now? Seeing as it's already got 8 cores, for the next step up would they just go to 16 cores and then 32 cores for the high-end machines? Or would multiple processors be the way forward? When you scale up cores like that, apart from heat, are there other foreseeable problems or reasons why they shouldn't? I assume if they're still running this at 5 W, then doubling the cores only makes it 10 W and still easily heat-controllable, so no issues there for, say, 24" iMacs, and then 32 cores for desktops and large iMacs. Or do you see another way for them to ramp the power up?
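
Not an answer, but just to make the assumption in that last question explicit, here is a tiny Swift sketch of the naive linear power-scaling math. The ~5 W figure and the linear scaling are the question's assumptions, not measured values; real SoCs don't scale that cleanly once you add fabric, memory controllers, and higher clocks.

Code:
// Hypothetical: assume the 8-core part draws about 5 W and that power
// scales linearly with core count. Real chips do not scale this cleanly.
let assumedWattsFor8Cores = 5.0

for coreCount in [8, 16, 32] {
    let estimatedWatts = assumedWattsFor8Cores * Double(coreCount) / 8.0
    print("\(coreCount) cores -> roughly \(estimatedWatts) W under this naive assumption")
}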
 
What about games? If I run my Blizzard Diablo 3 (Intel) launcher, will it trigger Rosetta translation as well?
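
If you want to check for yourself whether a process is actually being translated, here's a small Swift sketch using the sysctl.proc_translated flag that macOS exposes on Apple silicon (run it from inside the process you care about; 1 means Rosetta 2 translation, 0 means native, and the key doesn't exist on Intel Macs):

Code:
import Darwin

// Returns true when the current process is being translated by Rosetta 2,
// false when running natively (or when the sysctl key doesn't exist, e.g. on Intel Macs).
func isTranslatedByRosetta() -> Bool {
    var translated: Int32 = 0
    var size = MemoryLayout<Int32>.size
    guard sysctlbyname("sysctl.proc_translated", &translated, &size, nil, 0) == 0 else {
        return false
    }
    return translated == 1
}

print(isTranslatedByRosetta() ? "Running under Rosetta 2" : "Running natively")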
 
Well... you are right.

Tiger Lake-H (the 8-core Tiger Lake chip) is not yet on the market; it will release between January and March along with Rocket Lake.

But dev benchmarks have already shown the chip.

Interesting that the benchmarks showed up early this morning. Maybe in response to Apple benchmarks rather than AMD benchmarks.

On the MacBook Pro 13, Apple is giving you a faster CPU, two fewer USB ports, and no RAM upgrade option, and it costs $400 less. I think Apple no longer needs several other chips now either. How will Intel fare with this new pricing regime? Keep in mind that this will be Apple's low-end chip. These 8-core Tiger Lake CPUs aren't Intel's low-end, right?
 
Wow. It's like we went into the future by 5 years with these performance gains.
No, it is more like we have finally caught up. Apple was often criticized for its poor performance compared to its peer group in the desktop space. Intel dramatically influenced Apple's ability to compete in the Mac space through the release schedule of its x86 processors. Often, Intel would release a high-power version of a new chip before it released a power-optimized version. Since Apple does not use the high-power parts, it was always left waiting and fell behind the performance curve because of the wait.

With Apple silicon, all of Apple's hardware design innovations can really shine, because they will compare and compete more directly without the handicap of starting behind the performance curve. This is why the M1 is so game-changing for Apple.
 