This 16 GB of RAM is used much more efficiently; soon we'll know how much more!
This is crazy performance. Too bad there's only 16 GB RAM max for now ...
Single-core execution? Must be really, really old applications. I'd be concerned if I had an application that didn't support modern (as in 10-year-old) technology and was still running. How about security, do the developers also ignore that? Maybe I'm missing your point.
If I were writing something like Rosetta, I probably wouldn't bother preserving single-core execution and would instead try to optimize by executing across multiple cores where possible as part of my dynamic recompile. Are we sure Rosetta isn't doing that?
No, you will see the real-life perf and differences. Hint: it is far better than 5% in anything you will do.
I'm talking about multicore score / number of cores. Multicore is the only measure of relevance here.
Based on what? Those measly 16 GB will make it much slower in practice for real-life performance, that is, encoding video and such.
No, you will see the real-life perf and differences. Hint: it is far better than 5% in anything you will do.
Based on the Intel Mac mini with 16 GB RAM vs. the dev kit Mac mini with 16 GB RAM and an A12Z.
Based on what? Those measly 16 GB will make it much slower in practice for real-life performance, that is, encoding video and such.
I'm talking about multicore score / number of cores. Multicore is the only measure of relevance here.
EDIT: Like this:
Mac mini (Late 2020)
Apple M1 @ 3.2 GHz (8 cores) 7643
Mac mini (Late 2018)
Intel Core i7-8700B @ 3.2 GHz (6 cores) 5476
7643 / 8 = 955
5476 / 6 = 913
A 5% difference for a machine that is 2 years newer. So what is the hoopla about, again??
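For clarity, here is that per-core arithmetic spelled out in a small Python sketch; it uses only the Geekbench 5 multicore scores quoted above, nothing new:

```python
# Per-core comparison of the Geekbench 5 multicore scores quoted above.
m1_score, m1_cores = 7643, 8      # Mac mini (Late 2020), Apple M1
i7_score, i7_cores = 5476, 6      # Mac mini (Late 2018), Core i7-8700B

m1_per_core = m1_score / m1_cores          # ~955
i7_per_core = i7_score / i7_cores          # ~913
diff_pct = (m1_per_core / i7_per_core - 1) * 100

print(f"M1: {m1_per_core:.0f} per core")
print(f"i7-8700B: {i7_per_core:.0f} per core")
print(f"Difference: {diff_pct:.1f}%")      # roughly 5%
```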
So will it be possible to dual-boot Linux?
Why do people keep spreading that message? Linux has ARM support, and the M1 can easily be supported. Microsoft could choose to support Windows on it. The T2 allows disabling its boot-volume signature checks. It's no more "closed" than the PPC Macs were.
I remember those days as well, although my first GPU was, I believe, a Rage Pro(?) in a PM 6500 for video capture, which I guess came with a Rage II GPU onboard as well.
Perhaps I'm going back a bit too far, as I was amazed, as was all of the industry, at the Voodoo 1 add-on PC card for 3D graphics. I bought a Voodoo 1 and then, following that, a Voodoo 2.
They probably seem like worlds apart because they are?
Something I am wondering, though: if we take the latest RTX 3080 GPUs from Nvidia, they have 28 billion transistors just for this monster GPU, whilst the whole M1 chip has only around half that for everything, coming in at 16 billion. Now I know numbers are not everything, but these products seem worlds apart.
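As a rough sanity check, the ratio implied by the two transistor counts quoted above works out as follows (a sketch using only the figures from the post):

```python
# Transistor-count ratio from the figures quoted above.
rtx_3080 = 28e9   # RTX 3080 GPU die alone
m1_soc = 16e9     # the entire M1 SoC (CPU, GPU, Neural Engine, I/O, ...)

print(f"RTX 3080 / M1: {rtx_3080 / m1_soc:.2f}x")                    # ~1.75x
print(f"M1 as a share of the 3080 budget: {m1_soc / rtx_3080:.0%}")  # ~57%
```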
I dunno, $400 cheaper for a comparable config is a good reason to start with.
I'm talking about multicore score / number of cores. Multicore is the only measure of relevance here.
I depend on Windows XP applications to do my job. It's obviously not my choice, but I would much rather run a virtualized XP than keep a physical XP machine around. Sometimes you have a million-dollar machine that can't easily be replaced that happens to run Windows XP and spit out data in a proprietary format. Life's complicated.
I also use AutoCAD frequently. I'm in the small minority of users who have very specialized software needs that determine my flexibility. I've loved the long run of Intel Macs, but even before that I was running Windows on PPC using VirtualPC (very, very slowly). I hope something like VirtualPC comes along and lets me keep running my Windows XP code.
Time to get an account on Azure and create a Windows VM there. You only pay for it when it is switched on. You can use it with your Mac and your iPad. Or with a browser. Or even Android. From anywhere.
Also, if you depend on Windows XP apps, some people at your company are simply not doing their job.
Why not compare with Tiger Lake instead of a 2-year-old machine? That would be the fair comparison.
Ahhh ... a 640K advocate. I wondered when we would hear this ...
I think it will have something to do with the lack of the 32 GB of RAM that so few people actually need.
Memory has little to do with encoding. It appears from your posts that you have a pretty minimal understanding of the computational methods modern computers use for different scenarios. Knowing how integrated memory, TDP, and "GPU" cores are used to optimize common scenarios would help if you want a better appreciation of them.
Based on what? Those measly 16 GB will make it much slower in practice for real-life performance, that is, encoding video and such.
Fastest Tiger Lake Geekbench 5 is 1551/5473.
Update: it's on the first page of results. They aren't sorted. This is a fairly high score; most scores are lower.
"Width": what width are you referring to? There is nothing unusual about the execution width. It's, in fact, identical to that used in, say, the Athlon-64 and Opteron. (I know, because I owned the integer execution unit for the first of those designs.)
Your personal attacks are uncalled for. Feel free to disagree with me, but based on my more than a decade of designing CPUs, I was telling everyone this would happen for a year, and everyone disagreed.
Now that there is proof, they've fallen back to either not believing the proof or complaining about my attitude.
Well... you are right.
Tiger Lake-H (the 8-core Tiger Lake chip) is not yet on the market; it will release between January and March alongside Rocket Lake, but dev benchmarks have already shown the chip.
No, it is more like we have finally caught up. Apple was often criticized for its poor performance compared to its peer group in the desktop space. Intel dramatically influenced Apple's ability to compete in the Mac space through the release schedule of its x86 processors. Often, Intel would release a high-power version of a new chip before it released a power-optimized version. Since Apple does not use the high-power parts, it was always left waiting and fell behind the performance curve because of the wait.
Wow. It's like we went into the future by 5 years with these performance gains.