If the current rumor about the "gaming" Mac is true, then I think that makes this AMD rumor more likely.


Apple might feel otherwise. Because Intel really sucks right now.
Wouldn’t count Intel out just yet

 
So, I have done some digging into Amazon's recently announced Graviton2 CPU, which has been debated for the last few pages.

It is based on ARM Neoverse Cores. https://www.arm.com/products/silicon-ip-cpu/neoverse/neoverse-e1

And this is the point where the whole theory of ARM being faster than x86 falls apart.

Neoverse cores are 15-20% faster per core than last gen. They are still the ARMv8 architecture, the same as the Graviton 1 cores. They are designed for higher scalability, however, up to 128 cores in one chip, but they lack Hyper-Threading/SMT.

So the IPC of Graviton 2 CPUs rose at best 20-25%, including any custom work done on the CPU. Most of the benefits likely come from dedicated hardware on the chip, which Amazon most likely included in the transistor budget. 30 billion transistors on 7 nm and only 32 cores? That's roughly 2x more transistors than the 32-core 7 nm Threadripper 3970X. Where did they go? The pipeline is not long enough, considering the clocks of the CPU, so Amazon hasn't spent them on increasing clocks. Most likely they spent them on dedicated ASICs on the chip for specific tasks like machine learning or image processing, which is why Amazon claims it is 20% faster than an unspecified Intel CPU under an unspecified testing methodology. What if they tested the single-core performance of a 180 W TDP CPU locked down to 105 W TDP and compared it to a CPU designed to operate at that TDP? How would clock speeds behave on those CPUs, hm?
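For what it's worth, the per-core transistor arithmetic in that comparison is easy to sanity-check. The sketch below uses the 30 billion figure from Amazon and infers the Threadripper count from the "2x" claim above; neither number is an official die-level spec:

```python
# Back-of-the-envelope transistor budget, using the figures from the post.
# The 3970X count is inferred from the "about 2x" claim, not an official
# AMD number.
graviton2_transistors = 30e9     # ~30 billion, per Amazon
graviton2_cores = 32

threadripper_transistors = 15e9  # assumed from the "2x" comparison
threadripper_cores = 32

per_core_g2 = graviton2_transistors / graviton2_cores
per_core_tr = threadripper_transistors / threadripper_cores

print(f"Graviton2:   {per_core_g2 / 1e6:.1f}M transistors/core")
print(f"3970X (est): {per_core_tr / 1e6:.1f}M transistors/core")
print(f"Ratio: {per_core_g2 / per_core_tr:.1f}x")
```

On these assumptions, roughly half a billion "extra" transistors per core are unaccounted for, which is the gap the post attributes to on-die accelerators.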

And for last thing: I suggest reading this post on Phoronix:

I hope this ends this ridiculous debate about ARM vs x86. There is nothing more powerful in high performance than x86, and for the foreseeable future there won't be. Period.
 
This would be cool.

BOING! I just got a stiffy reading the tweet quoted to substantiate this article!

If Apple doesn’t make good on this then it gives its sweet blessing to the Hackintosh AMD market ;)

Apple macOS users are still Apple users and still have other Apple gear and software that Apple and partners sell. I see a win-win.
 
If the current rumor about the "gaming" Mac is true, Apple might feel otherwise. Because Intel really sucks right now.

I cannot necessarily disagree with that assessment. They are stuck in a molasses (or morass, take your pick) of their own making right now, without competent, apolitical management to extract themselves from said molasses.

The flip side is that they are selling pretty much all the CPUs and such that they can produce right now, so there’s that.
 
Clock speeds, throughput of the cores, number of threads, available memory bandwidth, and the fact that Amazon used sketchy testing methodology to selectively show a performance advantage over an Intel CPU in a specific scenario, not overall average performance. Those are red flags obvious to anybody who has at least a little understanding of high performance.

Don't buy Amazon's marketing. They have to sell their product. I haven't seen any real-world, high-performance application that actually runs well on ARM cores. And I mean ARM cores, not dedicated hardware for a specific task that is in the SoC.

If I see ARM cores that ACTUALLY are faster than x86, believe me, I will be the first to jump ship, just like I am constantly raving on this forum about AMD, because they are way faster than Intel. I'm not stupid enough to ignore high performance.

But that ship hasn't sailed. It appears, based on how x86 progresses, it never will.


And how fast is it compared to anything x86, CORE FOR CORE?

Core for core, both the A13 and the Graviton 2 are already much faster than any Xeon right now.
I'm not buying their marketing. I'm buying their product. My job includes validating and maximizing performance per dollar using AWS EC2. I have real data running on servers. A1 was already cost-effective for our Java server workload, and the only problem was that Amazon didn't support Elastic Beanstalk on them, so we didn't use them.

This time the M6g has much improved performance and becomes a full M/C/R product line instead of the special A1 series, and with Elastic Beanstalk support this will be the mainstream general-purpose server product. This is super attractive for a Java server customer.

I do not think nginx/Java performance is related to any ASIC. And if it were, I would 100% appreciate that.

Right now I'm under NDA so I cannot give you more info, but you can apply for preview testing right now as a customer. You can see the performance yourself running whatever you want. Or just test A1 right now and see how it approaches 60% of the performance of a Xeon m5 instance in a single-thread test.
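If anyone wants to reproduce a crude single-thread comparison across instance types without my NDA'd data, a tiny timed loop like the one below (a stand-in for illustration, not our actual Java workload) can be run on an a1 and an m5 instance and the printed scores compared:

```python
import time

def single_thread_score(duration=2.0):
    """Count loop iterations completed in a fixed wall-clock window.
    A crude single-thread CPU proxy, not a real server benchmark."""
    end = time.perf_counter() + duration
    ops = 0
    while time.perf_counter() < end:
        ops += 1
    return ops / duration  # iterations per second

# Run the same script on both instance types and compare the scores.
print(f"{single_thread_score(0.5):,.0f} iterations/sec")
```

An interpreter loop mostly measures scalar integer throughput at one thread, so treat it as a rough ordering of cores, not a substitute for running your actual workload.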

And as I told you, SMT does not increase performance; SMT is a way of optimizing utilization. Lacking SMT just means the design doesn't benefit from SMT. If it benefited a lot from SMT, like the super-long-pipeline Intel CPUs do, they would implement SMT on it, even SMT4 or SMT8, when needed.

And there's no "hyper-threaded thread" for Intel. All threads are equal and all have access to the full core resources, so single-thread tests are reliable on HT-enabled Intel CPUs.

And remember, Intel is still on 14 nm. Graviton2 and AMD EPYC 2 are both on 7 nm. It would be hilarious if they weren't a lot faster than Intel's crap.

Never say never. Intel has had no IPC improvement for five years already, and you suggest others will never catch up with them?
 
Apple should have bought AMD a couple of years ago when they were at $2 per share. It would have given them even more control over the Macs' graphics and now (possibly) processors.

and make a huge profit from the PC market as well ;)
No. If I had to guess, Apple is rethinking that after the ARM32 -> ARM64 transition on iOS, as well as the dropping of 32-bit apps in Catalina. There's been a *lot* of pushback due to some people not being able to update. Not unwilling, but unable to. Currently, 0% of macOS apps are ARM-compatible; there would need to be a very long transition period.

Do you think 14 months will suffice?
Apple did the PowerPC to x86 transition in about that time, quite quickly, with less than HALF the developers it has today!!
 
To run what games?

And high-end gaming isn't exactly a large market. I don't see Apple upending their hardware configs just to offer an expensive gaming computer that few would buy and that wouldn't even be running macOS.

1/3 of the market is at 1440p or higher today.

That number is only going to increase. 1440p cards are $350, which is mid-range. 1440p monitors are $200. A lot of 4K 60 Hz monitors are $350. There was a 6-month period in the 10.2 era when Apple took gaming seriously. It could happen again.
 
No. Apple and Nvidia are locked in a very important struggle for both of them. Nvidia wants direct access to the hardware and Apple won't allow that. Apple wants Nvidia to use Apple's frameworks only. Neither shows any signs of giving in on their respective positions.

This is good for all customers.
We need some big name to say no to NVIDIA's B.S. that's called CUDA.

CUDA becoming a requirement for GPU computing was purely caused by researchers with huge budgets. NVIDIA rode that momentum to eliminate all open standards. This is really bad for customers. Other solutions are at least equally capable, and you have full control of your code instead of vendor lock-in.
 
You said no they don't, and then yes they did, and then explained that they beat them but not by much, so what is it then?
I didn't contradict anything. I said they decimate Intel in multi-threaded work ... and single-core is only marginally better with Intel ... but for gaming ... anything 2K and above is negligible. Your reading comprehension is not my problem.
I am not arguing, but there are people who say that ARM chips are just as fast as desktop chips but have the advantage of long battery life and cooler operation. Don't shoot the messenger.
They're wrong. I will withhold shooting you sir.
 
So, I have done some digging into Amazon's recently announced Graviton2 CPU, which has been debated for the last few pages.

It is based on ARM Neoverse Cores. https://www.arm.com/products/silicon-ip-cpu/neoverse/neoverse-e1

And this is the point where the whole theory of ARM being faster than x86 falls apart.

Neoverse cores are 15-20% faster per core than last gen. They are still the ARMv8 architecture, the same as the Graviton 1 cores. They are designed for higher scalability, however, up to 128 cores in one chip, but they lack Hyper-Threading/SMT.

So the IPC of Graviton 2 CPUs rose at best 20-25%, including any custom work done on the CPU. Most of the benefits likely come from dedicated hardware on the chip, which Amazon most likely included in the transistor budget. 30 billion transistors on 7 nm and only 32 cores? That's roughly 2x more transistors than the 32-core 7 nm Threadripper 3970X. Where did they go? The pipeline is not long enough, considering the clocks of the CPU, so Amazon hasn't spent them on increasing clocks. Most likely they spent them on dedicated ASICs on the chip for specific tasks like machine learning or image processing, which is why Amazon claims it is 20% faster than an unspecified Intel CPU under an unspecified testing methodology. What if they tested the single-core performance of a 180 W TDP CPU locked down to 105 W TDP and compared it to a CPU designed to operate at that TDP? How would clock speeds behave on those CPUs, hm?

And for last thing: I suggest reading this post on Phoronix:

I hope this ends this ridiculous debate about ARM vs x86. There is nothing more powerful in high performance than x86, and for the foreseeable future there won't be. Period.

This is garbage. I designed many commercial CPUs, with several different architectures, and I designed the x86-64 integer 64-bit instructions. There is nothing inherently better about x86 than ARM that causes it to be any faster. You pick an architecture, then you design the CPU, and if you are a good designer and targeting high speed, you get high speed.

Not everyone is a good designer.
 
Apple dumped Motorola and IBM 68000 CPUs like a hot potato when they couldn't keep up with Intel. There's no reason to think Apple wouldn't do the same to Intel if they can't keep up.

Just some clarity:

Apple dumped the 68000 CPU when it couldn't keep up with the competition. Motorola and IBM formed a major partnership, PowerPC (G3-G4), and Apple stayed in that partnership, purchasing those CPUs, for over a decade. Motorola dropped the CPU business or separated it from their main focus. Apple stayed with IBM for PowerPC until Intel's Core CPUs offered better performance per watt back in 2004.

You left out a huge middle there.
 
This would be good but not great, but it does kind of make sense since they use AMD graphics. We need ARM processors in our Macs, though. With the A12X/A13X processors performing as well as 7th- and 8th-generation Intel processors, imagine how they would perform with increased power and heat dissipation!!

Oh, and go to Nvidia. AMD makes good processors, but Nvidia makes better graphics.
 
Isn't this what sank the Itanic? Itanium was native VLIW and my understanding was that x86 compatibility was dreadfully difficult to achieve while maintaining performance of the VLIW engine.
That's why you don't emulate, you recompile.

The hard issue with Itanium was detecting data dependencies. VLIW is kind of strange. It is single threaded, but it is explicitly parallel. You are lining up a large number of instructions to all execute at the same time. Does the third instruction need data from the second? Move the third instruction back a few batches, so the dependency can be resolved. The other issue is generational. The current version of the chip might do 32 operations at the same time. The next one might do 48. This is why you need to do the last and most complex stage of the compile on installation. Luckily, compilers have become much smarter in the last few years.
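The dependency-aware bundling described above can be sketched in a few lines. This toy scheduler (instruction names and the dependency format are invented for illustration) greedily packs instructions into fixed-width bundles, never issuing an instruction in the same bundle as, or earlier than, its producer:

```python
# Toy VLIW bundling: pack instructions into fixed-width bundles while
# respecting data dependencies. Names and data format are illustrative only.
def schedule(instrs, deps, width):
    """instrs: names in program order; deps: {instr: set of producers};
    width: issue slots per bundle. Greedy earliest-bundle placement."""
    bundle_of = {}  # instr -> index of the bundle it landed in
    bundles = []    # list of bundles, each a list of instr names
    for ins in instrs:
        # Earliest legal bundle is one past the latest producer's bundle.
        earliest = max((bundle_of[d] + 1 for d in deps.get(ins, ())), default=0)
        b = earliest
        while b < len(bundles) and len(bundles[b]) >= width:
            b += 1  # that bundle is full; slide to a later one
        while len(bundles) <= b:
            bundles.append([])
        bundles[b].append(ins)
        bundle_of[ins] = b
    return bundles

# i3 reads i2's result, so it must land in a later bundle than i2.
prog = ["i1", "i2", "i3", "i4"]
deps = {"i3": {"i2"}}
print(schedule(prog, deps, width=2))  # [['i1', 'i2'], ['i3', 'i4']]
```

Re-running the same program with a different `width` produces different bundles, which is exactly why the final scheduling pass belongs at install time, once the target machine's issue width is known.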
 
This would be good but not great, but it does kind of make sense since they use AMD graphics. We need ARM processors in our Macs, though. With the A12X/A13X processors performing as well as 7th- and 8th-generation Intel processors, imagine how they would perform with increased power and heat dissipation!!

Oh, and go to Nvidia. AMD makes good processors, but Nvidia makes better graphics.

No, we don't need ARM in our Macs. ARM doesn't touch what AMD Zen-series processors can do.
and make a huge profit from the PC market as well ;)


Do you think 14 months will suffice?
Apple did the PowerPC to x86 transition in about that time, quite quickly, with less than HALF the developers it has today!!

NeXTSTEP had been running on x86 since 1993. The OS X beta was x86 out of the gate. This is different.
 
No, we don't need ARM in our Macs. ARM doesn't touch what AMD Zen-series processors can do.
Yes it does. ARM is an architecture. Zen is an implementation. It’s perfectly possible to make an implementation of ARM that blows Zen away. I designed processors at AMD, so I know the ladies and gentlemen who could do it :)
 
Yes it does. ARM is an architecture. Zen is an implementation. It’s perfectly possible to make an implementation of ARM that blows Zen away. I designed processors at AMD, so I know the ladies and gentlemen who could do it :)
Are they at Apple now or still at AMD?
 
Just an FYI to anyone unaware of the "hackintosh" (running macOS on standard "PC" hardware) scene: it's been possible to run macOS on AMD Ryzen CPUs with very little extra work (vs Intel) for over a year now. The performance is good and everything generally works about as well as an Intel hackintosh, aside from a few edge cases (things that require the Intel QuickSync video encoder/decoder, for example, but that is already being replaced in Macs by Apple's T-series ARM coprocessors).

Basically, for anyone worried about an architecture change/fragmentation/old software not working etc, that's not what this is. Intel and AMD both produce x86-64 chips that are mutually compatible. This is in no way like the move from PPC to x86 or like a prospective move from x86 to ARM would be.
I bought a PC with Ryzen solely to run Solidworks a year ago, and recently installed OS X just to try it, and it runs beautifully. The installation was a breeze. It can't run Adobe CC (it requires Intel), but I don't need it, and besides that it feels just like a real Mac.
 
Apple should have bought AMD a couple of years ago when they were at $2 per share. It would have given them even more control over the Macs' graphics and now (possibly) processors.

Unlikely, as buying a nearly dead, PC-focused company doesn't sound smart to the investors. Also, doing it in-house will never be as cheap as having multiple suppliers.
 
Unlikely, as buying a nearly dead, PC-focused company doesn't sound smart to the investors. Also, doing it in-house will never be as cheap as having multiple suppliers.
But if you have chipmakers in-house, they don't need to raise their prices in order to survive ... see Apple's A-chips.
 
Willingly switching from first place to second place. That’s what’s moronic about it.

You are stuck in the past if you think AMD is in 2nd place with CPUs. It would be stupid for Apple not to switch, since they are now faster and also cheaper than Intel.
Wouldn’t count Intel out just yet


1.84 TFLOPS seems very slow, tbh. If Apple wants high graphical performance, they'll add a dedicated GPU like the just-announced "RX 5700M", which does 7.93 TFLOPS for mobile devices.
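For context, those theoretical TFLOPS figures are just shader count × 2 ops per FMA × clock. The numbers below are chosen to match the 7.93 TFLOPS quoted above, not taken from AMD's spec sheet:

```python
def fp32_tflops(shader_units, clock_ghz):
    """Theoretical FP32 throughput: each shader ALU does one FMA
    (2 floating-point ops) per cycle."""
    return shader_units * 2 * clock_ghz / 1000

# Assumed figures, picked to reproduce the quoted 7.93 TFLOPS; verify
# against AMD's published specs before relying on them.
print(f"{fp32_tflops(2304, 1.72):.2f} TFLOPS")
```

The same formula shows why peak TFLOPS alone is a rough metric: it says nothing about memory bandwidth or how much of that peak real workloads sustain.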
 