Become a MacRumors Supporter for $50/year with no ads, ability to filter front page stories, and private forums.
So what does an announcement like this mean? Could there be new iMacs to preorder in two weeks, or does an announcement like this suggest that a new set of chips will be ready in a month, a quarter, or half a year?

Actually, it will take several months for these to show up in a Mac, unlike the MacBook series. And Intel is also working on 11th-gen 14nm CPUs this year.
 
£ for £, AMD is where it's at now. If you have an Intel 2500K, you can overclock it to still compete pretty well with everything up to the 9th gen. Every generation of Ryzen has brought real change, with Ryzen 4000 being a huge upgrade.

I bought a Ryzen 1600 AF (Zen+, 12nm) a couple of weeks ago for £85. £85 for 6C/12T, and it can overclock to 3.8GHz on the supplied cooler with no issue. The only Intel offering around the same price point worth talking about is the i3 9100, a 4C/4T 14nm chip with absolutely no future. The 6-core (6-thread) i5 9400 is £190! To find an Intel chip with at least 12 threads, you are looking at around 4x the price of the AMD part.

AMD would be perfect for Apple right now to fill the gap until the ARM chips are fully up to spec. With the buying power Apple has, the prices of the chips direct from AMD would be a fraction of retail pricing.

The current entry-level iMac 27" comes with an i5 8500, with the 8600 and 9600K as upgrade options. The jump from the 8600 to the 9900K (8C/16T, 16MB cache) is a £500 upgrade (£480 retail price). You can pick up a Ryzen 3800X (8C/16T, 36MB cache) for £285 on Amazon, so direct pricing would probably be closer to £200.
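The price argument in these posts is easy to make concrete. A minimal back-of-the-envelope sketch, using only the UK retail prices quoted in this thread (these are forum-quoted figures, not current market data):

```python
# Rough price-per-thread comparison using the UK retail prices quoted
# in this thread. Figures come from the posts above, not live pricing.
chips = {
    "Ryzen 1600 AF": {"price_gbp": 85,  "cores": 6, "threads": 12},
    "Core i3-9100":  {"price_gbp": 85,  "cores": 4, "threads": 4},   # "around the same price point"
    "Core i5-9400":  {"price_gbp": 190, "cores": 6, "threads": 6},
    "Ryzen 7 3800X": {"price_gbp": 285, "cores": 8, "threads": 16},
}

for name, spec in chips.items():
    per_thread = spec["price_gbp"] / spec["threads"]
    print(f"{name:14s} £{spec['price_gbp']:>3} -> £{per_thread:6.2f} per thread")
```

On these numbers the Ryzen 1600 AF works out to roughly £7 per thread versus over £21 for the i3 at the same price, which is the "4x the price" claim above restated per thread.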

Across the board, AMD offers far more for your money. Intel seems to have just given up, quite happy regurgitating the same old rubbish and cashing in from the likes of HP and Dell, who sell overpriced and underpowered laptops through high-street retail channels.
Just a question... I regularly see people commenting that AMD offers better performance than Intel for less money. So how come Apple doesn't use AMD chips? Is there a good reason?
 
How do you know? AMD is cheaper, as fast or faster, with less heat and lower power, and Apple supposedly has a good relationship with them on the video card side. They also do custom CPUs, which should intrigue Apple. I'd say there are Ryzen Macs inside Apple....



LOL. The OG Ryzen was close to Intel in IPC metrics. Close enough not to notice. The latest ones closed the gap and pulled ahead. For a modern OS, the core-count advantage matters much more. I've got an OG Ryzen 7 and it can run at 100% doing Folding@home and I don't notice it. It also blows through video encoding and anything else I throw at it. Reliability has been stellar, running 24/7 since Nov 2017.

Dude, read your own message. Anything with the words "close enough" and "closed the gap" says it all. Stop being a fanboy and listen to your own words. You're telling the truth; you just choose not to see the reality.
Just a question... I regularly see people commenting that AMD offers better performance than Intel for less money. So how come Apple doesn't use AMD chips? Is there a good reason?

Because it's not the truth. This has been the same AMD "selling point" for years... it's always a "cost factor" when mentioning AMD. Apple should stick to the best components, period, since Apple should be high-end. It's just too bad they aren't using Nvidia products; that's a bit of a fail in my book, but most likely AMD is simply the more willing partner to provide custom configurations for Apple.
 
Dude, read your own message. Anything with the words "close enough" and "closed the gap" says it all. Stop being a fanboy and listen to your own words. You're telling the truth; you just choose not to see the reality.


Because it's not the truth. This has been the same AMD "selling point" for years... it's always a "cost factor" when mentioning AMD. Apple should stick to the best components, period, since Apple should be high-end. It's just too bad they aren't using Nvidia products; that's a bit of a fail in my book, but most likely AMD is simply the more willing partner to provide custom configurations for Apple.
Are you living in a parallel universe without Ryzen?
 
I gotta say, ever since throwing an AMD CPU in my gaming PC, **** Intel. Really wish apple would experiment with an AMD Mac.

lol. I recall so many saying this way back in the AMD Athlon 1GHz CPU days, when Intel's P4 was stuck at 933MHz for almost a year. Then suddenly the Intel P4 took over and kept leading for over a decade!

Just know that between the two major CPU manufacturers for professional and consumer products (like Microsoft and Apple are for software), either one takes the lead in cycles.
 
Just a question... I regularly see people commenting that AMD offers better performance than Intel for less money. So how come Apple doesn't use AMD chips? Is there a good reason?

I like people who ask questions; they question the things they read. That is good. You learn something from it. ;)

To answer your question: lots of reasons. Whether they are *good* reasons depends on your perspective.

1. You don't break up a relationship with your *partner* / supplier, or whatever you want to call Intel, just because they have a few hiccups. Remember, these business relationships take time to build. This is a lot easier to understand if you have some business experience; if you are young, it may be a little hard to comprehend right now.

2. We don't actually know if AMD is cheaper for Apple. Intel could be giving Apple big discounts without us knowing it. And judging from AMD's quarterly reports, it seems AMD is determined not to play the old AMD game, where AMD's perception equals cheaper / good value; they want price and margin. They are on the leading edge and they want the world to know it. (Whether you agree with them or not is an entirely different matter.) In other words, AMD won't lower prices just to win Apple over.

3. There is quite a lot of code / optimisation written specifically with Intel in mind. You can't even run certain software (*cough* Adobe) without first patching it. So far this problem seems moot, because the Hackintosh communities seem to have no problem patching or translating those instructions. But for Apple, this requires a long process of making sure everything works as intended. So the switching cost to AMD, despite both being x86, is not entirely free.

4. Thunderbolt - Apple is too invested in Thunderbolt. Leaving the Intel platform isn't as simple as pulling the plug, even if they wanted to. The only Thunderbolt host controllers currently on the market are all by Intel, and you need your platform, or part of it, to be certified by Intel. So while the *specification* of Thunderbolt 3 is now open, the certification still is not. Intel is working towards opening it; I guess that is one way of delaying Apple's switch to whatever they want, or Intel simply has too much to worry about and hasn't given TB enough attention yet. And before you ask, we don't know enough about the situation with USB4 and TB. As I have stated before, TB is not a mandatory part of USB4.

5. ARM - Why waste energy and effort switching over to AMD when Apple intends to switch the Mac to ARM? Assuming this is true, which we won't know for sure; we have been saying the same thing for nearly a decade now (I think it started in 2011). I still believe the cost of switching to ARM is not worth the effort as long as Intel can provide enough incentives.

6. And it has only been a few months since the dust settled on Apple buying Intel's modem business. You don't want ANYTHING to get in the way of that; the modem is one of the biggest components of the iPhone, which represents ~50% of Apple's revenue. Everything else can wait, including the Mac. Once they have the Intel modem sorted (it is still fabbed by Intel), things will hopefully change. (That is why the 2019 / 2020 ARM Mac rumours never made any sense: why would you cut ties with a company while one of your key components is being held hostage, especially in the midst of the Qualcomm legal battle?)

These are the few I could come up with off the top of my head; there are possibly many more.
 
These aren't appropriate for iMacs. The power consumption and heat production are massive just to keep up with AMD in single-core workloads. Intel gets pulverized in multi-core, and AMD's chips produce a fraction of the heat.
 
Possible, but I think that's still a little premature considering we haven't heard anything official about ARM in Macs yet. But who knows, maybe they will surprise us with an ARM model at WWDC.

My initial guess is that by calendar Q3 '21 or Q1 '22 the (i)Mac* will be ARM and the (i)Mac* Pro will be Intel. Universal binaries for apps (which they may announce at WWDC, giving folks a year to get onto the latest SDK/APIs, which should automatically build for both Intel/ARM, though any core C/C++ libraries may need tweaking... with extensions for really big players like Adobe, who probably have some hand-crafted low-level code).
 
These aren't appropriate for iMacs. The power consumption and heat production are massive just to keep up with AMD in single-core workloads. Intel gets pulverized in multi-core, and AMD's chips produce a fraction of the heat.

AMD is not part of the equation when it comes to the iMac. These Intel CPUs will make their way in, or Apple will wait for the 11th gen.
 
Because without a better process such as 10nm or 7nm, all they can do is increase performance while suffering the temperature and power consumption. Intel really hates to show both aspects when advertising.

Yes, you can still increase the clock speed and core count, but in return it will consume much more power and the temperature will increase dramatically. This is what I call cheating. AMD could do that too, but they don't want to sacrifice power consumption and temperature.

[Attachment 910993: benchmark chart comparing Intel 14nm and AMD Ryzen power draw]
Intel's 14nm part has to push its power up to 94W to match an AMD Ryzen at 35W. This is why 7nm, 10nm, or 5nm matters. Clock speed doesn't represent overall performance.
Yes, because you think it's fair to compare an i7 with a Ryzen 9, even though AMD itself markets Ryzen 9 as being in the same class as the i9.
Temperature-wise, even the more advanced Ryzens aren't that much better. They are better if you're talking about power consumption, but on a desktop I don't care much about that.
And please stop using Cinebench as a global benchmark just because it is one of the conditions most favourable to showing AMD's advantage. In other applications the situation is quite different (number crunching and gaming, for instance) and Intel's solutions are still valid.
I'm not downplaying AMD, don't get me wrong. They are doing a great job after years of mistakes, and they are pushing Intel to wake up.
But on tech forums people exaggerate against Intel because it is cool to speak against the giant today.
Well, you said "Those “14nm old CPU” can reach a higher clock than the marvelous “7nm CPU” made by AMD", which is technically true but quite misleading. AMD's design is simply more efficient (due in part to the smaller process node), so the same kind of clock isn't needed.
Nope. They can't reach it, because of their architectural limits.
Try to overclock a Ryzen and an i7 and see the difference for yourself.
A smaller process node is a good thing, but it is not everything.
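The clock-speed back-and-forth above comes down to a first-order identity: single-core throughput is roughly IPC × frequency, so a higher-IPC design doesn't need to match its rival's clocks. A minimal sketch, with invented IPC numbers purely for illustration (no real chip is being measured here):

```python
# First-order model: single-core throughput ~ IPC * clock frequency.
# The IPC and clock values below are hypothetical, for illustration only.
def throughput(ipc: float, clock_ghz: float) -> float:
    """Instructions retired per nanosecond, to first order."""
    return ipc * clock_ghz

chip_a = throughput(ipc=1.0, clock_ghz=5.0)  # high clock, lower IPC
chip_b = throughput(ipc=1.3, clock_ghz=4.2)  # lower clock, higher IPC

print(f"chip A: {chip_a:.2f}, chip B: {chip_b:.2f}")
# Here the lower-clocked design wins: 1.3 * 4.2 = 5.46 vs 1.0 * 5.0 = 5.00
```

The model ignores memory, turbo behaviour, and workload mix, but it captures why "14nm can clock higher" and "7nm is faster" can both be true at once.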
 
Just a question... I regularly see people commenting that AMD offers better performance than Intel for less money. So how come Apple doesn't use AMD chips? Is there a good reason?

My guess is it’s a combination of
  • adding a CPU vendor increases support overhead in hardware and software. Likely not by much, but for a long time.
  • it’s only fairly recently that AMD has taken impressive leads in areas Apple cares about
  • Apple looks at long-term roadmaps. They probably know stuff we don’t. Maybe Intel is looking better a few years from now. Maybe both Intel and AMD look poor compared to Apple themselves.
  • we don’t know what volume AMD is able to deliver, and we don’t know what rebates AMD and Intel respectively offer to Apple
Nope. They can't reach it, because of their architectural limits.
Try to overclock a Ryzen and an i7 and see the difference for yourself.
A smaller process node is a good thing, but it is not everything.

Why do they need to reach high clock speeds?
 
By then Apple may just go with ARM.

Far more likely than relying on a company that often isn't even a blip on Intel's radar. AMD has done well very often, beating Intel with the Athlon 64 and before that going head to head with the Athlon and Athlon XP. Ryzen is very strong in many workloads too. But Intel is huge and can push volumes that AMD can't match (mostly due to historical anti-competitive behaviour: Intel dropping OEMs that so much as mentioned having an AMD line). AMD also has much stronger iGPUs than Intel. But it is likely not possible: Apple didn't switch to x86, they switched to Intel, with a multi-year deal (and we are not privy to all the details there; we can speculate that using a competitor is not allowed).
 
Ditch Intel for God's sake. Another 14nm.

Process node is not as important as IPC and overall performance. Yes, node shrinks can produce performance gains, but they can also increase the heat density of the chip(set) by cramming more transistors into a smaller area.


lol, Intel is far from dead. AMD is still second fiddle. If you look at benchmarks with open eyes, and at real-world application performance, it takes twice the cores to match an Intel CPU, and AMD CPUs, as per usual, have less IPC.

Fanboys unite again around the underdog ;)

Actually, if you look at the benchmarks, AMD has achieved higher IPC. That is why AMD performs on par with or better than Intel at lower clock frequencies. AMD uses more cores to offset its lack of QuickSync technology.
 
Why do people keep calling that "throttling"? Intel isn't saying that the CPU is intended for sustained 5.3 GHz. It's intended for boosts of 5.3 GHz.

Likewise, the 9900K is not designed to maintain 5.0 GHz. Its base clock is 3.6 GHz. It will only do 5.0 GHz in short bursts, and only with up to 2 of its eight cores running.

Throttling would imply it could not sustain its base clock under sustained workloads. Turbo Boost should be viewed as a bonus, and achieving sustained turbo on Intel S-series chips usually requires liquid cooling.

I read an article somewhere about the use of the term "throttling." In sum, it is used in two different ways: (1) to reduce clock speed to a lower frequency, and (2) to reduce clock speed below the claimed base frequency. Both are legitimate, and I tend to think of throttling in terms of the first use.

The boost is hit all the time, for short bursts. If you need high performance for sustained periods (you probably don’t), get a Xeon.

My problem with Intel is that its CPUs boost long enough to score well on short single-threaded tests, but few reviewers test sustained speeds. Intel looks the part, but when its CPUs are pushed, they really slow down. AMD's IPC improvements allow its newer CPUs to outperform Intel as an overall package, and to run cooler doing so.
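The boost-versus-sustained point in the posts above can be shown with a toy model: the average effective clock over a workload depends on how long the chip holds its boost before dropping to base. All durations and clocks below are hypothetical (the 3.6/5.0 GHz figures echo the 9900K numbers quoted earlier, the boost window is invented), just to show why short benchmarks flatter a high boost clock:

```python
# Toy model: average effective clock over a workload of given length,
# assuming the CPU holds its boost clock for `boost_s` seconds and then
# falls back to base. All numbers are illustrative, not measured.
def avg_clock(base_ghz: float, boost_ghz: float,
              boost_s: float, workload_s: float) -> float:
    boosted = min(boost_s, workload_s)       # time actually spent boosted
    total = boosted * boost_ghz + (workload_s - boosted) * base_ghz
    return total / workload_s

# A 10-second benchmark fits entirely inside the boost window...
print(avg_clock(base_ghz=3.6, boost_ghz=5.0, boost_s=28.0, workload_s=10.0))   # 5.0
# ...but a 10-minute render runs mostly at base clock.
print(avg_clock(base_ghz=3.6, boost_ghz=5.0, boost_s=28.0, workload_s=600.0))  # ~3.67
```

The short run averages the full boost clock while the long run barely moves off base, which is exactly the gap between review-benchmark scores and sustained workloads described above.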
 
Dude, read your own message. Anything with the words "close enough" and "closed the gap" says it all. Stop being a fanboy and listen to your own words. You're telling the truth; you just choose not to see the reality.

The 2017 Ryzen 7 was very close to Intel in IPC. Where it demolished Intel was in multi-core, since you got more cores for your money. And in a multitasking OS (like any modern OS) it's better to have more cores than fewer. Why? Because processes can be spread across them. I've got an i7 laptop in addition to this Ryzen 7 machine. When doing things like photo editing and culling with modern software, the Ryzen hardly breaks a sweat. It can schedule processes across 16 threads and hardly ever chokes. Even rendering video in Resolve, it will ramp up and stay there. Work can go on. That is why a small IPC deficit matters little in a modern system.

Also, with less than a 10% performance difference, your user will never notice. Ryzen has been in that league since it came out.

Because it's not the truth. This has been the same AMD "selling point" for years... it's always a "cost factor" when mentioning AMD. Apple should stick to the best components, period, since Apple should be high-end.

Sure, cost is an important factor. Look at a good Ryzen box vs an Intel one. You get a better, faster, more future-proof machine with AMD. You get more cores and lower power consumption, which means that as apps evolve you're in a good position not to need an upgrade. And everyone loves less power and heat.

If Apple truly were "high end", the Mac Pro would have been done with Threadripper or EPYC. Those CPUs are what pros need and want.


3. There is quite a lot of code / optimisation written specifically with Intel in mind. You can't even run certain software (*cough* Adobe) without first patching it. So far this problem seems moot, because the Hackintosh communities seem to have no problem patching or translating those instructions. But for Apple, this requires a long process of making sure everything works as intended. So the switching cost to AMD, despite both being x86, is not entirely free.

That's some BS right there. The whole x86-64 architecture is AMD's extension. Nothing needs to be recompiled for AMD, and no patches are needed to run software on it. Some software has been heavily optimized for Intel (some math libraries), and some things work better with Intel's extensions, but the software will still run fine on AMD. I've been using AMD since the 486 days and have never had a compatibility issue. There are Hackintoshes running on Ryzen, so it's very easy to do.


4. Thunderbolt - Apple is too invested in Thunderbolt. Leaving the Intel platform isn't as simple as pulling the plug, even if they wanted to. The only Thunderbolt host controllers currently on the market are all by Intel, and you need your platform, or part of it, to be certified by Intel. So while the *specification* of Thunderbolt 3 is now open, the certification still is not. Intel is working towards opening it; I guess that is one way of delaying Apple's switch to whatever they want, or Intel simply has too much to worry about and hasn't given TB enough attention yet. And before you ask, we don't know enough about the situation with USB4 and TB. As I have stated before, TB is not a mandatory part of USB4.

You are forgetting 2 things:
1. Apple worked with Intel on TB. IIRC they still own a bunch of the patents on it, so they could definitely get it done with TB.
2. ASRock has an AMD motherboard with TB certified by Intel: the ASRock X570 Phantom. If a small OEM can do it, surely Apple can get it done too.

I'd also wager that the majority of users, Mac users included, don't care about TB. They care more about USB than about the edge-case uses of Thunderbolt.
 
How long is Intel going to milk 14nm and the Skylake architecture? I see nothing here that will sway me from going Ryzen for my next gaming PC build.

I guess a good thing is that (probably thanks to AMD) Intel offers a lot more cores in its CPUs.
 
Process node is not as important as IPC and overall performance. Yes, node shrinks can produce performance gains, but they can also increase the heat density of the chip(set) by cramming more transistors into a smaller area.




Actually, if you look at the benchmarks, AMD has achieved higher IPC. That is why AMD performs on par with or better than Intel at lower clock frequencies. AMD uses more cores to offset its lack of QuickSync technology.

Part of AMD's success has been the improved IPC. It's been a big part of why AMD has been able to get back in the ring with Intel, and they do so at slightly lower frequencies, so they're definitely punching above their weight in clock speed. Intel seems to be at the edge of the frequency-and-process race, and they're losing the argument on value, efficiency, and performance per buck.

Azrael.
The 2017 Ryzen 7 was very close to Intel in IPC. Where it demolished Intel was in multi-core, since you got more cores for your money. And in a multitasking OS (like any modern OS) it's better to have more cores than fewer. Why? Because processes can be spread across them. I've got an i7 laptop in addition to this Ryzen 7 machine. When doing things like photo editing and culling with modern software, the Ryzen hardly breaks a sweat. It can schedule processes across 16 threads and hardly ever chokes. Even rendering video in Resolve, it will ramp up and stay there. Work can go on. That is why a small IPC deficit matters little in a modern system.

Also, with less than a 10% performance difference, your user will never notice. Ryzen has been in that league since it came out.



Sure, cost is an important factor. Look at a good Ryzen box vs an Intel one. You get a better, faster, more future-proof machine with AMD. You get more cores and lower power consumption, which means that as apps evolve you're in a good position not to need an upgrade. And everyone loves less power and heat.

If Apple truly were "high end", the Mac Pro would have been done with Threadripper or EPYC. Those CPUs are what pros need and want.




That's some BS right there. The whole x86-64 architecture is AMD's extension. Nothing needs to be recompiled for AMD, and no patches are needed to run software on it. Some software has been heavily optimized for Intel (some math libraries), and some things work better with Intel's extensions, but the software will still run fine on AMD. I've been using AMD since the 486 days and have never had a compatibility issue. There are Hackintoshes running on Ryzen, so it's very easy to do.




You are forgetting 2 things:
1. Apple worked with Intel on TB. IIRC they still own a bunch of the patents on it, so they could definitely get it done with TB.
2. ASRock has an AMD motherboard with TB certified by Intel: the ASRock X570 Phantom. If a small OEM can do it, surely Apple can get it done too.

I'd also wager that the majority of users, Mac users included, don't care about TB. They care more about USB than about the edge-case uses of Thunderbolt.

Good catch. Sound post.


Azrael.
...he was right on a killer point too. If Apple were that serious, why not put the 64-core Threadripper in the Mac Pro?

Azrael.
 
What makes you think it's complacency? The impression I got (from a position of complete ignorance, mind you) was that they hit technical problems they tried, but really struggled, to get past.

Indeed, it's not complacency; Intel had issues with high frequencies and yield on their 10nm node. Bear in mind their 10nm is comparable to TSMC's 7nm in density; it has really descended into marketing, unfortunately. Of course 7nm is substantially shorter than the de Broglie wavelength, so it wouldn't work as a gate length. It's closer to 40nm, I think, from memory.
It is severely outdated and the bezels are insane.

The thermal handling is also crap (much better in the iMac Pro). This is a serious problem, as part of the benefit of Intel chips is the higher single-core performance / turbo boost. This is often throttled to hell in the laptops, so it sucks for DAWs etc.
Process node is not as important as IPC and overall performance. Yes, node shrinks can produce performance gains, but they can also increase the heat density of the chip(set) by cramming more transistors into a smaller area.

Absolutely. I'm told this is why you don't see much overclocking of Ryzen: lower power density is one of the few advantages of a lower FET density. I hope Apple sorts out the iMac cooling to fully exploit these i9s; they are notorious for throttling. The process-node naming is woefully inconsistent across fabs (it used to mean gate length when I was young, ahem). If the gate length really were 7nm, electrons would tunnel straight through the channel and the FETs wouldn't work at all.


Actually, if you look at the benchmarks, AMD has achieved higher IPC. That is why AMD performs on par with or better than Intel at lower clock frequencies. AMD uses more cores to offset its lack of QuickSync technology.

Also agreed. Ryzen is impressive, and I would be building a PC with one if it had decent single-core performance and latency. Apparently Intel is still king here, which is what matters in music production. It's annoying that Intel has been having fab issues, but writing them off is plain daft. The last thing I heard is that they were dumping 10nm (which is basically 7nm, god I hate marketing) for 5nm.
 
How long is Intel going to milk 14nm and the Skylake architecture? I see nothing here that will sway me from going Ryzen for my next gaming PC build.

Until they get a decent yield on the next node. I don't see what choice they have, tbh.
 
Process node is not as important as IPC and overall performance. Yes, node shrinks can produce performance gains, but they can also increase the heat density of the chip(set) by cramming more transistors into a smaller area.

Are you going to compare Intel's 22nm to AMD's 5nm? Gosh.
Yes, because you think it's fair to compare an i7 with a Ryzen 9, even though AMD itself markets Ryzen 9 as being in the same class as the i9.
Temperature-wise, even the more advanced Ryzens aren't that much better. They are better if you're talking about power consumption, but on a desktop I don't care much about that.
And please stop using Cinebench as a global benchmark just because it is one of the conditions most favourable to showing AMD's advantage. In other applications the situation is quite different (number crunching and gaming, for instance) and Intel's solutions are still valid.
I'm not downplaying AMD, don't get me wrong. They are doing a great job after years of mistakes, and they are pushing Intel to wake up.
But on tech forums people exaggerate against Intel because it is cool to speak against the giant today.

More power = more energy. Even on a desktop, it's important.

Sadly, the pic I showed is not the only result where AMD beats Intel. Think again.
 
lol. I recall so many saying this way back in the AMD Athlon 1GHz CPU days, when Intel's P4 was stuck at 933MHz for almost a year. Then suddenly the Intel P4 took over and kept leading for over a decade!

Just know that between the two major CPU manufacturers for professional and consumer products (like Microsoft and Apple are for software), either one takes the lead in cycles.
Your history is inaccurate. The Pentium 4 was such a big flop that Intel abandoned its NetBurst architecture entirely, and the Pentium brand was relegated to low-cost parts.
 