
This is actually the latest laptop with the latest AMD APU. You should update your data more often.

What I linked is a Zen+ 2nd-gen U series, which is the same set of cores used in the Ryzen 4000 series. What you seem to misunderstand is the difference between a U-series processor and an H-series processor, e.g. the AMD Ryzen 7 4800HS paired with an RTX 2060.


So the GPU is going to be a wash since we're comparing H-series chips. OK, so let's compare H to H and U to U.


According to the Geekbench listing, the Ryzen 7 4700U was shown to score 4,910 in single-core and 21,693 in multi-core. The APU looks to have been tested in a Lenovo laptop with a base clock of 2 GHz and a boost clock of 4.19 GHz.

Do note that a single Geekbench record does not really tell the whole story, as a range of scores is possible depending on the laptop's configuration and thermals. Still, a quick comparison can be done with the highest scores from Intel's 15W offerings for perspective's sake. The Core i7-1065G7 scores about 5,752 points in single-core and about 18,772 points in multi-core in Geekbench 4. The Comet Lake-U Core i7-10710U, on the other hand, scores about 5,366 in single-core and 22,182 in multi-core. Taking these into account, the Ryzen 7 4700U's score looks to be pretty much on par with Intel's 15W offerings.
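The relative gaps in those quoted Geekbench 4 numbers work out as below. This is a throwaway sketch using only the scores cited in this thread, not fresh measurements:

```python
# Geekbench 4 scores as quoted in this thread (not independently measured).
scores = {
    "Ryzen 7 4700U":  {"single": 4910, "multi": 21693},
    "Core i7-1065G7": {"single": 5752, "multi": 18772},
    "Core i7-10710U": {"single": 5366, "multi": 22182},
}

base = scores["Ryzen 7 4700U"]
for name, s in scores.items():
    # Express each chip's scores relative to the 4700U result.
    single = 100 * s["single"] / base["single"]
    multi = 100 * s["multi"] / base["multi"]
    print(f"{name}: single {single:.0f}%, multi {multi:.0f}% of the 4700U")
```

Run that and the 1065G7 lands around 117% in single-core but only about 87% in multi-core, which is the "pretty much on par" picture described above.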

The Core i7-1065G7 is a quad-core Ice Lake.
The Core i7-10710U is a 6-core Comet Lake.

So a quad-core processor matches AMD's 8c/8t? Lol. The old hex-core beats it, and that's a design from 2016, and the H series only goes to 4.3 GHz vs Intel's 5 GHz. How is this better? Going further:


In addition, we will soon see 10c/20t H-series chips arriving at exactly the same time as AMD's. So again, Intel has similar IPC and has more cores on Comet Lake, with a higher boost clock on the older uArch. We're not even talking about Tiger Lake, which has a full 35%+ IPC gain over Comet Lake. Tell me, how is Renoir better?


AMD has interesting options, but not on a laptop. As I said from the very beginning, would Renoir make an interesting Apple TV Pro gaming console? Sure, the included GPU is much better than Xe, and it would make sense for Apple to investigate this. But when you're going to use an APU as just a display driver and pair it with a discrete card, it's pointless, especially since the base CPU is inferior to Intel's yet wastes more power.
 
Renoir-based APUs (the 4000 series) are based on the Zen 2 architecture, not on Zen+ like the 3000 series were.

What you don't understand about Renoir-based laptops is that you get Comet Lake performance, but in a much lower thermal package. I think you are unaware of the fact that the mythical 28W Ice Lake APUs are actually drawing almost 50 to 65W of power under load. What if AMD can deliver the same performance in half the power? Considering that Zen+ APUs were drawing up to 30W under load, why would it be impossible for Renoir, which is manufactured on a much more efficient process?
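For what that claim would mean numerically, here's a toy perf-per-watt calculation. The 50W figure is the low end of the load draw claimed above, the 25W Renoir figure is just this post's "half the power" premise, and the score is the i7-1065G7 multi-core number quoted earlier in the thread, so treat it as arithmetic on assumptions, not data:

```python
def points_per_watt(score, watts):
    """Crude efficiency metric: benchmark points per watt of load draw."""
    return score / watts

score = 18772            # i7-1065G7 Geekbench 4 multi-core, as quoted earlier
ice_lake_watts = 50      # low end of the 50-65W load-draw claim above
renoir_watts = 25        # the "same performance at half the power" premise

print(points_per_watt(score, ice_lake_watts))  # ~375 points/W
print(points_per_watt(score, renoir_watts))    # ~751 points/W
```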

You talk about Tiger Lake. When does it come out? Can Intel yield it? ;) Where are the laptops? Or is it vaporware?
 

Categorically false. U-series chips can be configured up or down according to vendor spec, but the base spec is 15W, and that is what the comparison I provided used.

Second point: I did not claim Renoir was not equal to Comet Lake; I said it absolutely was. But again, it isn't in a lower thermal package, it is at the same one. AMD's boost clocks are just as aggressive as Intel's, and base clocks are what the TDP is rated for.

Third point: again, show me sustained loads of 50-60W on Ice Lake CPUs alone. You won't find one. The issue with Ice Lake wasn't power consumption, it was its inability to scale clocks to match Skylake. This was fixed with Tiger Lake while adding additional IPC and AVX-512.

Again, what if? I just gave you hard numbers showing they can't. Full stop. The only advantage they have is a much better included GPU. They could have built a CPU with more cores or a higher clock speed and TDP, but this is what they chose, and it is slower. Period.

You can armchair it and say, well, it's a technical victory because of the APU nature of the design. OK, but who is actually going to use the APU? The Asus laptop linked before has a 2060 in it. Why would you buy this if you are going discrete? It makes absolutely no sense to argue this point.

Tiger Lake releases in Q2-Q3, just before the back-to-school period; vendors already have CPUs.

 
Tiger Lake is set to release 2H 2020
 
There are no 5nm AMD chips yet.

That's true! But AMD is using 7nm chiplets in its APUs and CPUs, while Intel is still fighting its way to 10nm.

I'm sure we will see 5nm AMD processors before the end of the year! Then Intel will be 3 years behind AMD in development!

In recent weeks we've seen how a smaller company can beat the pants off a much larger one. Just look at Boeing and SpaceX! Smaller, aggressive teams tend to do better than larger companies that are too sprawled out. Even Apple is now getting too big! They need to break development up into smaller units and set up wall definitions, that is, how each device will interplay with the others. These wall specs are all-encompassing, covering not only what a device needs to do today but the vision moving forward. This is the secret sauce Apple has failed to build to get to the next level! They don't have anyone driving the overall vision! That is what Steve did quite well.
 
Why? ARM is catching up fast, and that's still in the mobile market with no fans. Imagine what a proper chip plus cooling would do.

Intel is slacking and has been for some time, which must be frustrating for Apple. AMD is still nowhere in Apple Macs, and history has shown us that when progress is stagnant, Apple is happy to make a transition, which worked really well with Intel. Now we are at that point again.
So the question is: when will Apple pull the trigger? Obviously they are preparing for it with their 'one app works everywhere' approach, and just as Rosetta did the trick for the transition period, Apple will have something similar.
Naive, you think? I give it 5 years max :)


Enough with this. Not happening. Not across their entire line.
 
Cores are meaningless. Tiger Lake quads are meaningfully faster than the current Coffee Lake hex. Rocket Lake next year will back-port Tiger Lake to 14nm, so you're looking at 8-10 core chips near 5 GHz, which would completely decimate Ryzen.

I find it hard to believe that Intel will meet any schedule for any 10nm non-EUV chips. They made a strategic decision not to use EUV at 10nm, and they've failed (flailed?) at it. It looks too hard, and it seems they've only really been successful at shipping volumes of underwhelming 14nm tocks. Tiger Lake is still non-EUV, and the only thing Intel has proven on 10nm non-EUV is low yields, shortages, and delays. Why would anyone expect that to change? Maybe they'll get back on track with whatever EUV 7nm design follows Tiger Lake.

TSMC has already shown the world that EUV will happily extend Moore's law a few more generations. Apple's A13 and AMD's imminent Zen 2 (Ryzen 4xxx) are happily swimming in that pool. On top of that, Apple's 5nm A14(!) makes the roadmap to 5nm Zen 4, and its inherently better thermals, much more believable.
 

Very hopeful thinking! Sadly, ARM has a long way to go before it makes sense. Just look at ARM server chips against the other server chips: POWER9 & ARM Performance Against Intel Xeon Cascadelake + AMD EPYC Rome. If they can't even get into this game, how can they get into the desktop and laptop space?
 
Mac OS software has gone through so many 'extinction events'

I think that's a pretty fitting description of the process.

Say Apple was already planning an extinction event for an ARM transition within 5 years anyway, they're probably already going through efforts to make that extinction event as painless as possible:
- moving all internally developed frameworks to bitcode
- working with 'strategic' partners (Adobe/MS/Docker) to make sure they're ready

If I were planning at Apple, I'd be asking: "What chip maker is the most compelling for us in this short window until we've migrated to our own ARM chips everywhere?"

No matter, would a move to AMD be a true extinction event? It's still x86-64. Wouldn't that be a more minor extinction event?
 
Who knows what the future will bring? But there’s no reason to think Apple pays 2-3x for Intel rather than AMD CPUs.

Apple doesn’t pay even 5% more than they have to, let alone 100-200% more. That would be insane, and Tim Apple, the very stable supply chain genius, isn’t crazy 🤣

Tim Cook is not a genius; remember how Apple overpaid, as it did with the Qualcomm modems 😄 Therefore, it's foolish to think it won't pay more for Intel, which might announce that Tiger Lake is not coming out until Q2 2021.
 

Bitcode is arch-dependent, so that's actually not a great idea. But just think about how Apple did it 15 years ago, and with iOS running the same kernel and almost the same frameworks on ARM64, you can imagine how a future transition will work out.
 

I think the idea of an Extinction Event is overdramatic!

Let's use something that has evolved over time, like the horse and buggy to the car, and then from petrol to electric. What pushed us away from the horse was the mess it left in the cities! Piles of $%$@! Which was not only a health threat, it just smelled foul! So why are we now turning away from petrol to electric? Surprisingly, for the same reasons! Just not as visible, as it now affects us globally.

This is the same construct within CPU and system development. In this case, the cost of building more powerful systems is the need for electricity and cooling.

AMD is the next logical step, and just as diesel was once thought of as a replacement for gasoline but it just wasn't in the cards, I think ARM is in the same boat here. Yes, ARM is a great design, but it does have its limitations.
 
There is no way in hell AMD can scratch Intel in the laptop market anytime soon. AMD saw an opening in desktop and took a swing at it only because Intel went all-in on laptop chipsets. I can see some AMD in iMacs and some low-end laptops, but that's about it. Intel is way too far ahead in the game, and now that they have been battle-tested in the desktop market, they will just grip even tighter in laptops.

Intel is really bad in games; its desktop CPU (9700K) was beaten by AMD's new laptop CPU (4800H) in games.

A 9900K overclocked to 5 GHz with DDR4-4000 is 50% slower than an 8086K overclocked to 5 GHz with DDR4-4000 in PUBG. That performance regression is a shame for Intel. Its ring bus cannot get any longer, as the latency penalty becomes too high.

Intel right now relies on stupidly high frequencies plus high-priced RAM to reach that gaming advantage. Those are overclocker-only kits, and they are not that easy to use: turn on XMP and there's a 99% chance it will not boot/POST at all.
 

Agreed. It would be better for Apple to use the AMD 4000-series mobile APUs in some of its products, like the MBA, and Zen 3 is already completed.
 
 
The MacBook Air will get Ice Lake-Y chips.
The chips are already in 10.15.3.

Intel has lied time and time again about its roadmaps. There is very little substance to those claims either, especially considering that they have delayed the release of 10nm server chips by another 6 months, to late 2020.

Why did they delay? Well, I don't know; maybe yield rates are still not good enough?


This is the guy who was bang-on correct over the past 4 years about Intel's 10nm woes. So don't believe that Intel miraculously saved the day when they couldn't do it for the past 4 years with this 10nm process.
 
Again, more garbage. First, all else being equal, you don't want to increase frequency to improve performance; you prefer to increase IPC, because P = C·V²·f.

Second, you can increase both frequency and IPC, if you are willing to increase power.

Third, because you don't have to dedicate pipe stages to microcode sequencing and a complex instruction decoder, you are actually better off with ARM, where you can essentially take any x86 design, remove the 20 percent of the core that is dedicated to that stuff, twiddle the instruction decoder a bit, and now you have an ARM chip that is identical to the x86 chip but with fewer pipe stages and thus higher IPC.

Fourth, you are confusing ARM's designs with ARM's instruction set architecture. Apple does not use ARM's designs. They don't even use ARM's microarchitecture. They implement the whole thing themselves. Of course you can't take an ARM design with a microarchitecture targeted at servers or mobile and get magical results.

But I can turn any x86 design into an ARM design by ripping out a ton of unnecessary circuitry, so how can it possibly be that the ARM ISA is inherently slower? That's something only someone who never designed commercial products would say.

Hell, I designed several RISC chips that were faster than their contemporary x86 competition: SPARC, PowerPC, and even F-RISC, which nobody has ever heard of. DEC used to blow away Intel. HP did it with PA-RISC, which Intel acquired and which influenced Itanium. ARM is just another RISC architecture.
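The P = C·V²·f point above can be made concrete with a toy dynamic-power calculation. The capacitance and voltage numbers here are made up for illustration; the point is that higher clocks generally demand higher voltage, and power scales with the square of voltage:

```python
def dynamic_power(c_farads, v_volts, f_hz):
    """Dynamic switching power: P = C * V^2 * f."""
    return c_farads * v_volts ** 2 * f_hz

C = 1e-9  # 1 nF effective switched capacitance (illustrative only)

p_base = dynamic_power(C, 1.0, 4.0e9)  # 4 GHz at a nominal 1.0 V
p_fast = dynamic_power(C, 1.2, 5.0e9)  # 5 GHz, assuming it needs ~1.2 V

# A 25% frequency bump costs 80% more power once the voltage bump
# is factored in, which is why raising IPC at a fixed clock is cheaper.
print(round(p_fast / p_base, 2))  # 1.8
```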

As an UltraSPARC T1 user, I hate that Intel wasn't innovating back in the 2000s.
And thanks for the details. I never thought that chopping off the x86 compatibility stuff could just turn an Intel CPU into an ARM one.

And also a DEC Alpha machine was my childhood dream that I never got.
 
So why didn't they move to ARM years ago, if they could do such things as you say? And why are they testing AMD APUs for a transition to x86 AMD-based Macs instead of ARM, if everything you say is so simple? ;)

Maybe it's because x86 is better for high performance than ARM? ;)

Intel built a really good ARM CPU back in the day under the name XScale. At the time it was the best ARM chip, just like what Apple is doing right now. They were really good indeed, even better than Intel's later Atom CPUs in the same power envelope.

Intel feared the ARM advantage would kill its x86 monopoly, so they stopped development and sold the design to Marvell. Intel itself felt threatened by ARM, and you still blindly believe in their 40-year-old legacy technology.
Nothing is 100% confirmed, but that's a bad deal if Apple uses Ice Lake-Y, which doesn't offer any major improvements.

Ice Lake-Y doesn't look like a plausible choice for Apple. Intel's 10nm fab cannot meet Apple's demand for the chip.

Apple didn't go AMD back in the day when Intel did not yet have x64, which forced Apple to do the x86->x64 transition after the switch to Intel, because AMD could not meet their demand.
 
Linus on YouTube did a good video on whether Apple should have used AMD vs Intel in the new Mac Pros. The benchmarks spoke for themselves.
 
1) Cook is widely acknowledged as a supply chain genius, and you claiming otherwise changes nothing.

2) You don’t know what Apple is paying for Qualcomm modems, so you can’t claim Apple is overpaying.

3) Whatever chip Apple puts in MBA this year, next year or any subsequent year, there’s no reason to think Apple would overpay. Intel possibly being behind schedule doesn’t put them in a position to demand a price premium; it’s foolish to think so. If anything, it puts Apple in an excellent position to demand even further discounts.

4) You not understanding how Apple uses multiple suppliers and other strategies to minimize costs and keep selling prices lower for customers is a you problem. It doesn’t change the facts. Your not liking those facts is not relevant.
 