It starts from $2900.

Edit: In case it escaped you, Apple sells computers. AMD sells CPUs.
Well, on apple.com it says $3,100. I can post a screenshot if you want.

Nothing escapes me. AMD doesn't have to make computers to know what kind of laptops their chip is going to be used in. And it's definitely not $2,900+ laptops.

Also, you didn't answer my question: why should AMD's middle-of-the-pack chip compete with Apple's best laptop chip?
 
I'm not sure why we need to keep repeating this, but the answer is: because AMD's "middle of the pack SKU" already burns more power according to AMD's own specs than the M1 Max. The higher-end 7945HX burns way more power, so it is disqualified from the comparison.
Wrong answer.
I keep repeating the obvious. The 7940HX is far from the fastest APU AMD announced.
What ultimately determines competition is price and performance. If the 7940HX uses, let's say, 14 W more than the M1 Max CPU and wins in performance for half the price, how are they in competition? Also, AMD directly confirms they aren't. Do you think a few watts will make more of a difference for end consumers than the price and performance of the chip?
And you know that the 7945HX will completely trash the M2 Max in CPU performance (some people who saw it at CES said it matched the big-boy 16-core desktop part in a few instances) even at the base 55 W. So what if it uses more power? It will perform much better. The 3090 uses more power, but there are instances where it performs 5x better than the M1 Ultra GPU, so the two aren't in competition even if Apple suggested it in a direct comparison.
 
Well, on apple.com it says $3,100. I can post a screenshot if you want.

Nothing escapes me. AMD doesn't have to make computers to know what kind of laptops their chip is going to be used in. And it's definitely not $2,900+ laptops.
You mean this Apple.com?

[screenshot: US Apple Store page showing MacBook Pro with M1 Max pricing, starting around $2,900]



Also, you didn't answer my question: why should AMD's middle-of-the-pack chip compete with Apple's best laptop chip?
Well, it's not Apple's fault if their top-tier notebook CPU cannot be made to work in a form factor like the MacBook Pro 14". It's easier to build CPUs when you don't really care about power consumption, don't you agree?
 
You mean this Apple.com?

[attachment: the same US Apple Store pricing screenshot]

More like this one:

[screenshot: Apple online store in another country showing a price above $3,000]

In most countries it's $3,000+ before taxes, so the range I gave is the most representative of the actual worldwide price.

Well, it's not Apple's fault if their top-tier notebook CPU cannot be made to work in a form factor like the MacBook Pro 14". It's easier to build CPUs when you don't really care about power consumption, don't you agree?

Fascinating, so no matter what, the important thing is to find some kind of loophole to hammer the rest of the hardware manufacturers.
AMD's top-tier 6000 series APU does fit in 14-inch laptops, for example the ROG Zephyrus G14 and Razer Blade 14, and even Intel's i9-12900H found its way into 14-inch laptops like the Zenbook Pro 14 Duo or the ROG Flow Z13.

You can rest assured that AMD's 7845HX (that's the 12-core, 24-thread APU) will see 14-inch laptops and total CPU performance will be at least one generation above the M2 Max.
 
You got proof or just spreading FUD? Get this nonsense out of here.

But instead, no, y’all gotta attack, when this should be a great thing that drives competition and doesn’t allow Apple to sit around and make idle improvements.
It is obvious that they are targeting some benchmarks that are not apples-to-apples, and we can see what they are not telling us. If the new chips were competitive for general-purpose workloads, they would have shown benchmarks to demonstrate that instead of picking mostly fixed-function operations to benchmark against. Those fixed-function operations are mainly benchmarking the process node, and it is possible AMD has a fixed-function pipeline or two that is better than the M1's, but that isn't a very genuine benchmark since it isn't the primary function of the processor. Telling a half-truth about a chip's capabilities doesn't promote competition. There is no doubt that these AMD chips are probably better than the M1 for ray-traced rendering, since the M1 just doesn't have fixed-function hardware for that at all, but I doubt they are actually very competitive in performance per watt for any general-purpose workloads.

Beyond their choice of benchmarks, AMD is boosting the clock speed to 5.2 GHz. That is certainly not sustainable for long-running tasks and would have really bad performance per watt. A true M1 competitor would have a clock speed closer to 3.2 GHz. Clock speed is a debatable thing for desktop processors, where you are looking at different tradeoffs, but we are looking at a processor aimed at mobile devices, where clocking this high is disingenuous because it will need a substantial cooling solution and significantly reduced battery life whenever it ramps that high. They completely leave out performance per watt from the benchmarks, except for one fixed-pipeline benchmark, which doesn't count.
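For what it's worth, the usual dynamic-power approximation (power scaling roughly with capacitance × voltage² × frequency) is enough to show why a 5.2 GHz boost clock is so expensive in performance per watt. Here's a minimal sketch; the voltage figures are illustrative assumptions, not AMD or Apple specifications:

```python
# Rough dynamic-power model: P ~ C * V^2 * f.
# The voltages below are illustrative assumptions, not vendor specs.

def relative_power(freq_ghz, volts, base_freq=3.2, base_volts=0.85):
    """Power relative to an assumed 3.2 GHz / 0.85 V sustained operating point."""
    return (freq_ghz / base_freq) * (volts / base_volts) ** 2

boost_power = relative_power(5.2, 1.25)  # boost clocks typically need much higher voltage
clock_ratio = 5.2 / 3.2                  # best case: performance scales linearly with clock

print(f"power at boost: ~{boost_power:.1f}x the sustained point")   # ~3.5x
print(f"clock at boost: ~{clock_ratio:.2f}x")                       # ~1.6x
print(f"perf per watt:  ~{clock_ratio / boost_power:.2f}x")         # ~0.46x
```

Even if performance scaled perfectly with clock, the boost point would deliver less than half the performance per watt of the sustained point under these assumptions.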

Apple will soon be competing on raytracing and will be moving to a smaller process node, so even any small advantages will be short-lived. Apple has been working on adding fixed-function raytracing hardware for years. When it is released (maybe with M3?), based on what is known from Apple patents, it will likely blow every other hardware raytracing implementation out of the water while using less power. Based on those patents, it will also be able to use most of the compute resources of the chip for raytracing, with a much more energy-efficient fixed-function component that batches up that work for efficient processing and uses a lot less die space, making room for more compute units. AMD's implementation uses a lot of power and can't offload some of that work to its GPU compute units.

Apple has been doing the performance per watt thing for more than a decade and it is just a lot better at it than everyone else.
 
I really don't understand what the huge mystery is for so many users here. What AMD showed is very simple and logical.
Many complained that Apple's graphs were "vague" and not enough. I think some of us (at least me) are arguing the same here about AMD.
First lets establish something, so everything is clear:

The Ryzen 7940HX IS NOT AMD'S HIGHEST END laptop SKU.
So? Neither is the M2 the highest-end chip. It's actually Apple's lowest-end chip of its second generation.
They could have used the M1 Pro "only", rather than one chip for one test and another chip for another test. Or just use both chips for both tests? Or stick with the M2 for all tests.
Now that this is out of the way: the very simple reason AMD didn't compare their 7940HX with the M1 Max is that the M1 Max is used in a different class of laptops in terms of price compared to the kind of laptops we will see the 7940HX in. Makes sense? Because it's super logical.
Are we going to see the 7940HX in a fan-less laptop? If so, what performance can we expect from that design?
Can it run at that level of performance on battery?
The suggestion that they should have used the M1 Ultra is ridiculous.
The power use for an Ultra chip is not far off the 7940HX, if at all. It's meant as a point of reference when they speak to how much power the AMD chip needs/uses to achieve X points in a benchmark, where Apple says: we can provide this level of performance for this much power. They do state that the competition can achieve even higher levels of performance, but requiring MUCH more power.
Again, I don't need AMD to prove they are better in this instance. They just need to show where they are in relation to the rest. They already are better than intel. Bringing Apple into this is only useful for comparison. So, compare.
Even if, let's say, the 7940HX matches the M1 Max in CPU performance, there's no point for AMD to suggest that the two SKUs are in competition because they aren't.
My point above. It's just to show where they are in relation to what others are doing, mainly what Intel isn't doing and why we are better in the x86 space. And if the M1 Max is faster, show it. AMD has been honest about this in other presentations against Intel and/or Nvidia. It doesn't hurt them to show it here. They don't compete with Apple. But those on the fence about getting an M1-anything Mac may be thinking: I don't want to switch OSes, but I do want the performance. If AMD is close enough, or at least heading in that direction, I'd get to stick with Windows and all the apps I currently use and get more for less. This doesn't hurt AMD at all.
A MacBook Pro may be more expensive than a thin-and-light or any x86 counterpart. But if you're weighing an M1 Max version against this AMD chip, and performance is what you're looking for, the other differences may make up your mind for you, like OS, screen size, touch screen, ports, etc.
This is why, when AMD launches a new CPU line and even the mid-range R7 would beat Intel's highest-end consumer i9, AMD still compares their highest end with Intel's highest end and the mid-range with the mid-range.
Yes, I know. And when they are not "faster" in any given category, they show it. I'm NOT knocking AMD for their normal presentations. I applaud them. Just in this instance, they aren't fully clear. At least not to me. And they should be; they have a history of it. And as I stated above, it only benefits them when they are.
Also regarding the M2: it was only used to compare AI performance. The M2, according to Apple, has 40% better AI performance. So the M2 is better than the M1 Max in this regard and AMD was just being fair; there's no mystery.
OK, then show it on the graph. That's literally all I'm saying.
And to end it: they didn't mention a power level because their SKUs have configurable TDPs. If they mention a power level in a benchmark, then that's the default expectation, and their chips will get undeserved criticism for not matching this expectation, even if it's the laptop manufacturer's fault.
Seriously. M1/M2 give you all the power, plugged in or not. And battery life expectations are based on what you're doing (video, web, games, etc.).
Anyway, AMD showed us a few really promising laptop chips that will make Windows laptops way more competitive in 2023 than they were in 2022.
YES. And I think what I said above would have made them even more compelling.
 
Are we going to see the 7940HX in a fan-less laptop? If so, what performance can we expect from that design?
Can it run at that level of performance on battery?

This. By AMD's own specs, that's not going to happen. The 7940HS (TDP: 35-54W) and especially the 7945HX (TDP: 55W) aren't even remotely suitable for something like the MacBook Air.
 
Many complained that Apple's graphs were "vague" and not enough. I think some of us (at least me) are arguing the same here about AMD.
Quite pointless complaints, honestly. AMD compared performance, pure and simple; there's nothing vague about it. They didn't try to mix performance and power in order to suggest something that isn't remotely true.


So? Neither is the M2 the highest-end chip. It's actually Apple's lowest-end chip of its second generation.
They could have used the M1 Pro "only", rather than one chip for one test and another chip for another test. Or just use both chips for both tests? Or stick with the M2 for all tests.
AMD used the M2 exclusively for the AI performance comparison and I already explained why; you should go back and read my comment. If they had used the M1 Pro in that graph, the AI performance discrepancy would have been larger.

Are we going to see the 7940HX in a fan-less laptop? If so, what performance can we expect from that design?
Can it run at that level of performance on battery?
Quite an illogical question, taking into consideration AMD's press release and the specs of their chips.

The power use for an Ultra chip is not far off the 7940HX, if at all.
Yeah it is; it's ridiculous to suggest that the two are in the same power category.
What exactly makes you think the 7940HX alone can go up to 115W or 215W?

It's meant as a point of reference when they speak to how much power the AMD chip needs/uses to achieve X points in a benchmark, where Apple says: we can provide this level of performance for this much power. They do state that the competition can achieve even higher levels of performance, but requiring MUCH more power.
That's for the reviewers to show. AMD doesn't feel like they have to rely mainly on efficiency graphs to make their chips look good.

Anyway, these constant suggestions about AMD's CPUs using "crazy amounts" of power show one thing: people here have no clue what they are talking about and equate AMD CPUs with Intel CPUs because "they are both x86", so they both should be "the same", no? They both should run hot. The funny thing about temperature: the newly released non-X desktop Ryzen 7000 CPUs are ridiculously efficient and run very cool, like 40 degrees Celsius in Prime95 (if you know what this means; you probably don't), which is absolutely crazy (I'm talking about the Ryzen 7900).


Again, I don't need AMD to prove they are better in this instance. They just need to show where they are in relation to the rest. They already are better than intel. Bringing Apple into this is only useful for comparison. So, compare.

It's a product launch, there will be plenty of reviews which will show all kinds of power numbers. One thing that is certain: the 7940HX won't get anywhere near an M1 Ultra in power usage.

And if the M1 Max is faster, show it. AMD has been honest about this in other presentations against Intel and/or Nvidia.
The M1 Max can't be considered competition for the 7940HX; AMD made that clear. AMD hasn't done anything dishonest in relation to Apple. They don't have to compare their mid-range chip with Apple's high-end in their launch event.

It doesn't hurt them to show it here. They don't compete with Apple.
The 7940HX doesn't compete in the same price category as the M1 Max; they already made that clear.

But those on the fence about getting an M1-anything Mac may be thinking: I don't want to switch OSes, but I do want the performance. If AMD is close enough, or at least heading in that direction, I'd get to stick with Windows and all the apps I currently use and get more for less. This doesn't hurt AMD at all.
You are really stretching it. I doubt there's anything close to a relevant number of potential AMD customers who are "on the fence about getting an M1-anything Mac". It doesn't make sense for AMD to cater to a very small niche in a time-limited launch event.


A MacBook Pro may be more expensive than a thin-and-light or any x86 counterpart. But if you're weighing an M1 Max version against this AMD chip, and performance is what you're looking for, the other differences may make up your mind for you, like OS, screen size, touch screen, ports, etc.
If performance is so important, there will be 7845HX laptops in decent sizes and thinness.
The 7940HX will be used in laptops that are $1,000+ cheaper and in some instances cost less than half as much as an M1 Max MacBook. That's a huge price difference, much larger than any performance difference.


Yes, I know. And when they are not "faster" in any given category, they show it. I'm NOT knocking AMD for their normal presentations. I applaud them. Just in this instance, they aren't fully clear. At least not to me. And they should be; they have a history of it. And as I stated above, it only benefits them when they are.
If they aren't clear to you, then that's OK; the most important thing is that they are logical and clear for most of the industry. I already said 7940HX vs M1 Pro = better performance in similarly priced/lower-priced laptops + very good efficiency. We will see the exact details and differences in full reviews. If AMD was not confident about their chip's efficiency, they would have only compared it to Intel, which they beat by a mile; they can already offer better performance than Intel for less than half the power usage.


OK, then show it on the graph. That's literally all I'm saying.

If you go back to the first page and look at the graphs, you will see that the M2 is only used for an AI performance comparison, and that's because the M2 has 40% better AI performance than an M1 Max, as it has an updated NPU.

Seriously. M1/M2 give you all the power, plugged in or not. And battery life expectations are based on what you're doing (video, web, games, etc.).
LoL that has nothing to do with what I wrote.

YES. And I think what I said above would have made them even more compelling.
Not at all.
 
Well, on apple.com it says $3,100. I can post a screenshot if you want.

Nothing escapes me. AMD doesn't have to make computers to know what kind of laptops their chip is going to be used in. And it's definitely not $2,900+ laptops.

Also, you didn't answer my question: why should AMD's middle-of-the-pack chip compete with Apple's best laptop chip?
That isn't the way to look at mobile chips. You compare it to the wattage of the chip. The M1 processor is 30 watts, so you would compare it to AMD's 30-watt chips. Chances are AMD doesn't actually have a comparable chip in both price and wattage, so wattage would certainly be the thing to prioritize between those two.
 
That isn't the way to look at mobile chips. You compare it to the wattage of the chip. The M1 processor is 30 watts, so you would compare it to AMD's 30-watt chips.
Yeah it is, no matter how many excuses are made in this thread to find loopholes to take jabs at AMD.
AMD made a short and quick announcement (I bet most users here didn't watch the clip), showed two CPU rendering benchmarks and an AI benchmark.
Their use of the M1 Pro (although now I think it's possible they used the 10-CPU-core version) clearly suggests that their 7940HX won't compete in the same price bracket as the M1/M2 Max.

Anyway, AMD published more details on their site, where they mention up to 34% faster multithreaded performance over the competition, aka the M1 Pro.
In the footnotes they say how they got that number: testing results demonstrated in Blackmagic DaVinci Resolve, V-Ray, Blender, Cinebench R23 nT, and HandBrake 1.5.1. All quite popular benchmarks.
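As an aside, a composite figure like "up to 34% faster" is usually an aggregate (often a geometric mean) of per-benchmark speedup ratios. Here's a hypothetical sketch of that arithmetic; the per-test ratios below are made up for illustration, not AMD's published results, and the footnote doesn't state AMD's exact aggregation method:

```python
# Hypothetical aggregation of per-benchmark speedups into one headline number.
# The ratios are invented for illustration only.
from math import prod

speedups = {
    "DaVinci Resolve":  1.28,
    "V-Ray":            1.41,
    "Blender":          1.35,
    "Cinebench R23 nT": 1.30,
    "HandBrake":        1.38,
}

geomean = prod(speedups.values()) ** (1 / len(speedups))
print(f"composite uplift: {geomean:.2f}x (~{(geomean - 1) * 100:.0f}% faster)")
```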


Chances are AMD doesn't actually have a comparable chip in both price and wattage, so wattage would certainly be the thing to prioritize between those two.
It will definitely be comparable, especially in price; AMD's APU almost certainly has a much smaller overall die size (similar to the M2, I would say).
In wattage, it doesn't need to be exactly the same.
 
Yeah it is, no matter how many excuses are made in this thread to find loopholes to take jabs at AMD.

No excuses. It’s the industry standard because it makes sense. It’s why AMD and Intel have TDP classes like H and U.
 
No excuses. It’s the industry standard because it makes sense. It’s why AMD and Intel have TDP classes like H and U.
What's the industry standard? Apple's vague CPU power vs. performance invention to show a difference as big as possible without users actually understanding what they are looking at? Never seen anybody use it.

When Apple launched the M1 Pro/Max, they compared it to the i7-11800H, which was, according to them, the representative Windows 8-core CPU, and not the more efficient 5800H, which they used only for the graphics test. Why not for both CPU and GPU? And users cry here about AMD's presentation and make up all kinds of scenarios.
 
Wrong answer.
I keep repeating the obvious. The 7940HX is far from the fastest APU AMD announced.
What ultimately determines competition is price and performance. If the 7940HX uses, let's say, 14 W more than the M1 Max CPU and wins in performance for half the price, how are they in competition?
It's the difference between a thin and light laptop with amazing performance, plugged in or not, versus a heavier laptop that has to be plugged in to get that performance. There are trade-offs here. It may be half the price, but you're trading that lower price for something bigger and hotter. And once you factor in other options (screen, noise from fans, OS differences, etc.), that will end up making the choice between saving $1,000+ or paying more for the benefits it provides, IF those benefits are worth it to the consumer.
Also, AMD directly confirms they aren't. Do you think a few watts will make more of a difference for end consumers than the price and performance of the chip?
It depends. If all you do is run Blender renders all day, you may go with the AMD laptop. Or if you're running Cine, etc. But if you're into Adobe, DaVinci Resolve, or Final Cut, you may think differently.
And you know that the 7945HX will completely trash the M2 Max in CPU performance (some people who saw it at CES said it matched the big-boy 16-core desktop part in a few instances) even at the base 55 W.
Since none of us have seen an M2 Max, this is a guess at best, based on leaks that can change. And even then, in what tasks? Will it game better than an M2? Most likely. But without any other good info, this is just a guess.
So what if it uses more power? It will perform much better.
That's Intel's take on the story. And I'll state it again: I'm happy AMD is moving in this direction. Fewer watts, more performance. But they are not at M1/M2 level yet in performance per watt. These chips don't sell till March.
The 3090 uses more power, but there are instances where it performs 5x better than the M1 Ultra GPU, so the two aren't in competition even if Apple suggested it in a direct comparison.
It uses WAY more power. Like WAY more for a linear comparison.
 
It's the difference between a thin and light laptop with amazing performance, plugged in or not, versus a heavier laptop that has to be plugged in to get that performance. There are trade-offs here. It may be half the price, but you're trading that lower price for something bigger and hotter.
Why don't you actually go and look at the dimensions instead of imagining things? The M1 Pros aren't so incredibly thin and light, and the 7940HX is a 4nm chip; it has different power characteristics because of the node upgrade. What exactly makes you think performance won't be "amazing" when unplugged? Base clock is 4 GHz plus around 13% better IPC, which means performance when unplugged should be at least 30-35% better than the previous generation. And I'm talking about sustained multi-core performance.
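As a rough sanity check on that estimate, the arithmetic looks like this, assuming the previous-generation part (e.g. the Ryzen 9 6900HS) sustains roughly its ~3.3 GHz base clock on battery; actual behaviour varies by laptop and power profile:

```python
# Back-of-the-envelope for the "+30-35% unplugged" estimate above.
# prev_base_ghz is an assumption about the previous generation's sustained clock.
prev_base_ghz = 3.3   # assumed previous-gen (6900HS) base clock
new_base_ghz  = 4.0   # 7940-series base clock cited in the post
ipc_gain      = 1.13  # ~13% IPC uplift cited in the post

uplift = (new_base_ghz / prev_base_ghz) * ipc_gain
print(f"estimated sustained uplift: +{(uplift - 1) * 100:.0f}%")  # roughly +37%
```

That lands slightly above the quoted 30-35%, which is consistent with treating the claim as a floor.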

And once you factor in other options (screen, noise from fans, OS differences, etc.)
Taking into consideration my experience with the Ryzen 5000H and 6000H: screen: OLED (enough said); fan noise: nonexistent when unplugged, the CPU runs at base clock most of the time and the fans barely need to work; OS differences: Windows has plenty of advantages, and a lot of people prefer it over macOS.

That will end up making the choice between saving $1,000+ or paying more for the benefits it provides, IF those benefits are worth it to the consumer.

That's what you think. Actually, a lot of users will get overall sweeter deals with Windows-equipped Ryzen APUs.


It depends. If all you do is run Blender renders all day, you may go with the AMD laptop. Or if you're running Cine, etc. But if you're into Adobe, DaVinci Resolve, or Final Cut, you may think differently.
Actually, it seems like AMD has DaVinci Resolve in the bag this generation, and Adobe should also run really fast on these new APUs. Final Cut is irrelevant; why mention exclusive software if it's a choice situation? If somebody relies on Final Cut, it's not like they will consider a Windows laptop anyway.


Since none of us have seen an M2 Max, this is a guess at best, based on leaks that can change. And even then, in what tasks? Will it game better than an M2? Most likely. But without any other good info, this is just a guess.

The M2 Max should be around 15-17% faster while using more power than that. It's mostly just an overclocked M1 at the end of the day.
The 7945HX is a 16-core CPU. Performance will be 65-80% better than AMD's previous generation, as it literally doubles the cores and also improves the IPC, so what's the big mystery?
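To make that range concrete, here's a hedged sketch of how doubling the cores can map to a 65-80% uplift. The per-core factors are assumptions chosen to show how such a range could arise (IPC gains partly offset by lower all-core clocks under a similar power budget), not measured data:

```python
# Illustrative scaling arithmetic for an 8-core -> 16-core generational jump.
# The per-core throughput factors bundle IPC gains with the lower all-core
# clocks a doubled core count runs at inside a similar power envelope; they
# are assumptions, not benchmark results.
cores_ratio = 16 / 8
for per_core_factor in (0.825, 0.90):
    total_uplift = cores_ratio * per_core_factor
    print(f"per-core factor {per_core_factor:.3f}: "
          f"+{(total_uplift - 1) * 100:.0f}% overall")   # +65% and +80%
```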


That's Intel's take on the story. And I'll state it again: I'm happy AMD is moving in this direction. Fewer watts, more performance. But they are not at M1/M2 level yet in performance per watt. These chips don't sell till March.

Intel's take is that they don't have a choice; their 10nm simply isn't competitive in power usage vs TSMC's 5/4nm tech, so they double down on performance no matter what.
AMD's take is 70% more performance for 40% more power (purely as an example), which is totally different. It would have been a failure if AMD didn't take advantage of TSMC's 4nm and double the number of cores without doubling the power.
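Taking that hypothetical at face value, the performance-per-watt math is straightforward (the 70%/40% numbers are the example above, not measured figures):

```python
# Perf-per-watt implied by "70% more performance for 40% more power".
perf_ratio  = 1.70   # hypothetical generational performance gain
power_ratio = 1.40   # hypothetical generational power increase
print(f"perf/W vs previous generation: {perf_ratio / power_ratio:.2f}x")  # ~1.21x
```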

It uses WAY more power. Like WAY more for a linear comparison.
It uses more power and performs in accordance with that increase. And the 3090 is made on Samsung's old 8nm node, which wasn't even initially designed for high performance, so the 3090 is not an efficiency champ; still, it's not bad for how much performance it outputs. The 4070 Ti performs way better than the 3090 Ti while using way less power.
 
More like this one:
Ehh ... OK. If you say so, even though I have shown that it starts from $2900, straight from Apple's US Apple Store website ... and there you go moving the goalposts to other countries' costs. It still doesn't change the fact that it starts from $2900.

Fascinating, so no matter what, the important thing is to find some kind of loophole to hammer the rest of the hardware manufacturers.
AMD's top-tier 6000 series APU does fit in 14-inch laptops, for example the ROG Zephyrus G14 and Razer Blade 14, and even Intel's i9-12900H found its way into 14-inch laptops like the Zenbook Pro 14 Duo or the ROG Flow Z13.

You can rest assured that AMD's 7845HX (that's the 12-core, 24-thread APU) will see 14-inch laptops and total CPU performance will be at least one generation above the M2 Max.
Now the 6000 series comes into the discussion. Well, I guess you must have invested a lot in AMD. Glad you are happy with whatever they have announced. I think the world is big enough for all these solutions to find their place in the market. So AMD gets bragging rights (power consumption be damned) and you are happy. Good for you.
 
If anything, it looks like Apple Silicon has really increased competition between the chipmakers. This is a great thing, something that was badly needed a decade ago.


If the M2 Max comes out and outperforms AMD's laptop chip, then I would consider that the competition has caught up to Apple sooner than we thought they would.

If the new M2 Max comes out and it lags behind AMD's laptop chip, that's pretty bad for Apple, in my opinion.
Whether or not the M2 Max outperforms the 7040 has literally nothing to do with whether or not anyone has caught up to Apple, sooner or later.
 
Ehh ... OK. If you say so, even though I have shown that it starts from $2900, straight from Apple's US Apple Store website ... and there you go moving the goalposts to other countries' costs. It still doesn't change the fact that it starts from $2900.
I move the goalposts? So if above $3,000 before tax is the price for most of the planet, it's not important? The US discounted price is the one that's more important? LoL

Now the 6000 series comes into the discussion. Well, I guess you must have invested a lot in AMD. Glad you are happy with whatever they have announced. I think the world is big enough for all these solutions to find their place in the market. So AMD gets bragging rights (power consumption be damned) and you are happy. Good for you.
Of course I mentioned 6000 series laptops as examples when Ryzen 7000 laptops aren't officially available yet. The facts I mentioned hit you quite hard, it seems. Anyway, power consumption for the 4nm Zen 4 Ryzen 7000 APUs will be excellent, and the same can be said about performance. Something like an ASUS Vivobook Pro 14X OLED with a Ryzen 7940HX will be an absolute banger; it will also be able to play games really well thanks to the new RDNA3 iGPU.
 
My iMac burned, the PSU finally died after 5+ years. Literally set itself on fire.
Terrible, very sorry. Thank you for clarifying.
Yes they did kill compatibility.
Not sure if you have been with Apple long. However, while I semi-agree with you on this, they do tend to move on WAY more quickly than Microsoft. So the part I don't agree with: they did provide a means to migrate. And Apple has been 64-bit for a long time.
Rosetta works for 64-bit apps. You didn't even seem to understand what I wrote. 32-bit applications don't work.
Sorry.
That's the problem. 64-bit Windows apps and 32-bit Mac apps both don't work now.
I can't sympathize with the Windows part. You bought a Mac. It was a nice-to-have ability, but it is a Mac at the end of the day. PLUS, Apple is willing to let Windows on ARM work on AS. That ball is in Microsoft's court. So, even though I wouldn't advise anyone to get a Mac for Windows work, it "may" come either in hacks or via a proper license (blessing) from Microsoft.
Both worked on the old MacBook Pro. Can you imagine if Microsoft cancelled all 32-bit apps? Terrible idea.
Yes, it would be. But it's also what holds them back. They can't, because they have too many customers that live in the past, with old applications that still need to work. We all know this. But Apple doesn't play by those rules. They move forward, even if it pisses people off. The last two times they did this, they provided a means to transition. Perfect? No. But a means nonetheless: G5 to Intel, and Intel to ARM. It is a pain if you depend on applications and/or vendors that either move too slowly to keep up, or go away while you still depend on that software. But this process isn't new for Apple.
And Apple is a big boy. They can make a laptop that can cool 35W of power consumption. That's what the Ryzen 7940HS needs. Just 35W.
I didn't see 35W. I saw 15-45W. And cooling a chip that can top 5 GHz, even in bursts, will need cooling. Not a little, either. And most likely it will need to be plugged in, or battery life will suffer.
MacBook Air can handle it. Not the M2 MacBook Air, as it has garbage cooling, but that is Apple's fault and nothing to do with the CPU.
They could literally put a cooling pad and a double layer of that aluminum foil they put in there now, and it would be fine. Still without a fan. I'll believe it when I see it that AMD's chip can operate like that and perform at any level near what they claim. We have a few months to see that for sure, as it will be put to the test on many reviewers' sites. But I'm very much leaning toward it not being as good as they claim.
Look up the "ASUS Vivobook S 14X OLED". That's what you are missing. Double the RAM and storage,
The difference being one is built in (non-upgradable, but REALLY FAST). And macOS seems to handle it very well. Plus the GPU gets to use a lot of it. Can't say that for x86. Storage, sure, being stuck as well has its drawbacks. But it too is pretty darn fast. Trade-offs.
120Hz OLED.
I'm very much enjoying that on my iPhone. So, yeah this is a plus on the x86 side.
AMD's best CPU. Same price as the cheapest M2.
Apple doesn't sell the CPU by itself, so this isn't possible to compare. We do know that Apple's hardware as a whole tends to be more expensive than an x86 equivalent. But I would point to those trade-offs as reasons for such a price difference.
Beats the M1 Pro in performance. Stick the new one announced this month in there. That's what you will get.
In a few tests, sure. And the M1 Pro is a year and a half old, with Apple expected to announce the M2 Pro, Max, and Ultra, and maybe even an Extreme. They will leapfrog each other. Which is great.
 
Yes they did kill compatibility. Rosetta works for 64-bit apps. You didn't even seem to understand what I wrote. 32-bit applications don't work. That's the problem. 64-bit Windows apps and 32-bit Mac apps both don't work now. Both worked on the old MacBook Pro. Can you imagine if Microsoft cancelled all 32-bit apps? Terrible idea.
Microsoft has struggled to port their own IDE, Visual Studio, to 64-bit, so of course they won't cancel 32-bit apps. Microsoft released Windows support for 64-bit back in 2002, and it wasn't until 2022 that they shipped a 64-bit Visual Studio. The Mac has supported 64-bit apps since 2007 (technically since the 2003 launch of the G5 Power Mac, but we'll go with x64 support) and went 64-bit-only for the kernel in 2012. If an app hadn't figured out 64-bit in at least the last 8+ years, then it hasn't been updated or its developer didn't care. Apple made this easy with fat binaries in 10.4! Seriously, there isn't much excuse for not having a 64-bit app with over a decade's notice to upgrade.
 
Also, Apple doesn't sell chips; they sell finished products. AMD's long-term potential in chip volume is higher than Apple's, and their growth is mostly limited by the production capacity TSMC is able to allocate to them. AMD right now is TSMC's biggest 7nm customer and soon they will be TSMC's biggest 4/5nm customer.
Apple is TSMC's top customer and AMD is third: https://www.tomshardware.com/news/amd-becomes-tsmc-third-largest-customer

Seems to still be the case recently too: https://www.taipeitimes.com/News/biz/archives/2022/12/08/2003790298

Would love to see references, because as near as I can tell, Apple blows AMD away at TSMC.
 
Why don't you actually go and look at the dimensions instead of imagining things? The M1 Pros aren't so incredibly thin and light, and the 7940HX is a 4nm chip; it has different power characteristics because of the node upgrade. What exactly makes you think performance won't be "amazing" when unplugged? Base clock is 4 GHz plus around 13% better IPC, which means performance when unplugged should be at least 30-35% better than the previous generation. And I'm talking about sustained multi-core performance.
We will all get to see when it's reviewed. I'll reserve full judgment till it's shown, mainly because their showing against the M1 Pro and M2, in my view, isn't enough to validate anything they stated. I didn't see any power usage while running the Blender render. But I know for a fact it doesn't matter with the M1 Pro. Not to mention, the clock speed is, as you state, 4 GHz base. What is the M1 Pro, 3.2 GHz? That's your 30% right there. And if it's hitting 5.2 GHz, I would expect more. Again, at 45 watts it's more power for more clock, versus a year-and-a-half-old design.
Taking into consideration my experience with the Ryzen 5000H and 6000H: screen: OLED (enough said); fan noise: nonexistent when unplugged, the CPU runs at base clock most of the time and the fans barely need to work; OS differences: Windows has plenty of advantages, and a lot of people prefer it over macOS.
Outside of the OLED, it's all based on the consumer's preference. This is a Mac forum; you would expect to find more Mac users here than on a PC or Windows-based forum. I have an M2 iPad Pro. There are no fans, and it's thinner and lighter than any laptop AMD will fit into at present.
That's what you think. Actually, a lot of users will get overall sweeter deals with Windows-equipped Ryzen APUs.
Except for the Windows part. Many macOS users run Windows if they have to, not because they want to. It's a deal-killer for some of us who don't want to run Windows. Others put up with it due to need. Others don't care and are happy to have both exist side by side. We are all different enough.
Actually, it seems like AMD has DaVinci Resolve in the bag this generation,
OK. Three months is a long time to wait for release. I think we should hold judgment until Apple releases an M2 Pro and Max (or more). But you do you.
Adobe should also run really fast on these new APUs. Final Cut is irrelevant;
Well, you're not in the bag for AMD... Final Cut is irrelevant? OK.
why mention exclusive software if it's a choice situation?
You've lost me.
If somebody relies on Final Cut, it's not like they will consider a Windows laptop anyway.
They would consider it if it can work better for them. The reason for DaVinci is that they have a product that now runs even better than Final Cut (for now) on Apple Silicon. I'm very sure Apple will update their software to at least match. But since DaVinci runs cross-platform, IF AMD's chip can run it better, you may have those users switch over for their workflows. IF they can get great battery life and all the extras they want and need, they would consider switching.
If you're only using Final Cut and are in need of a new laptop, again, it's not out of the question to take a look.
Time is money. All factors get weighed against the pros and cons from each.
The M2 Max should be around 15-17% faster while using more power than that. It's mostly just an overclocked M1 at the end of the day.
This is a guess. Just like I can guess the reason it was delayed was that it needed to be on 3nm, and selling an M2 on the current process was "good enough" for now. Maybe they could have gotten a 20% improvement on the M2 with 3nm, and it's up to 30% for the M2 Pro and Max? IDK; neither do you.
The 7945HX is a 16-core CPU. Performance will be 65-80% better than AMD's previous generation, as it literally doubles the cores and also improves the IPC, so what's the big mystery?
Lots of cores, lots of power, and lots of performance. Nothing confusing here. Did they compare it to an M1 Max or Ultra? If so, I'd like to see it. The performance per watt as it scales up would be good to know.
Intel's take is that they don't have a choice; their 10nm simply isn't competitive in power usage vs TSMC's 5/4nm tech, so they double down on performance no matter what.
AMD's take is 70% more performance for 40% more power (purely as an example), which is totally different. It would have been a failure if AMD didn't take advantage of TSMC's 4nm and double the number of cores without doubling the power.
Intel missed the boat. We all know that. AMD ate their lunch and is now comparing themselves to what Apple has been doing. Which, again, is great: they have made the leaps not only to pass Intel but to achieve some of the same improvements Apple has been making. This will keep x86 alive for some time to come.

I still think Apple's way is better. It's got better memory: it is shared between the CPU and GPU, so you can get WAY more video memory than on any other computer on the market. It has more memory bandwidth than any other option on the market, if I remember correctly up to 800GB/s. Trade-offs: it's very much non-expandable. You get it as it is, and it's that way forever, limited to external drives and port adapters. It's expensive. Having said that, I have an M1 Max Mac Studio whose fans I never hear, and it is FAST. I don't really wait for this thing at all.
It uses more power and performs in accordance with that increase. And the 3090 is made on Samsung's old 8nm node, which wasn't even initially designed for high performance, so the 3090 is not an efficiency champ; still, it's not bad for how much performance it outputs. The 4070 Ti performs way better than the 3090 Ti while using way less power.
I'll not comment on Nvidia's $#!Tshow of a power-hungry GPU. Whoops.
 