From HP regarding the performance improvements soldered RAM brings.

"Schnell explained that this performance boost might not ever make any difference to consumers. “An increase in bus speed can have a 3-5% system performance improvement, but that improvement may not be noticeable at a customer level because system memory may not be the key limiter in system performance."

It saves space and it's cheaper. Those are the reasons for the switch. I'm happy for anyone to post evidence to the contrary, which would be surprising when no manufacturer wants to stick their neck out and show verifiable numbers. Someone can claim something is an improvement even if it's only a 0.1% improvement, and they'd be telling the truth. It's telling that they never want to say how much a consumer, a pro, or an enthusiast might actually benefit.

Edit: Having had a look at a few expert testimonies, soldering RAM essentially shows performance gains as a theoretical hypothesis only, and not in any real-world, measurable way.
I would have to agree that I don't see how soldering RAM would really show a performance gain.

Integrating RAM on a CPU module certainly has been shown to drastically increase performance, though. It isn't about how you attach it; it's about where you put it.


 
How would the additional step of integrating RAM on a CPU package actually reduce production cost? It adds a non-trivial extra step to the production process as well as the additional cost of the RAM itself. On top of that, this method complicates inventory, because it multiplies the number of SKUs they have to keep: each CPU now has multiple variants, since every CPU-and-RAM combination becomes a separate SKU. This method costs more to manufacture and inventory, not less.

Everything I've read indicates that this CPU and RAM integration is much faster and more energy efficient, though. And the Apple Silicon Macs I have used appear to back up those claims.

To me, it finally gives Apple's lack of RAM upgradability at least a good reason, versus just soldering RAM onto a motherboard (which itself would be quite accurately described by your initial statement, in my opinion).
I read multiple sources that claimed it was cheaper, including manufacturers. A couple of articles pegged the saving at about US$1 per unit, a combination of parts savings and production speed.

The early speed gains seen were due to the type of RAM being used, which at the time was not available in a socketed form. There's nothing that I, or seemingly anyone else, can find with verifiable numbers.
 
I read multiple sources that claimed it was cheaper, including manufacturers. A couple of articles pegged the saving at about US$1 per unit, a combination of parts savings and production speed.

The early speed gains seen were due to the type of RAM being used, which at the time was not available in a socketed form. There's nothing that I, or seemingly anyone else, can find with verifiable numbers.
Well, I would like to hope that those multiple sources were not talking about manufacturing a CPU, but were instead talking about manufacturing a full computer. That could apply to Apple, for instance, but still doesn't remove the extra overhead of inventory management. So if you wanted to say Apple saves money in the long run on a specific unit, I probably couldn't disagree, but that completely ignores supply chain management, which has a substantial cost.

But there is no way that Intel adding the cost of RAM to a CPU makes that CPU cheaper to manufacture, as that itself is self-evident. If it does, I want a CPU with as much RAM as possible on board, as it will obviously be the cheapest. ;)

As for there being no true speed gains, if you believe having Unified Memory with RAM more closely coupled to a CPU doesn't make a real difference, would you honestly be willing to buy a CPU with no on-board cache? They haven't spent the last 30 years continually pushing more memory closer to the CPU because it costs less.
 
Well, I would like to hope that those multiple sources were not talking about manufacturing a CPU, but were instead talking about manufacturing a full computer. That could apply to Apple, for instance, but still doesn't remove the extra overhead of inventory management. So if you wanted to say Apple saves money in the long run on a specific unit, I probably couldn't disagree, but that completely ignores supply chain management, which has a substantial cost.

But there is no way that Intel adding the cost of RAM to a CPU makes that CPU cheaper to manufacture, as that itself is self-evident. If it does, I want a CPU with as much RAM as possible on board, as it will obviously be the cheapest. ;)

As for there being no true speed gains, if you believe having Unified Memory with RAM more closely coupled to a CPU doesn't make a real difference, would you honestly be willing to buy a CPU with no on-board cache? They haven't spent the last 30 years continually pushing more memory closer to the CPU because it costs less.
The caches are a bit different...

I was clearly comparing total build cost with the dollar saving, not a CPU without RAM vs a CPU with RAM. It was so commonly mentioned that I'm sure a cursory Google search will confirm it. I'm not sure why you'd think supply chain management would be impacted so adversely by this as to offset that.

Also, if the numbers look good, why are companies not keen to post them? The best we have is one company saying that integrating the RAM might hypothetically see a 3 or 4% gain, while admitting it would be hard to show because that isn't where the system bottlenecks lie.
 
The caches are a bit different...

I was clearly comparing total build cost with the dollar saving, not a CPU without RAM vs a CPU with RAM. It was so commonly mentioned that I'm sure a cursory Google search will confirm it. I'm not sure why you'd think supply chain management would be impacted so adversely by this as to offset that.

Also, if the numbers look good, why are companies not keen to post them? The best we have is one company saying that integrating the RAM might hypothetically see a 3 or 4% gain, while admitting it would be hard to show because that isn't where the system bottlenecks lie.
Can you clarify exactly how caches are different? Do you believe they used them because of anything other than necessity? They didn’t keep adding multiple levels of cache over the last thirty years because it is better than one level. They have been trying to get the memory as close to CPU as possible for years. They would have put it onboard the CPUs at the start if they could have.

Yes, if you are manufacturing most things, the tradition is that more integration will eventually reduce the overall cost (although often with an initial increase in cost as you develop the tools and process, the unit cost should eventually be cheaper). Presumably even Apple integrating their own 5G modem will eventually save them money... it is just hard to wrap my head around that after the last 5(?) years.

Intel posted their numbers. Lots of benchmarks on Apple Silicon. How do you propose posting the difference on a hypothetical identical CPU without that integrated RAM?

Edit: Sorry, forgot the supply chain management part. You know how it is a pain with Apple that you have to decide up front how much RAM and storage you are ever going to want. Well, if you're Intel, you now have to carry at least a couple of versions of every CPU that has RAM on board. Nobody wants the 16GB version? Then you have a shortage of 32GB and a surplus of 16GB versions, which you never had to worry about before. Sounds minor, but if you are producing hundreds of thousands of units, it is not.
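To put the SKU math in concrete terms, here's a trivial sketch (the model names and memory configurations are made up purely for illustration, not real part numbers):

Python:
# Purely illustrative: how on-package RAM multiplies the SKUs a chip vendor
# has to forecast, build and stock. Names and configs below are hypothetical.
cpu_models = ["Ultra-A", "Ultra-B", "Ultra-C", "Ultra-D", "Ultra-E"]
ram_configs_gb = [16, 32]

skus = [(cpu, ram) for cpu in cpu_models for ram in ram_configs_gb]
print(len(skus))  # 5 CPU models x 2 memory configs = 10 distinct parts

# Without on-package RAM the vendor carries just the 5 CPU SKUs, and the
# 16GB-vs-32GB demand mix is the module makers' forecasting problem.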

Edit 2: Here is a 9-year-old conversation on an engineering forum that is interesting (or maybe not, but I found it interesting, as they discuss lots of this time-worn stuff).

 
The title of the post talks about GPU performance.

The end of the article shifts to CPU performance, misleading readers into thinking Intel's GPU is comparable to Apple Silicon's.

It won't be.

What happened to Snapdragon's prerelease hype? Failed to live up to expectations.
 
For whatever it's worth:

OpenCL Score
Apple A17 Pro: 27051
Apple M2: 28426
Apple M3: 30327
Apple M4: 53188
Intel Core Ultra 5 228V: 25064
Intel Core Ultra 7 268V: 29316

(I'm surprised the M4 is such a leap.) As has been pointed out, the iOS numbers aren't OpenCL scores at all, and are therefore not comparable.
 
Can you clarify exactly how caches are different? Do you believe they used them because of anything other than necessity? They didn’t keep adding multiple levels of cache over the last thirty years because it is better than one level. They have been trying to get the memory as close to CPU as possible for years. They would have put it onboard the CPUs at the start if they could have.

Yes, if you are manufacturing most things, the tradition is that more integration will eventually reduce the overall cost (although often with an initial increase in cost as you develop the tools and process, the unit cost should eventually be cheaper). Presumably even Apple integrating their own 5G modem will eventually save them money... it is just hard to wrap my head around that after the last 5(?) years.

Intel posted their numbers. Lots of benchmarks on Apple Silicon. How do you propose posting the difference on a hypothetical identical CPU without that integrated RAM?

Edit: Sorry, forgot the supply chain management part. You know how it is a pain with Apple that you have to decide up front how much RAM and storage you are ever going to want. Well, if you're Intel, you now have to carry at least a couple of versions of every CPU that has RAM on board. Nobody wants the 16GB version? Then you have a shortage of 32GB and a surplus of 16GB versions, which you never had to worry about before. Sounds minor, but if you are producing hundreds of thousands of units, it is not.

Edit 2: Here is a 9-year-old conversation on an engineering forum that is interesting (or maybe not, but I found it interesting, as they discuss lots of this time-worn stuff).

This debate went round and round in other threads, with supposed electrical engineers taking differing points of view. I don't have the energy. I look forward to any future developments that demonstrate a meaningful difference. 😊
 
From HP regarding the performance improvements soldered RAM brings.

"Schnell explained that this performance boost might not ever make any difference to consumers. “An increase in bus speed can have a 3-5% system performance improvement, but that improvement may not be noticeable at a customer level because system memory may not be the key limiter in system performance."

It saves space and it's cheaper. Those are the reasons for the switch. I'm happy for anyone to post evidence to the contrary, which would be surprising when no manufacturer wants to stick their neck out and show verifiable numbers. Someone can claim something is an improvement even if it's only a 0.1% improvement, and they'd be telling the truth. It's telling that they never want to say how much a consumer, a pro, or an enthusiast might actually benefit.

Edit: Having had a look at a few expert testimonies, soldering RAM essentially shows performance gains as a theoretical hypothesis only, and not in any real-world, measurable way.

I know you've said we'll not be communicating anymore, 😔, but despite your belligerence it's worth touching on each point for anyone else who cares to understand.

Power:

Intel has published a 40% power saving due to on-package RAM. You keep saying nobody is publishing numbers, but there it is. Intel provides media packs and http://intel.com/performanceindex where they provide support for many of their claims. If you don't find what you're looking for there, I suggest you reach out to their PR department and ask them to support their public claim.

I'm not sure why you're so unwilling to accept the physical reality of the situation though. As I said:
Shorter lines and no sockets mean less contact resistance, less trace capacitance, lower I²R losses, lower fCV² losses, and lower-power drivers.
The physics here are pretty clear.
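To put rough numbers on the fCV² term, here's a quick sketch. Everything in it is my own illustrative assumption (about 0.1 pF of trace capacitance per mm, a 0.5V LPDDR5X-class I/O swing, worst-case toggling at half the 8533 MT/s data rate), not an Intel figure:

Python:
# Rough sketch of why shorter traces burn less switching power.
# All constants are illustrative assumptions, not vendor numbers.
CAP_PER_MM = 0.1e-12    # assumed trace capacitance, ~0.1 pF per mm of routing
V_IO = 0.5              # assumed I/O voltage swing, volts
F_TOGGLE = 8533e6 / 2   # assumed worst-case toggle rate for DDR data, Hz

def line_switching_power_mw(trace_mm: float) -> float:
    """Worst-case f*C*V^2 switching power for a single data line, in mW."""
    c = CAP_PER_MM * trace_mm
    return F_TOGGLE * c * V_IO**2 * 1e3

for length_mm in (10, 50, 150):  # on-package-ish vs. short vs. long board routing
    print(f"{length_mm:3d} mm trace: ~{line_switching_power_mw(length_mm):.1f} mW per line")

Multiply the difference by a 64-bit-plus bus and the gap between a few millimetres on package and 15cm of board routing adds up quickly, and the same scaling applies to the I²R and driver terms.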



Performance:

You're referring to throughput. Short, predictable, controlled-impedance lines without sockets mean lower latencies. The difference in latencies is probably not huge, but it's there. It also makes routing a much wider bus more manageable.
It'll depend on the workload, as all these things do, but it's an improvement you get at the same technology level.

Again, this is a pretty straightforward problem to analyze. When routing to DDR, you need to length-match your lines. The wider the bus, the harder that is, and it means all lines must be the same length as the longest line. Every 15mm of length you add to the line you're forced to make longest adds a full clock of access latency at LPDDR5x-8533. That's bigger than I'd anticipated, actually. If your RAM is just 15cm from your processor, 6 inches, you've added 10 clocks of access delay.
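If you want to sanity-check that arithmetic, here's a quick sketch. The propagation velocity (roughly half the speed of light in FR4) is my assumption, and I'm treating one transfer slot at 8533 MT/s as the "clock", the same way the numbers above do:

Python:
# Back-of-the-envelope check of the trace-length latency argument.
# Assumed: signal velocity in a PCB trace ~0.5c; "clock" = one transfer
# slot at the LPDDR5x-8533 data rate.
C_LIGHT = 299_792_458          # m/s
V_TRACE = 0.5 * C_LIGHT        # assumed propagation velocity in FR4, m/s
TRANSFER_RATE = 8533e6         # transfers per second
SLOT_S = 1 / TRANSFER_RATE     # one transfer slot, ~117 ps

def added_clocks(extra_trace_mm: float) -> float:
    """One-way delay added by extra trace length, in transfer slots."""
    delay_s = (extra_trace_mm / 1000) / V_TRACE
    return delay_s / SLOT_S

print(f"{added_clocks(15):.2f} clocks per extra 15 mm")   # ~0.85, roughly one clock
print(f"{added_clocks(150):.1f} clocks for a 15 cm run")  # ~8.5, the right ballpark

Either way you count it, every extra centimetre of routing is time the controller spends waiting before the first bit arrives.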

This, by the way, is the reason that DDR bursts. It's not truly random access, you typically get the data you want as part of a full page of data because it's a better bet to risk sending a bunch of useless data than to wait for each word to be individually requested through all that and other sources of latency. All that extra data traffic just burns more power and your bursts are limited to depths of 16 in DDR5, so you're not going to hide it all and it won't help you in truly random reads or writes. Like I said, workflow dependent.

Using typical fly-by wiring for the command lines, a standard DIMM adds almost 15cm of routing length just on the DIMM itself. An SO-DIMM manages to cut that in half. But in both cases that's just the module, before even accounting for the routing needed to reach the module and the delays through the connector. The on-package memory is not only closer to the controller, but is also configured as dice stacked vertically rather than full packages arranged laterally. The height of a die is far less than the length of a package and its pins.

An increase in bus speed can have a 3-5% system performance improvement

That's quite significant for a single design change. There have been many generations of Intel processors where the aggregate improvement is 10%.

but that improvement may not be noticeable at a customer level because system memory may not be the key limiter in system performance."

It wasn't a quote from HP, and you're omitting the punchline: "if the storage SSD is the system bottleneck, then a small improvement in memory performance won't matter." That's just a dumb thing for Schnell from Dell to say. He may as well have said "it may not be noticeable at a customer level because the user might be asleep or looking out the window when the operation happens." He's probably trying to make the point that your mileage may vary depending on your workflow and other bottlenecks in the system, but that goes without saying (a faster CPU doesn't help if you're memory constrained, a faster SSD doesn't help if your algorithm takes too long to process the data, etc, etc). It just has the unfortunate consequence of leading people like you to think that "experts have divided opinions".

3-5% is a significant improvement without having to wait for a new RAM technology and it's based on solid physical grounds.



Reliability:

If you think that a well designed RAM slot allows free movement of RAM inside it, you have a funny idea of what they're like.

If you think long plastic rails, springs, clips, pressure contacts and hand assembled removable boards mounted at one edge don't move, you have a funny idea of what they're like.



Cost:

Who exactly do you think is saving any money here? Intel? They now need to buy memory they didn't have to before. The OEM? They now have a single source they're forced to buy their memory from and need to guess the demand for various memory configurations when ordering parts ahead of a scheduled build rather than installing modules on its way out the door. There's more to the cost of a product than the raw material costs-- inventory management and delivery delays play a role, as do competition in pricing and multiple sources to mitigate shortages from any one supplier.

From the same article you cherry picked the earlier quote from, Schnell from Dell says "There is no effect on manufacturing as the DRAM packages are standard. There is no impact on the retail price of our laptops". That's followed by a quote from Lenovo: "Both soldered and socketed RAM designs are now quite mature. As a result, we see no impact on the manufacturing process and, therefore, the cost to the consumer." Which leads the author to the conclusion: "It seems that, although beneficial to the manufacturing process, the cost and the ease of production don’t play a big part in companies choosing soldered RAM."

And those quotes were before Intel became their sole source of memory.


Size:

What kind of RAM sticks were they comparing to, to save 250mm²? 😅😅
Here, you and I agree. It's a laughably small estimate. An SO-DIMM, not including the socket itself, is 2030mm². The entire Lunar Lake package, RAM and Foveros stack all included, is 742mm². That's nearly 1300mm² of board space reduction, and much more than that once routing and sockets and thermals are accounted for. That's before even looking at the vertical dimension.




So, why are laptops moving to soldered RAM? Because it produces a smaller, faster, more reliable and lower power product and, who knows, might be a little cheaper too.
 
For whatever it's worth:

OpenGL Score
Apple A17 Pro: 27051
Apple M2: 28426
Apple M3: 30327
Apple M4: 53188
Intel Core Ultra 5 228V: 25064
Intel Core Ultra 7 268V: 29316

(I'm surprised the M4 is such a leap.)

What benchmark is being used for the "OpenGL Score"?

For that matter, who is generating OpenGL scores for the A17/M2/3/4? I assume those are from systems running iOS/iPadOS but I thought OpenGL was not/is no longer native to the iOS/iPadOS platform (nor macOS for that matter).

On the other hand if those are Geekbench numbers wouldn't that be comparing Metal scores versus Vulkan and/or OpenCL scores? It's unclear (to me) if those are directly comparable. But either way, if those are Geekbench numbers, the 30K for the M3 looks a little low for that chip under Metal.
 
3-5% is a significant improvement without having to wait for a new RAM technology and it's based on solid physical grounds.

A one-time 3-5% improvement is not significant to me, and less so if it comes at the expense of not being able to upgrade RAM, not only for the life of the device but also losing that ability in all future devices based on that architecture.

The purported 40% decrease in heat (energy consumption, etc) has value to me in ultrabook/etc products (e.g. MacBook Air). Less so for desktops and "pro" laptops.

Intel's approach of using "Memory On-Package" in their ultrabook-targeted chips while assuming external memory for the workstation- and server-targeted products makes a lot more sense to me. There, not only do I want upgradable memory, I'm also likely to want more RAM than will fit on current MOP chips, and ECC RAM at that.

On the flip side, if MOP is just a stepping stone towards future Processor-In-Memory architectures with massive numbers of CPUs, I will say I'm at least curious. But if so, it would be nice to hear that discussed more.
 
I'm not sure what you mean by "one time", but as RAM clocks increase this improvement scales with them.

What I mean is that in an industry where performance gains are still expected to be exponential (granted, more like 10-20% rather than 40-50% every year), an architectural shift that provides 3% in one year but comes with significant end-user downsides isn't that great. If system A is 10,000x an 8088 and system B is 10,300x, I am not sure anyone cares. At this point, I am not sure how many people will care if their computer finishes a task in 29 seconds versus 30 seconds. But they will notice if they can't add RAM when their needs change, or can't even buy enough RAM to begin with (e.g. the dramatic reduction in maximum capacity on the Mac Pro).

On the other hand, a 40% reduction in heat/etc in the mobile/laptop market is pretty significant, and I understand if integrating the memory onto the CPU is the best way to achieve that with foreseeable technology. More generally, the more a change feels like a genuine engineering tradeoff, the easier it is to accept. The more it feels like an attempt to extract value from the buyer, the more I start to look at alternatives...
 
What benchmark is being used for the "OpenGL Score"?

My bad. "OpenGL" was a typo; I meant OpenCL.

For that matter, who is generating OpenGL scores for the A17/M2/3/4? I assume those are from systems running iOS/iPadOS but I thought OpenGL was not/is no longer native to the iOS/iPadOS platform (nor macOS for that matter).

On the other hand if those are Geekbench numbers wouldn't that be comparing Metal scores versus Vulkan and/or OpenCL scores?

Oof! Yeah, big mistake on my end. I assumed Geekbench would be a little clearer on that.

The M2 and M3 numbers, as well as the (leaked/rumored) Lunar Lake numbers, are OpenCL. The A17 Pro and M4 numbers are not OpenCL. That explains the suspiciously high M4 score.

OpenCL numbers used:



 

No, Intel's latest CPUs failed. And Lunar Lake will fail too.

I hope the CEO is proud. They lost over 10,000 workers and they are tanking further.
 
What I mean is that in an industry where performance gains are still expected to be exponential (granted, more like 10-20% rather than 40-50% every year), an architectural shift that provides 3% in one year but comes with significant end-user downsides isn't that great. If system A is 10,000x an 8088 and system B is 10,300x, I am not sure anyone cares.

Again, that 10-20% change is an aggregate of multiple smaller improvements, and it's exponential because it builds upon the changes of the previous generation: it's not just 10% better than generation A, it's 10% better than generation C, which is 10% better than generation B, which is 10% better than generation A, and therefore 33% better than generation A.

So to say "I'm used to seeing 10-20%, not 3-5%, so I don't care about that little thing" is missing the mark. When Intel claims a 30% improvement on a given benchmark, it is in part because of this improvement in memory (which might very well be more than the 5% Schnell from Dell referenced when discussing a different system). You don't get the big number without multiplying out a lot of little numbers.
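Here's the compounding point in plain numbers; the individual factors are made up purely for illustration:

Python:
# How small multiplicative gains stack into the headline generational number.
# The factors below are illustrative, not measured values.
gen_over_gen = 1.10
print(f"Three 10% generations: {gen_over_gen**3 - 1:.0%} over generation A")  # ~33%

small_wins = [1.05, 1.04, 1.06, 1.04, 1.05]  # e.g. memory, cache, front end, ...
total = 1.0
for w in small_wins:
    total *= w
print(f"Five roughly-5% improvements compound to {total - 1:.0%}")  # ~26%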

And, like I said, this benefit scales with clock rate. If you double the RAM clock, you'll double the number of clocks lost to the extra trace lengths and there's nothing you can do to change the speed of light.

At this point, I am not sure how many people will care if their computer finishes a task in 29 seconds versus 30 seconds.

Sorry, have you been in these forums?

More seriously though, I return to the point of aggregate improvement. Save a second here, a second there, and one more somewhere else, and there's the 10% generational improvement you've come to expect on that 30 second workload.

But will notice if they can't add RAM if their needs change or even buy enough RAM to begin with (e.g. dramatic reduction in maximum capacity on the Mac Pro).

I think the number of people who change the RAM over the life of their machine has dwindled to a relatively small number over the years. And of those, many have little trouble just planning ahead for their future needs. Especially in the laptop market.

I'll grant that it's different though in the Intel world than it is in the Apple world. In the Wintel world, system requirements are usually more granular. Apple will say "requires an Mx processor" and will make technologies available across an entire product generation. Wintel is more likely to say "must have this processor, this GPU and this much RAM".

On the other hand, a 40% reduction in heat/etc in the mobile/laptop market is pretty significant, and I understand if integrating the memory onto the CPU is the best way to achieve that with foreseeable technology. More generally, the more a change feels like a genuine engineering tradeoff, the easier it is to accept. The more it feels like an attempt to extract value from the buyer, the more I start to look at alternatives...

The extent to which things look like cynical money grabs is typically a function of people's ability to understand the technology, their willingness to learn what they don't understand, and the relative impact of the tradeoffs on them personally.

The truth of the matter is businesses need to remain competitive, and extracting more money from the customer than the customer finds a product worth is going to cost you customers. Extracting up to the amount of money a customer finds a product worth doesn't require gimmicks in the board layout.
 

No, Intel's latest CPUs failed. And Lunar Lake will fail too.

I hope the CEO is proud. They lost over 10,000 workers and they are tanking further.
I certainly hope Lunar Lake doesn’t have those oxidation issues, since I’m pretty sure TSMC is fabbing them for Intel. No one should want that.
 
You're referring to throughput. Short, predictable, controlled-impedance lines without sockets mean lower latencies. The difference in latencies is probably not huge, but it's there. It also makes routing a much wider bus more manageable.

Congestion on a network also pragmatically incurs latency. 50 cores putting in different memory requests to one memory controller is going to cause a queue to build up to serially service those requests. If there are far more requests (cars) than controllers (lanes on the road), then things take longer to get through. More time to complete... more real wall-clock latency.

With sky-high concurrent requests, incrementally shorter wires aren't going to make an offsetting difference.
It also matters whether the memory subsystem is skewed more toward GPU computational core counts or CPU core counts; there's roughly an order of magnitude difference in number between those two in consumer systems.
 
Congestion on a network also pragmatically incurs latency. 50 cores putting in different memory requests to one memory controller is going to cause a queue to build up to serially service those requests. If there are far more requests (cars) than controllers (lanes on the road), then things take longer to get through. More time to complete... more real wall-clock latency.

With sky-high concurrent requests, incrementally shorter wires aren't going to make an offsetting difference.
It also matters whether the memory subsystem is skewed more toward GPU computational core counts or CPU core counts; there's roughly an order of magnitude difference in number between those two in consumer systems.

Car analogies, well all analogies actually, are never perfect. If they were perfect they wouldn't be analogies for something, they'd be the thing.

So let me build on yours and see if it helps. If it's still not clear I can try to construct a better analogy. For humans in the kind of stop and go traffic you're describing, you come to a stop waiting some unknown amount of time for the lane ahead of you to clear and then are able to move forward for, say, 8 seconds before needing to hit the brakes and wait again. Miserable. Now assume that your car has been designed such that it takes 10 seconds between you pressing the accelerator and your engine engaging-- that's the latency I'm referring to here. Imagine how much worse that makes everything for everyone.

It takes 18 seconds to make every 8 seconds of actual progress.

When your high-speed LPDDR5x-8533 RAM sends data to you, it does it in a burst of 16 transfers (you can request fewer, but 16 is the highest-throughput mode). Since this is double data rate, the actual data transmission (moving forward in your lane) only takes 8 clocks (the 8 seconds above). If your RAM is 15cm from the controller on the board, then it takes 10 full clocks (the 10 seconds above) for the command to travel at the speed of light down those wires and for the very first bits of data to arrive back on those wires.

It takes 18 clocks to receive every 8 clocks of useful data.
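If it helps to see that as a single number, here's a toy calculation using the same 8-data-clock and 10-latency-clock figures from above. The "few mm away" case assumes roughly one clock of line latency, which is my assumption, and it ignores all the other latency sources mentioned earlier:

Python:
# Toy version of the burst arithmetic above: the fraction of clocks spent
# actually moving data. The 1-clock on-package figure is an assumption.
def burst_efficiency(data_clocks: int = 8, line_latency_clocks: int = 10) -> float:
    """Share of each burst cycle spent delivering useful data."""
    return data_clocks / (data_clocks + line_latency_clocks)

print(f"{burst_efficiency():.0%} of clocks are useful with 15 cm of routing")   # 8/18 = ~44%
print(f"{burst_efficiency(line_latency_clocks=1):.0%} with RAM a few mm away")  # 8/9 = ~89%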

I know it's hard to imagine light taking time to travel such short distances, but that's the reality of the insane speeds we're now running at. The speed of light is the speed of information. If you make it travel further, it takes longer for new information to arrive. It's the difference between running a radio controlled robot in your back yard, on the moon, or on Mars. The further away something is, the longer it takes to talk to and hear back from it.

Of course there are a variety of other delays and challenges to navigate. If your point is that there are multiple sources of congestion and ways to badly design a memory system, then I agree. You could design a memory system badly, or impose a workflow that exercises it in a particular way, and introduce choke points at any number of places in the system-- but this latency makes every one of those choke points more painful.
 
My bad. "OpenGL" was a typo; I meant OpenCL.



Oof! Yeah, big mistake on my end. I assumed Geekbench would be a little clearer on that.

The M2 and M3 numbers, as well as the (leaked/rumored) Lunar Lake numbers, are OpenCL. The A17 Pro and M4 numbers are not OpenCL. That explains the suspiciously high M4 score.

OpenCL numbers used:




Thanks for sharing the sources and details. Here's a new table that I think would be clearer (trying to paper over limitations in Geekbench, which is imperfect in the best of cases... but which I also find helpful directionally...):

Processor | Platform | OpenCL | Metal | Source
A16 Bionic | iPhone 14 Pro | - | 22,543 | -
A17 Pro | iPhone 15 Pro | - | 27,054 | -
M1 | iPad Pro | - | 32,300 | -
M1 | MacBook Pro | 20,324 | 32,530 | -
M2 | iPad Pro | - | 45,211 | -
M2 | MacBook Pro | 28,217 | 45,642 | -
M3 | iPad Pro | - | 45,211 | -
M3 | MacBook Pro | 30,297 | 47,477 | -
M4 | iPad Pro | - | 53,178 | -
M4 | MacBook Pro | 34,087 | 54,362 | Extrapolated from iPad/M4 and MacBook Pro/M3/M2/M1
Intel Core Ultra 134U | - | 15,804 | - | Extrapolated from A350M and 140V
Intel Arc A350M | - | 23,529 | - | -
Intel Arc 140V | - | 27,183 | - | -
Intel Arc A770M | - | 88,086 | - | -
Intel Arc A770 | - | 106,170 | - | -
What I infer from the above is that Apple's M3 GPU runs at about 2x the GPU in Intel's Core Ultra 134U, the Intel processor I'd consider most comparable to the M3 at the time. On the other hand, Intel's Core Ultra 7 258V is likely to substantially close, but not eliminate, the gap with the M4 while also consuming somewhat more power.

P.S. I included the A770 and A770M for comparison, though neither is appropriate for the ultraportable market.
 