Because it's not their primary focus. Intel and Nvidia prioritize raw performance first and power second (if not third), and that priority guides how each proposed design change is evaluated and implemented. This is why the rumored top-spec RTX 4000 GPU may exceed half a kilowatt and Alder Lake client parts have ~270 W P-states. People may not realize that Intel's E-cores, unlike Apple's E-cores, were included primarily for performance, not power.
In terms of performance per watt, the M1 pounds anything that has actually been produced on the x86 side. The numbers you present just show the same old 'throw more watts and cores at it' mentality that got x86 into this mess. Eventually they are going to get to the point where air cooling isn't going to cut it (for the real high-end stuff that is already the case), and no matter how safe it is supposed to be, the idea of a water-cooled computer scares me silly.
 

Apple prioritizing perf/W is ideal for their target markets: watch, phone, laptops, even the rumored VR headsets. In addition to battery life, customers now demand low fan noise; for a product like the MacBooks, a majority of customers care more about low fan noise than about raw performance (great perf/W makes it possible to deliver both). Intel's, and more recently Nvidia's, performance-at-any-cost approach was acceptable before data centers became so power- and cooling-intensive, forcing more expensive scale-out instead of scale-up. Data centers fight hard against water cooling because its TCO is very high, so thermal (and hence power) constraints are set accordingly. BTW, I recommend everyone visit the inside of a data center to appreciate why power efficiency matters. It's quite an experience for your ears. The cables that carry power and signal are so heavy that the floors have to be specially designed to tolerate the load.
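To put rough numbers on the TCO point, here's a minimal back-of-the-envelope sketch in Python. The wattage, PUE, and electricity price are illustrative assumptions of mine, not figures from this thread:

```python
# Back-of-the-envelope power cost for one server socket. Every input
# here is an illustrative assumption, not a figure from this thread.

tdp_watts = 210          # assumed TDP of a typical server CPU
pue = 1.5                # assumed power usage effectiveness (cooling overhead)
usd_per_kwh = 0.10       # assumed electricity price
hours_per_year = 24 * 365

annual_usd = tdp_watts / 1000 * pue * usd_per_kwh * hours_per_year
print(f"~${annual_usd:,.0f} per socket per year in power and cooling")
# ~$276 per socket per year; multiply by tens of thousands of sockets
# and every watt saved per unit of work shows up directly in TCO.
```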
 
Weren't you just arguing earlier that power efficiency doesn't matter that much?
I still remember visiting my university's local data centers (I saw four of them, each at a different university, over the course of my education) in the 1990s, and every one of them, even back then, had the AC turned up as high as it would go to keep those servers from overheating. You wore a jacket in those old local data centers if you had any sense. One of them even had a little "Heat kills" poster, with a picture of a server on fire, right next to the thermostat. :p

Given how conservative universities can be with hardware (the main Ohio State campus was still trying to squeeze that last bit of use out of its old punch-card computers in the late 1980s), they were likely using relatively old equipment, but you still have the whole AC/power-usage issue and the cost that comes with it. Unless x86 can find some way around the heat/power issue, it will start to decline in popularity.

In terms of performance per watt, Amazon's Graviton2 (80-110 W) is better than either an EPYC 7571 (180 W) or a Xeon Platinum 8259CL (210 W), a detail the article "AWS Graviton2: Arm Brings Better Price-Performance than Intel" only hints at (though to be fair, they were comparing the Graviton2 to an Intel Xeon Platinum 8175M, a 240 W CPU).
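To make the perf-per-watt arithmetic concrete, here's a small sketch using the TDPs cited above. The benchmark scores are made-up placeholders purely to illustrate the math; substitute real measurements (e.g. SPECrate results) before drawing any conclusions:

```python
# Perf-per-watt arithmetic for the chips above. The TDPs are the ones
# cited in the post; the benchmark scores are hypothetical placeholders.

chips = {
    # name: (tdp_watts, hypothetical_benchmark_score)
    "Graviton2":            (110, 250.0),  # upper end of its 80-110 W range
    "EPYC 7571":            (180, 260.0),
    "Xeon Platinum 8259CL": (210, 255.0),
    "Xeon Platinum 8175M":  (240, 240.0),
}

for name, (tdp, score) in chips.items():
    print(f"{name:22s} {tdp:3d} W  {score / tdp:.2f} score/W")
```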
 