Considering we only have Nvidia as an option, then yes, they are the worst.
AMD quite literally has no card available that can compete with Nvidia.

Their 6900 XT costs double the 3080, and it would be better to go with two 3080s or a Ti.
And the 6800 XT is a power-consuming beast, equivalent to the 3070.

But sure, Intel will be the worst once they have their GPUs available.
Can you provide examples of the power consumption issue, and an example of the 3080 being faster than the 6900?
 
When I got my i5 Mac Mini in 2018 I picked it up with 16GB RAM. I do a lot of photo editing in my career and in 2019 picked up a 30 megapixel camera. Editing photos of that size was almost unbearable since tools like Liquify didn't work in realtime. I added an additional 32GB RAM chip assuming that was my bottleneck. It wasn't.

I was seeing all these Minis with high Metal scores on Geekbench 5 and started investigating. They were all running eGPUs, and BlackMagic wasn't the only player in town. I managed to pick up a supported eGPU and it was true plug and play. My OpenCL & Metal scores went from ~4,700 to ~33,000, where they sit now. More importantly, I can edit my large (16-bit) image files quickly, and the GPU takes a load off the processor so it stays cool & not throttled. The GPU was exactly what the Mac was missing all along.
Good argument for getting the M1 Max 32-core, right? Not just for 3D rendering, video editing, and gaming, but even for 2D graphics and digital photo processing speed, in other words?
 
Well you have these

And for power
Tom's bar graphs show one thing, but the Efficiency vs Power table shows another. I'm not sure why they wouldn't lead with the table. I don't know what to make of UserBenchmark; I've been told elsewhere (Reddit, Beyond3D) not to read too much into that site's scores, as they tend to be overly biased against AMD hardware.
 
Good argument for getting the M1 Max 32-core, right? Not just for 3D rendering, video editing, and gaming, but even for 2D graphics and digital photo processing speed, in other words?
Yes, I think those processes could benefit from the higher-performance graphics chips.

At first glance it appears the M1 Pro is getting similar Metal & OpenCL benchmarks to my Mini with the RX 570 eGPU (4GB card), whereas the M1 Max seemingly about doubles those scores. I know which I'd choose to future-proof my Mac if budget permits.
 
Developers DON’T see value in developing for macOS. Perhaps if macOS was selling in the billions, we’d see the same thing happening there.
I think that the new-gen graphics specs on Macs will allow for more games to be ported for Mac. Devs know the minimum specs needed to run their games and porting them for Mac won't be much different than porting them for Nintendo Switch or any other system.
 
I think that the new-gen graphics specs on Macs will allow for more games to be ported for Mac. Devs know the minimum specs needed to run their games and porting them for Mac won't be much different than porting them for Nintendo Switch or any other system.
The thing is, though, the M1 Mac could currently be capable of real-time ray tracing that puts Nvidia to shame. It wouldn’t matter, because there are at most a few million of them in the world, not currently enough to justify the effort of building a custom Metal solution specifically for Macs. To developers, it’s always been about “How much money might I be able to make?”, and with there being so few Macs, the potential sales forecast doesn’t look great. Especially when the effort has to start with a lot of coding that won’t be easily reusable anywhere else.
 
The thing is, though, the M1 Mac could currently be capable of real-time ray tracing that puts Nvidia to shame. It wouldn’t matter, because there are at most a few million of them in the world, not currently enough to justify the effort of building a custom Metal solution specifically for Macs. To developers, it’s always been about “How much money might I be able to make?”, and with there being so few Macs, the potential sales forecast doesn’t look great. Especially when the effort has to start with a lot of coding that won’t be easily reusable anywhere else.
At the present time I think you're correct.

But in 5 years, when a high percentage of (M1) Macs will have the specs to run at least medium-grade games, I think Apple will figure out a way to get some titles onto the Mac platform. Whether that's by introducing a Game Pass/Apple Arcade-type subscription or just by forming their own studios.

What I do know is that my Xbox Series S can play certain titles at 60fps at 1080p (or 1440p) and they look freakin amazing. That's a pretty low bar technically for Apple to hit with the new architecture IMO.
 
At the present time I think you're correct.

But in 5 years, when a high percentage of (M1) Macs will have the specs to run at least medium-grade games, I think Apple will figure out a way to get some titles onto the Mac platform. Whether that's by introducing a Game Pass/Apple Arcade-type subscription or just by forming their own studios.

What I do know is that my Xbox Series S can play certain titles at 60fps at 1080p (or 1440p) and they look freakin amazing. That's a pretty low bar technically for Apple to hit with the new architecture IMO.
100% of the Windows, PlayStation and Xbox games made today are made WITHOUT using Apple’s Metal APIs, though. While the models and data structures may be portable between these three, Apple’s Metal is a different beast which would require coding specifically for it. So, even with more M1 devices out there, the devs will still be factoring in how much it’s going to cost to develop and, more importantly, support bugs and problems that will happen over time.
 
Care to elaborate on what those "design flaws" are exactly?
-Notch (the worst thing ever to happen in UI history)
-Chunky design that looks much older than previous models (looks like a 2008 model)
-No 10GbE Ethernet
-CF card? Really?
-HDMI at 60Hz? Naaah


Nothing new besides the processor and display, which are indeed amazing.
 
A 10W laptop cannot do sustained 2.6TF. At least not on planet earth yet.
So disappointing that you simply disappeared after multiple people proved you wrong.

What you're missing is the price. Yes the M1 can do 2.6TF with 10W, yes the M1 Max can do 10.4TF with 60W, but the M1 Max MacBook costs 6 times the price of the PS5. The PS5 uses cheaper hardware that consumes more power and requires more cooling, but it's cheaper and hey it actually runs a decent amount of games that you'd want to play.

It's an ugly quality not to admit it when you're wrong.
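For what it's worth, here is a quick back-of-the-envelope sketch of the performance-per-watt comparison being argued here. The M1 figures are the ones quoted above; the PS5 numbers (roughly 10.3 TFLOPS and around 200W of whole-console draw under load) are assumptions, not figures from this thread.

```python
# Rough TFLOPS-per-watt comparison using the figures above.
chips = {
    "M1 (10 W)":     (2.6, 10),
    "M1 Max (60 W)": (10.4, 60),
    "PS5 (~200 W)":  (10.3, 200),  # assumed spec and power draw
}
for name, (tflops, watts) in chips.items():
    print(f"{name:14} {tflops / watts:.3f} TFLOPS per watt")
```

On those assumptions the efficiency gap is real (roughly 0.17 vs 0.05 TFLOPS per watt), which is a separate question from the price argument.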
 
So disappointing that you simply disappeared after multiple people proved you wrong.

What you're missing is the price. Yes the M1 can do 2.6TF with 10W, yes the M1 Max can do 10.4TF with 60W, but the M1 Max MacBook costs 6 times the price of the PS5. The PS5 uses cheaper hardware that consumes more power and requires more cooling, but it's cheaper and hey it actually runs a decent amount of games that you'd want to play.

It's an ugly quality not to admit it when you're wrong.

I own the M1 MacBook Pro, and I run extensive machine learning algorithms such as "Random Forest" and "Support Vector Machines". When I GPU-accelerate the tasks, the MacBook only does a sustained 0.9-1.3TF. That’s far from 2.4TF!

The main reason for this is that the SoC is power limited and will not hit its theoretical 2.4TF limit (when all transistors are switching).
 
I own the M1 MacBook Pro, and I run extensive machine learning algorithms such as "Random Forest" and "Support Vector Machines". When I GPU-accelerate the tasks, the MacBook only does a sustained 0.9-1.3TF. That’s far from 2.4TF!

The main reason for this is that the SoC is power limited and will not hit its theoretical 2.4TF limit (when all transistors are switching).

The main reason is that it is called a "theoretical limit" for a reason. It is very difficult to hit it in real-world code: you need a perfect sequence of FMAs with no stalls or dependencies. The only way to achieve that performance is with carefully designed microbenchmarks.

In real-world code you won't be seeing 2.6 TFLOPS on the M1, just like you won't be seeing 30 TFLOPS on an RTX 3080... and it has nothing to do with it being power limited. In fact, if you get to 1.3 TFLOPS in your real-world code, it's already a sign that you are getting good throughput (that's the theoretical max for non-FMA ops).
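To make the real-world-versus-theoretical point concrete, here is a minimal sketch of how achieved throughput is usually estimated: time a large matrix multiply and divide the standard 2·n³ FLOP count by the elapsed time. It runs on the CPU via NumPy purely for illustration (an assumption, not anyone's setup in this thread); the same accounting applies to a GPU kernel, and even a well-tuned GEMM usually lands well below the advertised peak.

```python
import time
import numpy as np

# Estimate achieved throughput from one large matrix multiply.
# An n x n by n x n matmul costs ~2 * n^3 FLOPs (each multiply-add counts as 2).
n = 4096
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

achieved_tflops = 2 * n**3 / elapsed / 1e12
print(f"achieved: {achieved_tflops:.2f} TFLOPS over {elapsed:.2f} s")
```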
 
I own the M1 MacBook Pro, and I run extensive machine learning algorithms such as "Random Forest" and "Support Vector Machines". When I GPU-accelerate the tasks, the MacBook only does a sustained 0.9-1.3TF. That’s far from 2.4TF!

The main reason for this is that the SoC is power limited and will not hit its theoretical 2.4TF limit (when all transistors are switching).
As long as you acknowledge your test results run counter to essentially everyone else's, and thus your claims are essentially baseless. This was demonstrated multiple times in this thread which you chose to ignore, so I don't think any progress will be made here unfortunately.
 
The main reason is that it is called a "theoretical limit" for a reason. It is very difficult to hit it in real-world code: you need a perfect sequence of FMAs with no stalls or dependencies. The only way to achieve that performance is with carefully designed microbenchmarks.

In real-world code you won't be seeing 2.6 TFLOPS on the M1, just like you won't be seeing 30 TFLOPS on an RTX 3080... and it has nothing to do with it being power limited. In fact, if you get to 1.3 TFLOPS in your real-world code, it's already a sign that you are getting good throughput (that's the theoretical max for non-FMA ops).

With the 3080 you also won’t hit the theoretical 30 TFLOPS because of its 320W power limit.
Lots of users have shown that simply undervolting their 3000-series cards reduced the base power consumption below 320W, which increased the card’s power headroom. So when a specific task hits 320W, the GPU can’t switch more transistors; but when the user slightly reduces the voltage of the transistors in use, the card still draws 320W yet switches more transistors, because the lower voltage increases the headroom and pushes more TFLOPS as a result.

I think manufacturers should be honest with their specs and publish usable TFLOPS instead of theoretical max.
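A hedged sketch of the undervolting argument above: dynamic power scales roughly as P ≈ C·V²·f (a textbook approximation, not a figure from this thread), so under a fixed 320W cap a lower voltage leaves room for a higher sustained clock, or for keeping more of the chip busy, within the same budget. The capacitance constant below is made up purely so the numbers look GPU-like; only the trend matters.

```python
# Dynamic power approximation: P ~ C * V^2 * f.
# Under a fixed power cap, a lower voltage allows a higher sustainable clock.
CAP_W = 320.0  # the 3080 power limit discussed above
C = 188.0      # arbitrary constant chosen so 1.00 V gives ~1.7 GHz; illustration only

for volts in (1.00, 0.95, 0.90):
    f_max = CAP_W / (C * volts**2)  # highest clock that fits under the cap
    print(f"{volts:.2f} V -> ~{f_max:.2f} GHz within the {CAP_W:.0f} W cap")
```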
 
Visit the Apple website’s iPhone 13 AR demo. Your device heats up within seconds. Within a minute it gets HOT. That’s because your device doesn’t have proper cooling. Sure, these new M1s have proper cooling, but let’s see how long the sustained performance holds up after thorough testing.

Sure, these are extremely efficient SoC designs, but physics is real as well.
It doesn't matter how much the device heats up in the AR demo. All that matters is the sustained gaming performance, i.e. what it throttles to. There's plenty of evidence that you can get a very good gaming experience even at the level of performance the M1 throttles down to in a passively cooled device.

The M1 (in its passively cooled version), when throttled down to its lowest level of performance under gaming load, is still orders of magnitude more powerful CPU- and GPU-wise than the Nintendo Switch, for example.
 
With the 3080 you also won’t hit the theoretical 30 TFLOPS because of its 320W power limit.
Lots of users have shown that simply undervolting their 3000-series cards reduced the base power consumption below 320W, which increased the card’s power headroom. So when a specific task hits 320W, the GPU can’t switch more transistors; but when the user slightly reduces the voltage of the transistors in use, the card still draws 320W yet switches more transistors, because the lower voltage increases the headroom and pushes more TFLOPS as a result.

Yes, because RTX GPUs use turbo-boost-like technology. At the peak clock (1.7GHz), you get a maximal theoretical throughput of 1.7 (clock) * 8704 (ALUs) * 2 (FMA) ≈ 29.6 TFLOPS. At the base clock (1.44GHz), however, it's closer to 25 TFLOPS.

The M1 doesn't really work that way. It has a nominal clock of ~1.26GHz at 10W, and that's where its peak FLOPS throughput comes from. It can clock lower than that (e.g. when it's thermally constrained, like in the MBA), but there is no comparison to a 300W+ GPU. If you use an M-series chip in a chassis with active cooling, you are probably getting maximal throughput (provided your workload is not too short, so the GPU has a chance to "warm up").
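The peak-throughput arithmetic above is just clock × ALU count × 2 (one FMA counts as two FLOPs). A small sketch using the 3080 figures quoted above; the M1 ALU count of 1024 is an assumption inferred from the ~2.6 TFLOPS figure mentioned elsewhere in the thread.

```python
# Theoretical peak FP32 throughput: clock (GHz) * ALUs * 2 FLOPs per FMA cycle.
def peak_tflops(clock_ghz: float, alus: int) -> float:
    return clock_ghz * alus * 2 / 1e3  # GFLOPS -> TFLOPS

print(peak_tflops(1.70, 8704))  # RTX 3080 at boost clock -> ~29.6
print(peak_tflops(1.44, 8704))  # RTX 3080 at base clock  -> ~25.1
print(peak_tflops(1.26, 1024))  # M1, assumed 1024 ALUs   -> ~2.6
```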

I think manufacturers should be honest with their specs and publish usable TFLOPS instead of theoretical max.

I agree, but this is tricky too, as it will wildly depend on your use case. In fact, they should probably not publish TFLOPS at all, as that is something you can trivially calculate anyway.
 
With the 3080 you also won’t hit the theoretical 30 TFLOPS because of its 320W power limit.
Lots of users have shown that simply undervolting their 3000-series cards reduced the base power consumption below 320W, which increased the card’s power headroom. So when a specific task hits 320W, the GPU can’t switch more transistors; but when the user slightly reduces the voltage of the transistors in use, the card still draws 320W yet switches more transistors, because the lower voltage increases the headroom and pushes more TFLOPS as a result.

I think manufacturers should be honest with their specs and publish usable TFLOPS instead of theoretical max.
You actually have zero evidence that the M1 Max is power limited in that way. You seem to have no concept of power efficiency. Not one single credible source has found the graph Apple provided, which shows the same performance as a competing high-end laptop graphics chip at 100W less power, to be incorrect. Please explain this.

Apple: "Compared with the highest-end discrete GPU in the largest PC laptops, M1 Max delivers similar graphics performance using up to 100 watts less power."

If that is wrong, then we have some almost criminally false marketing, so please enlighten us as to how you went against all mainstream knowledge and single-handedly discovered this.
 
So disappointing that you simply disappeared after multiple people proved you wrong.

What you're missing is the price. Yes the M1 can do 2.6TF with 10W, yes the M1 Max can do 10.4TF with 60W, but the M1 Max MacBook costs 6 times the price of the PS5. The PS5 uses cheaper hardware that consumes more power and requires more cooling, but it's cheaper and hey it actually runs a decent amount of games that you'd want to play.

It's an ugly quality not to admit it when you're wrong.
So funny. So you imagine that anyone anywhere would buy an MBP as a video game console? The price comparison is absolutely ridiculous; the devices are for completely different purposes. I mean, the MBP can do great video editing, but the poor PlayStation can’t. Aw!

Seriously, the comparison was on performance only, not on whether one is a good video game console or not. Now, what was that, something about quality? What was that?
 
-Notch (the worst thing ever to happen in UI history)
-Chunky design that looks much older than previous models (looks like a 2008 model)
-No 10GbE Ethernet
-CF card? Really?
-HDMI at 60Hz? Naaah


Nothing new besides the processor and display, which are indeed amazing.
The Notch - mitigated by Apple and hardly a thing worth worrying about given how long it’s been around. That being said, you either get used to it or you don’t. I’m guessing you wouldn’t.

The design - is fine and it harkens back to older MacBook Pros. Professionals got what they wanted in terms of performance and functionality, they aren’t going to complain.

10GbE - Apple hasn’t had onboard Ethernet for almost 10 years. Had Apple added it, it would mean more heat, more power used and a thicker chassis. Anyone using 10GbE is better served by a dedicated box external to the MBP.

SD Card, no CF Card - Apple’s research must have told them that enough users used it to make it worth putting back on the MBP.

HDMI at 60Hz - Apple doesn’t mean for it to be the end user’s primary means of attaching to a monitor. I suspect there is a lot of disdain for HDMI at Apple. It still gets the job done. There’s an awful lot of whining from people who cannot afford either the MacBook Pro or an HDMI 2.1 monitor. An HDMI 2.0 TV or projector is covered, and that’s enough.
 
My son, via phone: Dad got a deal on an Nvidia GPU. (Something like a 5000?) Had to get a new power supply unit.
 
So funny. So you imagine that anyone anywhere would buy an MBP as a video game console? The price comparison is absolutely ridiculous; the devices are for completely different purposes. I mean, the MBP can do great video editing, but the poor PlayStation can’t. Aw!

Seriously, the comparison was on performance only, not on whether one is a good video game console or not. Now, what was that, something about quality? What was that?
Where did I indicate any of that? I was responding to someone who was claiming it was impossible for the M1 Max to achieve the stated performance per watt because the PS5 needed more than twice the watts to do it. All I was saying is that yes, it is possible to get far better performance per watt than the PS5, but the efficiency isn't free; you have to pay through the nose for it.

I never claimed Mac was a good gaming platform and I'm not sure how you came to that conclusion.
 
The thing is, though, the M1 Mac could currently be capable of real-time ray tracing that puts Nvidia to shame. It wouldn’t matter, because there are at most a few million of them in the world, not currently enough to justify the effort of building a custom Metal solution specifically for Macs. To developers, it’s always been about “How much money might I be able to make?”, and with there being so few Macs, the potential sales forecast doesn’t look great. Especially when the effort has to start with a lot of coding that won’t be easily reusable anywhere else.

A few million? There are more than a “few million” M1-based devices out there, between the M1 MacBooks, M1 iPads, and now the M1 Pro and M1 Max MacBooks. Not to mention that people in the Apple ecosystem pay for software, unlike some of these other ecosystems.
 
A few million? There are more than a “few million” M1-based devices out there, between the M1 MacBooks, M1 iPads, and now the M1 Pro and M1 Max MacBooks. Not to mention that people in the Apple ecosystem pay for software, unlike some of these other ecosystems.
One could argue the opposite, if the popularity of free-to-play loot-box games on iOS is any indication.
 