That's meaningless. The 1650 Ti has a 55W TGP and is a step up in performance over the 1650, so it's within close range of the Mac mini M1's TDP, which is probably ~30W considering my MacBook Air M1's CPU alone, without the iGPU, peaks at ~20W.
You forgot to add the CPU and other components. The M1 is an SoC; it has more than just a GPU and CPU. I don't think the M1's GPU power consumption is more than 10W, based on my testing. Also, TGP doesn't reflect actual power consumption. Did you even check its actual power draw?

It's more than 55W: the system draws 83W just at idle, and around 220W while playing Battlefield V. I've attached proof against your claim.

Anandtech's testing shows the M1's GPU power consumption is between 7 and 10W. Do you really want to compare a desktop 1650 to the M1? 10W vs 220W already says a lot.

You clearly have no idea what you are saying.
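A quick sanity check on the ratio being argued about here, using only the thread's own figures (both numbers come from the posts above, not from independent measurement):

```python
# Ratio of the two power figures quoted above. Note they are not
# apples to apples: 10 W is the M1 GPU alone (per Anandtech), while
# 220 W is whole-system wall draw for the desktop 1650 box.
m1_gpu_watts = 10
desktop_system_watts = 220

power_ratio = desktop_system_watts / m1_gpu_watts
print(f"Desktop system draws ~{power_ratio:.0f}x the M1 GPU's power")
```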
 
You forgot to add the CPU and other components. The M1 is an SoC; it has more than just a GPU and CPU. I don't think the M1's GPU power consumption is more than 10W, based on my testing. Also, TGP doesn't reflect actual power consumption. It's more than 55W; the system draws around 83W just at idle. You clearly have no idea what you are saying.

You forgot to read the part where video editing is GPU-accelerated, so the CPU sits idle. Nobody edits video on an underpowered M1 and waits around forever when they can cut their time to a fraction with a desktop+dGPU, or even a laptop+eGPU, except first-time users.
 
You forgot to read the part where video editing is GPU-accelerated, so the CPU sits idle. Nobody edits video on an underpowered M1 and waits around forever when they can cut their time to a fraction with a desktop+dGPU, or even a laptop+eGPU, except first-time users.
220W while playing Battlefield V; around 80W at idle.

Also, the M1 is just an entry-level chip that has been destined for the iPad Pro for several years. Do people really edit videos on an iPad Pro? You're comparing an iPad Pro-class chip to a desktop, which misses the point, and on top of that you're comparing against a desktop GPU alone. Also, video editing still requires the CPU. I highly doubt you actually edit videos in real life, because even at idle a CPU draws quite a lot of power, so ignoring CPU power consumption is nonsense. GPU acceleration does NOT mean the CPU consumes no power; it still does, and editing software still needs multi-core performance.
 
Given a power budget of about 60 to 85W TDP, I would expect the next-generation MacBook Pro to outperform anything from Intel and AMD, period. Double the number of performance cores, double the memory subsystem, 4x the GPU cores, and you'd see some pretty interesting numbers.
 
Given a power budget of about 60 to 85W TDP, I would expect the next-generation MacBook Pro to outperform anything from Intel and AMD, period. Double the number of performance cores, double the memory subsystem, 4x the GPU cores, and you'd see some pretty interesting numbers.
You're talking about the CPU only, I guess? Because the MBP's GPU will not surpass an AMD dGPU.
And if the CPU is just a 20% increase in raw performance, it will outperform most Intel and AMD chips, but not all, I think.
 
If all you care about is the lowest possible power consumption and you have a lot of free time, skip the expensive M1 power pig and use a $55 Raspberry Pi 4 for video editing; it consumes ~6W max.

 
You forgot to read the part where video editing is GPU-accelerated, so the CPU sits idle. Nobody edits video on an underpowered M1 and waits around forever when they can cut their time to a fraction with a desktop+dGPU, or even a laptop+eGPU, except first-time users.

Lol bud, you're at it again? Unfortunately I have to dispel your little fantasy. The M1 goes toe to toe with an RTX 3080 for video editing in Premiere Pro (and is considerably faster using Final Cut):

 
You forgot to read the part where video editing is GPU-accelerated, so the CPU sits idle. Nobody edits video on an underpowered M1 and waits around forever when they can cut their time to a fraction with a desktop+dGPU, or even a laptop+eGPU, except first-time users.
That might be true in the old CPU/GPU world, but the M-series chips have done away with it. Yes, they have GPU cores, but they also have a neural engine (great for offloading filters), hardware video and audio encoders and decoders (great for taking load off the GPU/CPU), direct memory access for every part of the SoC (speeds things up in general), a very fast integrated SSD (great for caching inactive processes), etc.
So you can either stay with the old CPU/GPU paradigm, or start offloading tasks to dedicated silicon and see incredible speed gains.
And thus you can edit 4K H.265 on an M1 smoothly, without proxies, at the highest quality settings, all because the CPU and GPU are offloaded as much as possible!
 
You forgot to read the part where video editing is GPU-accelerated, so the CPU sits idle. Nobody edits video on an underpowered M1 and waits around forever when they can cut their time to a fraction with a desktop+dGPU, or even a laptop+eGPU, except first-time users.
Have you used Final Cut on an M1? As a video editor myself, I find it extremely capable; it holds its own against some very decent desktop rigs I've used. That said, I'm eagerly awaiting the M1X announcement on Monday. We're all aware the M1 is Apple's low-end consumer chip, and though it's very impressive for what it is, it does have some limits. Let's see what they can do with a bit more power at hand :cool:
 
That might be true in the old CPU/GPU world, but the M-series chips have done away with it. Yes, they have GPU cores, but they also have a neural engine (great for offloading filters), hardware video and audio encoders and decoders (great for taking load off the GPU/CPU), direct memory access for every part of the SoC (speeds things up in general), a very fast integrated SSD (great for caching inactive processes), etc.
So you can either stay with the old CPU/GPU paradigm, or start offloading tasks to dedicated silicon and see incredible speed gains.
And thus you can edit 4K H.265 on an M1 smoothly, without proxies, at the highest quality settings, all because the CPU and GPU are offloaded as much as possible!
Can you elaborate on your take on the GPU, chiplets, memory bandwidth, and so on? Inquiring minds, something something..
 
Do we think they added hardware for BVH/intersection testing this time? They didn't for the A15; do we think they're using the same GPU cores?
 
That might be true in the old CPU/GPU world, but the M-series chips have done away with it. Yes, they have GPU cores, but they also have a neural engine (great for offloading filters), hardware video and audio encoders and decoders (great for taking load off the GPU/CPU), direct memory access for every part of the SoC (speeds things up in general), a very fast integrated SSD (great for caching inactive processes), etc.
Exactly. Apple took everything they learned with the iPhone and started applying it to the desktop paradigm. All under one roof. This is only the beginning.
 
The 16-core GPU should be close to the 5600M; the 32-core GPU should be considerably faster.
Are you sure? I thought the 32-core would land around the AMD Pro 5600M.
I mean, in Final Cut Pro it will be faster, a lot faster, but in Maya, Adobe Premiere Pro, and games? Do you still think the 32 cores will be faster than the current 5600M with HBM2?
Thank you
 
Are you sure? I thought the 32-core would land around the AMD Pro 5600M.
I mean, in Final Cut Pro it will be faster, a lot faster, but in Maya, Adobe Premiere Pro, and games? Do you still think the 32 cores will be faster than the current 5600M with HBM2?
Thank you

[Attached image: Apple-M1X-GPU-performance-estimates.jpg]



These are just estimates derived from what is known. Also, they assume M1 GPU cores, not the newer A15 ones.

 
The question is whether the M1X will be on par with, or beyond, the AMD 5600M HBM2 in every way.
Probably, for 99% of use cases, yes. The 5600M is about 11.6 TFLOPs at half precision and ~5.8 at single. The current M1 is about 2.6 at single precision; with 4x the number of cores in the 32-core version, you'd expect almost-linear scaling to about 10.4. If they took the GPU from the A15, which is significantly faster (20 to 40%), you'd be looking at up to ~14 TFLOPs of FP32 performance, putting it above Nvidia's 3060M and AMD's 6800M.

One area where I don't expect them to be able to compete is pure memory bandwidth. They will have a wider 256-bit bus due to the increase in CPU cores from 4 to 8, but they'll likely still be using LPDDR4X, which is slower but much less power-hungry than GDDR6.
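The scaling estimate above is simple arithmetic; all inputs are the poster's own figures (M1 FP32 throughput, the 4x core count, the 20-40% A15 uplift), and real GPUs rarely scale perfectly linearly:

```python
# Linear-scaling sketch using the figures from the post above.
m1_fp32_tflops = 2.6    # 8-core M1 GPU, single precision (poster's figure)
core_multiple = 4       # 32-core part vs. the 8-core M1

scaled = m1_fp32_tflops * core_multiple    # assumes perfect linear scaling
with_a15_uplift = scaled * 1.4             # upper bound: +40% per core (A15)

print(f"32-core, M1-style cores:  ~{scaled:.1f} TFLOPs")
print(f"32-core, A15-style cores: up to ~{with_a15_uplift:.1f} TFLOPs")
```

The lower bound of the A15 case (`scaled * 1.2`, about 12.5 TFLOPs) still sits above the 5600M's ~5.8 FP32 figure quoted above.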
 
[Attached image: Apple-M1X-GPU-performance-estimates.jpg]



These are just estimates derived from what is known. Also, they assume M1 GPU cores, not the newer A15 ones.

I don't trust wccftech, and that panel is from Dave2D, but we'll see; it could be accurate.
 
I don't trust wccftech, and that panel is from Dave2D, but we'll see; it could be accurate.
The estimate seems to be a reasonable ballpark guess: 4x the cores, ~2.5x the performance. At least they seem to be trying to account for bandwidth limits and the like.
 
Are you sure? I thought the 32-core would land around the AMD Pro 5600M.
I mean, in Final Cut Pro it will be faster, a lot faster, but in Maya, Adobe Premiere Pro, and games? Do you still think the 32 cores will be faster than the current 5600M with HBM2?
Thank you

Absolutely. One can look at Geekbench Compute scores as a proxy for compute performance (M1 ~22k, Pro 5600M ~45k), or at Wild Life Extreme (~4.5-5k for the M1 and probably around 8k for the 5600M Pro, estimated). But please keep in mind that Wild Life performs very well on the M1 because it's actually optimized for it, as few games will be.
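For what it's worth, the Geekbench proxy above works out to roughly a 2x gap (both scores are the poster's approximate round numbers):

```python
# Ratio of the approximate Geekbench Compute scores quoted above.
m1_score = 22_000
pro_5600m_score = 45_000

ratio = pro_5600m_score / m1_score
print(f"Pro 5600M scores ~{ratio:.1f}x the M1 in Geekbench Compute")
```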
 
If time is money, the dGPU is 20x faster. That was before OpenCL, or something in Big Sur, broke after the 11.4 OS update, so the app no longer works.

M1
51103 H/s (61.31ms)

dGPU
1057.0 kH/s (76.19ms)
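The two rates above are quoted in different units (H/s vs kH/s); normalising them confirms the roughly 20x figure:

```python
# Normalise the hash rates quoted above to H/s and compare.
m1_rate_hs = 51_103             # M1: 51103 H/s
dgpu_rate_hs = 1_057.0 * 1_000  # dGPU: 1057.0 kH/s -> H/s

speedup = dgpu_rate_hs / m1_rate_hs
print(f"dGPU is ~{speedup:.1f}x faster than the M1 on this workload")
```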
 
It sounds to me like the lower-end M1X (16-core GPU) will best the Xbox Series S, at far lower wattage, in terms of teraflops.

The 32-core variant could beat the PS5…
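A rough way to check that claim, assuming the M1's ~2.6 FP32 TFLOPs scales linearly per GPU core (a generous assumption) against the consoles' published specs (Series S ~4 TFLOPs, PS5 ~10.3 TFLOPs):

```python
# Back-of-envelope console comparison under a linear per-core
# scaling assumption; console figures are the published specs.
tflops_per_core = 2.6 / 8       # M1: ~2.6 FP32 TFLOPs over 8 GPU cores

m1x_16 = tflops_per_core * 16   # 16-core GPU estimate
m1x_32 = tflops_per_core * 32   # 32-core GPU estimate

print(f"16-core: ~{m1x_16:.1f} TFLOPs vs Series S ~4.0")
print(f"32-core: ~{m1x_32:.1f} TFLOPs vs PS5 ~10.3")
```

On those assumptions the 16-core part clears the Series S comfortably, while the 32-core part only edges out the PS5's figure, so "could beat" is about right.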
 
If all you care about is the lowest possible power consumption and you have a lot of free time, skip the expensive M1 power pig and use a $55 Raspberry Pi 4 for video editing; it consumes ~6W max.

The Raspberry Pi 4's GPU performance is terrible. Are you trolling or what? The M1's GPU consumes around 10W, and yet it performs around a GTX 1050~1060. You clearly don't know anything about the M1, yet you keep bringing false and idiotic info to this thread.
 