For its performance and power consumption, it's a revolutionary GPU that AMD and Nvidia can't match. Getting mobile RTX 3070 performance from only 40W is almost impossible: the mobile RTX 3070 itself consumes up to 120W, so doing the same work at 40W would be amazing... if it actually works. We'll see.
Aren't the mobile RTX series manufactured on Samsung's 8nm node? I recall that during the run-up to the 30-series release there was confusion over whether it would be TSMC 7nm or Samsung 8nm.

IIRC they went with Samsung because they thought they could bully TSMC into lower prices.
 
@leman

Will all these new graphics cores require more of the RAM allocation?

Essentially, do you think that the new 16GB models will be more memory constrained, like the current 8GB M1s when processing intensive graphics, because of all the extra graphics cores?

I don't do intensive graphics, I'm just curious.
 
@leman

Will all these new graphics cores require more of the RAM allocation?

Essentially, do you think that the new 16GB models will be more memory constrained, like the current 8GB M1s when processing intensive graphics, because of all the extra graphics cores?

I don't do intensive graphics, I'm just curious.

The GPU per se, no matter how big, does not need more RAM. The amount of RAM depends on the problem you are trying to solve. With a bigger GPU, you can choose to solve the same problem faster, solve a larger problem in the same amount of time, or some mix thereof.

For example, in games it is common to use higher-quality assets with a faster GPU. This will obviously require more memory. But the increase in RAM usage is not because of the GPU itself; it's because you want to take advantage of the faster GPU to produce a better-quality result.
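To put rough numbers on that, here is a minimal sketch of how asset quality, not core count, drives memory use. It assumes uncompressed RGBA8 textures and the usual ~1/3 overhead for a full mipmap chain; textureFootprintMB is purely an illustrative helper, not any real engine's accounting:

```swift
import Foundation

/// Approximate memory footprint in MB of an uncompressed RGBA8 texture,
/// optionally including the ~1/3 overhead of a full mipmap chain.
/// Illustrative assumptions, not measurements.
func textureFootprintMB(width: Int, height: Int,
                        bytesPerPixel: Int = 4, mipmapped: Bool = true) -> Double {
    let base = Double(width * height * bytesPerPixel)
    return (mipmapped ? base * 4.0 / 3.0 : base) / (1024 * 1024)
}

// Same scene, higher-quality assets: the memory cost follows the asset
// choice, not the number of GPU cores rendering it.
print(String(format: "2048x2048 texture: ~%.0f MB", textureFootprintMB(width: 2048, height: 2048)))
print(String(format: "4096x4096 texture: ~%.0f MB", textureFootprintMB(width: 4096, height: 4096)))
// Doubling the texture edge quadruples the footprint (~21 MB vs ~85 MB),
// regardless of how wide the GPU is.
```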
 
I suspect that higher monitor resolutions and multiple desktops on multiple monitors use up more memory than a single low-resolution monitor. You need to hold the bitmaps somewhere. I'm pretty sure that 32GB will be enough for me. I think 16 is a bit small for me, though I'm pretty sure 24 would be enough.
 
I suspect that higher monitor resolutions and multiple desktops on multiple monitors use up more memory than a single low-resolution monitor. You need to hold the bitmaps somewhere.

Absolutely, but it's not that much data. An 8K frame will take around 130MB or close to it; a 4K frame is "only" around 30MB. It's a bigger issue for memory bandwidth than it is for RAM capacity. That said, even an 8GB machine can keep multiple 4K frames in flight without any problem.
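As a rough sketch of that arithmetic, assuming uncompressed frames at 4 bytes per pixel (e.g. BGRA8); frameSizeMB is just an illustrative helper:

```swift
import Foundation

/// Size in MB of one uncompressed frame at 4 bytes per pixel (e.g. BGRA8).
func frameSizeMB(width: Int, height: Int, bytesPerPixel: Int = 4) -> Double {
    Double(width * height * bytesPerPixel) / (1024 * 1024)
}

let fourK = frameSizeMB(width: 3840, height: 2160)   // ~31.6 MB
let eightK = frameSizeMB(width: 7680, height: 4320)  // ~126.6 MB
print(String(format: "4K: %.1f MB, 8K: %.1f MB (%.0fx the pixels)",
             fourK, eightK, eightK / fourK))

// Refreshing an 8K frame 60 times a second, ignoring any compression,
// is what turns a modest capacity cost into a real bandwidth cost:
print(String(format: "8K @ 60 Hz ≈ %.1f GB/s of scan-out", eightK * 60 / 1024))
```

That ~7.4 GB/s of scan-out is small next to the M1's ~68 GB/s of memory bandwidth, but it's a permanent tax, which is why this lands on bandwidth rather than capacity.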
 
Absolutely, but it's not that much data. An 8K frame will take around 130MB or close to it; a 4K frame is "only" around 30MB. It's a bigger issue for memory bandwidth than it is for RAM capacity. That said, even an 8GB machine can keep multiple 4K frames in flight without any problem.

I moved my mini to drive a WSXGA + QHD setup instead of 4K + QHD. I have three 4K displays on my Windows desktop with a discrete card to drive them. Just a way to save a bit of RAM/swap while I wait for the M1X. I am hoping that the M1X mini will have a 64GB option. I like lots of RAM on my systems.
 
Absolutely, but it's not that much data. An 8K frame will take around 130MB or close to it; a 4K frame is "only" around 30MB.

The bandwidths are on substantially different scales. Going from 4K to 8K isn't a nominal doubling of bandwidth, as the 2x in the numbers before the 'K' might suggest; the pixel count doubles in both dimensions, so it's a 4x jump.


Unified memory cuts both ways if you're trying to walk and chew gum at the same time. 100MB more allocated to display buffer space is another 100MB pulled away from the disk cache, once you get to the point where the different consumers of memory overlap.


It’s a bigger issue for memory bandwidth than it is for RAM capacity. That said, even an 8GB machine

But pragmatically, Apple has likely coupled capacity to bandwidth. If Apple uses 8 memory channels and two custom RAM packages to service the M1, then the 'M1X' would need something on the order of 16 memory channels and four custom RAM packages. If the minimal RAM package building block is 4GB, then the 'floor' for the M1X would go to 16GB. And if they double up the GPU cores again, they will probably be on track to double up the 'floor' on the SoC-soldered RAM as well.
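A minimal sketch of that coupling; the channel counts and the 4GB-per-package building block are this post's assumptions, not confirmed Apple specs:

```swift
import Foundation

/// Hypothetical scaling where each extra RAM package brings both
/// more capacity and more channels (i.e. more bandwidth) in lockstep.
/// All figures are speculative, mirroring the post above.
struct MemoryConfig {
    let channels: Int      // memory channels (bandwidth scales with these)
    let packages: Int      // soldered RAM packages on the SoC
    let gbPerPackage: Int  // assumed minimal building block
    var floorGB: Int { packages * gbPerPackage }
}

let m1  = MemoryConfig(channels: 8,  packages: 2, gbPerPackage: 4)
let m1x = MemoryConfig(channels: 16, packages: 4, gbPerPackage: 4)
print("M1 floor:  \(m1.floorGB) GB over \(m1.channels) channels")   // 8 GB
print("M1X floor: \(m1x.floorGB) GB over \(m1x.channels) channels") // 16 GB
```

Double the GPU cores again and, under the same assumption, the package count and the capacity floor double with them.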
 
Yep, it's a best case scenario. Forgive me, I'm just daydreaming. I've learnt the hard way with Apple not to get expectations up too high...
Just wondering: is there any computer manufacturer, or any manufacturer at all, whose product matches all the pie-in-the-sky rumors (people making up things they want without knowing anything about how impossible they are to do) before it is introduced?
 
Hard to deduce performance, since the native M1 gaming landscape hasn't improved: there are only about six native games, mostly mobile ports, and nothing recent.

Under Rosetta, the current M1 is around the level of a $100 GT 1030, so it's farfetched to think the M1X/M2 will reach a 6800 or even a 5700.

M1 (forward to 7:52)

GT1030
 
Hard to deduce performance, since the native M1 gaming landscape hasn't improved: there are only about six native games, mostly mobile ports, and nothing recent.

Under Rosetta, the current M1 is around the level of a $100 GT 1030, so it's farfetched to think the M1X/M2 will reach a 6800 or even a 5700.
I bet it'll match in FCP, maybe Resolve.
lol, games
 
lol, games
I also would be surprised if Apple highlighted gaming on any of the machines announced Monday.

I'd expect the focus of graphics performance will mostly be on how it allows faster media creation and live rendering in FCP, perhaps while driving multiple displays (including multiple Pro Display XDRs) from a single machine with no eGPU.

The Blackmagic eGPUs were in part marketed for their ability to offer VR experiences, though I don't see Apple talking that up. Maybe high-compute work related to the ARKit features announced at the past WWDC.

I am still expecting Apple to make a major push into AI/ML, but I still think that will be part of a Mac Pro announcement, with a new Apple Silicon Afterburner card focused on that category.
 
I bet it'll match in FCP, maybe Resolve.
lol, games

What's so funny about gaming? dGPUs are capable of both, and performance generally translates between gaming and GPU compute.

M1 in DaVinci Resolve is around half of a GTX 1650 dGPU, so it's still wishful thinking that the M1X/M2 will reach 6800 or 5700 level, since the 5700 is around 2x and up to nearly 3x faster than the 1650.

 
Hard to deduce performance, since the native M1 gaming landscape hasn't improved: there are only about six native games, mostly mobile ports, and nothing recent.
EVE Online was recently made M1 native. https://www.macrumors.com/2021/10/13/eve-online-now-available-for-macs-apple-silicon/
Under Rosetta, the current M1 is around the level of a $100 GT 1030, so it's farfetched to think the M1X/M2 will reach a 6800 or even a 5700.

I think the M1X 32-core GPU should be around the mobile RTX 3060.
M1 in DaVinci Resolve is around half of a GTX 1650 dGPU, so it's still wishful thinking that the M1X/M2 will reach 6800 or 5700 level, since the 5700 is around 2x and up to nearly 3x faster than the 1650.
Yes, the M1 GPU is not powerful, but it's very efficient compared to the 1650.

 
What's so funny about gaming? dGPUs are capable of both, and performance generally translates between gaming and GPU compute.

M1 in DaVinci Resolve is around half of a GTX 1650 dGPU, so it's still wishful thinking that the M1X/M2 will reach 6800 or 5700 level, since the 5700 is around 2x and up to nearly 3x faster than the 1650.

The Dell laptop with a 1650 has the same render time as the M1 mini. Not sure what you are looking at? So you are comparing a 25W TDP system to a multi-hundred-watt desktop?
 
The Dell laptop with a 1650 has the same render time as the M1 mini. Not sure what you are looking at? So you are comparing a 25W TDP system to a multi-hundred-watt desktop?

The OP referred to desktop GPUs, plus the M1 mini is a desktop. For video editing, a desktop is the go-to anyway.
 
A desktop with a tablet chip.

Nobody who does video editing cares. The #1 metric is how soon they can finish the job. Since video editing is GPU-accelerated, they're only spending $150 to $200 on a 1650 dGPU to put in an old/existing system that will complete the job in half the time of an M1 Mac mini with soldered RAM and storage that costs upwards of $1700. Sounds like a user problem.
 
The OP referred to desktop GPUs, plus the M1 mini is a desktop. For video editing, a desktop is the go-to anyway.
Even then, your source video is not up to snuff; it's 5 months old, and both Premiere and Resolve have had M1-related improvements since it was made. Premiere is now native instead of running under Rosetta, and Resolve 17.3 has significant improvements for the M1.
 
Nobody who does video editing cares. The #1 metric is how soon they can finish the job. Since video editing is GPU-accelerated, they're only spending $150 to $200 on a 1650 dGPU to put in an old/existing system that will complete the job in half the time of an M1 Mac mini with soldered RAM and storage that costs upwards of $1700. Sounds like a user problem.
Still doesn't change the fact that the M1 is a tablet/mobile chip, which is far from a desktop-grade CPU and GPU.
 
Still doesn't change the fact that the M1 is a tablet/mobile chip, which is far from a desktop-grade CPU and GPU.

That's meaningless. The 1650 Ti is 55W TGP and a step up in performance over the 1650, so it's within close range of the Mac mini M1's TDP, which is probably ~30W considering that my MBA M1's CPU alone, without the iGPU, peaks at ~20W.
 