
Hi,

I wondered if anyone can help. What effect do higher-resolution screens have on overall GPU performance? When a screen that requires greater bandwidth, like a 6K display, utilises one of the Thunderbolt ports, does that directly influence performance, or is it kind of 'ring fenced' from the GPU?

If, say, someone then introduced a second 6K monitor, would they see a big performance difference? Particularly with a base M chip?

Thanks in advance
 
I run the Pro Display XDR at 6K with my base model M1 MacBook Air with no problem. However, to run dual displays at 6K, I think you need to step up to something like the M4 Pro in the Mac mini.
 
Thanks for your replies. Yeah, I just wondered what the performance is like and whether there is a big difference. For example, the M2 Pro MBP can officially support two 6K monitors at 60Hz, but would you notice day-to-day performance dropping because of it?
 
Higher resolution means more RAM consumption. That is probably more important than CPU. Even more so with two 6K monitors.
 
I'm running 8K on an M2 Max without issue.

The framebuffer will be bigger, but with the amount of unified memory we have it is really not a problem.
 
Higher resolution means more RAM consumption. That is probably more important than CPU. Even more so with two 6K monitors.
So are the pixels effectively loaded into RAM? So will you consistently have less memory available whilst using a higher-resolution screen?
 
If I'm not mixing up my numbers, a 4K framebuffer takes around 32MB and a 6K framebuffer around 83MB. At 60Hz the display refresh needs about 2GB/sec at 4K and 5GB/sec at 6K. For 8K everything is quadrupled compared to 4K.

An M2 Max has 400GB/sec maximum memory bandwidth, so a single 8K display constantly consumes 2% of that, since the framebuffer doesn't fit into the SLC/LLC.

An M2 Pro has 200GB/sec maximum memory bandwidth, so dual 6K displays use 5% of that, again all the time. Together the two framebuffers will use about 1% of the RAM on a 16GB machine.
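
If anyone wants to check the figures, here is a quick Swift sketch of the same back-of-the-envelope maths. It assumes a plain 4-bytes-per-pixel framebuffer, which is my assumption; the small differences from the figures above come down to rounding and pixel-format choices, and HDR/wide-gamut formats would be larger.

import Foundation

// Back-of-the-envelope framebuffer maths, assuming 4 bytes per pixel
// (8-bit RGBA). HDR or wide-gamut formats would be larger.
let displays = [
    (name: "4K", w: 3840, h: 2160),
    (name: "6K", w: 6016, h: 3384),
    (name: "8K", w: 7680, h: 4320),
]
let refreshHz = 60.0

for d in displays {
    let bytes = d.w * d.h * 4                 // one framebuffer
    let mb = Double(bytes) / 1_000_000
    let gbPerSec = mb * refreshHz / 1_000     // constant scan-out bandwidth
    print("\(d.name): " + String(format: "%.0f MB, %.1f GB/s at 60Hz", mb, gbPerSec))
}
// Prints roughly: 4K: 33 MB, 2.0 GB/s; 6K: 81 MB, 4.9 GB/s; 8K: 133 MB, 8.0 GB/s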
 
If, say, someone then introduced a second 6K monitor, would they see a big performance difference? Particularly with a base M chip?
No, composing and drawing the UI of the OS does not really strain the GPU and has a negligible effect on performance.

But if you're asking about the kind of workload where each pixel means work to be done (games, graphics, or 3D modelling software), then yes, hardware requirements will scale, sometimes almost linearly, with the total number of drawn pixels.
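
As a rough illustration, here are the pixel-count ratios between common resolutions; for purely fill-rate-bound work the cost grows in roughly these proportions (a simplification, since few real workloads are entirely pixel-bound):

// Pixel-count ratios: a rough upper bound on how per-pixel-bound
// workloads scale with resolution.
let pixels4K = 3840 * 2160   // ≈ 8.3 MP
let pixels6K = 6016 * 3384   // ≈ 20.4 MP
let pixels8K = 7680 * 4320   // ≈ 33.2 MP
print(Double(pixels6K) / Double(pixels4K))   // ≈ 2.45x the pixels of 4K
print(Double(pixels8K) / Double(pixels4K))   // = 4.0x the pixels of 4K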
 
Thanks for your replies, really helpful. Last question (I think!): how is the performance of using, say, these 6K monitors whilst video editing? If the drawing and composing of the OS has negligible performance drawbacks on the GPU, how would this affect performance in, say, Final Cut Pro, where the heavy lifting is done by the media engines/encoders/decoders?

Does anybody have any experience with this using 6K screens?
 
For tasks that max out the GPU, such as video rendering without a hardware accelerator, or 3D rendering / games, you can see a measurable increase in render time or decrease in frame rate when external displays are active. I recall a video testing the M2 Max MacBook Pro when it was new: the guy plugged and unplugged an external 4K display and could see Adobe Premiere's export time change, by a few percent of the total.

However, it is hard to project the same behaviour onto other software and workflows, because how much of the silicon, and which parts of it, a given task uses can vary a lot. What I can say for practical productivity purposes is that the displays are your primary interface with the computer, so their quality should take priority over a performance loss when we are only talking single digits.

Still, for cases where you can manage without, or do not mind losing, displays or resolution, there are scenarios where I have personally done so, for example:
1) doing an overnight batch export on a MacBook Pro where I am not staring at the screen at all: I just unplug the display before starting the task
2) playing games on a MacBook Pro: even when running an external display, I set macOS to mirror the external to the internal display or engage clamshell mode, so that only one display buffer is rendered instead of two, which increases frame rate / game performance
 
Thanks for that detailed reply with examples. I'm just curious at what point it actually becomes almost detrimental to get higher-resolution screen(s) for your workflow when doing certain tasks on a Mac. Particularly with video editing, when modern Macs' media engines etc. take a lot of the load off the CPU/GPU.
 
Thanks for your replies, really helpful. Last question (I think!): how is the performance of using, say, these 6K monitors whilst video editing? If the drawing and composing of the OS has negligible performance drawbacks on the GPU, how would this affect performance in, say, Final Cut Pro, where the heavy lifting is done by the media engines/encoders/decoders?

Does anybody have any experience with this using 6K screens?
I think this scenario is where a 2019 Mac Pro would shine, with multiple dedicated GPUs.
 

Provided the software makes use of them. I have a pair of 6K screens and the 7,1 runs fine with them. As for the LG 6K screens, well, I have less-than-pleasant feedback for LG on those.

I haven't connected them to the M4 Pro Mac mini I have to see how they'd run with that, but it only has to run Zwift.
 
If I'm not mixing up my numbers, a 4K framebuffer takes around 32MB and a 6K framebuffer around 83MB. At 60Hz the display refresh needs about 2GB/sec at 4K and 5GB/sec at 6K. For 8K everything is quadrupled compared to 4K.

An M2 Max has 400GB/sec maximum memory bandwidth, so a single 8K display constantly consumes 2% of that, since the framebuffer doesn't fit into the SLC/LLC.

An M2 Pro has 200GB/sec maximum memory bandwidth, so dual 6K displays use 5% of that, again all the time. Together the two framebuffers will use about 1% of the RAM on a 16GB machine.
Just to add a detail to this otherwise correct breakdown: macOS normally uses triple buffering, with one buffer being shown (the front buffer), one ready to be shown (a finished back buffer), and one being worked on (an in-progress back buffer). So you can multiply those footprints by 3 in most scenarios. And this is without touching on non-colour view attachments like depth and stencil.
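
Taking that on board, a rough sketch of the resident footprint for one 6K display under triple buffering (still assuming 4 bytes per pixel, as in the breakdown above):

// One 6K framebuffer, triple buffered (assumes 4 bytes/pixel and
// exactly three swapchain buffers; depth/stencil attachments excluded).
let oneBuffer = 6016 * 3384 * 4        // ≈ 81 MB per buffer
let tripleBuffered = 3 * oneBuffer     // ≈ 244 MB resident
print(tripleBuffered / 1_000_000, "MB")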
 
Thanks for that detailed reply with examples. I'm just curious at what point it actually becomes almost detrimental to get higher-resolution screen(s) for your workflow when doing certain tasks on a Mac. Particularly with video editing, when modern Macs' media engines etc. take a lot of the load off the CPU/GPU.
Your question is almost impossible to answer; there are just way too many variables to consider.

This used to be a lot *easier* to determine. Almost a decade and a half ago I was using multiple Mac minis (2012 models, I think) for multicam HDMI capture and live streaming, with the CPU and GPU absolutely maxed out. Since it was an iGPU with shared VRAM, it was like a worst-case-scenario opposite of unified memory in the context of your question. My answer at that time was *zero* physical displays plugged in, only remote desktop, so just a 1024x768 virtual display was rendered, because it literally made the difference between dropping frames or not.

Nowadays you just have so much GPU and memory to spare most of the time. Unless the display(s) you use demand the top end of the bandwidth spectrum, you are not going to be impacted in day-to-day interfacing, though as in my reply above you would see an impact in exhaustive tasks such as exporting and rendering.
 
Thanks for that detailed reply with examples. I'm just curious at what point it actually becomes almost detrimental to get higher-resolution screen(s) for your workflow when doing certain tasks on a Mac. Particularly with video editing, when modern Macs' media engines etc. take a lot of the load off the CPU/GPU.
You tax the GPU heavily if rendering 3D to those screens, e.g. when playing games. You can run them in a lower resolution mode for gaming, however.

In 2D mode the impact is fairly small. The frame buffer is less than 50MB, and you will have off-screen buffers in each application, but it's all maybe 200MB at most.

Each Thunderbolt port is an independent 40Gbps link. Sharing a Thunderbolt port between a 6K screen and another high-bandwidth connection, such as storage, is not a great idea. It works fine, but the storage could get slowed down.

If you want the fastest rendering time, you already know the drill: quit unused apps, or all other apps, and let the machine be idle apart from rendering. The higher res of the screens will have very little impact if they're just sitting there plugged in.
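
To put a rough number on that port budget: uncompressed 6K at 60Hz already eats most of a 40Gbps Thunderbolt link. A sketch of the arithmetic, where the 10-bits-per-channel figure is my assumption and blanking/encoding overhead is ignored (DSC compression brings the real requirement back down):

import Foundation

// Raw video bit rate for 6K at 60Hz, 10 bits per channel (30 bits/pixel).
// Ignores blanking intervals and link-encoding overhead, so the true
// link requirement is somewhat higher before compression.
let rawBitsPerSec = Double(6016 * 3384) * 30 * 60
print(String(format: "%.1f Gbit/s", rawBitsPerSec / 1e9))  // ≈ 36.6 of a 40 Gbps link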
 
I can add my experience. I use an energy-rating application to do thermal simulations on new house plans, and its interface leans heavily on the GPU to draw elements over a PDF background. It's pretty inefficiently coded, being a PC app first. I run 2x 28" 4K screens on my M3 Pro MacBook Pro. I don't see any performance difference in my app depending on the number of screens I have running, but what does make a difference is the scaling I have the screens set to. If I have them at native or 1080p, performance is fine. If I pick a scale in between, the app is a lot more sluggish.
 
Thanks for your reply. Does that mean that if you use a 6K screen that scales correctly for macOS, it could tax the GPU less?
 
More precisely, when you use a 6K display such as the Pro Display XDR, you either set your UI resolution to 1x at 6016x3384 for the least impact (which is unusably small), or 2x at 3008x1692 for just slightly more (this is the default UI res).

Anything else means a non-integer scaling ratio: macOS will render the UI at 2x the chosen res into a backing buffer, and then downscale it to 6K for the DisplayPort output.
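
As a sketch of what a fractional mode costs, here's the arithmetic for one non-integer option; the "looks like 3360x1890" choice is just an example I picked:

// Fractional scaling: macOS renders the UI at 2x the chosen
// "looks like" size, then downsamples to the panel's native 6K.
let nativePixels  = 6016 * 3384               // panel pixels
let backingPixels = (3360 * 2) * (1890 * 2)   // 6720x3780 rendered pixels
print(Double(backingPixels) / Double(nativePixels))  // ≈ 1.25x pixels drawn per frame
// ...plus an extra GPU downscale pass from 6720x3780 to 6016x3384.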
 
Am I right in thinking, though, that it's just the UI that is at 2x 3008x1692 when it does this? So the monitor itself isn't actually at that resolution and is still at 6K?
 
The way I look at it (no pun intended): if you have a 27" QHD set at 2560x1440 and a 5K at the same rez in Retina mode, any window you open, i.e. Finder/Mail/Safari etc., will look exactly the same size; it will just be clearer on the 5K (much like an iPad). You don't gain any extra screen space.
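
Put another way, window geometry lives in points, and the display's scale factor only changes how many pixels back each point; a small sketch:

// Window sizes are defined in points; scale factor only affects sharpness.
let qhdPoints   = (w: 2560 / 1, h: 1440 / 1)   // 27" QHD at 1x
let fiveKPoints = (w: 5120 / 2, h: 2880 / 2)   // 27" 5K at 2x (Retina)
print(qhdPoints, fiveKPoints)  // both (w: 2560, h: 1440): same usable area
// The 5K simply renders 4x the pixels into the same point grid.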
 
Am I right in thinking, though, that it's just the UI that is at 2x 3008x1692 when it does this? So the monitor itself isn't actually at that resolution and is still at 6K?
By default, macOS will output the exact native resolution of the attached display, so for a 6K display it will always output 6K at the final stage. This can be changed, but you have to go all the way into System Settings, alt-click to get the resolution list to show, and then toggle "Show all resolutions" to deliberately choose one marked "(low resolution)" for the Mac to output something else.

There are obvious exceptions, namely some full-screen apps such as games: they more or less all default to the scenario above, disregard your display's native or maximum specs, and output a lower resolution instead. This is typically done for frame rate / performance reasons, since asking a Mac to run games at native Retina res is asking too much, even for the Max and Ultra chips with demanding games.
 