More pixels means more stress on the GPU, so yes, but it depends on the resolution of the external screen
if it's in mirrored mode then there shouldn't be a MASSIVE hit - if you go into extended mode the RAM usage per display is halved, and that's when you may run into trouble.
yup, closing the lid would work fine, although it would add to heat
Yeah, mirrored is a different thing as it just sends the same signal to both displays, i.e. basically one display in use, but dual non-mirrored means there is double the stress, or even more if the resolution is greater on the external.
If you play on the external and close the lid of the MBP, it should help as there is only one display in use and the GPU can concentrate on it.
using 2 displays in extended mode (1 for the game, the other for.. whatever) will affect FPS somewhere in the region of 30%. the 512MB of GPU RAM becomes 256MB for each display.
Dual displays in this situation don't really hurt as I doubt the OP is going to play on BOTH the external and the MBP's screen at the same time (is that even possible?), but as I said, greater resolution stresses the GPU more, so if you were able to play @1280x800 with all settings at high, you may have to lower them to medium when playing @1920x1080.
no, running a monitor doesn't require that - but when running games that might require more than 256MB of GPU RAM it affects it dramatically.
Are you sure? Running the desktop doesn't need 256MB of VRAM!
on singular monitors (or mirrored) yes, but where there are 2 or more extended monitors the effects are much greater.
Anyway, the differences between 256MB and 512MB have been VERY small as seen in benchmarks.
for this conversation, yes haha
More pixels, more work. That's all you need to understand.
If you're running multiple displays, the OS does split the GPU RAM evenly across screens (look at Displays under System Profiler). What I'm not sure about is why there is significantly less power available to any high-demand application running on either screen. Does the OS somehow split the GPU into equal processing-power portions, or is it that when all screens aren't in 'full screen' mode, Apple's graphical processes are still using resources?
yup. 256MB for each, it doesn't matter if one has more intensive requirements. the GPU is a fair place
But does it automatically split the VRAM for both displays?
yup it all affects it, i guess there are other bottlenecks elsewhere too though.. all relative.
If you're using more than internal + 1 external, then it starts to matter. I guess the 330M doesn't have enough memory bandwidth to take full advantage of 512MB, as benchmarks have shown that the difference is like a frame or two, so with dual monitors, 256MB for each, it shouldn't matter that much. Of course it requires other resources to run the second display too, so there'll be worse performance when both displays are in use.
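just to put rough numbers on the "the desktop alone doesn't need anywhere near 256MB" point, here's a quick back-of-the-envelope sketch in Python (the resolutions and 32-bit colour are assumptions for illustration, not anything measured):

    # Rough framebuffer-size estimate: bytes = width * height * bytes_per_pixel.
    # Resolutions and 32-bit colour are illustrative assumptions.
    BYTES_PER_PIXEL = 4  # 8 bits each for R, G, B, alpha

    def framebuffer_mb(width, height):
        return width * height * BYTES_PER_PIXEL / (1024 * 1024)

    internal = framebuffer_mb(1280, 800)    # MBP built-in panel
    external = framebuffer_mb(1920, 1080)   # external 1080p display

    print(f"internal panel: {internal:.1f} MB per frame buffer")
    print(f"external 1080p: {external:.1f} MB per frame buffer")
    print(f"both together:  {internal + external:.1f} MB")
    # Even with double or triple buffering this is only tens of MB,
    # so an idle desktop by itself is nowhere near 256 MB of VRAM.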
wonderful!! sorry for nerding it up
Thanks for the clarification : )
I ended up opting for the standard res MBP anyways. Glad I did : )
like the CPU, the GPU will give power to the processes that require it most. however, the display that isn't showing the "game" still needs to have its images and pixels computed however many times a second (60??) - the image won't remain static in terms of the GPU processing it. this all means less power can be given to the process that needs it most.
If you fire up the developer Quartz Debug app, you can switch on the 'update rectangles' option or something like it. That will draw rectangles around the areas on screen that have changed. I'm assuming this means that the OS doesn't constantly redraw every pixel of the screen, just the bits that matter, which makes sense. I remember reading somewhere that some company had a method for only drawing the portions of the screen that needed to be updated - some demonstration showed that an iPhone-level device could power many-megapixel displays.
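I don't know exactly how Quartz implements this, but the general "update rectangles" idea can be sketched like so (purely illustrative toy code, not Apple's actual mechanism):

    # Toy dirty-rectangle model: only pixels inside changed regions get redrawn.
    screen = [[0] * 1920 for _ in range(1080)]  # pretend framebuffer

    def redraw(region, value):
        """Redraw only the pixels inside one dirty rectangle."""
        x, y, w, h = region
        for row in range(y, y + h):
            for col in range(x, x + w):
                screen[row][col] = value

    # One keystroke might dirty a small rectangle around the character,
    # instead of forcing all ~2 million pixels to be recomputed.
    dirty = [(100, 200, 12, 18)]   # x, y, width, height of the changed area
    for rect in dirty:
        redraw(rect, 1)

    touched = sum(w * h for (_, _, w, h) in dirty)
    print(f"pixels redrawn: {touched} of {1920 * 1080}")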
interesting. i don't have Quartz Debug. but is that referring to redrawing just at the OS level, or at the hardware level too? just because the OS doesn't redraw it doesn't mean the GPU won't have to (i'm guessing).
Well the OS is an abstraction of the hardware. If the OS can't control the GPU, then that would be rather useless.
does the OS decide whether to use the FPU, ALU or similar parts of the CPU? even with the rectangle scenario - the OS will still be required to send instructions back to the GPU to redraw the previously drawn image - the GPU is still required to compute the same number of pixels each time. a bad analogy i know, but can you see what i'm getting at?
it saves the OS some work, but i don't think it would save the hardware any.
I'm not sure about the process myself, especially since the GPU powers every pixel of the screen. But what we can be sure of is that every OS will attempt to limit the amount of work required of any component. Perhaps the GPU really is rendering every pixel on every frame update the OS issues, or maybe the OS tells the GPU to only render a portion, overlaying the previously static portions of the display. The GPU is still technically telling the display device(s) to 'do that', but that's pretty easy. The hard stuff is things that require maths on neighbouring pixels.
I'm going to do some research because I'm wandering dangerously close to pulling stuff from my ass.
wonderful! how much bandwidth does each pixel require!? i presumed it wouldn't be much.... but if you have 2 million pixels (1080p) to transmit, it would be pretty intense to send every second.
There are several levels here. One is the level where the contents of the graphics memory have to refresh what you see on the monitor. That is basically limited by the bandwidth of the connection between card and monitor, but has very, very little cost. It means you can't use some 30" monitors with some Macs because they would need too much bandwidth, but up to the maximum size that works, there is basically no cost.
good point
Then there is the level where your desktop is made up of multiple windows. If you move a window around in front of other windows, your screen is refreshed maybe 60 times per second, and a lot of pixels are redrawn each time, but the graphics card doesn't actually do much work, because all it does is copy from buffered window contents in VRAM directly to the screen; a very, very cheap operation. About the cheapest you can get. It's the kind of stuff that a graphics card from 10 years ago did without breaking a sweat.
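a crude way to picture why dragging a window is cheap: each window keeps its already-rendered pixels in a buffer, and composing the screen is just copying those buffers at their current positions. Minimal toy sketch (window sizes and positions are made up):

    # Toy compositor: each window is a pre-rendered buffer; composing the screen
    # is just copying buffers at their current positions (no re-rendering).
    def compose(screen_w, screen_h, windows):
        screen = [[" "] * screen_w for _ in range(screen_h)]
        for win in windows:                     # back-to-front order
            for dy in range(win["h"]):
                for dx in range(win["w"]):
                    x, y = win["x"] + dx, win["y"] + dy
                    if 0 <= x < screen_w and 0 <= y < screen_h:
                        screen[y][x] = win["glyph"]
        return screen

    windows = [{"x": 1, "y": 1, "w": 6, "h": 3, "glyph": "A"},
               {"x": 4, "y": 2, "w": 6, "h": 3, "glyph": "B"}]

    # "Dragging" window B just changes its x/y; its buffer is reused untouched.
    windows[1]["x"] = 8
    for row in compose(20, 6, windows):
        print("".join(row))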
i see. for text/etc it would be easy. i guess they would take an area slightly larger than the character itself and change the VRAM entry for that? for gaming though, that would get pretty intense! as we all know..
What takes more time is changing the contents of a window. That can be as complicated as you want it to be, but most stuff isn't complex. Like, I type a letter as I type this post, then depending on how clever or not Safari is, more or less of the _window_ will be redrawn, then the OS figures out what area that is on the screen, and it only needs to redraw that area of the screen.
Like I said, they would be trying to make everything do less work.
good point. so all it does is move the "links" of the location of the entire window to a separate location for the screen to display. these people are very smart
Graphics is an intense series of mathematical calculations based on the surrounding pixels, the textures, and other layers used in combination to generate visual effects. Like a basic room: there are bumpmaps, lighting to add, shadows to add, any fog you need to add, specular effects, ANTI-ALIASING (it goes on), and all of these are highly parallelisable mathematical operations. GPUs are just stacks of FPU units. (I'm doing a final-year course in multimedia and graphics btw.)
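to make the "maths on neighbouring pixels" point concrete, here's a toy per-pixel operation (a simple 3x3 average, loosely the sort of thing blur or anti-aliasing passes do); on a GPU every output pixel would be computed in parallel. The image values are made up:

    # Toy "neighbouring pixels" operation: average each pixel with its 3x3
    # neighbourhood. A GPU would run every output pixel in parallel.
    def box_blur(img):
        h, w = len(img), len(img[0])
        out = [[0.0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                total, count = 0.0, 0
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            total += img[ny][nx]
                            count += 1
                out[y][x] = total / count
        return out

    # Tiny made-up 4x4 "image" of brightness values.
    image = [[0, 0, 255, 255],
             [0, 0, 255, 255],
             [0, 0, 255, 255],
             [0, 0, 255, 255]]
    for row in box_blur(image):
        print([round(v) for v in row])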
so how about the scenario before? i presume i was wrong then? when using 2x displays, one gaming and the other idle with windows - the effect of that idle window will only be from using 1/2 the VRAM but not much else? (apart from halving the bandwidth as well).
is that all?
List of device bandwidths, Digital_video_interconnects
The bit-depth of the display setting: say 32-bit. 1 byte for each of R, G, B, and 1 for alpha. Multiply that by the number of pixels, then by the frame rate. 1920*1080 would be roughly 8 MB a frame.
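putting actual numbers on that (assuming 32-bit colour and 60 Hz, which is just the usual case, not a measurement):

    # Uncompressed framebuffer bandwidth: width * height * bytes_per_pixel * fps.
    BYTES_PER_PIXEL = 4   # 32-bit colour: 1 byte each for R, G, B, alpha
    FPS = 60

    def per_frame_mb(w, h):
        return w * h * BYTES_PER_PIXEL / 1e6

    def per_second_mb(w, h):
        return per_frame_mb(w, h) * FPS

    print(f"1080p: {per_frame_mb(1920, 1080):.1f} MB/frame, "
          f"{per_second_mb(1920, 1080):.0f} MB/s at 60 Hz")
    # About 8.3 MB per frame and roughly 500 MB/s at 60 Hz
    # (the exact figure shifts a little depending on MB vs MiB).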
i never denied that
Like I said, they would be trying to make everything do less work.
i might check that one day. i will report back
Could be. Perhaps if you load up a game: with just the one display connected, go into a room and just look around, note the FPS. Now do the same with a 2nd display connected.
in the game scenario - would it re-draw each pixel for every frame? i'm presuming that it would (as opposed to the scenario before where window movements are done via referencing etc, and don't require re-drawing). gaming would be different i guess.
If it were a memory thing, then the FPS should be the same, because the GPU shouldn't need that much memory to contain the textures for a single room.
17.2 Gbit/s over a 2m cable for version 1.2, which is what the apple products use i believe. thinking about it though, 1.1 has only just been approved, with a max of 8.64 Gbit/s - @ 1080p for ONE monitor, 486MB/s translates to 3.8 Gbit/s, so having 2x 1080p monitors on DP quickly reaches that upper limit.
I think if you run out of bandwidth, low FPS is the last thing you'd be worried about. 1080p @ 60fps is 486 MB/s, well below the DisplayPort spec, and internal connectors are even faster.
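converting to bits to compare against the DisplayPort numbers being thrown around (the 8.64 and 17.28 Gbit/s values are the commonly quoted DP 1.1 / DP 1.2 effective data rates; the rest is just rough arithmetic, not a definitive bandwidth model):

    # Compare raw display traffic against DisplayPort data rates.
    DP_1_1_GBIT = 8.64    # commonly quoted DP 1.1 effective rate
    DP_1_2_GBIT = 17.28   # commonly quoted DP 1.2 effective rate

    def gbit_per_s(w, h, fps=60, bytes_per_pixel=4):
        return w * h * bytes_per_pixel * fps * 8 / 1e9

    for name, (w, h) in {"1080p": (1920, 1080), "2560x1600": (2560, 1600)}.items():
        rate = gbit_per_s(w, h)
        print(f"{name}: {rate:.2f} Gbit/s "
              f"(x2 monitors: {2 * rate:.2f} Gbit/s) "
              f"vs DP1.1 {DP_1_1_GBIT}, DP1.2 {DP_1_2_GBIT}")
    # One 1080p stream is ~4 Gbit/s, so it fits DP 1.1 easily; two of them
    # get close to the 8.64 Gbit/s ceiling, which is the point made above.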
DisplayPort 1.1 (is that a typo on the wiki page? it's got 1.1 twice) is 1.35 GB/s. 1600p@60 is 950 MB/s. Still good, but rather close. DPv1.2 gives the headroom. Also, where did you read that the MBPs have DPv1.2?
is that all? and @2560x(whatever) is maxing out the link? ick
Wasn't really pointed at you. That guy basically said what I said, so I was saying, "See?".
i never denied that
Yes, because the scenes are far more complex, there isn't that simple 2D layering that happens on a desktop UI (windows on top of windows etc). There isn't much you can keep from a previous frame to make it easier to generate the next one.
i'm not sure what the GPU bus supports though (mobile bus, that is).
interesting topic
no, i don't believe they are typos. i believe they're different frequency ratings (1.1, then 2.something etc).
awww you're such a great friend
interesting. what about the background renderings? i guess with 3D nothing is relative though
yea i guess i do - was thinking of something before but it doesn't make sense now.
Chances are that each display connector will have its own bus to link from. If you have a look at that Eyefinity 6x DisplayPort ATI card, all those ports don't share a single DP bus.
What do you mean by the GPU bus? You mean how it connects to the Mobo? Well on my machine it's a plain old PCIe x16, which is 4000MB/s. Plenty of headroom.
hmm i'm guessing it's me
Why is it that every thread of yours I join always descends into a two-person thread?