leman said: You read some stuff somewhere and now think that you know everything about how graphics hardware works.

Whatever ad hominems help you sleep at night.

You mean like this?
Even if the first part of his argument was arguably an ad hominem, over 80% of that post was completely fact-based. Responding to the 20% header and none of the 80% body just seems like you're asking for a trollfight.
Facts that supported what I was saying (note he/she never admitted it).
Facts supported what you were saying, and facts supported what they were saying. They admitted that there was overhead associated with the graphics changing operation, but argued that the single display type that required such overhead was mostly obsolete, and that the overhead generated in this specific case was inconsequential to the everyday running of the machine.
The topic of the thread is essentially asking "Inconsequential for how long?"
In other words, sure, it may not be a problem TODAY (questionable thus far, given the reported lag issues), but how long can we reasonably expect the hardware and software to keep up with ever-increasing graphical requirements? The underlying assumption is that a machine such as this should, in three years, perform much as it does today.
Now I see what is going on. You read some stuff somewhere and now think that you know everything about how graphics hardware works.
Anand is probably talking about the RAMDAC/resolution converter, which transforms the digital video buffer into the signal fed to the monitor. This unit essentially supports free scaling, in a sense. AFAIK, the RAMDAC in modern consumer hardware is not built for the kind of HiDPI rendering Apple has to perform, as the RAMDAC cannot downscale a buffer above the native resolution anyway (I might be mistaken about this point, though). The shader-based downscaling Apple uses is still very cheap. Current IGPs have fill rates well over 1 Gpixel/sec (texture filtering performance is substantially higher). The full 2880x1800 frame buffer is about 5 megapixels, so at 1 Gpixel/sec the IGP could fill it roughly 190 times per second; the maximal required 60 fps doesn't even make it break a sweat. The performance overhead is so low compared to the work required to actually draw the UI on the screen that any attempt to optimize that area is likely a waste of time.
P.S. Strictly speaking, the RAMDAC is a more or less obsolete unit used for analog video output. It has been replaced by other hardware that outputs a digital video signal instead. I still use the term RAMDAC here for convenience.
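To make the arithmetic above concrete, here is a minimal sketch in Swift; the 1 Gpixel/sec fill rate is the round-number assumption from the post, not a measured figure, and the variable names are mine:

```swift
// Back-of-envelope check of the fill-rate argument.
// The fill rate is an assumed round number, not a measurement.
let fillRate = 1_000_000_000.0     // assumed IGP fill rate, pixels/sec
let framebuffer = 2880.0 * 1800.0  // full Retina buffer, ~5.2 Mpixels
let targetFPS = 60.0

let fillsPerSecond = fillRate / framebuffer        // ~193 full-buffer fills/sec
let budgetUsed = targetFPS / fillsPerSecond * 100  // ~31% of fill rate at 60 fps

print("Full-buffer fills per second: \(Int(fillsPerSecond))")  // 192
print("Fill-rate budget used at 60 fps: \(Int(budgetUsed))%")  // 31
```

Even if the sustained fill rate were half the assumed number, a full-screen 60 fps downscale would still fit within the available budget.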
I don't believe this for a second. I think it's pretty obvious to anyone who's used the rMBP that the GPU can't handle normal UI operations efficiently at all. I have the ML GM installed and I can easily see things slow down/lag here and there. The sad fact is that the loss of FPS for simple things like scrolling/dragging makes the computer feel much slower and less responsive than it should be.
Even when I have an external monitor plugged in, dragging a Photoshop window from one screen to the other stutters horribly. Little things like this add up to give an overall poor user experience.
The term "future-proof" is ridiculous.
The internals can handle 2880x1800 just fine. The issue is that the software hasn't yet been optimized...
Why are you people so keen on claiming that the rMBP performance issues are in the GPU? Complex UI rendering is still mostly done on the CPU. And scrolling (especially fast scrolling) usually involves completely redrawing the window. Of course, if you have a complex website in a fullscreen window, the rendering overhead is often 4x compared to a non-Retina display, simply because there are four times as many pixels to draw. However, all these things can most likely be fixed by better (optimized) rendering algorithms. I think the ML GM already showed this.
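For anyone wondering where the 4x figure comes from: a HiDPI window renders at a 2x backing scale factor in each dimension, so the same window measured in points covers four times as many pixels. A minimal sketch, with an arbitrary example window size:

```swift
// Why Retina redraw is "often 4x": a 2x backing scale factor
// doubles the pixel count in each dimension.
// The window size is an arbitrary example, not a measurement.
let window = (width: 1440.0, height: 900.0)  // size in points
let scale = 2.0  // Retina backing scale factor; 1.0 on a standard display

let retinaPixels = (window.width * scale) * (window.height * scale)
let standardPixels = window.width * window.height

print("Redraw overhead factor: \(retinaPixels / standardPixels)")  // 4.0
```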
It's pretty obvious it's the GPU. Why would the iPad 3 have basically the same clocked CPU as the iPad 2 but 4x the GPU power? So you're saying all the Retina rendering on the iPad is mostly done by the CPU?