I believe 3dfx used to dabble in 96-bit memory architectures with their original Voodoo line. =]
And if that happens, people will complain about how 192-bit is "only halfway there" and how there needs to be "a REAL successor to 128-bit."
Lol.... it's not a matter of succession. A 256-bit memory architecture vs. a 128-bit one is simply four 64-bit memory units vs. two. It's more silicon, not a technology advancement. High-end GPUs have had 256-bit buses for years now as far as I know, going back to the Radeon 9700 and, I think, the GeForce FX 5900.
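To put numbers on it, here's a quick Python sketch of that point (the 64-bit channel width follows what I said above; real controller layouts can differ):

```python
# The "bus width" is just identical 64-bit channels ganged in parallel.
# Channel counts below are illustrative, not any particular card's layout.

CHANNEL_BITS = 64  # one memory controller/channel

for channels in (2, 3, 4, 8):
    print(f"{channels} channels -> {channels * CHANNEL_BITS}-bit bus")
# 2 channels -> 128-bit bus
# 3 channels -> 192-bit bus
# 4 channels -> 256-bit bus
# 8 channels -> 512-bit bus
```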
192-bit would actually take full advantage of 512 MB, unlike the dated 128-bit architecture, so considering the most VRAM we'll see on the MBP is 512 MB for the time being, it'd be worlds better than 128-bit.
That's... so stupid. This is like the second time I've seen you post this, and it doesn't even make sense. "Take full advantage of 512 MB"? The only way you "take advantage" of a 512 MB frame buffer is if all of it is actually needed at any given moment, and your GPU is powerful enough that it isn't already frame-rate limited.
Stop saying this. A 128-bit memory architecture isn't "dated technology"; it's precisely the same technology as 192-bit, 256-bit, or even 512-bit memory. There are just fewer memory controllers.
In fact, that's the most disappointing thing about mid-range GPUs right now: 128 measly bits. By far the biggest performance hit.
Go do some reading and research. How does a 128-bit memory architecture affect performance? It affects memory bandwidth. How do you increase memory bandwidth? One of a few ways: widen the memory bus (to 192-bit, 256-bit, etc.), raise the memory clock, or, among other things, optimize the core and the memory subsystem itself with bandwidth-saving features like ATI's Hyper-Z.
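If you want to see why bus width is only one lever, here's a back-of-the-envelope Python sketch; the clock figures are made up for illustration, not any real card's specs:

```python
def peak_bandwidth_gbs(bus_width_bits: int, effective_clock_mhz: float) -> float:
    """Theoretical peak memory bandwidth in GB/s.

    effective_clock_mhz is the data rate (e.g. DDR memory at 800 MHz
    transfers twice per clock, so its effective rate is 1600 MHz).
    """
    bytes_per_transfer = bus_width_bits / 8
    transfers_per_second = effective_clock_mhz * 1e6
    return bytes_per_transfer * transfers_per_second / 1e9

# Same 1600 MHz effective memory clock, different bus widths:
print(peak_bandwidth_gbs(128, 1600))  # 25.6 GB/s
print(peak_bandwidth_gbs(192, 1600))  # 38.4 GB/s
print(peak_bandwidth_gbs(256, 1600))  # 51.2 GB/s

# Or keep the 128-bit bus and raise the clock instead:
print(peak_bandwidth_gbs(128, 2200))  # 35.2 GB/s, close to the 192-bit figure
```

Widen the bus, raise the clock, or make each bit of bandwidth go further: three routes to the same number.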
Mid-range cards don't need 256-bit memory architectures because, by and large, given their high memory clocks and their smaller complement of texture units, ROPs, and shaders, they don't need as much memory bandwidth. The extra bandwidth would be a waste of money and die area. It's a stupid idea.
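Here's an equally rough demand-side sketch, assuming a worst case where every ROP outputs a pixel every clock; it ignores caches, compression (Hyper-Z etc.), and texture traffic, and the ROP counts and clocks are invented:

```python
def rop_bandwidth_demand_gbs(num_rops: int, core_clock_mhz: float,
                             bytes_per_pixel: int = 12) -> float:
    """Worst-case bandwidth demand in GB/s if every ROP outputs every clock.

    bytes_per_pixel assumes 4 B color write + 4 B depth read + 4 B depth
    write with no compression -- a deliberately pessimistic figure.
    """
    pixels_per_second = num_rops * core_clock_mhz * 1e6
    return pixels_per_second * bytes_per_pixel / 1e9

# A hypothetical high-end part vs. a mid-range one at the same core clock:
print(rop_bandwidth_demand_gbs(16, 600))  # 115.2 GB/s -- needs a wide bus
print(rop_bandwidth_demand_gbs(8, 600))   # 57.6 GB/s -- half the appetite
```

Halve the ROPs and the bandwidth the chip can actually consume drops by half too, which is exactly why a narrower, cheaper bus makes sense on a mid-range part.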
And by far the biggest ego hit is paying $3k for a notebook that only has a mid-range card.
High-end cards aren't possible in something like the MacBook Pro. Why? Size. The heat and power output of the higher-end cards is too great for a notebook as thin as the MBP. Look at desktop GPUs... they've been getting more and more power-hungry year after year, not simply faster. Manufacturers have been die-shrinking and using the headroom to pump even more silicon into their chips, finally pushing transistor counts past a billion in the high-end GeForce cards, for which they recommend a 1 kW PSU. Obviously that's impossible to translate over to mobile graphics cards. Gaming laptops, in case you hadn't noticed, are usually quite bulky.