A guy on a game forum I post on wrote this up a couple of weeks ago:
For months, Nvidia has had a gaping hole in their mobile lineup. The GeForce GTX 660M was based on the generation's slowest GPU chip--from either vendor. The GeForce GTX 680M was a very nice card, but it cost a fortune. Laptop vendors don't publish absolute prices, only the price differences between the various card options, but if you got a GTX 680M, you were probably paying about $700 or $800 for the video card alone. That's fine if you're planning on spending $2500 on a laptop, but not if you're looking for something a little more budget-friendly.
In between, there were the GeForce GTX 670M and the GTX 675M. Unfortunately, those were old Fermi rebrands. You could cope with the massive heat output in a desktop, but not in a laptop.
So you buy AMD this generation, right? The Radeon HD 7970M certainly looks good in terms of specs and price. But in previous generations, going AMD had typically meant leaving the discrete card running all of the time, because AMD, unlike Nvidia, didn't bundle their drivers with Intel's integrated graphics drivers. That's fine for some purposes, but it will kill your battery life.
But no, this generation, Clevo decided to use AMD Enduro switchable graphics rather than leaving an AMD card running all the time. That meant no driver updates. Enduro also ran into a rather problematic glitch where switchable graphics couldn't use the full PCI Express bandwidth, which hurt performance badly. A 7970M was still faster and far more efficient than previous-generation cards, but not nearly as fast as it should have been.
For all the promise of 28 nm, the generation is nearly over and neither side has had anything suitable for $1500-$2000 gaming laptops on the market.
Today, Nvidia fixed that with the launch of the GeForce GTX 675MX and GTX 670MX. The model number hole between the 660M and 680M was already filled by the 675M and 670M. Rather than getting creative by having the third digit be something other than 0 or 5, or calling a card a GTX 665M (and thereby conjuring comparisons to the disastrous GeForce GTX 465, a marketing no-no), Nvidia marketing decided to add an X onto the end. But despite the similar names, the GTX 675MX has nothing to do with the GTX 675M, and likewise the GTX 670MX has nothing to do with the GTX 670M.
The short story is that the GeForce GTX 675MX is a severely cut down GK104 die, with only 5 of the 8 SMXes active. The GeForce GTX 670MX is a fully-functional GK106 die. With the same clock speeds, they should have identical GPU performance. The difference is in video memory, where the GTX 675MX has four memory channels at 900 MHz, while the GTX 670MX has three channels at 700 MHz. Both are more expensive than the GeForce GTX 660M, of course, but not outlandishly so--and they're cheaper than a Radeon HD 7970M.
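To put numbers on that memory difference, here's a quick back-of-the-envelope calculation. The 64-bit channel width and GDDR5's effective quad data rate are assumptions (standard for these parts, but not stated above):

```python
# Rough peak memory bandwidth, assuming 64-bit channels and GDDR5's
# quad data rate (4 transfers per memory clock).
def gddr5_bandwidth_gbs(channels, clock_mhz, channel_bits=64, transfers_per_clock=4):
    """Peak bandwidth in GB/s: bus width * effective rate / 8 bits per byte."""
    bus_bits = channels * channel_bits
    return bus_bits * clock_mhz * 1e6 * transfers_per_clock / 8 / 1e9

gtx_675mx = gddr5_bandwidth_gbs(channels=4, clock_mhz=900)  # 115.2 GB/s
gtx_670mx = gddr5_bandwidth_gbs(channels=3, clock_mhz=700)  # 67.2 GB/s
print(f"GTX 675MX: {gtx_675mx:.1f} GB/s, GTX 670MX: {gtx_670mx:.1f} GB/s")
```

So the extra channel and higher memory clock give the GTX 675MX roughly 70% more bandwidth than the GTX 670MX, even though the GPU side should perform identically at the same clocks.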
All is not lost on the AMD front, though. AMD recently launched their first laptop drivers for discrete switchable graphics. A hotfix for the PCI Express bus problem is currently being tested and due for public release next week. Those who bought a 7970M early on won't be left out in the cold. Hopefully this will mean that we see some laptops equipped with a Radeon HD 7870M shortly, too.
More generally, all up and down the lineup, Nvidia seems to have bet that more memory capacity and more GPU performance will win, while AMD has bet that more memory bandwidth will win. A GeForce GTX 680M has considerably more GPU power than a Radeon HD 7970M, but only 3/4 of the memory bandwidth. A GeForce GTX 670MX has a little shy of double the GPU performance of a Radeon HD 7870M, but only 5% more memory bandwidth.
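The bandwidth ratios quoted above check out if you plug in the commonly listed bus widths and GDDR5 memory clocks for each card (the specific figures below are assumptions taken from public spec sheets, not from this post):

```python
# Sanity-checking the bandwidth comparisons, assuming GDDR5 quad data rate
# and the commonly listed bus width / memory clock for each card.
def bandwidth_gbs(bus_bits, clock_mhz):
    return bus_bits * clock_mhz * 1e6 * 4 / 8 / 1e9

gtx_680m  = bandwidth_gbs(256, 900)   # 115.2 GB/s
hd_7970m  = bandwidth_gbs(256, 1200)  # 153.6 GB/s
gtx_670mx = bandwidth_gbs(192, 700)   # 67.2 GB/s
hd_7870m  = bandwidth_gbs(128, 1000)  # 64.0 GB/s

print(f"680M / 7970M:  {gtx_680m / hd_7970m:.2f}")   # 0.75 -> "3/4"
print(f"670MX / 7870M: {gtx_670mx / hd_7870m:.2f}")  # 1.05 -> "5% more"
```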
So which side is right here? On memory capacity, from a performance perspective, AMD is right and Nvidia is wrong. It really is that simple. But they probably both knew that a long time ago. Nvidia is betting that customers are stupid and will think that more video memory means a faster card, and from a marketing perspective, they might be right about that. Maybe.
Where it gets more interesting is the GPU performance versus memory bandwidth tradeoffs. Here, it's the same story in desktops, where Nvidia went for more GPU performance while AMD went for more memory bandwidth. And who bet correctly?
For older games that have MSAA (or maybe SSAA through drivers) as their only anti-aliasing options, AMD wins. But as post-processing anti-aliasing effects such as FXAA replace the traditional MSAA, GPU performance matters a lot more and video memory bandwidth less. In that case, Nvidia wins.
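A rough way to see why MSAA leans on memory bandwidth while FXAA leans on shader throughput: MSAA stores and resolves several color/depth samples per pixel, while FXAA is a single shader pass over the already-resolved image. A sketch at 1920x1080, assuming 4 bytes of color plus 4 bytes of depth per sample (a common but not universal setup):

```python
# Back-of-the-envelope framebuffer sizes at 1920x1080.
# Assumes 4 bytes of color + 4 bytes of depth per sample.
def framebuffer_mb(width, height, samples, bytes_per_sample=8):
    return width * height * samples * bytes_per_sample / 1e6

no_aa   = framebuffer_mb(1920, 1080, samples=1)  # ~16.6 MB
msaa_4x = framebuffer_mb(1920, 1080, samples=4)  # ~66.4 MB to write and resolve
print(f"No AA: {no_aa:.1f} MB, 4x MSAA: {msaa_4x:.1f} MB")
# FXAA, by contrast, reads the resolved ~16.6 MB image once and does its
# edge detection in shader math, so its cost is mostly ALU, not bandwidth.
```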
Assuming that the transition to post-processing anti-aliasing continues and MSAA dies out (which it should, but then, DirectX 9.0c should have died out by now, too), Nvidia has the more forward-looking architecture here, and there's a good chance that in games launching two or three years from now, Nvidia cards will compare more favorably to AMD cards than they do in today's games. Don't expect miracles; this isn't going to magically double the performance of Nvidia cards. But a shift of 5% in Nvidia's direction is a realistic possibility.
Furthermore, if you're comparing two cards that are both plenty fast enough in older games today, it doesn't matter which one is faster there. What matters is how they'll perform in future, more demanding games, where neither card may have performance to spare anymore.
This only applies if we're comparing Kepler to Southern Islands. In particular, it doesn't apply to Fermi, which still is and always will be a train wreck. But AMD didn't regress here; Nvidia simply got better. Fermi didn't scale well to highly demanding cases. Kepler fixed that.