Just for kicks, here are results from a 650M in a 21.5" 2012 iMac (i5 @ 2.9GHz).
 

Attachment: Screen Shot 2013-08-14 at 23.59.45.png (54.5 KB)
Here is the GTX 680MX running at native resolution.

As promised, here is the GTX 675MX, also running at native 1440p...

GTX 675MX = 1054
GTX 680MX = 1153

Wow, very close indeed!

The Valley benchmark we are using is meant to really push a video card's limits. So what does it mean that the video RAM is so different yet the scores are so close, even at native 1440p, which should theoretically expose any RAM advantage?

I've been very happy with this card's performance so far!
 

Attachment: Screen Shot 2013-08-14 at 6.01.49 PM.png (764.8 KB)

There's no question the GTX 675MX is a good card. However, this benchmark doesn't tell the whole story. It's a freakin' benchmark!

To put it in perspective, even WITH this benchmark, the minimum frame rate you saw at native res (1440p) was 14.6fps, and the minimum posted for the GTX 680MX was 17.1fps, which is a 17.1% increase in the minimum frame rate. That's a pretty significant difference (though neither is exactly playable).

Also, at 1080p with vsync off, your minimum frame rate was 19.3fps and your max was 64.7fps. My GTX 680MX at stock clocks with vsync off got a min of 22.1fps and a max of 88.4fps. 64.7fps to 88.4fps is a huge difference, and it shows you the potential wiggle room you have with the faster GPU. And when you overclock the 680, it's even more impressive: min 26.4fps, max 112.3fps at 1080p. That's roughly a free 25% boost in frame rates.
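
For anyone who wants to double-check those percentages, here's a quick back-of-the-envelope script using only the min/max figures quoted above (a rough sketch, nothing card-specific assumed):

[CODE]
# Percentage gains computed from the fps figures quoted in this thread.
def pct_gain(old_fps, new_fps):
    return (new_fps - old_fps) / old_fps * 100

print(f"1440p min fps, 675MX -> 680MX:       {pct_gain(14.6, 17.1):.1f}%")   # ~17%
print(f"1080p min fps, 675MX -> 680MX stock: {pct_gain(19.3, 22.1):.1f}%")   # ~15%
print(f"1080p max fps, 675MX -> 680MX stock: {pct_gain(64.7, 88.4):.1f}%")   # ~37%
print(f"1080p max fps, 680MX stock -> OC:    {pct_gain(88.4, 112.3):.1f}%")  # ~27%
[/CODE]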

I also don't know how well the GTX 675MX overclocks. Like I said earlier, the 680MX is a massive overclocker. I average a 25% increase in frame rates in every game with a basic overclock of +225/+350 (core/memory).

I really suspect you're a little too much in love with the 675. :D It's a good card, but be realistic here. I'm not sure how much video RAM the Valley test is using, but it wouldn't surprise me if it's not even 1GB. Relying entirely on this benchmark would be folly, if you ask me. I know you're looking for some sort of personal validation for choosing the 675 over the 680, but let's keep it real, shall we? :)
 

You make valid points, but let's be realistic... MOST people are not going to overclock their card or use triple buffering (which your test results use). So in apples-to-apples stock form, especially within OS X, the cards are VERY close in performance. Valley is just that, an industry-standard benchmark... benchmarks are meant to test specific things using equal parameters. So these results, while not being "real world," are accurate, unbiased and a good measure.
 

I respectfully disagree with some of that. When it comes to tech-savvy people, like I'd imagine many of us here are, it makes all the difference. The kind of person even interested in benchmarks is the kind of person who has an inkling of which GPU they're going to buy, and whether it can be overclocked or not, etc. It's the same reason I bought an STI: I don't care about what it can do in stock form, and I'd imagine many buyers don't either. Just because "most" people (in your argument) maybe don't overclock or use triple buffering doesn't somehow negate the performance differences between two cards.

I'm also not sure that Valley is an "industry standard" benchmark. It's a benchmark, and in my eyes not a very good one. It's a landscape that you float around in, with some absolutely appalling pop-in, the likes of which I've NEVER seen before in any benchmark (or real-world game).

I'm not disputing this to be difficult, but in no way does Valley give me any realistic information about my iMac's ability to run games in 1440p. After all, I run 95% of games in 1440p at 60fps, and in this Valley benchmark that obviously isn't the case, at times barely hitting half of that.
 

I'd have to agree with the stock, apples-to-apples argument. Based on the benchmark results there is not a "dramatic" difference between these two GPUs when a true apples-to-apples comparison is done, which is the only valid way to compare them.

Valley is actually a decent benchmark that stresses the GPU using features currently in use in modern games. Whether the visuals suffer from texture pop-in or not is probably unimportant. What is important is how rapidly the GPU is able to process data, how rapidly data is transferred from video memory and, ultimately, what FPS the user sees. Valley is designed to stress the most powerful desktop cards presently on the market, so of course one should not expect to see a stable 60 FPS from less powerful laptop cards such as those found in iMacs.

I am not a fan of overclocking components personally. The tolerance limits exist for a reason. I'm not a fan of voiding my warranty either. But whatever floats one's boat I guess. If somebody wants to squeeze out a few more FPS, it's their hardware to do with as they please.

At the end of the day it is pretty obvious that the 675MX is also a good card for current games and compares well to the 680MX. No, it is not as fast of course but it is certainly fast enough for an enjoyable gaming experience with current titles and that's what counts. I hardly think anyone with a 675MX should be hanging their head and thinking, "Oh, it sucks to be me." lol
 

If it doesn't actually correlate to games I play, it's not a valid benchmark. I feel that way about many, many benchmarks out there. Getting 20fps in a game is not acceptable to me.



Nope, the 675MX is a great card, but this test doesn't really prove much, if anything. It's not a real-world benchmark. I know this because no games I play fluctuate between 20fps and 80fps. None of them.
 
All I know is that the 680MX is a piece of piss to overclock and get a great deal more performance from. I've been playing around tonight and settled on +225/+350. I tried +225/+425 and even +225/+600 but only gained a few extra points on my overall score and an average of 1fps extra compared to +225/+350, whereas the difference between stock and +225/+350 was HUGE: it almost doubled my average fps and gives a welcome improvement in gameplay. It would be interesting to hear how hard the 675 can be pushed in comparison, if you wouldn't mind giving it a go, flavr? It has to be carried out in Windows; this website can give you all the advice you need to get yourself up and running: http://lifehacker.com/how-to-overclock-your-video-card-and-boost-your-gaming-30799346

Next time I'm in Boot Camp I'll upload my screenshots for comparison.
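
In case it helps, here's a rough sketch of what those offsets work out to in absolute clocks, assuming the commonly quoted stock figures for the 680MX; the stock numbers are an assumption on my part, so verify against your own card:

[CODE]
# Rough sketch of what the tried offsets translate to, assuming the commonly
# quoted stock clocks for the 680MX (~720 MHz core, 1250 MHz GDDR5, i.e.
# 5000 MHz effective). Treat the stock values as assumptions.
STOCK_CORE_MHZ = 720   # assumed stock core clock
STOCK_MEM_MHZ = 1250   # assumed stock memory clock (x4 = effective GDDR5 rate)

for core_off, mem_off in [(225, 350), (225, 425), (225, 600)]:
    core = STOCK_CORE_MHZ + core_off
    mem = STOCK_MEM_MHZ + mem_off
    print(f"+{core_off}/+{mem_off}: core {core} MHz, "
          f"memory {mem} MHz ({mem * 4} MHz effective)")
[/CODE]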
 


Then I think you don't understand what a benchmark is or attempts to do, William. The whole point is to send controlled input to a GPU system and measure the output. In this controlled environment (which does not equal "real world" gameplay) it becomes possible to make apples-to-apples performance comparisons between video cards that would not otherwise be possible without such control and measurement.

The goal is to get a ballpark idea of relative performance between GPU models by stressing them with features typically used in game software. The benchmark does provide that relative performance feedback.

It is important to note that comparisons between the FPS you get running this benchmark and the FPS you get running, say, Far Cry 3 at similar graphical settings are not valid. This software is deliberately designed to stress a GPU and its memory system to their limits. It is not optimized for performance the way a game would be; it is optimized to bring a system to its knees at max settings. The min/max values are not especially useful on their own and, again, should not be compared to gameplay experiences. They are only data points to be used in comparing one card to another running the test. It is a test. It is NOT a game.
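
To illustrate the idea, here is a conceptual sketch only; it is not how Valley/Unigine is actually implemented, and render_frame is a hypothetical stand-in for the engine's per-frame work:

[CODE]
import time

def run_benchmark(render_frame, num_frames=1000):
    """Render the same scripted sequence every run and report min/avg/max fps.

    render_frame is a hypothetical stand-in for the engine's render call.
    """
    frame_times = []
    for i in range(num_frames):              # identical, scripted input each run
        start = time.perf_counter()
        render_frame(i)
        frame_times.append(time.perf_counter() - start)
    fps = [1.0 / t for t in frame_times if t > 0]
    return min(fps), sum(fps) / len(fps), max(fps)

# The resulting min/avg/max are only meaningful relative to another card
# running the exact same scripted scene, not as a prediction of gameplay fps.
[/CODE]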

In conclusion, a stock 675MX does perform well in comparison to a stock 680MX when evaluated with this controlled test. Not surprisingly, however, the 680MX is faster, as it should be given its design and the resultant price increase over the more affordable 675MX. It is also no small difference, in my opinion, that the 680MX includes twice as much video RAM at a time when games are increasingly using more than one gigabyte. So personally, I see that as a substantial plus.

I don't know why you find any of that hard to swallow, really. Nobody, including the benchmark, is saying that you do not own an excellent GPU. You do.
 

I understand the point of a benchmark, I do. What I'm saying is that I don't see the point of this particular benchmark. :D

Real-world tests may show something completely, completely different. So sure, in this benchmark, the 675 and 680 may perform similarly. But in actual games, the difference could be (and likely is) much more significant. At the end of the day, nobody plays a benchmark. Because of that, the benchmark really has no point to it at all. It's one particular engine, Unigine, and that engine has been used in precisely one game, apparently.

http://www.ign.com/companies/unigine-corp

http://unigine.com/

So, one engine, one game. I'm sorry, this benchmark is not helpful. It doesn't test every facet of the video card. It tests specific facets of the video card, using a specific engine. Sure, in a completely arbitrary manner, it tells me that at stock clocks the 675MX and the 680MX are very similar. In the same way that if you drive down my street at 30mph, my Subaru and a Ferrari 458 perform very similarly. But let's go to an actual track, shall we?
 
Have you tried a comparison with a more varied benchmark like 3DMark? The 675MX has fewer CUDA cores and less memory, though it's still a powerful mobile graphics card. iMac owners with the 675MX will also have good gaming experiences, but you will be more restricted in what kinds of games and settings you can use. If gaming is one of your goals with an iMac, and you can afford it, going for the 680MX is a no-brainer IMO.
 
As an aside, having vsync set to off does nothing for our iMac displays except heat up our GPUs more than necessary. The iMac cannot display more than 60fps.

This is true to an extent, but using in-game vsync can produce more issues than having it turned off. You are best off using NVIDIA's adaptive vsync with triple buffering; this stops the jumps from 60fps to 30fps in the graphically demanding parts of games.
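
Here's a tiny, simplified simulation of that effect. It's purely illustrative: the 20 ms render time is an arbitrary assumption standing in for a GPU that can't quite hold 60fps, and the buffering model is deliberately stripped down.

[CODE]
REFRESH_MS = 1000 / 60   # ~16.7 ms between vblanks on a 60Hz panel
RENDER_MS = 20.0         # assumed GPU render time per frame (~50fps raw)
SIM_MS = 1000.0          # simulate one second

def new_frames_per_second(triple_buffered):
    t = 0.0               # GPU time
    completed = []        # times at which frames finish rendering
    while t < SIM_MS:
        finish = t + RENDER_MS
        completed.append(finish)
        if triple_buffered:
            t = finish    # spare back buffer: start the next frame immediately
        else:
            # plain double-buffered vsync: the GPU must wait for the vblank
            # that flips this frame before the back buffer is free again
            t = (finish // REFRESH_MS + 1) * REFRESH_MS
    # each frame is shown at the first vblank after it finishes rendering
    shown_vblanks = {int(f // REFRESH_MS) + 1 for f in completed}
    return len([v for v in shown_vblanks if v * REFRESH_MS <= SIM_MS])

print("double-buffered vsync:", new_frames_per_second(False), "new frames/s")  # ~30
print("triple-buffered vsync:", new_frames_per_second(True), "new frames/s")   # ~50
[/CODE]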
 

You need to use D3DOverrider if you want proper vsync in Direct3D games (which are nearly all games in recent years).
 

That (triple buffering) doesn't work in D3D games, only OpenGL, to my knowledge (unless you use D3DOverrider). I think that's what Mac32 was trying to say, and it completely confused everyone. :D Also, adaptive vsync is horrible. I used it for 5 minutes and screen tearing was still there. Adaptive vsync doesn't work consistently, if you ask me.


http://hardforum.com/showthread.php?t=1688239


----------

Re: "You need to use d3doverrider if you want proper vsync in openGL games."

See above. ;)
 
No, this is for ANY display at 60Hz, 120Hz, or whatever. 60Hz = 60fps, 120Hz = 120fps. So vsync on a 120Hz display would lock the output to 120fps. Our iMacs are 60Hz, as are most monitors, so anything beyond 60fps is absolutely, 100% pointless. This isn't a debate about whether you or I can see the difference; there is NO difference. All you get with vsync off is screen tearing and an unnecessarily hot GPU, since the GPU has to render those frames internally even though the display device (the iMac, in this case) can't show all of them.
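
Just to put numbers on the refresh-rate point (simple arithmetic, no card-specific assumptions):

[CODE]
# Frame-time budget per refresh rate: a fixed-refresh panel can only show
# this many distinct frames per second, regardless of how many the GPU renders.
for hz in (60, 120):
    print(f"{hz} Hz panel: {1000 / hz:.2f} ms per refresh, "
          f"at most {hz} distinct frames shown per second")
[/CODE]
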
I guess you never played fast multiplayer shooters like Quake or UT? Vsync=on gives extreme input lag in these kinds of games, so please add "imho" at the end of a sentence stating that vsync=off is completely pointless.
 

lol. That's the whole point of triple buffering!!! To reduce the input lag of vsync-enabled gameplay. It's NOT "imho."

"In other words, with triple buffering we get the same high actual performance and similar decreased input lag of a vsync disabled setup while achieving the visual quality and smoothness of leaving vsync enabled."

http://www.anandtech.com/show/2794/2

On a side-note, I'm extremely, EXTREMELY sensitive to input lag. It's the reason I have an Eizo FS2333 LCD for gaming on with my consoles.
 

Looks like you're not so sensitive to input lag after all. Triple buffering doesn't eliminate it completely. Yes, it improves things a bit, but input lag IS still there, that's for sure.
 

Whatever. This is a pointless argument. If you want to deal with screen tearing, go right ahead. I'll enjoy lag-free (as much as is possible) vsync+triple buffering.

We know that there are variances in how this works, but how anyone can deal with screen tearing is beyond me. That would throw me off my "competitive" game much more than a few ms of added input lag from vsync+triple buffering.

In any case, as with everything, it's subjective. Hence the "pointless argument" comment.
 

All of your triple-buffering argument was wrong and pointless, so yeah, whatever.
 

Most competitive shooters don't suffer from screen tearing that much, while games like Diablo 3 or Dota 2 (to name a few I recently played) do. At the same time, Diablo 3 doesn't suffer from input lag at all, and I too highly recommend playing it with vsync=on.

Those "few ms" are, on the other hand, a huge disadvantage in games like Quake Live.
 