Yeah, if you enjoy watching your laptop temperatures skyrocket.
So when you have no arguments left, you can make blanket statements, eh?
Enjoy thinking a 20% increase is somehow a lot.

(Anyways, I need to sleep, so enjoy your 320M that's "almost" as fast as the 9600M GT)
Yeah, I'm totally making blanket statements.
Also "my" 320m? I don't own a single computer with a 320m, thanks for asking.
People justify because they haven't seen the benchmarks. If a lot of people had known that the 512 MB of VRAM in the 330M GPU performs no better than the 256 MB, and also that the Arrandale i7 is hardly any better than the i5, then you can bet that people would be saving their money.

I dunno, 20% more money can be a lot, 20% in a race is a lot, heck, 20% faster processor speed would be considered significant, so why can't we say 20% is a lot for graphics cards? The difference between the slowest Core i5 and the i7 in the MacBook Pros is less than 20%, yet people can actually justify the cost difference, which is 22%.
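(For perspective, working from this thread's own numbers: the i7 premium cited below is $400, and $400 as a 22% markup implies a base price of roughly $400 / 0.22 ≈ $1,800, so the extra cost buys a performance gap of less than 20%.)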
comparing a current video card to others that are no longer available is not relevant
most people would agree that the 320M is better than the 9400M
so why the hell compare it to the 9600M or older?
the 13" MBP is not a gaming machine anyway...
So just because they say it's worth it means that it is? You really think a 400 dollar trade off for a couple of seconds off of encoding time is worth it?

Actually, I believe lots of people here already know that the i7 advantage is as little as you cited, yet they see that 11-15% as big enough to justify the 22% extra cost. Lots of people here ask for advice on that very question. In the first place, how would someone justify something without any basis of justification (i.e. benchmarks)? Even Anand from Anandtech seems to think the i7 is worth it for that 11-15%, which is more than the benchmarks you provided. So what I'm saying is, in light of all this, 20% is quite significant in perspective.
Oh come on now, the only way you'll save a couple of seconds encoding is if the video's length is in seconds too.
Look at the link I provided; 15% quicker encoding time between the i7 and i5. What would take the i5 3 hours would take the i7 27 minutes less.
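(Quick sanity check on that math: 15% of a 3-hour encode is 0.15 × 180 minutes = 27 minutes, so a 3:00 job would finish around 2:33.)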
If you are doing any heavy 3D rendering (which is what that benchmark was about), then you are going to need more than a MacBook Pro in general.
And that is a REALLY specific benchmark.
You haven't provided jack. You've posted one synthetic benchmark out of god knows how many and the only one that showed any marginal improvement was Quicktime X, an application that no one in their right mind would touch for doing any serious encoding. Maybe in 3-4 years when every application has Grand Central Dispatch and OpenCL implemented you will start to see your amazing 15% improvement, but by that time the computer will be old and you'll be ready for a new one.

Dude, you're shifting the goal posts on me. I provide a counter-argument to a point you make and you then proceed to deem the point irrelevant or limited in application. So really, beside the point, beside the point.
Actually, that 15% I quoted is for video encoding, so look again.

Look for Quicktime X. I'm far from a media professional; I don't even freelance. I do make YouTube videos and use iMovie, so 15% can be seen there. I have an EyeTV that records digital TV. I like to archive shows, and encoding in h.264 is very processor intensive, so that's more of that 15% time saved adding up. I don't personally rip DVDs, but many MacBook Pro users do, and again, video encoding, yada yada.
Do I REALLY need a Mac Pro for all that? I could get an i5/i7 iMac but I like being able to carry my computer around with me.
So I DON'T see how that's a REALLY specific benchmark.
Now we're really off topic.
LMAO! You've got to be kidding me! Actual video encoding, which is an ACTUAL, COMMON, REAL-WORLD PRODUCTIVE USE of a computer, is a "SYNTHETIC" benchmark!?
And now you say no one in their right mind would touch Quicktime X for any "serious" encoding? WTF is "serious" encoding? Encoding that is done by professionals? Now who's the one giving really specific benchmarks? In fact, why must we limit ourselves to "serious" encoding? Quicktime X would be one of the most common encoding apps used by Apple's user base, so it is a completely valid and very applicable benchmark. There are many, many more casual YouTube encoders than "serious" encoders in the world, and they will experience this 15% benefit. If I export video using EyeTV, it uses Quicktime. If I export video using iMovie, it uses Quicktime. And are those "synthetic", or do you not understand its meaning with regards to benchmarks?
That link I posted provided hard, reproducible numbers that showed an 11-15% improvement, and you're telling me we'll have to wait for Grand Central Dispatch and OpenCL to see this 15% improvement!? Er, you're really all over the place here, and you're making no sense, because when I say 15% improvement, it means the i7 is 15% quicker than the i5. So if you reject that and say we need GCD and OpenCL, won't both systems have GCD and OpenCL and thus show no 15% disparity then!?
I've provided more than you have, so if I'm providing jack, you're in the negative.
I realize now that I'm just wasting time discussing this with you. Goodbye.
They ALWAYS try to test on an external monitor at the same resolution so you can compare the graphics performance with other Macs.

I hate to nitpick (not really) but it would make more sense to see the minimum framerates in the applications tested, which these days are much more important to consider.
This may show a much larger difference than what is presented here.
It's weird, however, that Barefeats didn't test at the native resolution (1440 x 900 or 1680 x 1050).