
666sheep

macrumors 68040
Original poster
Dec 7, 2009
3,686
291
Poland
D300:
– GPU 800 MHz @ 1125 mV (Boost 850 MHz @ 1175 mV), memory 1270 MHz (5080 MHz effective), TDP 116 W

D500:
– GPU 650 MHz @ 1025 mV (Boost 725 MHz @ 1075 mV), memory 1270 MHz (5080 MHz effective), TDP 108 W

D700:
– GPU 650 MHz @ 918 mV (Boost 850 MHz @ 1100 mV), memory 1370 MHz (5480 MHz effective), TDP 108 W

Taken from the PC BIOSes extracted from the nMP EFI update.
 

MartinAppleGuy

macrumors 68020
Sep 27, 2013
2,247
889
D300:
– GPU 800 MHz @ 1125 mV (Boost 850 MHz @ 1175 mV), memory 1270 MHz (5080 MHz effective), TDP 116 W

D500:
– GPU 650 MHz @ 1025 mV (Boost 725 MHz @ 1075 mV), memory 1270 MHz (5080 MHz effective), TDP 108 W

D700:
– GPU 650 MHz @ 918 mV (Boost 850 MHz @ 1100 mV), memory 1370 MHz (5480 MHz effective), TDP 108 W

Taken from the PC BIOSes extracted from the nMP EFI update.

Thanks. Just out of curiosity, what clock does my GT 750M 1 GB GDDR5 run at? I know this is a Mac Pro forum; I'd just like to compare...
 

Quash

macrumors regular
Sep 27, 2007
192
20
Wow, those are some low clocks on the D500.
With these clocks a D300 would basically be equal to a 7970M/8970M.

It would surprise me if the D500 were faster at games than a D300.
I suspect it will be significantly slower; that's gonna piss off some people.
The D500 will be superior for (double-precision) GPU compute.

It was to be expected, though: they needed to keep each GPU to roughly 100-120 W to avoid frying the power supply.
 

iBug2

macrumors 601
Jun 12, 2005
4,531
851
D300:
– GPU 800 MHz @ 1125 mV (Boost 850 MHz @ 1175 mV), memory 1270 MHz (5080 MHz effective), TDP 116 W

D500:
– GPU 650 MHz @ 1025 mV (Boost 725 MHz @ 1075 mV), memory 1270 MHz (5080 MHz effective), TDP 108 W

D700:
– GPU 650 MHz @ 918 mV (Boost 850 MHz @ 1100 mV), memory 1370 MHz (5480 MHz effective), TDP 108 W

Taken from the PC BIOSes extracted from the nMP EFI update.

So the D300 has the highest TDP? That's interesting. If 116 W is OK for one GPU, why are the D500 and D700 capped at 108 W?
 

Quash

macrumors regular
Sep 27, 2007
192
20
So the D300 has the highest TDP? That's interesting. If 116 W is OK for one GPU, why are the D500 and D700 capped at 108 W?

Maybe because it's less likely to be ordered with high-end CPUs :confused:
Even though they all have the same TDP, the 6-core will draw more power than a 4-core at max load.
 

slughead

macrumors 68040
Apr 28, 2004
3,107
237
D300:
– GPU 800 MHz @ 1125 mV (Boost 850 MHz @ 1175 mV), memory 1270 MHz (5080 MHz effective), TDP 116 W

D500:
– GPU 650 MHz @ 1025 mV (Boost 725 MHz @ 1075 mV), memory 1270 MHz (5080 MHz effective), TDP 108 W

D700:
– GPU 650 MHz @ 918 mV (Boost 850 MHz @ 1100 mV), memory 1370 MHz (5480 MHz effective), TDP 108 W

Taken from the PC BIOSes extracted from the nMP EFI update.

Thanks for this! I want to find VirtualRain's table and see how close he was with his estimates.

The W9000 is 975 MHz; the 7970 GE is 1,000 MHz.
 

Pressure

macrumors 603
May 30, 2006
5,043
1,384
Denmark
so d500 is worse than d300? wtf

The D300 is based on Pitcairn (1,280 stream processors), while the D500 (1,536 stream processors) and D700 (2,048 stream processors) are both based on Tahiti.

The D300 has 2 GB of RAM on a 256-bit memory bus, while the D500 has 3 GB of RAM on a 384-bit bus.

Moreover, the double-precision rate of the D300 is 1/16th of its 2 TFLOPS single-precision throughput, while the D500's is 1/4th of its 2.2 TFLOPS.

In other words, the D300 has 125 GFLOPS of double-precision performance and the D500 has 550 GFLOPS.
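For anyone who wants to check those figures against the clocks in the first post, here's a back-of-the-envelope sketch in Python. It assumes the usual 2 floating-point ops (one fused multiply-add) per stream processor per clock, and it's a guess which clock Apple's headline numbers use (base for the D300 and boost for the D500/D700 lands closest to the quoted 2 / 2.2 / 3.5 TFLOPS):

Code:
# Theoretical GPU throughput: each stream processor retires one
# fused multiply-add (2 floating-point ops) per clock.
def gflops(stream_processors, clock_mhz, dp_ratio):
    sp = 2 * stream_processors * clock_mhz / 1000.0  # single precision
    return sp, sp * dp_ratio                         # double precision

# (stream processors, clock in MHz, DP:SP ratio); clocks from the
# extracted BIOSes above. Pitcairn does DP at 1/16 rate, Tahiti at 1/4.
cards = {
    "D300": (1280, 800, 1 / 16),  # base clock
    "D500": (1536, 725, 1 / 4),   # boost clock
    "D700": (2048, 850, 1 / 4),   # boost clock
}

for name, (sps, mhz, ratio) in cards.items():
    sp, dp = gflops(sps, mhz, ratio)
    print(f"{name}: {sp:.0f} GFLOPS SP, {dp:.0f} GFLOPS DP")
# D300: ~2048 SP / ~128 DP
# D500: ~2227 SP / ~557 DP
# D700: ~3482 SP / ~870 DP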
 

wildmac

macrumors 65816
Jun 13, 2003
1,167
1
The D300 is based on Pitcairn (1,280 stream processors), while the D500 (1,536 stream processors) and D700 (2,048 stream processors) are both based on Tahiti.

The D300 has 2 GB of RAM on a 256-bit memory bus, while the D500 has 3 GB of RAM on a 384-bit bus.

Moreover, the double-precision rate of the D300 is 1/16th of its 2 TFLOPS single-precision throughput, while the D500's is 1/4th of its 2.2 TFLOPS.

In other words, the D300 has 125 GFLOPS of double-precision performance and the D500 has 550 GFLOPS.

So I would assume this still means the D500 is the better performer, depending on what you're doing with the card... (In my case, it's PS, LR, and WoW.)
 

slughead

macrumors 68040
Apr 28, 2004
3,107
237
VirtualRain's estimates were based on FLOPS. I wonder how the nMP will stack up with the power and clock constraints:

[Image: VirtualRain's table of FLOPS-based performance estimates]
 

Quash

macrumors regular
Sep 27, 2007
192
20
Well, if the original poster's clock speeds are correct, those guesstimates are quite a bit off.

In other words, the D300 has 125 GFLOPS of double-precision performance and the D500 has 550 GFLOPS.

Yes, the D500 is gonna murder a D300 in double-precision GPU compute.

But for single precision, or stuff like games, it's gonna be very close if the clock speeds from 666sheep are correct. So for a lot of people the D500 might not be a good value pick.

We'll know within a week, I guess.
 

BEIGE

macrumors member
Oct 10, 2008
70
0
What types of apps benefit more from double precision vs single precision?

Scientific simulation that really requires accuracy, so rounding errors don't build up over long runs or many iterations. Everything else is pretty much single precision; even 3D rendering on the GPU is single precision, because it's accurate enough to give you realistic-looking results, and the significant speed hit of double precision wouldn't buy you much.
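A minimal illustration of that accumulation effect, using NumPy on the CPU purely to show the rounding behavior (not GPU code):

Code:
import numpy as np

# Repeatedly accumulate a value that binary floats can't represent
# exactly; the rounding error compounds with every iteration.
n = 1_000_000
step32 = np.float32(0.1)
step64 = np.float64(0.1)
acc32 = np.float32(0.0)
acc64 = np.float64(0.0)
for _ in range(n):
    acc32 += step32  # single precision: ~7 significant digits
    acc64 += step64  # double precision: ~16 significant digits

print(f"float32 total: {acc32:.4f}")  # drifts visibly from the exact 100000
print(f"float64 total: {acc64:.4f}")  # essentially exact at this scale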
 

ZnU

macrumors regular
May 24, 2006
171
0
so the D500 is essentially wasted money for most?

50% more VRAM, 50% more memory bandwidth, 20% more cores. That should make a significant difference for some workloads, even leaving aside the better double-precision performance.
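Those bandwidth numbers follow directly from the memory clocks in the first post (bandwidth = bus width in bytes × effective data rate); a quick sanity check:

Code:
# Memory bandwidth = bus width (bytes) * effective data rate (GT/s).
def bandwidth_gbs(bus_bits, effective_mhz):
    return (bus_bits / 8) * effective_mhz / 1000.0

print(bandwidth_gbs(256, 5080))  # D300: ~162.6 GB/s
print(bandwidth_gbs(384, 5080))  # D500: ~243.8 GB/s, 50% more than D300
print(bandwidth_gbs(384, 5480))  # D700: ~263.0 GB/s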
 

Quash

macrumors regular
Sep 27, 2007
192
20
50% more VRAM, 50% more memory bandwidth, 20% more cores. That should make a significant difference for some workloads, even leaving aside the better double-precision performance.

The W8000 has 40% more cores and 15% more memory bandwidth, and its cores are clocked only 6% slower than a W7000's. But the W8000 is still significantly slower than a W7000 at games (around 25%). Read the Tom's Hardware review.

Now, a W8000 is not the same as a D500, but still. (The D500 has more memory bandwidth in exchange for fewer cores.)

All I'm saying is: Tahiti cores are not the same as Pitcairn cores. You can't just compare them by multiplying the number of cores by their frequency.
I would wait until your application is benchmarked, to make sure you don't waste $400.

The D700s will always be faster, btw ;)

The estimates are the boost clock, so they're not terribly far off.

OK, fair enough, but it remains to be seen whether you can boost for significant amounts of time. Otherwise the frequency difference is quite big.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,286
3,883
So the D300 has the highest TDP? That's interesting. If 116 W is OK for one GPU, why are the D500 and D700 capped at 108 W?

The D500/D700 under sustained, long-term compute load will hold up better with the lower TDP (especially if there's a hefty CPU component to the workload as well). The D300 won't. Given that the D300 lacks ECC, the heavy-lifting GPGPU loads are far more likely to be thrown at the D500/D700; the D300 is far more likely to see burst-oriented workloads (or just plain lightweight ones).

The D300 is also, at least in the first year or so of use, more likely to run solo workloads (with the other GPU mostly dark). That too gives a single card more headroom.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,286
3,883
Can you explain how this works?

Self-segregation: folks who know they have software that can put both GPUs to work are likely to buy the D500 or D700. With both at work, that's just that much more work they can get done.

Folks primarily in the camp whose software can't use more than one GPU, and/or isn't GPU-limited, are far more likely to choose the D300. If you're not GPU-limited, you're not going to buy more than you need. Even more so when they come in pairs.

I think Apple knows that some fraction of folks buying Mac Pros aren't going to leverage them in pairs. Given that the basic Pitcairn powering the D300 isn't exactly a world-beater anyway, it makes less sense to crank its clock down as far.

If Apple moves a large number of dual-GPU Mac Pros and dual-GPU MBPs (and, to a lesser extent, dual-GPU iMacs), there will be more software over time that leverages them. Apple has been pointing developers to GCD, OpenCL, etc. for years. Eventually it will sink in that they're actually serious about it.
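For what it's worth, seeing and addressing both GPUs is straightforward from OpenCL. A minimal sketch using the third-party pyopencl bindings (one binding among several; assumes pyopencl is installed):

Code:
import pyopencl as cl  # third-party bindings: pip install pyopencl

# Enumerate every GPU that OpenCL exposes; on a dual-GPU Mac Pro
# this would list both FirePros.
gpus = []
for platform in cl.get_platforms():
    try:
        gpus.extend(platform.get_devices(device_type=cl.device_type.GPU))
    except cl.LogicError:
        pass  # this platform exposes no GPU devices

for gpu in gpus:
    print(gpu.name, gpu.global_mem_size // (1024 ** 2), "MB")

# One context and queue per device lets independent kernels run on
# each GPU at the same time.
queues = [cl.CommandQueue(cl.Context(devices=[g])) for g in gpus]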
 

FredT2

macrumors 6502a
Mar 18, 2009
572
104
Self-segregation: folks who know they have software that can put both GPUs to work are likely to buy the D500 or D700. With both at work, that's just that much more work they can get done.

Folks primarily in the camp whose software can't use more than one GPU, and/or isn't GPU-limited, are far more likely to choose the D300. If you're not GPU-limited, you're not going to buy more than you need. Even more so when they come in pairs.

I think Apple knows that some fraction of folks buying Mac Pros aren't going to leverage them in pairs. Given that the basic Pitcairn powering the D300 isn't exactly a world-beater anyway, it makes less sense to crank its clock down as far.

If Apple moves a large number of dual-GPU Mac Pros and dual-GPU MBPs (and, to a lesser extent, dual-GPU iMacs), there will be more software over time that leverages them. Apple has been pointing developers to GCD, OpenCL, etc. for years. Eventually it will sink in that they're actually serious about it.
OK, I get it. So another question: what exactly do the FLOPS numbers for these cards mean? One (me) would think, based on the published numbers (2 vs. 2.2), that the performance difference between the D300 and D500 would not be very great.
 

mrsavage1

macrumors regular
Feb 1, 2010
220
0
The D500/D700 under sustained, long-term compute load will hold up better with the lower TDP (especially if there's a hefty CPU component to the workload as well). The D300 won't. Given that the D300 lacks ECC, the heavy-lifting GPGPU loads are far more likely to be thrown at the D500/D700; the D300 is far more likely to see burst-oriented workloads (or just plain lightweight ones).

The D300 is also, at least in the first year or so of use, more likely to run solo workloads (with the other GPU mostly dark). That too gives a single card more headroom.

Where does it say that the D500 and D700 are using ECC?
 