I'm hoping they're erring towards better GPUs and providing less VRAM to keep heat manageable though, plus the VRAM is presumably ECC (otherwise what's the point of using ECC RAM for the system).

The VRAM isn't ECC. The system is (or can be):

"... So starting with the FirePro W series AMD will have full ECC support in selected models. This will include both ECC for internal SRAM caches (which is actually a free operation), and ECC for the external VRAM (accomplished through the use of a virtual ECC scheme). This functionality is going to be limited to products based on the Tahiti GPU, which means the W9000 and W8000. ... Consequently, the Pitcarin based W7000 and W5000 will have no such ECC support, mirroring their lower compute performance and emphasis on graphics. ..."
http://www.anandtech.com/show/6137/the-amd-firepro-w9000-w8000-review-part-1/6

"Maximum data rate a all costs" VRAM really doesn't put a priority on data integrity at all. So looking for data integrity support in that specific area is a lost cause.

If Apple used a Pitcairn/W7000 for the D300 then it won't have it. If they got AMD to deliver an even further scoped-down Tahiti derivative then it will. The D500 and D700 are very likely derived from a Tahiti foundation (Tahiti LE and Tahiti XT derivatives respectively). All they'd need is a Tahiti "extra LE (light edition)".

Chopping down a Pitcairn/W7000 to 2GB probably has far less to do with heat than it does with cost. It's the same kneecapping as not filling all the DIMM slots on the entry model. It is far more likely Apple chopped off hardware to hit the "sub $3,000" price point than some technical decision or benefit. That could also have been done by not adding sky-high (even relative to Apple's norms) mark-ups on the cards, but customers getting less puts more money in Apple's pockets in this case.
 
I'm hoping they're erring towards better GPUs and providing less VRAM to keep heat manageable though,

It has nothing to do with heat, as the machine has to accommodate its highest CTO specs, and those use more than 2GB per card. They've used VRAM as a cost-cutting measure in the past too. Note how many times the Mac card comes with less. The 650M in the cMBP last year used 512MB; basically all PC versions used 1GB. They used a 1GB version with the 5870. Memory is definitely one area where they control costs.
 
The VRAM isn't ECC.
[snip]
"This functionality is going to be limited to products based on the Tahiti GPU, which means the W9000 and W8000. …"
Hmm, I guess I'd been assuming the lineup would be W8000, S9000 and W9000 for the three types, but the W8000 has the same 1792 stream processors as the S9000. It would seem odd for Apple to not just use Tahiti for the whole line though, especially if they are assembling the cards themselves, as ECC VRAM on all machines would go some way towards justifying the cost and the lesser individual RAM.

I notice that there's no S8000, or at least none I could find information about. I wonder if Apple could be using something of that type though; if it were cut down from the S9000 the way the S9000 is cut down from the W9000, it could fit the D300's specs while still being Tahiti based.

It is kind of one of those things though; the professionals that use their machines for high-end rendering like to have ECC RAM to prevent inaccuracies from creeping in during long builds/rendering sessions, but if the VRAM doesn't have the same protection then, with the popularity of OpenCL for high-performance computation, that's now a large potential area for errors to creep in. It's no use having your RAM protected if all the errors creep in via the VRAM instead, so I just assumed Apple would use Tahiti with ECC memory as standard, though by that same token you'd think they'd mention it if they were.
 
Hmm, I guess I'd been assuming the lineup would be W8000, S9000 and W9000 for the three types, but the W8000 has the same 1792 stream processors as the S9000. It would seem odd for Apple to not just use Tahiti for the whole line though, especially if they are assembling the cards themselves, as ECC VRAM on all machines would go some way towards justifying the cost and the lesser individual RAM.

There are no Tahiti variants that have just 1280 stream processors. The low point is actually about where the D500 (Tahiti LE, 1536 ballpark) is placed. The fully fleshed-out Tahiti tops out at 2048 stream processors, so 1280 would be a 37.5% decrease in cores. It is much cleaner fab-wise to do that with a different die that is targeted at 1280 than to switch 38% of a bigger die off. The issue is that AMD has to pay for processing that bigger die even if not all of it is "turned on" and active. The smaller 1280-processor die will have lower die fab costs and hence a lower sales cost.
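
To put rough numbers on that (a quick back-of-the-envelope sketch; the shader counts are the commonly cited ones for Tahiti XT, Tahiti LE and the D300/Pitcairn class):

```python
# Rough arithmetic on how much of a full Tahiti die would have to be disabled
# to hit each D-series shader count (counts are the commonly cited figures).
tahiti_full = 2048                       # Tahiti XT (7970 / W9000 class)
counts = {"D500": 1536, "D300": 1280}    # Tahiti LE ballpark, Pitcairn/W7000 class

for name, enabled in counts.items():
    disabled = (tahiti_full - enabled) / tahiti_full
    print(f"{name}: {enabled} of {tahiti_full} shaders -> {disabled:.1%} of the die fused off")

# D500: 1536 of 2048 shaders -> 25.0% of the die fused off
# D300: 1280 of 2048 shaders -> 37.5% of the die fused off
```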

The D500 looks like a Tahiti LE variant. I'm sure if Apple told AMD they would pay D500 GPU package prices for the less capable D300 level of enabled processors, AMD would probably take the money and run (presuming the hooks are present to turn that much off). It is far more likely that Apple's Scrooge McDuck mode kicked in and that they are far happier with the lower priced, less capable Pitcairn that brings in higher profit margins.


so I just assumed Apple would use Tahiti with ECC memory as standard, though by that same token you'd think they'd mention it if they were.

Well the "Tech spec" pages generally suck at being technical. These days they are largely the online BTO text/options in a slightly different format. (i.e., oriented toward showing you what you can buy directly from Apple as opposed to what it is technically. )

You'd think if they are going to make a big deal about targeting high-end OpenCL computation that they would explicitly mention this. AMD's slacker approach to deploying ECC is one reason they have gotten weak traction with previous iterations of their cards in this specific market.
 
Well...that's not for sure. A DreamWorks guy, in a presentation that took place after WWDC, said that the dual cards of the nMP can be used combined, something that "till now was available only to games". That's roughly the phrase he used.

Sure, parallel tasks can be allocated to each card, such as rendering a scene, encoding video, etc. They don't have to be in CrossFire for this to work.
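
A minimal sketch of what that looks like in practice (assuming pyopencl and that both GPUs are visible to OpenCL; the toy kernel and chunk sizes are just for illustration): each card gets its own context and queue and chews on its own slice of the data, no CrossFire involved.

```python
# Sketch: drive each GPU independently with its own OpenCL context/queue.
# (In a real setup you'd run each queue from its own thread/process so the
# cards actually overlap; they're shown sequentially here for brevity.)
import numpy as np
import pyopencl as cl

KERNEL = """
__kernel void scale(__global float *data, const float k) {
    int gid = get_global_id(0);
    data[gid] *= k;
}
"""

gpus = [d for p in cl.get_platforms() for d in p.get_devices(cl.device_type.GPU)]
chunks = np.array_split(np.arange(2_000_000, dtype=np.float32), len(gpus))

for dev, chunk in zip(gpus, chunks):
    ctx = cl.Context([dev])                 # one context per card
    queue = cl.CommandQueue(ctx)            # independent queue per card
    prg = cl.Program(ctx, KERNEL).build()
    buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR,
                    hostbuf=chunk)
    prg.scale(queue, chunk.shape, None, buf, np.float32(2.0))
    cl.enqueue_copy(queue, chunk, buf)      # pull this card's results back
    print(dev.name, "processed", chunk.size, "elements")
```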
 
The D500 looks like a Tahiti LE variant.

yeah, I'd agree

I'm sure if Apple told AMD they would pay D500 GPU package prices for the less capable D300 level of enabled processors, AMD would probably take the money and run (presuming the hooks are present to turn that much off). It is far more likely that Apple's Scrooge McDuck mode kicked in and that they are far happier with the lower priced, less capable Pitcairn that brings in higher profit margins.

i don't think I'd call making the low-end Pitcairns scrooge mcduck mode . . . I'd just call your other scenario CRAZY ASS behavior. there's no way those aren't Pitcairns, but I'd love to be wrong. :)
 
In any case, you've got to love how after 2 presentations where the machine is discussed, we're still missing most of the vital information about the Mac Pro concerning the GPUs.

I'm not sure I like how Apple handles this so far.
 
....
i don't think I'd call making the low-end Pitcairns scrooge mcduck mode . . . I'd just call your other scenario CRAZY ASS behavior. there's no way those aren't Pitcairns, but I'd love to be wrong. :)

At the profit margins that Apple/AMD are slapping on these cards it isn't that crazy. Even "over paying" for the chopped-down Tahiti "extra LE" GPU would still carry very sizable profits. For the mainstream card market (with radically thinner margins) it makes sense to fab optimize. That is why there are variants of the AMD 7870 (the GHz Edition with Pitcairn XT and the 7870 XT with Tahiti LE). Is it crazy that AMD has those two in the same product number slot? Not really. However, throw around 100+% mark-ups on top and there is lots of revenue slop here. This "over paying" is probably in the range of an additional $15-25 on a $600-700 card where over half of that revenue is just for "pro overhead".



Since Apple is doing the custom board layout, if there were a way to use the same GPU package and just vary what is connected and populated on the same basic printed circuit board infrastructure, then they can trade off costs in one area against the increase on a component. Apple does heavy design reuse on many occasions. Similarly, if Apple lops off 2GB of VRAM at the same price point then it is neutral cost-wise: a reasonably more expensive GPU package to get ECC traded for less VRAM. A reasonable design trade-off with no significant financial hit.

The Scrooge McDuck point is more about not doing a design that makes a decent amount of money, but making maximum money the primary goal. Going Pitcairn should allow the pro card to keep the 4GB of VRAM (e.g., the W7000 implementation), not chop it off.
 
The Scrooge McDuck point is more about not doing a design that makes a decent amount of money, but making maximum money the primary goal. Going Pitcairn should allow the pro card to keep the 4GB of VRAM (e.g., the W7000 implementation), not chop it off.

definitely agreed- it would have been nice for apple to do that. but it's not the way they roll, never has been. Margins are king, end of story.

personally, the AMD GPUs are the only thing holding me back from pulling the trigger on the new MP. If there was a decent nvidia option or an easy upgrade path for the GPUs, I'd be buying one for sure. As it is, I'm really not too sure, even though my 2008 octo is getting a little slow (even with an nvidia gtx 680 in there, sigh)
 
definitely agreed- it would have been nice for apple to do that. but it's not the way they roll, never has been. Margins are king, end of story.

It isn't so much that Apple's margins are high; it's that AMD's (or Nvidia's, if they get selected next round) and Apple's are piled on top of each other. The pro card's probable 50-60+% net and then Apple's additional 30-35% piled on top of that. If those two would split 80%, they would each have 40% and still make buckets of money more than they would selling mainstream card infrastructure.
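
A toy illustration of how those two mark-ups stack (the build cost and the exact percentages here are invented purely for the example):

```python
# Hypothetical numbers only -- just to show how two stacked margins compound
# into the sticker price of a "pro" card.
build_cost = 250.0      # made-up cost to fab/assemble the card
amd_margin = 0.55       # a "pro card" net margin in the 50-60% range
apple_margin = 0.32     # Apple's additional 30-35% on top

price_to_apple = build_cost / (1 - amd_margin)    # ~$556
retail = price_to_apple / (1 - apple_margin)      # ~$817

combined = (retail - build_cost) / retail
print(f"AMD sells at ~${price_to_apple:.0f}, Apple resells at ~${retail:.0f}")
print(f"Combined margin on the original build cost: {combined:.0%}")
# -> roughly 69% of the retail price is margin once the two mark-ups stack.
```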

Typically Apple (at least in the 2nd Jobs era) doesn't go with "the most expensive part possible" because of the negative blow-back that 'piling on' additive effect tends to generate. This Mac Pro reeks of that, from the stratospheric E5 2697 v2 pricing Intel slapped on that part to these "pumped even higher" pro card prices. It just leads to bloat.


If there was a decent nvidia option or an easy upgrade path for the GPUs, I'd be buying one for sure. As it is, I'm really not too sure,

Exactly the kind of blowback the product will get when you carelessly depress the $/perf and value proposition.

I think Apple is going to sell a decent number of these for several months (while the initial wave dissipates) and then start to run into some serious 'pothole' weeks in the sales numbers from time to time.

If the plan was to shrink to substantially lower numbers of Mac Pros sold per year they are probably going to succeed over the long term with these tactics.
 
I think it's really going to come down to how these new pro app versions (particularly Final Cut X) take advantage of the hardware.

Hardware specs matter but if Final Cut X takes advantage of hardware better than other software out there takes advantage of "faster" cards, it could still be a net gain for the user regardless of what the specific specs are.

I'm not expecting miracles - but Apple has spent quite some time on this machine. I think there will be something redeeming about it. It'll never sell in crazy numbers just on the price alone.
 
I think it's really going to come down to how these new pro app versions (particularly Final Cut X) take advantage of the hardware.

Hardware specs matter but if Final Cut X takes advantage of hardware better than other software out there takes advantage of "faster" cards, it could still be a net gain for the user regardless of what the specific specs are.

I'm not expecting miracles - but Apple has spent quite some time on this machine. I think there will be something redeeming about it. It'll never sell in crazy numbers just on the price alone.

The problem is FCP-X isn't the ONLY reason people buy MPs.

I know several places near me with a fleet of them each. Rolling DIT carts. And most, if not all, of those have Nvidia cards in them. And those folks have gotten extra value from those machines by incrementally upgrading them, rather than chucking them out every time a piece becomes stale.

About once a week I have someone trade a GTX 285 in on a 580; for instance, today I have someone trading a 580 in on a Titan. (He had traded his 285 in previously.)

That entire ecosystem has been chucked under the bus.

Maybe the Pro folks in Hollywood weren't the target market, but I know these people, and most are not happy. Glowing endorsements from Friends of Apple notwithstanding. I am personally acquainted with a member of one large company who has had to publicly act excited when in fact the internal members who make the products work are quite the opposite.

There will doubtlessly be lots of people happy and/or thrilled with the nMP. It looks really nifty and is going to look great at the receptionist's desk at an ad agency. But to some extent, Apple has knowingly dropped certain customers. Whether they have targeted enough new ones remains to be seen.
 
I think it's really going to come down to how these new pro app versions (particularly Final Cut X) take advantage of the hardware.

Hardware specs matter but if Final Cut X takes advantage of hardware better than other software out there takes advantage of "faster" cards, it could still be a net gain for the user regardless of what the specific specs are.

I'm not expecting miracles - but Apple has spent quite some time on this machine. I think there will be something redeeming about it. It'll never sell in crazy numbers just on the price alone.

I think the problem with this argument (I've seen it elsewhere too) is that who really cares about Final Cut Pro X? It's a fine NLE, but the market that this type of machine is targeted towards uses a ton of different applications: Adobe, Autodesk, Maxon, The Foundry, etc. In the grand scheme of things Apple has a very small user base for their pro apps (ignoring FCP7) and video editing is only a fraction of it. Apple would be foolish to be pushing this as just a FCPX machine, which I don't think they are. But I do see quite a few people saying that this is being used to push their pro apps, and the problem with that is that they don't really have many. And I doubt most of those users are the ones spending $4k+ on a computer. The only way this thing sells is if the non-Apple pro apps benefit from the hardware.

----------

The problem is FCP-X isn't the ONLY reason people buy MPs.

Damn, you beat me to it.
 
Well, the good thing is Adobe is on the OpenCL bandwagon. Photoshop, Premiere and After Effects are all getting OpenCL optimizations. So that's a good start.

Hopefully many more follow.
 
If this method (modded drivers) were feasible, the same thing should have happened on Windows, but in reality it hasn't.

I'm guessing what you're saying is: If the only difference between the W9000 and 7970 is drivers, then someone would've soft-modded the 7970 to run at W9000 speeds.

The fact that the above has not happened yet doesn't prove anything. The W9000 runs identically--and I mean +/- one FPS--to the 7970 in gaming. Clearly the W9000 performs multiples better in professional tasks, but if there were substantive differences in the hardware itself, wouldn't the gaming benchmarks differ? ATI was thoroughly embarrassed when their previous models were soft-modded and performed at the same level as their pro cards. What's more likely: They changed the way they make the cards or that they just got smarter at preventing soft-modding?

As we saw with Glide, CUDA, and [soon] Mantle, software designed to take advantage of different aspects of the hardware can make things run multiples better. Even apps ported from CUDA to OpenCL have ridiculously better performance on Nvidia vs. AMD. This is all software.

I am highly skeptical of the real differences between the hardware, nothing I've seen posted here and elsewhere does anything to change my mind. This could easily all be smoke and mirrors done with drivers and by the way: They've done it before.
 
I understand that everyone hopes this will be a generic super computer but I think apple cares firstly about their pro platforms.

Apple is happy being a niche player with high margins. As a reasonably fast computer this should run just about anything well, and things that drink the apple kool aid even faster.

Some people will buy this just to have the fastest Mac possible in the ecosystem.

I did find it funny that they mentioned aperture at the event as being updated for this machine as well. I know some people who use aperture but I think it's safe to say adobe lightroom ate that lunch. (And this machine is probably overkill for either - just saying apple called it out)

Time will tell. I could totally be wrong. Wouldn't be the first time. But I suspect this:

- fastest Mac you can buy (outside of an advanced hackintosh)
- Apple pro tools for video, audio, and photos will scream on it
- tools that don't really buy into the design and function of this machine will just be running on "a fast Mac"
- apple is fine with that and welcomes anyone to optimize the **** out of their software for this vision.
- as long as sales aren't "dismal" apple is fine with this machine being niche. It was never going to sell in volume at these prices anyhow.
 
Crossfire would work in windows though, right? One of the reasons I'd want a mac pro is for gaming. I have 50 million other reasons, but it does need to do gaming well, as well as other graphically intensive stuff programming wise....

you're right...gotta be able to handle gaming as well...there are times we might get bored from doing work or whatever...

it would be sad if the nMP can't handle the basics...
 
I am highly skeptical of the real differences between the hardware, nothing I've seen posted here and elsewhere does anything to change my mind. This could easily all be smoke and mirrors done with drivers and by the way: They've done it before.
Well, if some or all of the cards have ECC RAM then in the professional space that does seem like something worth paying for, as it should make the cards more reliable when running for a long time. That alone is a question mark already, but I can't imagine ECC GDDR5 RAM comes cheap, so it'd be a reasonable cost if that's what some or all of the cards have.

The real question mark for me is how much Apple will really be charging for these; all the historic arguments for the pro cards have boiled down to drivers, support and stability.

Drivers will most likely be coming from Apple as usual, so while they may make particular efforts to optimise for professional uses and claim that as a suitable cost, the same as AMD do, we don't really know anything about that. People have claimed to have seen drivers named for the consumer equivalents appearing in Mavericks, but that may not necessarily mean anything, as Apple engineers could have started writing the drivers for consumer cards slotted into current Mac Pros, for example. We just don't know yet, so it's very much an either/or case on drivers I think.

Support is presumably going to be typical AppleCare, though I really would like to see Apple start bundling AppleCare with its pro products at the very least; but we know we're not really paying any extra for this. Plus Apple's support for GPUs hasn't been very good so far; they'll get a few important updates, sure, but it's unlikely the drivers will see much continuous development.

Stability is one I realise I've never actually seen proven; I've seen articles describing that pro cards are designed to run at a steady speed for a long time without any performance issues, while the consumer cards aren't quite as reliable, though for games etc. the demands are variable anyway so it doesn't matter. If it's true then it could well be an area where gaming cards and pro cards are genuinely different with gaming cards better at changing their workload and pro cards better at maintaining theirs, but I dunno. ECC is the only concrete feature I can really think of on stability and iirc that's pretty recent for GPUs, but that's very much a prolonged use-case. It would be interesting if someone could set up some very long running (several days at least) benchmarking tests that could be run on a pro card and its consumer equivalent, as that may very well show some real differences for once.
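
For what it's worth, here's a rough sketch of the kind of soak test I mean (assuming pyopencl and numpy; the kernel and run length are arbitrary). Run the same script for a few days on a pro card and on its consumer twin, then compare sustained throughput and any silent mismatches:

```python
# Long-running GPU soak test sketch: hammer the card with the same kernel for
# days and log throughput plus any results that don't match the CPU reference.
import time
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()           # pick the card under test
queue = cl.CommandQueue(ctx)
prg = cl.Program(ctx, """
__kernel void scale(__global const float *x, __global float *y, const float a) {
    int i = get_global_id(0);
    y[i] = a * x[i];
}
""").build()

n = 8_000_000
x = np.random.rand(n).astype(np.float32)
expected = np.float32(2.0) * x           # CPU reference; multiply-by-2 is exact
y = np.empty_like(x)

mf = cl.mem_flags
x_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
y_buf = cl.Buffer(ctx, mf.WRITE_ONLY, size=x.nbytes)

errors, passes, start = 0, 0, time.time()
while time.time() - start < 72 * 3600:   # ~3 days of sustained load
    prg.scale(queue, (n,), None, x_buf, y_buf, np.float32(2.0))
    cl.enqueue_copy(queue, y, y_buf)     # blocking copy back to the host
    passes += 1
    if not np.array_equal(y, expected):  # a flipped bit would show up here
        errors += 1
    if passes % 1000 == 0:
        print(f"{passes} passes, {errors} mismatches, "
              f"{passes / (time.time() - start):.1f} passes/sec")
```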

Manufacturing is the one where I think Apple will most likely (and genuinely) be able to justify any cost, as they seem to be assembling the cards themselves so that means they have to make back all the development expenses, costs of machinery and manufacturing etc. And what they're producing is a sort-of passively cooled professional level GPU, so it's not a small accomplishment, and not a cheap one either. Of course the margins are going to be high, but then it's Apple :)


But yeah you're right, I thought there were more concrete articles showing differences between pro and consumer cards, but there does seem to be very little real evidence to any of it. You'd think someone would have come out with more specifics on actual hardware used.

For example: components in a pro card may undergo more thorough testing with a lower threshold for rejection, whereas consumer cards will ship with chips that are "good enough" (still working obviously, but maybe losing performance under extreme strain). The same question applies to the memory, the capacitors, etc. These are all things that it'd be reasonable to expect are made/tested to a higher standard in a pro card, but no-one seems to have actually come out and said as much.
 
But I dunno. ECC is the only concrete feature I can really think of on stability, but that's very much a prolonged use-case. It would be interesting if someone could set up some very long running (several days at least) benchmarking tests that could be run on a pro card and its consumer equivalent, as that may very well show some real differences for once.

Yes, ECC is the only concrete hardware advantage. However, not all "Pro" cards come with ECC. The W7000 (D300) does not, IIRC.

As far as testing and components, I challenged other users to find consistently lower-quality components in consumer boards. ASUS, for instance, prides itself on selling equipment that has higher-quality capacitors and incredibly efficient cooling systems.

Nobody really has tested these things you're talking about--these are all areas where yes, it's possible... but do we know? No? Then why do you presume? Because it costs more?

What I do know is: Consumer GPUs are becoming utterly amazing in their quality, performance, cooling, noise, and price. The whole "it's designed to run at higher speed for longer!" argument doesn't hold up, I think. Gaming can often keep the FLOPS up for hours at a time, just as well as professional tasks.

I used to work in a warehouse that sold proprietary HP, IBM, and Dell server components. I actually ran a department that did testing of these extremely expensive "professional" components. We were selling things at ridiculous markups that had much higher quality counterparts floating around in the consumer world. The quality was all smoke and mirrors, which is why the business (which sold "refurbished" replacement parts) was doing so well.

We sold IDE hard drives that were smaller and less reliable than those on the consumer side. We just found the models that we could put a Dell part number on (only specific HD lines were "certified") and poof! A hard drive that I wouldn't pay $75 for, the company would turn around and sell for $300. "Dell Certified" means a lot to people, but not to me. "Certified Professional" fits in the same category.

In the case of the FirePro, clearly the benchmarks show that it outperforms every consumer card in professional apps. All I'm saying is: this could have more to do with the software, and there's no indication that reliability (except with the ECC models) is any different.
 
One interesting test I've never seen, and I doubt we would, would be to see how a pro card flashed with gaming drivers performs. Say a W9000 with the 7970 drivers playing Crysis or Battlefield.

I say this because the W9000 performing near that of a 7970 at playing a game just proves how powerful the W9000 is. The W9000 is certainly at a disadvantage and slightly handicapped at that task due to its drivers' rendering behaviour.

I'm pretty sure you could make a graphics card that is good at everything if the drivers/behavior were dynamic and not static. Play a game, use the gaming drivers; launch Maya, use the DCC drivers. That might involve firmware, and restarting, but it seems possible. Just not economically desirable for the card makers. Just like Autodesk never made an uber app; they prefer to have 3 revenue streams in Max, Maya and Softimage.
 
One interesting test I've never seen, and I doubt we would, would be to see how a pro card flashed with gaming drivers performs. Say a W9000 with the 7970 drivers playing Crysis or Battlefield.

Drivers are (usually or mostly) kernel-mode operating system files (.sys or .kext).

They're not put into flash memory on the card.

Do you want to put gaming firmware into the flash, or use game drivers with the pro firmware?
 
One interesting test I've never seen, and I doubt we would, would be to see how a pro card flashed with gaming drivers performs. Say a W9000 with the 7970 drivers playing Crysis or Battlefield.

I say this because the W9000 performing near that of a 7970 at playing a game just proves how powerful the W9000 is. The W9000 is certainly at a disadvantage and slightly handicapped at that task due to its drivers' rendering behaviour.

The W9000 benchmarks like a 7970 (As in: +/- 1% in nearly every game) because they're almost exactly the same card--same processing power. You thought that was a coincidence??

There is no "flashing with gaming drivers" because they are probably already using the same drivers for gaming. The only difference between the cards is the special "Professionally optimized" drivers, something on the W9000 that works with those drivers, ECC RAM on the W9000, and the W9000 has 6GB instead of 3 standard.

The W9000 is not handicapped at gaming, it is performing as well as the 7970 because they are nearly identical.

7970 GHz Edition vs. W9000 FirePro

Stream Processors: W9000 - 2048 || 7970 - 2048
Texture Units: W9000 - 128 || 7970 - 128
ROPs: W9000 - 32 || 7970 - 32
Core Clock: W9000 - 975MHz || 7970 - 1,050MHz
Process: W9000 - 28nm || 7970 - 28nm
Mem Clock: W9000 - 5.5GHz || 7970 - 6.0GHz
VRAM: W9000 - 6GB || 7970 - 3GB
Architecture: W9000 - GCN || 7970 - GCN

Note the core clock on the original 7970 is 925MHz vs. the W9000's 975MHz.

Edit: Anyone else notice that AMD doesn't have the complete specs of the W9000 on their website? Looks like they're relying on people's misconception of "workstation" cards having "unicorn hair"
 
VERY disingenuous.

Even more than Nvidia's Quadro cards, FirePro means "regular card marked up by 300-400% for no discernible performance increase".

This is a lot like saying "we didn't want to charge so much, but when you factor in the Gucci Design fan imported from Italy, it's a sensible price"
Lol. This made me laugh. I'm always gonna love Nvidia more.

....If game developers wrote their audio engines correctly, they could easily do positional audio with very high accuracy. Hell, Mumble does it with voice audio at a minimal performance hit. As for DSP, aren't those better served by external devices nowadays?
My thoughts exactly...game developers leave so much out of audio design, it's pretty pathetic. I'm only just seeing what I consider decent surround output from games like Battlefield, but nothing insane. And external decoders are where it's at; otherwise Dolby and DTS would be cashing in a LOT more with computers (not that they need it with the theater and film business, however).

But I strongly believe that Phil was flashing "D300" as if it meant something when he full well knows it means nothing whatsoever right now.

"My SkyJam4000 is Super Duper fast since it uses the Intel TurboJet 427" sounds really impressive but also tells us nothing. Consider this, the crappy Radeon 2600XT that came in 2008 Pros also had another name. It was called a FireGl V3600 and cost $400. (Much more impressive name, and it had better be, you're paying for it)

It was no faster or better in OSX than a 2600XT. It had a solid copper heat sink but a cheapie 2 wire fan and was used in cheapie workstations.
That's exactly what he was doing. When he said D300 I was like "what the heck is that? Is he talking about Nikon DSLRs now?". To his credit he did make it sound really impressive; still, I would have liked to know more...

I think it's really going to come down to how these new pro app versions (particularly Final Cut X) take advantage of the hardware.

Hardware specs matter but if Final Cut X takes advantage of hardware better than other software out there takes advantage of "faster" cards, it could still be a net gain for the user regardless of what the specific specs are.

I'm not expecting miracles - but Apple has spent quite some time on this machine. I think there will be something redeeming about it. It'll never sell in crazy numbers just on the price alone.
It's gotta be an amazing bond, that is, FCP X and the new Mac Pro, or like you said price alone will not sell these things. I think the new Mac Pro + FCP X is going to rip through projects at unthinkable speeds for sure though; I mean, look what they've done with their current mobile hardware in the MacBook Pros and iMacs + FCP X! After they cleaned up their "mess" (FCP X v1) it started to look and work really nicely... I stuck with every software update and it did get really, really nice! I feel like my investment really paid off with FCP X because quite honestly I'd hate to be stuck with Avid or FCP 7 right now. However, the hardware is nice and probably super duper optimized, but it's getting too uncustomizable. No PCIe slots? Too soon... I hope Thunderbolt can really hold a candle to 16x PCIe slots...

I don't believe OS X supports CrossFire. Professionals use the cards independently, so I doubt you'd want them for gaming.
What is the point of dual graphics cards without CrossFire or SLI? Not that I would want AMD's bumbling CrossFire crashing through my video projects, but I would have enjoyed Nvidia's remarkably better parallel processing via SLI. Still seems weird to include dual graphics cards of the same type and talk about them as two units that can be combined when it's not supported... maybe Apple and AMD are trying to make CrossFire work well and not be really buggy? Who knows. All I can say is I want dual Quadros in the next Mac Pro XD I can dream, right?!

EDIT: Anyone feel like Apple is giving AMD the deal out of some sort of "pity"? I mean Nvidia is easily 80% of the GPU market and an Nvidia mobile Quadro wipes the floor with a W9000... LOL. That or AMD desperately paid Apple a handsome price or offered a good deal (latter most likely) on GPU options.
 
Ding Ding Ding !!! We have a winner - D300 = R9 270X

Trying to research what others have done with R9 290X led me to a Hackintosh site where they specialize in reposting other people's stuff.

And I discovered someone running an R9 270X and lo and behold... it showed up as a D300.

Mystery solved, we now know all 3 cards' roots.
 

Attachments: two screenshots (Screen Shot 2013-11-13 at 5.37.23 PM.png, Screen Shot 2013-11-13 at 5.37.06 PM.png) showing the R9 270X reported as a D300.
And I discovered someone running an R9 270X and lo and behold... it showed up as a D300.

Mystery solved, we now know all 3 cards' roots.

Just out of curiosity, what about a GPU determines how the OS decides what it is under OS X? The device ID of the chipset specifically?

Also, the D300 is an R9 270X and the D700 is an R9 280X (7970)? What is the D500? An R9 280 with 512 stream processors disabled? It has the 384-bit memory bus, etc.
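
As far as I know it mostly comes down to the PCI vendor/device IDs that the driver personalities match against; the marketing name shown in the UI is basically a lookup keyed off those. A quick sketch (on OS X, assuming the standard system_profiler output format) to see the values for the installed card:

```python
# Print the GPU identification fields OS X exposes: the chipset/marketing name
# plus the PCI vendor and device IDs the drivers actually key off.
import subprocess

out = subprocess.check_output(["system_profiler", "SPDisplaysDataType"], text=True)

for line in out.splitlines():
    line = line.strip()
    if line.startswith(("Chipset Model:", "Vendor:", "Device ID:")):
        print(line)
```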
 