People who are going to spend $3000+ on a computer tend to be more informed than the average user. They tend to already know what kind of graphics they need for their software. I've heard people claim for years that these are just gaming cards, but they offer little proof beyond a few benchmarks.

The point is, on Windows for example, in applications that benefit from workstation class GPUs there is a notable difference in performance between a workstation class GPU and a gaming class GPU.

On OSX, this has not been the case in the past. There has been no practical benefit to workstation oriented GPUs over gaming oriented ones on OSX.

So claiming that the FirePros are giving the equivalent value that dual W9000s would in a Windows context is false, or at least it will be until the day the OSX drivers provide performance differentiation between gaming and workstation class cards in professional applications.
 
The point is, on Windows for example, in applications that benefit from workstation class GPUs there is a notable difference in performance between a workstation class GPU and a gaming class GPU.

On OSX, this has not been the case in the past. There has been no practical benefit to workstation oriented GPUs over gaming oriented ones on OSX.

So claiming that the FirePros are giving the equivalent value that dual W9000s would in a Windows context is false, or at least it will be until the day the OSX drivers provide performance differentiation between gaming and workstation class cards in professional applications.

You still did not explain how these are actually gaming cards. Now, if you're claiming that, because of lack of driver support or some such, there is not much difference from using a gaming card, then that's a different matter. I'm not sure of the last time Apple used ATI/AMD workstation cards in the Mac Pro or an equivalent workstation. It appears they don't have drivers for their current cards in older models of the Mac Pro. But the current Nvidia workstation cards do have full driver support.

So claiming that the FirePros are giving the equivalent value that dual W9000s would in a Windows context is false, or at least it will be until the day the OSX drivers provide performance differentiation between gaming and workstation class cards in professional applications.

Making your own claims is just as false, as we don't know yet and the new Mac Pro has not been released.

But I agree we should be careful with claims of equivalent value compared to other graphics cards, and I will add: regardless of operating system.
 
You still did not explain how these are actually gaming cards. Now, if you're claiming that, because of lack of driver support or some such, there is not much difference from using a gaming card, then that's a different matter.

Have you proven that they aren't?

There is more evidence that they are the same, or nearly so, than not.

- The 7950 is recognized as a D700 in Mavericks.

- It has been industry standard practice to share GPU platforms among gaming and workstation cards since the early 2000s, differentiated only by drivers, support, and memory density.

- Gaming cards have been easily modded to be recognized as workstation cards for years.

I'm no workstation card hater by any means. I use them for my workstations on the PC side. But on the Mac there has never been any point, because there is no difference in the drivers. Hopefully that will change someday, or they'll offer gaming options in the Mac Pro. It would theoretically be cheaper without giving up any advantage.
 
Have you proven that they aren't?

So I guess that would be a no in either case, for or against.

There is more evidence that they are the same, or nearly so, than not.


Response from Alexis Mathers of AMD Graphics:

These questions are understandable given that GPUs like the ATI Radeon HD 4870 and the ATI FirePro v8750 appear to have the same GPU (RV770) and hardware configuration, but Alexis explained that there are several significant, but unapparent hardware-level differences.

First and foremost, workstation GPUs are different from desktop GPUs at the ASIC and board level. If you were to place a workstation ASIC (the actual GPU chip) in the equivalent consumer grade board, the card would exhibit different behavior. In other words, the GPU dies are not simply interchangeable.


http://icrontic.com/article/the-real-difference-between-workstation-and-desktop-gpus

So it's more than drivers that differentiate workstation graphics; there are differences at the hardware level too.


- The 7950 is recognized as a D700 in Mavericks.

- It has been industry standard practice to share GPU platforms among gaming and workstation cards since the early 2000s, differentiated only by drivers, support, and memory density.

- Gaming cards have been easily modded to be recognized as workstation cards for years.

I do think they tend to use close to the same architecture as their consumer versions, and even some features carry over.

But that's where I think they split off on their workstation graphics: on drivers, support, and hardware changes.
 
The nMacPro GPUs probably fall somewhere between consumer and pro Windows GPUs.

Hardware-wise they are clocked to run robustly for long periods, but a quick look at the Apple web site didn't indicate whether the VRAM is ECC or not, so that is one key question.

Driver wise, will Windows drivers for Firepro cards recognize and run with them or will they have to be run using consumer card drivers? (Under Windows - under OS X, as people have pointed out, there isn't the same differentiation between pro and consumer cards in terms of drivers.)

Apple describes them as workstation class but it is difficult to know exactly what this means until they get released into the wild. Until then it is difficult to say what sort of value for money they represent.
 
I'm surprised that Apple would not gouge on these.

I'm not that surprised. They had to keep costs down somewhere since the decision to go with PCIe Flash and a single CPU jacks up the price significantly.


Though I am looking forward to see some real world benchmarks. The firepros have performed well in Maya tests, so I'm interested to see how these stack up.
 
The 7950 is recognized as a D700 in Mavericks.
So? A 3GB Radeon 7950 is what… $400+? Apple are giving you two "Radeon 7950"s with 6GB each for an extra $600; if those cards turn out to give discernible performance benefits as well, then they're an absolute bargain, and I won't be able to give Apple my money fast enough.

Also, actually, isn't it the 7970 that the D700s are equivalent to? Even better.
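For what it's worth, here's a back-of-envelope sketch of that value argument, using the rough numbers above (both figures are forum estimates from this thread, not official pricing):

```python
# Back-of-envelope value check using the figures quoted in this thread:
# a retail 3 GB Radeon 7950 at roughly $400, versus an extra $600 for
# two 6 GB D700s. Both numbers are estimates, not official pricing.

retail_7950 = 400          # assumed street price of one 3 GB 7950, USD
d700_upgrade = 600         # assumed upgrade cost for the pair of D700s, USD

per_card = d700_upgrade / 2               # $300 per 6 GB card
retail_per_gb = retail_7950 / 3           # ~$133 per GB of VRAM at retail
upgrade_per_gb = d700_upgrade / (2 * 6)   # $50 per GB via the upgrade

print(f"per card: ${per_card:.0f}, per GB: ${upgrade_per_gb:.0f} "
      f"vs ${retail_per_gb:.0f} retail")
```

On those (assumed) numbers the upgrade works out to less per card for double the VRAM, which is the bargain being argued here.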


So I'm not sure that Apple even need them to perform better under OS X for specific tasks, and actually we shouldn't want them to. Okay, in the short term we do, but remember again that these cards are for OpenCL; they're just fancy extra computation units, at which point the drivers don't really matter (much).
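To illustrate the "fancy extra computation units" point: an OpenCL host program sees each GPU as a device and simply enqueues sub-ranges of a data-parallel job on each device's command queue. Here's a minimal pure-Python sketch of that splitting logic (no real OpenCL calls; the device count and work size are illustrative):

```python
# Toy version of how an OpenCL host splits one data-parallel launch
# across multiple compute devices (e.g. the two D700s). Pure Python;
# no actual GPU work or OpenCL API calls happen here.

def split_work(global_size, num_devices):
    """Return (offset, count) sub-ranges, one per device."""
    base, extra = divmod(global_size, num_devices)
    ranges, offset = [], 0
    for i in range(num_devices):
        count = base + (1 if i < extra else 0)  # spread the remainder
        ranges.append((offset, count))
        offset += count
    return ranges

# A 1000-item launch over two GPUs:
print(split_work(1000, 2))   # [(0, 500), (500, 500)]
```

A real host program would pass each (offset, count) pair as the global work offset and size when enqueuing the kernel on that device's queue; the point is that the GPUs are interchangeable compute resources, and the driver's job is comparatively simple.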
 
Not shocked at all. Gaming GPUs with FirePro name slapped on. It was the only logical way to drive consumer perception of the nMP being a good value.

Plus, with FirePro being such a minute share of the overall workstation graphics card market, it's no surprise they are aggressive.

While to a certain degree all FirePros are just gaming GPUs with the FirePro name slapped on, these are spec for spec a match to the W9000, right on down to the FirePro specific features.

I'm not surprised at the prices, I'd heard whispers Apple wanted to make a big splash by demolishing Windows workstations in price comparisons.
 
AMD's finest early 2012 era GPU.

I actually am curious to see how long it takes Apple to adopt the latest Hawaii GPU tech. With the focus on GPU computing I could see an update at WWDC next year. If new Xeons aren't out by then, they can update the GPUs to keep things fresh. Offer in store upgrades to existing Mac Pros, and gouge pros out of more money. Everybody wins! :)

But, to be fair, a lot of AMD's 2013 GPUs are rebrands of the 2012s. Still, it would have been nice to ship with non-rebranded R9 GPUs.
 
While to a certain degree all FirePros are just gaming GPUs with the FirePro name slapped on, these are spec for spec a match to the W9000, right on down to the FirePro specific features.

I'm not surprised at the prices, I'd heard whispers Apple wanted to make a big splash by demolishing Windows workstations in price comparisons.

I can see why Apple would want to do this, but if the nMPro can run Windows with the FirePro graphics drivers at a much lower cost than an equivalent Windows workstation then it will kill the market for the Windows cards and I can't see why AMD/ATI would want to do this. The CPUs are Intel Xeons and the price they get for the GPUs must be pretty low so shifting sales away from expensive W9000s with large margins to cheap D700s with very small margins makes no sense - even though the numbers will go up they will not be huge anyway.

The only thing that makes commercial sense from AMD/ATI's point of view is to have the GPUs supplied to Apple in some way recognizable as different from the equivalent FirePro card and then the Windows FirePro drivers will refuse to work with them. It will be interesting to see if this is the case when the nMPro eventually appears.
 
I can see why Apple would want to do this, but if the nMPro can run Windows with the FirePro graphics drivers at a much lower cost than an equivalent Windows workstation then it will kill the market for the Windows cards and I can't see why AMD/ATI would want to do this. The CPUs are Intel Xeons and the price they get for the GPUs must be pretty low so shifting sales away from expensive W9000s with large margins to cheap D700s with very small margins makes no sense - even though the numbers will go up they will not be huge anyway.

Apple is building the cards, not AMD. All Apple is doing is buying the GPU. That gives Apple a lot of freedom in setting pricing. AMD is probably ok with it because you can't yank these cards out and put them in a PC instead.

I think Apple knows that you could buy one of these and run it Bootcamp only. They'd like people to run OS X, but if they can make a lot of money off Windows only users while hurting their competitors, that works too.
 
Both Nvidia and AMD are running on fumes... there is nothing newer. All the latest GPUs are just bins/overclocks of last year's silicon.

There have also been some variations in enabled core groups and die configurations in addition to the bins/overclocks, for example core counts going up on the same fundamental core designs (both AMD and Nvidia shuffling the same basic GK104 and GK110 designs into different configs at different price points to make it look like something "next gen" is happening; it isn't).

However, most of this stuff is in the category of "should have been in earlier versions but didn't make the design cut off date" rather than true next generation designs. They are the same basic design tweaked a bit.

The GCN 1.1 of Hawaii and Bonaire is far more about non-graphics additions, such as the TrueAudio support.

Depending upon how far AMD slides GCN 2.0, I'm not sure there will be much for the Mac Pro to iterate to.

The Hawaii implementation isn't particularly suitable for the new Mac Pro design because it primarily just throws higher power consumption at the performance problem. That isn't a core principle of the new system design, which is far more aligned with perf/W.
 
The Hawaii implementation isn't particularly suitable for the new Mac Pro design because it primarily just throws higher power consumption at the performance problem. That isn't a core principle of the new system design, which is far more aligned with perf/W.

Exactly. I think for this reason, among others, we will see GPUs for future nMP refreshes be at the mid-to-end of a generational lifecycle when the parts can be binned for better power efficiency rather than maximum performance at all costs.

This won't be great for those who want to live on the bleeding edge, but it's the only way to get reasonably powerful GPUs in a solution that's effectively silent.
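The perf/W framing is easy to make concrete. Given throughput and TDP figures (the numbers below are made-up placeholders, not specs for any real card), the binning argument amounts to "maximize this ratio":

```python
# Perf-per-watt comparison sketch. The GFLOPS and TDP figures below
# are illustrative placeholders, not measurements of any real GPU.

def perf_per_watt(gflops, tdp_watts):
    return gflops / tdp_watts

candidates = {
    "top-bin, max clocks": perf_per_watt(4000, 300),  # ~13.3 GFLOPS/W
    "mid-cycle, binned":   perf_per_watt(3500, 225),  # ~15.6 GFLOPS/W
}
best = max(candidates, key=candidates.get)
print(best)   # the binned part wins on efficiency despite lower peak
```

Under a fixed thermal/acoustic budget like the new Mac Pro's, the part with the better ratio delivers more sustained performance even when its peak number is lower.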
 
The point is, on Windows for example, in applications that benefit from workstation class GPUs there is a notable difference in performance between a workstation class GPU and a gaming class GPU.

On OSX, this has not been the case in the past. There has been no practical benefit to workstation oriented GPUs over gaming oriented ones on OSX.

Mac Pros don't have to run OS X all the time. For some high-end apps, if you look at the "Mac requirements" section, it will effectively say "boot Windows from Boot Camp and run ..." as their Mac support.

Mac Pros in that context are actually more normalized against workstation class GPUs, because in that rare air the Windows configurations are built around the same workstation cards. This brings the Mac Pro more into alignment, not less.

Long term I think the objective is to build a significant enough base of Workstation GPU class cards that Apple can walk more of those software vendors back into OS X space.

Long term, the Mac Pro has a major problem if Apple doesn't put money into OS X, the frameworks, and the standard set of drivers to get high utility value out of the hardware bundled with the Mac Pro. There has to be "enough" hardware bought and deployed, though, to get good leverage off that base. Hence, workstation cards at not-quite-so-inflated prices are far more likely to build a significant base to leverage.



or at least will be until the day when the OSX drivers provide performance differentiation between gaming and workstation class cards in professional applications.

On the track the Mac line up is on right now there aren't going to be focused gaming cards. There are going to be general usage GPUs and workstation GPUs.

There hasn't been a huge premium focus on squeezing every last drop of frame rate and gaming corner cases out of Mac GPUs for a very long time.
It isn't a "new track" that Apple has to get on.
 
The Hawaii implementation isn't particularly suitable for the new Mac Pro design because it primarily just throws higher power consumption at the performance problem. That isn't a core principle of the new system design, which is far more aligned with perf/W.

Makes you wonder if Apple will switch back to Nvidia. It seems like both are pretty power hungry, but the Nvidia 7X0 series seems less power hungry.

There hasn't been a huge premium focus on squeezing every last drop of frame rate and gaming corner cases out of Mac GPUs for a very long time.
It isn't a "new track" that Apple has to get on.

My guess is that once Apple removes those 2.5-inch bays from the Mac Mini and replaces them with PCIe storage, they may have enough space to stick in a 775M or (less likely) a 780M. The power consumption would be a little higher, but it would quiet the people looking for a headless but not workstation-class machine.

It's possible Iris Pro would even be enough to assuage people.
 
Makes you wonder if Apple will switch back to Nvidia. It seems like both are pretty power hungry, but the Nvidia 7X0 series seems less power hungry.

Not that you were asking me, but I recall reading that when Nvidia was asked why they were not in this nMP, they responded with something to the effect of... they weren't interested. The implication was that they didn't want to bother and AMD did. So I don't think it's all up to Apple. (BTW, this is the same story with the PS4 and Xbox1... again Nvidia wasn't interested in participating in either of those two opportunities whereas AMD was).

At any rate, when next-gen GPUs are released, the top-end parts are typically power hungry and can double as small heaters or wind turbines. It takes a while before binning can yield enough parts for products like the 7990, with dual GPUs on a single card, or for 7X0-like thermal performance. That's when parts suitable for a next-gen nMP would be available.
 
Exactly. I think for this reason, among others, we will see GPUs for future nMP refreshes be at the mid-to-end of a generational lifecycle when the parts can be binned for better power efficiency rather than maximum performance at all costs.

It is just as much software as hardware. The "professional card" market doesn't want half-baked GPU drivers any more than bleeding-edge hardware. That overall market is generally risk-averse to unnecessary "drama".

Apple has much the same issues. Few Mac users want a Mac with flaky software, and even fewer in the "pro" segment of the Mac market. The GPU vendors have to chase the early adopter market with Windows drivers, so the Mac ones are going to trail behind in release.

I think your "mid-to-end life" characterization is a bit too extreme. It is far more like mid-life than anything close to the end. Even in the mainstream market there is a tendency to use last cycle's mid range as the next cycle's low end, just because it is cheaper to crank out stuff whose R&D is already paid for many times over, on perhaps slightly older fab processes (which are now substantially cheaper too).



This won't be great for those who want to live on the bleeding edge, but it's the only way to get reasonably powerful GPUs in a solution that's effectively silent.

It isn't the only way. Apple's approach is far more aligned with the Einstein notion of "Everything should be made as simple as possible, but no simpler".

With "throw power at the problem" chips, you can always just throw an even more complex cooling design at the problem. There is usually some kind of brute-force solution to most problems. That doesn't mean they are good solutions.
 
Not that you were asking me, but I recall reading that when Nvidia was asked why they were not in this nMP, they responded with something to the effect of... they weren't interested. The implication was that they didn't want to bother and AMD did. So I don't think it's all up to Apple. (BTW, this is the same story with the PS4 and Xbox1... again Nvidia wasn't interested in participating in either of those two opportunities whereas AMD was).

My understanding of the story is that Apple had Nvidia and AMD get drivers working first, and whoever gets the best experience goes in the next Mac Pro. New drivers for unreleased cards and even unreleased OEM Apple cards keep popping up for this reason. I had heard this specifically around the time the 6870 drivers popped up, along with the possible existence of an actual Mac Pro 6870 in prototyping. Someone here on the forums (I think it was MacVidCards) also got an anonymous package in the mail with a working, unreleased Mac Nvidia card. As far as I know, both were pipelined for a 2012 Mac Pro update that was killed.

I hadn't heard anything about NVidia's participation in the annual card competition this year, but the 780 and Titan working implies that they might have still been aiming for the Mac Pro. But I could understand that if NVidia's prototyping for the last two years didn't result in them shipping anything, they might get tired of putting in that work.

It is just as much software as hardware. The "professional card" market doesn't want half-baked GPU drivers any more than bleeding-edge hardware. That overall market is generally risk-averse to unnecessary "drama".

This may be another reason Apple chose AMD. I've always found the AMD drivers much more stable. I've been disappointed that every MacBook Pro I've happened to buy has had Nvidia (and I'm once again being bitten by driver issues on the latest, while my Mac Pro with a 5870 is solid. Who'd have thought?)
 
My understanding of the story is that Apple had Nvidia and AMD get drivers working first, and whoever gets the best experience goes in the next Mac Pro. New drivers for unreleased cards and even unreleased OEM Apple cards keep popping up for this reason. I had heard this specifically around the time the 6870 drivers popped up, along with the possible existence of an actual Mac Pro 6870 in prototyping. Someone here on the forums (I think it was MacVidCards) also got an anonymous package in the mail with a working, unreleased Mac Nvidia card. As far as I know, both were pipelined for a 2012 Mac Pro update that was killed.

I hadn't heard anything about NVidia's participation in the annual card competition this year, but the 780 and Titan working implies that they might have still been aiming for the Mac Pro. But I could understand that if NVidia's prototyping for the last two years didn't result in them shipping anything, they might get tired of putting in that work.

Actually I think you're probably right... My memory is bad... The news was that Nvidia was disappointed to not be in the upcoming nMP...

https://forums.macrumors.com/threads/1614432/
 
Makes you wonder if Apple will switch back to Nvidia. It seems like both are pretty power hungry, but the Nvidia 7X0 series seems less power hungry.

The next round may boil down to timing, but it should help on the "power hungry" aspects if they detach from the very largest dies. If Intel slides E5 v3 well into 2015, I think Nvidia may have a better shot in the next round. If E5 v3 doesn't slide much and AMD can get their 20nm parts to the bake-off sooner ( http://www.xbitlabs.com/news/graphi...Islands_and_Pirate_Islands_Get_Uncovered.html ) with substantive OpenCL throughput improvements, I think they'll win the next round also.

I won't be surprised if new AMD mainstream GPUs come in March-June '14 (this current round just seems to be a stopgap till TSMC gets their new process together... and I'll believe Feb 2014 when TSMC finally demonstrates they can hit a deadline) and new FirePro updates come in July-September '14 (which is close in time to catching the end of prep for a new Mac Pro... it won't be surprising if Intel's E5 pragmatically slides to Q1 '15 even if they "announce release" in early Q4 '14).

IF the rest of the system is projected to slide farther into 2015 (late Q1 to early Q2), then Nvidia's Maxwell offerings may get on the table. AMD's GPU dies are still smaller than Nvidia's, so I expect them to be first out the gate on the new TSMC process again.

I expect the next generation of D700 class cards not so much to be revolutionarily faster as to run at full speed over a broader workload.

My guess is that once Apple removes those 2.5-inch bays from the Mac Mini and replaces them with PCIe storage, they may have enough space to stick in a 775M or (less likely) a 780M.

I highly doubt they are going to do that. Going to one 2.5-inch bay would be a decent change (the iMac doesn't have two), but removing both is a bit loopy. Photostream still lands on the user's home system... which means you need capacity. A complete audio library... capacity.

The Mini far, far more needs a desktop CPU to be competitive than any hopped-up GPU budget. With a much smaller increment and some lower Intel Iris pricing, the Mini could have a very decent GPU without such a large leap. The Mini is, and will probably continue to be, an iGPU system. It is just far more space/volume efficient. Next-gen Iris Pro, if it can fit within the Mini BTO price zone, is probably reasonably close to the 780M range in general usage.

The question is more whether the 21.5" iMac is going to keep a dGPU than whether the Mini gets one. The 21.5" iMac and Mini sharing more common components is a somewhat likely path. There is no good reason for the Mini to follow the laptops into the soldered-RAM zone. If the Mini needs a dance partner, the iMac makes more sense, given the whole laptop lineup is going anorexic.

It's possible Iris Pro would even be enough to assuage people.

There are plenty of folks pounding the table for Iris Pro in the Mini forums already. The problem with Iris Pro in this current generation is more price than fit: Intel charges dGPU+VRAM prices for the Pro version. For a first generation that makes sense as they feel out the market.
 
Seems to me that Nvidia coming back to OSX will come down to OpenCL support. With their love of their own CUDA, that seems doubtful.
 
Seems to me that Nvidia coming back to OSX will come down to OpenCL support. With their love of their own CUDA, that seems doubtful.

Yes. Actually my understanding is that this is the actual deal breaker and why we have AMD in the nMP. Along with lower power consumption on the AMD chips in general.
 
Yes. Actually my understanding is that this is the actual deal breaker and why we have AMD in the nMP. Along with lower power consumption on the AMD chips in general.

Nvidia dragging their feet on OpenCL 1.2 when Intel is fully on board with driver support (along with AMD) probably didn't help at all with the Mac Pro. On the other 2013 Mac products, Intel's work helped cover up the kludge.

http://support.apple.com/kb/HT5942

( I suspect the Intel iGPU is actually covering the 1.2 support on those models where there are mobile Nvidia GPUs. It is probably not winning Nvidia any friends in the other Mac design bake-offs either. )


Better power efficiency cracked AMD's 2010-2011 lock on the rest of the Mac lineup for Nvidia in 2012 (which basically automatically led to the 2013 speed bumps). If Apple completely "hated on" Nvidia primarily for CUDA, they wouldn't be in most of the lineup.

There is probably a more wide-open design bake-off coming, because both Nvidia and AMD are about to move to a new fab process. Apple isn't going to "give" the result to either one for meeting just one criterion. There are probably price, power, and performance criteria. Throw in Apple making custom cards and there is licensing "overhead" too.

Nvidia probably isn't completely drinking the CUDA Kool-Aid any more than all of Adobe was drinking the Flash Kool-Aid. Adobe got to work on HTML5 support all the while wailing about Flash inertia being an overwhelming major bonus point. Nvidia is slacking because they can get away with it for now, perhaps waiting to skip to OpenCL 2.0. As GPGPUs converge more on core features, OpenCL is just going to get increasing traction. It will get better traction on OS X because Apple isn't going to piss around in the meantime; they can see where things are going.
 