3D graphics GPUs are massively parallel, with dozens/hundreds of cores running very limited instruction sets, which are highly suitable for vector calculations but not so much for general computing. That said, CPUs and GPUs seem to be converging, so years from now I wouldn't be surprised if they merge. Then we can go back to software rendering. (Which sounds bad initially when you're used to GPUs, but in reality not only would speed not be an issue, but it would allow you to ditch OpenGL/Direct3D with their attendant limitations, driver problems, etc. and just do whatever you want. That's right: Mac versions of games will finally perform the same as on Windows, and be a lot easier to port.)

--Eric

And it looks like software rendering is as close as 2012 according to Epic Games.
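
For the curious, here's a minimal sketch (my own, not Eric's) of what "ditch the API and just do whatever you want" means in practice: with software rendering, a "shader" is just a plain C++ function and the framebuffer is just an array, with no driver in between.

```cpp
// Minimal software "renderer": shade every pixel with a small function and
// write the result as a PPM image. No GPU, no OpenGL/Direct3D involved.
#include <cstdio>
#include <vector>

struct Pixel { unsigned char r, g, b; };

// Per-pixel "shader": any C++ you like, with no API or driver in the way.
static Pixel shade(int x, int y, int w, int h) {
    unsigned char r = static_cast<unsigned char>(255 * x / (w - 1));
    unsigned char g = static_cast<unsigned char>(255 * y / (h - 1));
    return Pixel{r, g, 64};
}

int main() {
    const int w = 256, h = 256;
    std::vector<Pixel> fb(static_cast<size_t>(w) * h);

    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            fb[static_cast<size_t>(y) * w + x] = shade(x, y, w, h);

    // Dump the framebuffer as a binary PPM (viewable in Preview, GIMP, etc.).
    FILE* f = std::fopen("out.ppm", "wb");
    if (!f) return 1;
    std::fprintf(f, "P6\n%d %d\n255\n", w, h);
    std::fwrite(fb.data(), sizeof(Pixel), fb.size(), f);
    std::fclose(f);
    return 0;
}
```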
 
The Beatles!

Haven't you heard?! The Beatles are on iTunes! Why the heck is anyone talking about this crap when they could be listening to The Beatles?
 
I think this is NVIDIA selling pro cards for more money based upon software drivers...

They are. The hardware of most business-class cards is identical to the gaming version (maybe with more RAM); the main difference is the software drivers and the firmware. One set is tuned for gaming, the other is tuned for business. But you can flash the gaming version to the business class and the business class to the gaming class.

It's been proven time and time again with the older cards.
 
All you need is "love"

I was thinking of getting this card to display my iPhoto library of 250 photos of my cat. I don't have any games or drawing apps. Just iWork and iLife. I figure I just need this card and 64 GB of memory for my Mac Pro server edition.

Apple stockholders love people like you.

;)
 
They are. The hardware of most business-class cards is identical to the gaming version (maybe with more RAM); the main difference is the software drivers. One set is tuned for gaming, the other is tuned for business.

It's been proven time and time again.

Yeah, the hardware is often the same, which allows you to flash a similar GeForce card into the Quadro equivalent. However, you're not just paying for the optimized drivers... you're paying for the support.
 
$1,199 and you still can't do 3D glasses. Sounds awesome! Oh wait, it doesn't.
 
Then we can go back to software rendering. (Which sounds bad initially when you're used to GPUs, but in reality not only would speed not be an issue, but it would allow you to ditch OpenGL/Direct3D with their attendant limitations, driver problems, etc. and just do whatever you want.

That is unlikely. Part of the reason that general CPUs have extra circuitry is so that several different programs can all just "do whatever they want". As soon as that is allowed, folks will start stomping on each other. So you end up with things like virtual, protected memory, separate privileged states for management, etc.

The second issue you are blowing off is the legacy inertia behind the CPU instruction sets. That means the instruction decoding, scheduling, caching, etc. are going to be much larger than GPUs tend to (or need to) be.

There will be a merge onto one die, but with non-homogeneous cores. There will be CPU cores and GPU cores that share resources like memory access. If software rendering were a slam dunk, Intel's Larrabee would have been a huge hit.

As long as you can squeeze in 2-5 more cores because the GPU-specific core is smaller, it will generally beat using the CPU cores, because you can put more on a die. Quantity usually wins because graphics is generally a set of problems that are embarrassingly parallel. In that context, "more" almost always wins. You just chop the problem up into smaller pieces and get more throughput.
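
A minimal sketch of that chopping, assuming nothing beyond standard C++ threads: every row band is independent, so adding cores adds throughput almost linearly.

```cpp
// Toy illustration of "chop the problem up": shade a framebuffer by splitting
// the rows across however many hardware threads are available.
#include <algorithm>
#include <cmath>
#include <thread>
#include <vector>

int main() {
    const int w = 1920, h = 1080;
    std::vector<float> fb(static_cast<size_t>(w) * h);

    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < n; ++t) {
        workers.emplace_back([&, t] {
            // Each worker shades its own interleaved set of rows; no locks
            // are needed because the rows never overlap.
            for (int y = static_cast<int>(t); y < h; y += static_cast<int>(n))
                for (int x = 0; x < w; ++x)
                    fb[static_cast<size_t>(y) * w + x] =
                        std::sin(0.01f * x) * std::cos(0.01f * y);
        });
    }
    for (auto& th : workers) th.join();
    return 0;
}
```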

For embedded systems the "CPU only" trend may be more plausible. Maybe eventually there will be embedded devices that don't have a GPU, because the CPU's software rendering is "good enough" and they need to shift resources back and forth between CPU and GPU jobs (or just don't have space, because memory, controllers, etc. are on a single die). However, in the high-performance field, it has the problems outlined above.

Some high-performance compiler folks are working on implementing CUDA on x86:
http://www.hpcwire.com/specialfeatu...ikely-Marriage-of-CUDA-and-x86-108421564.html

That will make more sense when AVX-enabled x86 models come along. However, it is doubtful they are ever going to catch the GPU cores in collective raw FLOPS throughput.
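
For a taste of what AVX brings to x86, here's a hedged sketch using Intel's published intrinsics (AVX parts weren't shipping at the time this was written; compile with -mavx): one instruction operates on eight floats at once, which is the same SIMD idea a GPU applies across hundreds of lanes.

```cpp
// 8-wide single-precision math with AVX intrinsics.
#include <immintrin.h>
#include <cstdio>

int main() {
    alignas(32) float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    alignas(32) float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    alignas(32) float out[8];

    // Multiply and add eight floats per instruction.
    __m256 va = _mm256_load_ps(a);
    __m256 vb = _mm256_load_ps(b);
    __m256 vc = _mm256_add_ps(_mm256_mul_ps(va, vb), vb);
    _mm256_store_ps(out, vc);

    for (float v : out) std::printf("%g ", v);
    std::printf("\n");
    return 0;
}
```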
 
The good news is that this driver should be tweakable to support GTX 4xx series cards, possibly even the GTX 580. I'm quite stoked about the last part.
 
I think what folks are wondering is the profit margin on the Quadro line versus the profit they take making the GeForce. Most understand they are for different uses, but can the development costs really be so different?

Maybe, maybe not, but I think the market sizes are probably very different.

Think about it: you might have millions of people that play games, but maybe tens of thousands of people that need workstation graphics for CAD, scientific, and other specialized visualization needs. It's easy to spread the development cost around when you have a consumer product and plan to sell millions of chips. A workstation version of the same board needs different math algorithms, tuned for absolute accuracy rather than speed plus good-enough accuracy. They're probably also more aggressive about reliability on the workstation version.
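
To make the speed-vs-accuracy tradeoff concrete, here's a small sketch of my own comparing the classic bit-trick approximation of 1/sqrt(x) (the kind of shortcut a game can afford) against the precise library version:

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <cstring>

// Fast approximate reciprocal square root (one Newton iteration).
static float fast_rsqrt(float x) {
    float half = 0.5f * x;
    std::uint32_t i;
    std::memcpy(&i, &x, sizeof i);
    i = 0x5f3759df - (i >> 1);
    std::memcpy(&x, &i, sizeof x);
    return x * (1.5f - half * x * x);   // ~0.2% error: fine for a game
}

int main() {
    for (float v : {2.0f, 10.0f, 12345.0f}) {
        float fast = fast_rsqrt(v);
        float precise = 1.0f / std::sqrt(v);
        std::printf("x=%8.1f fast=%.6f precise=%.6f rel.err=%.2e\n",
                    v, fast, precise, (fast - precise) / precise);
    }
    return 0;
}
```

A fraction of a percent of error is invisible in a game frame, but in a CAD or scientific pipeline those errors compound, which is one reason the workstation math is tuned differently.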
 
The good news is that this driver should be tweakable to support GTX 4xx series cards, possibly even the GTX 580. I'm quite stoked about the last part.

There is already support for the GTX 460/470/480.


$1,199 and you still can't do 3D glasses. Sounds awesome! Oh wait, it doesn't.
1. I am not sure what you mean by "do 3D glasses".
2. Assuming you are talking about wearing special glasses that allow you to see stereoscopic imagery... why would a professional workstation card have this feature?
3. What does the hardware have to do with what the software is displaying that would allow you to see stereoscopic imagery?
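
On point 3, the hardware does matter, if memory serves: shutter-glasses stereo usually means quad-buffered stereo, where the driver exposes separate left/right back buffers, and that has historically been a Quadro-driver feature rather than a GeForce one. A hedged OpenGL/GLUT sketch of what an app asks for:

```cpp
// Quad-buffered stereo sketch. GLUT_STEREO only works if the card/driver
// actually exposes left/right back buffers.
#include <GL/glut.h>   // on OS X: #include <GLUT/glut.h>

static void display() {
    // Render the left eye's view...
    glDrawBuffer(GL_BACK_LEFT);
    glClearColor(0.2f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    // ...then the right eye's view (offset by eye separation in a real app).
    glDrawBuffer(GL_BACK_RIGHT);
    glClearColor(0.0f, 0.0f, 0.2f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glutSwapBuffers();   // shutter glasses sync to the alternating buffers
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    // Window creation fails here unless quad buffering is supported.
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_STEREO);
    glutCreateWindow("stereo test");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
```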
 

jbg232 said:
You should stop while you're ahead. There's more to Quadro than CUDA cores: they have specific drivers and are geared towards specific professional applications. Here, read up before you post nonsense: http://www.nvidia.com/page/partner_certified_drivers.html

So, how will this card affect performance in Aperture 3 compared to the standard ATI 5870, etc. cards?

It wouldn't necessarily; these cards are meant for 3D, AutoCAD, etc.
 
Yeah, after looking into it further, perhaps I should revise my statement. This is a workstation card, after all, and this is a mac product, so of course there would be a mac tax.

Something many people forget is that Mac versions of graphics cards require EFI firmware support, and 64-bit at that. Apple was ahead of its time in choosing EFI, as PCs are now having to switch to it to support drives coming out over 2 TB. I fully expect to see more and cheaper Mac-compatible cards come along as EFI gains traction in the PC world.

Goodbye BIOS, nobody will miss you. Eventually. :D
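
(That 2 TB figure isn't arbitrary, by the way: it falls straight out of the MBR partition scheme's 32-bit sector addressing. A quick check:)

```cpp
// The BIOS/MBR disk limit: a 32-bit LBA field times 512-byte sectors.
#include <cstdint>
#include <cstdio>

int main() {
    const std::uint64_t sectors = 1ULL << 32;   // 32-bit LBA field
    const std::uint64_t bytes   = sectors * 512;
    std::printf("MBR limit: %llu bytes (~%.1f TiB)\n",
                static_cast<unsigned long long>(bytes),
                bytes / (1024.0 * 1024 * 1024 * 1024));
    return 0;
}
```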

I'm glad NV listed the 2008 Pros as well; it makes me happy for mine. Not that I need a Quadro (I'm eyeing a 5770 to replace my 8800GT), but at work this will be very useful, especially for all the 3D gfx we do.

That's the other thing: many folks don't realize that there are lots of us who use Macs for work as well as play. I manage an IT group with over 100 Macs, from servers to laptops, and they are used, and used hard. We get way better ROI than with PCs, and we can use them for anything: OS X, Linux, or Windows.
 
I know this is a totally inappropriate use for such high technology... but does anyone know how the CUDA cards handle games?

While this looks like an instabuy for my Premiere CS5 work... I do play quite a few games in Windows via Boot Camp. I wonder how it handles games like X3 or Crysis.
 
I know this is a totally inappropriate use for such high technology... but does anyone know how the CUDA cards handle games?

While this looks like an instabuy for my Premiere CS5 work... I do play quite a few games in Windows via Boot Camp. I wonder how it handles games like X3 or Crysis.

Are consumer gaming cards like the 5870 and GTX 480 not sufficient as workstation cards?
 
What spec on this video card says that it will have better quality than a consumer card? An NVIDIA consumer gaming card goes for $500.

The consumer card even has twice as many CUDA cores...
The consumer card has better specs...

http://www.nvidia.com/object/product-geforce-gtx-580-us.html

This sounds like a card for a business user that just wants to spend a lot of money and not focus on the specs.

Better specs? Hardly.

Several professions need the rendering precision that this type of card provides, along with the verification process that it requires. Just because the other card offers more cores or whatever does NOT make it better for some applications, such as CAD.
 
There is already support for the GTX 460/470/480.
And it's really piss-poor. OS X does not have full support for them; the things bench abysmally low. This is an actual, full driver that should provide real support in OS X. Hopefully it's at least somewhat close to Windows performance...
 
Are consumer gaming cards like the 5870 and GTX 480 not sufficient as workstation cards?

There must be more to it, but overall the 480 cranks out more. So unless I am missing something, it's more marketing than substance :rolleyes:

GeForce GTX 480:
CUDA Cores: 480
Graphics Clock: 700 MHz
Processor Clock: 1401 MHz
Texture Fill Rate: 42 billion/sec
Memory Specs:
Memory Clock: 1848 MHz
Standard Memory Config: 1536 MB GDDR5
Memory Interface Width: 384-bit
Memory Bandwidth: 177.4 GB/sec
Microsoft DirectX 11
OpenGL 4.1
Bus Support: PCI-E 2.0 x16

Quadro 4000 for Mac:
CUDA Cores: 256
Form Factor: 4.376" H x 9.50" L, single slot
GPU Memory Specs:
Total Frame Buffer: 2 GB GDDR5
Memory Interface: 256-bit
Memory Bandwidth: 89.6 GB/sec
Shader Model 5.0
OpenGL 4.1
Microsoft DirectX 11
NVIDIA CUDA Architecture
FSAA (maximum): 64x2
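
As a sanity check on the two bandwidth figures: both follow from bus width times effective data rate, with GDDR5 moving data at twice the listed memory clock in this simplified model. The Quadro's memory clock isn't in the spec list above, so the 1400 MHz below is my assumption, chosen because it reproduces the quoted 89.6 GB/s.

```cpp
// Bandwidth (GB/s) = (bus width in bytes) x (effective transfer rate).
#include <cstdio>

static double gddr5_gbps(double mem_clock_mhz, int bus_bits) {
    double transfers_per_sec = mem_clock_mhz * 1e6 * 2.0;  // GDDR5: 2x clock
    return (bus_bits / 8.0) * transfers_per_sec / 1e9;
}

int main() {
    std::printf("GTX 480:     %.1f GB/s\n", gddr5_gbps(1848, 384)); // 177.4
    std::printf("Quadro 4000: %.1f GB/s\n", gddr5_gbps(1400, 256)); // 89.6
    return 0;
}
```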
 
I was thinking of getting this card to display my iPhoto library of 250 photos of my cat. I don't have any games or drawing apps. Just iWork and iLife. I figure I just need this card and 64 GB of memory for my Mac Pro server edition.

My grandpa is actually like this. I was over at his place the other day, and he had a brand new, top-of-the-line 27-inch iMac sitting on his desk. He said he asked the people at the Apple Store for one with a nice screen. And he has the resolution turned way down because he's old and can't see anyway.
 
Better specs? Hardly.

Several professions need the rendering precision that this type of card provides, along with the verification process that it requires. Just because the other card offers more cores or whatever does NOT make it better for some applications, such as CAD.

+1

An analogy I saw posted here before: imagine playing Call of Duty. You don't care if you run by a wall and a texture pops.

But say you are doing some high-end 3D renders that take days to render a scene, and in half of your frames a shadow pops. That's a huge problem.

These high-end cards generally do not have these problems, and if they do, you get instant support. No waiting, no checking on forums, etc.; you simply make a phone call.
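
A toy illustration of my own of why precision errors compound over a long render: summing millions of small terms in single precision drifts visibly, while double precision stays put.

```cpp
#include <cstdio>

int main() {
    float  f = 0.0f;
    double d = 0.0;
    for (int i = 0; i < 10000000; ++i) {   // ten million tiny increments
        f += 1e-4f;
        d += 1e-4;
    }
    // Exact answer is 1000. The float result is noticeably off because the
    // increments get rounded away once the accumulator grows large.
    std::printf("float:  %.3f\ndouble: %.3f\n", f, d);
    return 0;
}
```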
 