I really feel ignored here :mad:

Does nobody read the answers from other people before replying?
I mean, I gave some links where anyone can see a good explanation of the difference between gaming and pro cards, and still people just ignore it and the "wtf my superduper gaming card costs less and plays Crysis omfg!" thing goes on...
 
You use this card if you're a 3D artist working on the next Toy Story or Avatar movie.

not if you're a pimple-faced teenager wanting to play Crysis.

Nvidia Quadro cards have always run around $1,000. No, kids, it's not your $129 GTS 450; this is a serious card for serious work.

Pro-level cards actually suck for gaming, FYI.

ROFLMAO
You "Pro" consumers are always getting screwed by Nvidia Crap for $1200. "Pro" serious work my a$$.

/
And yes, it's true: Nvidia makes the OpenGL drivers for "consumer" cards bad enough that they can't be used for VFX.
 
On the Mac? Are you sure? Because Maxon said OpenGL 3 support isn't on the Mac because Apple doesn't support it yet. Is this chart out of date?
GLSL is OpenGL's shading language; it has different version numbering from OpenGL itself and matured quite quickly, which I believe accounts for the oddity. So it's GLSL 3.1 on OpenGL 2.x, I think?
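If anyone wants to see what their own card/driver pair actually reports, it's a two-line query once you have a GL context. A minimal sketch in C, using GLUT just to get a throwaway context (Mac header path assumed):

Code:
/* Print the OpenGL and GLSL versions the driver exposes.
   A GL context must exist before glGetString() returns anything,
   so GLUT creates a throwaway window first. */
#include <stdio.h>
#include <GLUT/glut.h>   /* <GL/glut.h> on non-Apple platforms */

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutCreateWindow("version check");  /* context needed before queries */
    printf("OpenGL: %s\n", (const char *)glGetString(GL_VERSION));
    printf("GLSL:   %s\n", (const char *)glGetString(GL_SHADING_LANGUAGE_VERSION));
    return 0;
}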
 
ROFLMAO
You "Pro" consumers are always getting screwed by Nvidia Crap for $1200. "Pro" serious work my a$$.
You might want to look at some comparisons between professional-level cards and gaming cards when it comes to 3D rendering (you know, the kind of thing these cards are used for?); while there's certainly a premium, you pay it for a reason.
 
You might want to look at some comparisons between professional-level cards and gaming cards when it comes to 3D rendering (you know, the kind of thing these cards are used for?); while there's certainly a premium, you pay it for a reason.

3D rendering? What are you talking about? CUDA cores? Besides CPU software rendering there's GPU rendering: the Quadro 4000 has 256 CUDA cores, the GTX 580 has 512.
Drivers are the difference; the GTX 580 is way more powerful than the crappy Quadro 4000. As I said, Nvidia makes the OpenGL drivers for "consumer" cards bad enough that they can't be used for VFX.
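For what it's worth, anyone can verify those numbers themselves with the standard CUDA runtime API. A minimal sketch (compile with nvcc); total cores = SM count times cores-per-SM, which is 32 per SM on Fermi parts like these two:

Code:
/* List each CUDA device with its SM count and memory.
   Total CUDA cores = multiProcessorCount * cores-per-SM
   (32 on Fermi: 8 SMs = 256 on the Quadro 4000, 16 SMs = 512 on the GTX 580). */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int n = 0;
    cudaGetDeviceCount(&n);
    for (int i = 0; i < n; i++) {
        struct cudaDeviceProp p;
        cudaGetDeviceProperties(&p, i);
        printf("%s: %d SMs, %.0f MB VRAM\n", p.name,
               p.multiProcessorCount, p.totalGlobalMem / (1024.0 * 1024.0));
    }
    return 0;
}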
 
it's merely "pay for value received"

As I said, Nvidia makes the OpenGL drivers for "consumer" cards bad enough that they can't be used for VFX.

So? Apple cripples perfectly capable systems to force you to buy a more expensive system if you need certain features. NVIDIA does the same, Microsoft does the same, IBM does the same, Dell does the same, Adobe does the same, Intel does the same, AMD does the same, ....

Accept it, or build a cabin deep in the forest without electricity. If you can't accept that companies will figure out how to monetize added value in their product lines - go off the net and off the grid.
 
As I said, Nvidia makes the OpenGL drivers for "consumer" cards bad enough that they can't be used for VFX.

Then what? Use AMD/ATI? Is that your ultimate point? I'm not sure if you just have a fan thing going on or not, but AMD has the same kind of program going on with FireGL. I haven't seen a significant difference in reliability troubles with either brand of card.

I doubt it's about making OpenGL crappy; they're different driver tunings for different needs, precision vs. speed, because of the numerics underlying graphics calculations. They even offer different driver tunings for different engineering applications.
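A minimal sketch of the precision side of that trade-off (plain C, nothing GPU-specific): the same accumulation done in float and in double drifts apart badly, and a driver path tuned for speed will happily take the cheaper route:

Code:
/* Accumulate 0.1 ten million times in float and in double.
   0.1 has no exact binary representation, so the rounding error
   compounds; single precision lands visibly far from the exact
   answer of 1,000,000, while double stays much closer. */
#include <stdio.h>

int main(void)
{
    float  f = 0.0f;
    double d = 0.0;
    for (int i = 0; i < 10000000; i++) {
        f += 0.1f;
        d += 0.1;
    }
    printf("float : %f\n", f);   /* noticeably off */
    printf("double: %f\n", d);   /* nearly exact   */
    return 0;
}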
 
So? Apple cripples perfectly capable systems to force you to buy a more expensive system if you need certain features. NVIDIA does the same, Microsoft does the same, IBM does the same, Dell does the same, Adobe does the same, Intel does the same, AMD does the same, ....

Accept it, or build a cabin deep in the forest without electricity. If you can't accept that companies will figure out how to monetize added value in their product lines - go off the net and off the grid.

Blah blah blah, I don't accept it and I simply won't buy it.
 
The original x86 CPUs were CISC, but today's Intel CPUs are a hybrid of CISC and RISC.

But you have to blow gates and die space translating from CISC-like to RISC-like. You are in the same state that poster outlined: with those same gates and space you could build a hardwired FFT, graphics, or other specialized compute engine that is way faster.

GPUs have the same problem. Dedicated hardware H.264 decoders are going to be able to do the job faster and for less power than general software running on more generic hardware.

The hybrid thing just helps x86 stay out in front of several of the more RISC-like alternatives. However, they have to spend some of the die space to do that. It tends not to hurt Intel as much because they stay aggressive on process, so they have more transistors to blow on overhead.

So 1-3 generations down the road the CPUs will do what the GPUs did 0-2 years ago. Only if the GPUs stand still are they in trouble (or if they get sidetracked into adding too much general-CPU overhead: multi-level caches, branch prediction logic, legacy opcode translators, etc.).
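(A toy CPU-side illustration of that point, not the H.264 case itself: counting set bits with a generic loop versus the dedicated POPCNT path that the GCC/Clang builtin maps to on hardware that has it.)

Code:
/* "Dedicated hardware beats general software" in miniature:
   the loop takes up to 32 iterations, while the builtin compiles
   down to a single POPCNT instruction on CPUs that provide one. */
#include <stdio.h>
#include <stdint.h>

static int popcount_generic(uint32_t x)   /* general software */
{
    int n = 0;
    while (x) { n += x & 1u; x >>= 1; }
    return n;
}

int main(void)
{
    uint32_t v = 0xDEADBEEF;
    printf("generic : %d\n", popcount_generic(v));   /* 24 */
    printf("hardware: %d\n", __builtin_popcount(v)); /* 24, one instruction */
    return 0;
}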
 
I've never seen a GFX card with that cable linking the front to the back. What's the deal with that anyway?

I was wondering the same thing. I think it's a cable lock so someone doesn't steal the $1,199 GPU out of your computer.
Personally I'm waiting for quantum computers. I figure I'll have enough nickels and dimes saved up for one by the time they come out. Either that or a PowerBook G5.
 
Wow, it's been years since I remember seeing a CISC vs. RISC post on MacRumors. ;)

x64 won.

I was thinking about that when deconstruct mentioned it; it's pretty well settled for the time being. It looks to me like x64 is pretty much what the desktop/notebook (PC) platform is going to be for quite some time.

However, it seems to me that there might eventually be a post-PC era, despite many failed predictions going back a decade or more. The non-PC electronics landscape appears to be dominated by ARM devices. I probably own as many ARM-based devices as x86-based (and later) devices, if not more.
 
ROFLMAO
You "Pro" consumers are always getting screwed by Nvidia Crap for $1200. "Pro" serious work my a$$.

/
And yes, it's true: Nvidia makes the OpenGL drivers for "consumer" cards bad enough that they can't be used for VFX.

Gaming cards can handle simple object geometry that looks realistic thanks to some sort of texturing tricks, in one screen, or view if you prefer.
Pro cards can handle more complicated geometry (many more polygons) and advanced physics (not the imitation of physics like PhysX or Havok) while displaying 4 views or even more across multiple monitors.

I work with 3D programs on an ATI 4850; when it comes to many objects with many polygons and 4 views, the card is brought to its knees, even if it plays Crysis. A pro card can handle that situation, however.
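(For anyone wondering why "4 views" hurts so much: the app redraws the whole scene once per viewport, every frame. A rough sketch of the idiom in C/GLUT, with a teapot standing in for real geometry and the per-view cameras left out.)

Code:
/* Quad view in miniature: the same scene drawn four times per frame,
   one glViewport() per view, like a 3D app's top/front/side/persp layout. */
#include <GLUT/glut.h>   /* <GL/glut.h> on non-Apple platforms */

static int win_w = 800, win_h = 600;

static void draw_scene(void) { glutWireTeapot(0.5); }  /* stand-in geometry */

static void display(void)
{
    int half_w = win_w / 2, half_h = win_h / 2;
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    /* The card re-transforms every polygon once per view, which is why
       heavy scenes hurt roughly 4x in quad view. A real app would also
       set a different camera/projection for each viewport. */
    glViewport(0,      half_h, half_w, half_h); draw_scene(); /* top   */
    glViewport(half_w, half_h, half_w, half_h); draw_scene(); /* front */
    glViewport(0,      0,      half_w, half_h); draw_scene(); /* side  */
    glViewport(half_w, 0,      half_w, half_h); draw_scene(); /* persp */
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(win_w, win_h);
    glutCreateWindow("quad view sketch");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}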

You might want to look at some comparisons between professional-level cards and gaming cards when it comes to 3D rendering (you know, the kind of thing these cards are used for?); while there's certainly a premium, you pay it for a reason.

3D rendering is done by the CPU in most programs; some solutions that use the GPGPU side of CUDA have surfaced, but they work the same on pro and gaming cards.

Pro cards are all about previewing, as I said above.

3D rendering? What are you talking about? CUDA cores? Besides CPU software rendering there's GPU rendering: the Quadro 4000 has 256 CUDA cores, the GTX 580 has 512.
Drivers are the difference; the GTX 580 is way more powerful than the crappy Quadro 4000. As I said, Nvidia makes the OpenGL drivers for "consumer" cards bad enough that they can't be used for VFX.

The days when they were the same cards with different drivers, and all you needed was a softmod, are far, far behind us.

Gaming cards have drivers and hardware built to handle games and other operations based on gaming APIs.
Pro cards have different hardware and different drivers and are for more advanced tasks, as I said when answering your first quote.

I linked two articles from X-bit Labs a few pages back, where you can get all the explanation you need with real testing, not just blueprints and theories, but it seems nobody reads what other people write in this forum.

They just skip ahead and go straight for the flames...
 
Gaming cards can handle simple object geometry that looks realistic thanks to some sort of texturing tricks, in one screen, or view if you prefer.
Pro cards can handle more complicated geometry (many more polygons) and advanced physics (not the imitation of physics like PhysX or Havok) while displaying 4 views or even more across multiple monitors.

I work with 3D programs on an ATI 4850; when it comes to many objects with many polygons and 4 views, the card is brought to its knees, even if it plays Crysis. A pro card can handle that situation, however.

Your ATI 4850 can handle that many polygons; you just have to softmod your card. Sure, your "gaming" 4850 can't handle it, but the card itself can.
Also, Nvidia gaming cards can be partially hacked, with some nice results in "pro" applications. So I would say ordinary gaming cards can easily handle more complicated geometry.
Remember the 8800 GTX? A gaming card with gaming drivers working flawlessly in Maya, but suddenly Nvidia changed the game with these new-generation cards :)
 
Your ATI 4850 can handle that many polygons; you just have to softmod your card. Sure, your "gaming" 4850 can't handle it, but the card itself can.
Also, Nvidia gaming cards can be partially hacked, with some nice results in "pro" applications. So I would say ordinary gaming cards can easily handle more complicated geometry.
Remember the 8800 GTX? A gaming card with gaming drivers working flawlessly in Maya, but suddenly Nvidia changed the game with these new-generation cards :)

Dude feel, there are differences in the hardware; all those softmods for the latest cards only make them unstable, and you need stability when you work.

Read and learn something please:

http://www.xbitlabs.com/articles/video/display/nvidia-quadro-5000.html

http://www.xbitlabs.com/articles/video/display/quadrofx-firepro.html


After all, what is your problem if people buy these cards? It's their money, and whether you believe it or not, they pay for and get back more than they could get from any gaming card out there.
 
After all, what is your problem if people buy these cards? It's their money, and whether you believe it or not, they pay for and get back more than they could get from any gaming card out there.

I've always wondered why it is that Cinema 4D doesn't require "pro" cards, but Maya does.

Maya on Windows still supports consumer cards, and heck, Maya on the Mac is primarily supported with consumer graphics cards.

http://download.autodesk.com/us/qualcharts/2011/maya2011_qualifiedgraphics_win.pdf
 
Well, the Radeon 5870 is about $300 for PC folks, yet we still have to pay $450 plus tax; that's roughly a 1.5x markup. So by a similar margin, we Mac folks have to pay more for the Quadro card as well.

Sooo... if I have the ATI Radeon HD 4870...

Would this be much of an upgrade?
I do Adobe CS4 stuff, Aperture... pictures/graphics stuff mostly.

Or would the 5870 be a better value/upgrade?
I'm running an Apple 30" LED monitor, 2x 2.93 GHz quad-cores, 16 GB memory.
 
This would have been helpful to me earlier in the year. However, as a professional 3D artist, I had to switch platforms back to Windows 7 64-bit for numerous reasons. In part, it was because there were no Nvidia cards when I wanted to upgrade, only ATI, with the 5870 offered as the top end.

I think this card is great for those still on a Mac Pro doing 3D art and design. Programs I use, such as Maya, Vue, etc., all take a great deal of power and use the CUDA cores. What some do not realize is just how limited the Mac Pro has become: this is the first time in well over a year that professional 3D artists have had an option beyond the poor video card offerings the Mac Pro typically gets.

I continue to see Apple as being more consumer-focused than professional, which is why I went back. However, I recommend Apple to anyone who is a typical computer user. It's just not for those like myself, because it is too little and too late now.

Macfanjeff

I needed to buy a second workstation this summer so I can render projects while I work (I am also a freelance 3D artist working on a Mac), and I seriously considered going PC for the very reason that I have always suspected the video cards I could get for my Mac Pro were just not cutting it. I have never worked on a machine with a pro-quality card, so I haven't actually experienced the difference.

In the end I bought a 12-core Mac Pro with the ATI 5870, but I am chomping at the bit to try the Quadro 4000 on my 2008 Mac Pro, and if all goes well I will buy one for my new machine as well. Between the cost of all the software I would have to buy to move to Windows and the hassle of managing across two platforms as an independent, it just didn't make sense. The final straw was when I saw in August that the Quadro 4000 was coming, so I figured I would go for another Mac.

Something else I have been thinking about is building a Hackintosh, but I don't know that it will actually increase my hardware options. Jobs and Apple are really demonstrating that their priority is mobile, and I wonder if I will be forced to go back to PC because Apple just isn't making the hardware for pros anymore.
 
Sooo... if I have the ATI Radeon HD 4870...

Would this be much of an upgrade?
I do Adobe CS4 stuff, Aperture... pictures/graphics stuff mostly.

Or would the 5870 be a better value/upgrade?
I'm running an Apple 30" LED monitor, 2x 2.93 GHz quad-cores, 16 GB memory.

I don't think it will do anything for you... nor will a 5870, for that matter, beyond the extra VRAM anyway. A 5770 would be a better-value upgrade, since Aperture is very VRAM-dependent, not so much GPU-dependent.
 
The Quadro 4800 Mac Edition has been available for some time as an aftermarket offering, including through the Apple Store; the Quadro 4000 Mac Edition replaces it with a modest speed bump and a few hundred dollars in savings. What's nice about the Mac Pro, and has been for a while, is that Apple doesn't skimp on the power supply like some lower-end desktops, so I would be able to stuff two Quadro 4000s in and still not exceed the 300W available on the PCIe bus.

At least a few renderers will take advantage of a large number of CPU cores and GPUs; I'm interested in KeyShot Professional, as it supports the MCAD platforms that I use and runs well in a mixed OS X/Windows 7 environment.

I'm sympathetic to your situation, and while it isn't a necessity that I even have a Mac Pro, with an FCP update on the horizon for early next year (Steve's text!) the Mac Pro platform seems to be the best future solution for me, eliminating a couple of PCs that I need today for MCAD.


I almost bought the Quadro FX 4800, but saw reviews where it actually performed worse than the GTX 280 (I think I have the right model number) in key 3D applications and processes. I nearly bought one despite the bad reviews, but I'm glad I didn't, because from what I've heard the 4800 really was a disappointment, and expensive.
 