The answers to #1 and #2 are the same. GPUs are advancing at a much greater rate than CPUs, and optimized software can get a much greater speed improvement out of more GPUs than out of more CPUs.

OpenCL code will be sent off to whichever CPUs or GPUs your computer has on hand that can handle the calculations the fastest. Programs running OpenCL code will absolutely run on two GPUs at once. No Crossfire/SLI is needed; the operating system simply sends related (but independent) tasks off to separate GPUs, just as it would send tasks off to more than one core of your CPU. OpenCL code can still run on CPUs, too, but in many cases the sort of operations OpenCL is running execute much faster on GPUs. This is because GPUs are set up to run a lot of simple operations in parallel, the sort that's important for 3D gaming, but in the abstract those operations are useful for other kinds of computing work as well. Processing an image, or video (lots of images put together), isn't so different from rendering a 3D model. A lot of scientific work also depends on those sorts of operations. By putting all that work onto hardware that's been optimized for it, you can get significant performance boosts.

While not every task of every program will run better through OpenCL on a GPU, the more tasks you can send to the GPU, the faster the program will run. GPUs remain specialized hardware, whereas CPUs are general-purpose. It's all about identifying where the specialized hardware is specialized, and letting it do what it does best.
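To make the "lots of simple operations in parallel" point concrete, here is a minimal sketch: an OpenCL C kernel for a per-pixel brightness scale (shown as source only; the host-side context/queue/buffer setup is omitted), next to a plain sequential reference in Python. The kernel and function names are illustrative, not taken from any particular application.

```python
# The data-parallel pattern OpenCL exploits: the same tiny operation
# applied independently to every element, so thousands of GPU threads
# can each process one element simultaneously.

# OpenCL C kernel source for a per-pixel brightness scale.  This is
# just the kernel string; real host code would compile it and bind
# buffers through an OpenCL runtime.
BRIGHTEN_KERNEL = """
__kernel void brighten(__global const float *src,
                       __global float *dst,
                       const float gain) {
    size_t i = get_global_id(0);   /* each work-item owns one pixel */
    dst[i] = src[i] * gain;
}
"""

def brighten_sequential(pixels, gain):
    """The same operation as a plain sequential CPU loop."""
    return [p * gain for p in pixels]
```

Because every output pixel depends only on one input pixel, the work splits cleanly across however many GPUs (or CPU cores) the OpenCL runtime decides to use.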


I understand how OpenCL (and other GPGPU frameworks) works and why it can be very advantageous for particular workloads (not everything can be parallelized). I guess my point was whether professionals working in video/audio/3D/etc. are actually using OpenCL-based software and how widespread that is... Essentially, are they actually getting a benefit versus having a second CPU and more RAM?

----------

what are you rendering? a model but who drew it? the most time consuming part for the user, by far, is the modeling phase.. a typical project for me would be something like 4 days to model/texture, 1/2 day to run previews, 4 days to render..
..
for the 4 days of modeling, the time i'm actually sitting at my computer and interacting with it, i want the highest clock speed available.. all of ...

So that begs the question: how much of the live/interactive rendering calculation is done on the CPU vs. the GPU? And if it is mostly GPU-limited, can most popular 3D programs use BOTH GPUs at once (whether through OpenGL itself or OpenCL or whatever)?

My interest is whether Apple's Mac Pro would be faster for most workloads with a second CPU versus the second GPU...
 
So that begs the question: how much of the live/interactive rendering calculation is done on the CPU vs. the GPU? And if it is mostly GPU-limited, can most popular 3D programs use BOTH GPUs at once (whether through OpenGL itself or OpenCL or whatever)?

no.. most can't.. but some can and they do it very well at that..

most are hybrid renderers or 'gpu accelerated'.. the cpus are used for their strongpoints and the gpus are used for theirs.. when coded this way, it's not necessarily that more than one gpu can't be used-- more like, the cpus and gpus have to work in a type of balance.. one can't out run the other so any potential energy left in the gpus sits idle because the cpu assignments can't keep up..
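The "one can't outrun the other" balance can be sketched with a toy model (the rates below are made up for illustration): if the CPU stages can only prepare work at a fraction of the rate the GPUs could consume it, the GPUs sit idle the rest of the time.

```python
def gpu_utilization(cpu_feed_rate, gpu_consume_rate):
    """Fraction of time the GPU is busy when the CPU prepares work at
    cpu_feed_rate items/s and the GPU could consume gpu_consume_rate
    items/s.  The pipeline runs at the slower stage's pace."""
    return min(1.0, cpu_feed_rate / gpu_consume_rate)

# Hypothetical numbers: a CPU stage feeding 30 tasks/s into a GPU that
# could chew through 120 tasks/s leaves the GPU busy only 25% of the
# time -- adding a second GPU in that situation would not help at all.
```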

writing so it's all happening on the gpu/vram is deeper in the software and requires some core changes.. the developers i know of are doing these things as we speak.. the ones that have already completed the task are showing how effective it is..


My interest is whether Apple's Mac Pro would be faster for most workloads with a second CPU versus the second GPU...

today right this minute? a second cpu would be faster when considering all applications as these types of changes can't happen overnight.. but next year or two years from now? it's very realistic by that time we'll have way more concrete evidence with much less guesswork as to how well adopted and/or how effective openCL will be in a broader spectrum of apps..

aside from holding back simply because of the realworld untested hardware, it's a decent case for waiting on v2 of this computer.. things will be clearer then i suspect.. but some of us need computers now ;)
 
It's a bit of a chicken-and-egg problem. Without the hardware being there in people's systems, developers aren't going to bother writing a lot of OpenCL code. Without OpenCL code, most people aren't going to want two workstation GPUs.

Hopefully, with Apple putting these in every Mac Pro, developers will make it more of a priority.
 
Three things cause this to get a no from me.
1. Choice of GPU (and their lack of upgradability). GPUs go out of date faster than an open carton of milk. In six months you'll be paying through the nose for a GPU considered slow. To top it all off, choosing a make other than nVidia puts the Mac Pro out in the cold for most GPU computing tasks, as CUDA is the most popular standard right now. Any rendering on Macs will have to wait until Apple sees sense.
2. Upgradability - On top of this, the ability to upgrade the GPU for 3D tasks like gaming and design can make a PC last for years, since CPUs and memory are less important for those tasks. A simple GPU upgrade sees my 2007 PC running the latest games maxed out.
3. Price - the usual Apple problem: overpriced. Especially for something you'll need to sell far sooner to keep the tech current.

What do I like?
The design is genius. It's made the most clunky thing in computing - the desktop PC - beautiful and as a consequence desirable, despite its shortcomings.
 
That certainly was the case prior to the CC releases, but have a read of this...

http://blogs.adobe.com/premierepro/2013/06/adobe-premiere-pro-cc-and-gpu-support.html

OpenCL improvements are coming too :cool:

Two things:

1. The AMD FirePro D300 and D500, which are what the new Mac Pro uses, are not listed as supported GPUs on that Premiere Pro CC "supported GPUs" page.
2. That only addresses Premiere. There are a lot of applications that use CUDA.

----------

Of course you can never become part of the two percent if you don't have a machine that can do 4K.

You can do 4K on an iMac, you just can't view it natively at 4K resolution.

Personally, I have no interest in working in 4K because I'm delivering 1080p and I really don't enjoy swapping out Flash cards on my camera every 10 minutes and I like using SSD and they are very expensive at the larger sizes you'll need for a project in 4K. Just my opinion.
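The card-swapping complaint is easy to sanity-check with rough numbers (both the card size and the bitrate below are assumptions for illustration, not figures from the post):

```python
def minutes_per_card(card_gb, mbit_per_s):
    """Recording minutes a flash card holds at a given video bitrate."""
    card_bits = card_gb * 8e9              # GB -> bits (decimal GB)
    seconds = card_bits / (mbit_per_s * 1e6)
    return seconds / 60

# Assuming a 64 GB card and a ~700 Mbit/s 4K acquisition codec, the
# card fills in roughly 12 minutes -- in line with the "every 10
# minutes" card swaps described above.
```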
 
Yes...You are guessing. ;)

Actually, not guessing at all.

Check out this chart:
https://docs.google.com/spreadsheet...GamhiUkIySTUteGlzeG9xMEE&oid=6&zx=4b87cbodocm

This is an After Effects raytracing benchmark chart. See the time at the top? That's the nVidia Titan using the Mercury Engine in a 12-core Mac Pro getting a time of 0:09:09. The score at the bottom is a 12-core Mac Pro using the CPU - 6:55:00. Huge difference.

CUDA is several times faster than using CPU and CUDA does not work with AMD, which is what the new Mac Pro has.

You see how an iMac using CUDA with its nVidia GTX780M blows away the 12-core Mac Pro using the CPU? See that? The iMac is around 18 times faster!

So a $4,000 2013 Mac Pro is many times slower than an iMac in CUDA software.
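The arithmetic behind those benchmark figures is simple to verify; this sketch just parses the two times quoted above (the chart values themselves are as reported in the post, not independently checked):

```python
def to_seconds(hms):
    """Parse an H:MM:SS benchmark time into seconds."""
    h, m, s = (int(part) for part in hms.split(":"))
    return h * 3600 + m * 60 + s

cpu_time = to_seconds("6:55:00")    # 12-core Mac Pro, CPU-only render
titan_time = to_seconds("0:09:09")  # same machine with an nVidia Titan

speedup = cpu_time / titan_time     # works out to roughly 45x
```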

Octane Render is an app that brings CUDA rendering to applications like Maya, Lightwave, Cinema 4D, etc. Adobe's software uses CUDA.

Now, will Octane Render be updated for OpenCL (which supports AMD)? Will all of Adobe's apps be updated for OpenCL? If they are updated, will their implementation of OpenCL be as fast as CUDA?
 
Actually, not guessing at all.

Check out this chart:
https://docs.google.com/spreadsheet...GamhiUkIySTUteGlzeG9xMEE&oid=6&zx=4b87cbodocm

This is an After Effects raytracing benchmark chart. See the time at the top? That's the nVidia Titan using the Mercury Engine in a 12-core Mac Pro getting a time of 0:09:09. The score at the bottom is a 12-core Mac Pro using the CPU - 6:55:00. Huge difference.

CUDA is several times faster than using CPU and CUDA does not work with AMD, which is what the new Mac Pro has.

You see how an iMac using CUDA with its nVidia GTX780M blows away the 12-core Mac Pro using the CPU? See that? The iMac is around 18 times faster!

So a $4,000 2013 Mac Pro is many times slower than an iMac in CUDA software.

Octane Render is an app that brings CUDA rendering to applications like Maya, Lightwave, Cinema 4D, etc. Adobe's software uses CUDA.

Now, will Octane Render be updated for OpenCL (which supports AMD)? Will all of Adobe's apps be updated for OpenCL? If they are updated, will their implementation of OpenCL be as fast as CUDA?

Yes, it depends on whether or not OpenCL has been implemented and optimized for each of the many pro apps. This is where you are guessing. Any pro apps that use CUDA will obviously run slower, since the new Mac Pro doesn't support CUDA. It will be interesting to see how much slower the new 12-core Mac Pro runs CUDA apps, though. Maybe not much slower, if at all. We will have to wait and see. ;)
 
I think the issue is that Apple made far too many compromises and increased the cost of the machine instead of going for a more standard form factor. The old Pro was too big - that I will agree on. But there was so much more room for a better compromise between that and what they gave us. Any of these in-between form factors would have been better than the can they released instead.

[Images: alternative form-factor mockups]

You forgot this part: "In my opinion" any of these form factors would have been better than the can form they released instead.

In my opinion you are wrong. Those form factors look very old and dated. They look like something from the 1960's. ;)
 
For this specific case (diameter and height)? It won't physically fit. The second CPU package is almost useless without another set of DIMM slots.
Go take a peek at the "memory" section of the Mac Pro overview (managed to find a static link):

http://www.apple.com/mac-pro/

The DIMMs are about as tall as the whole logic board the one CPU is sitting on. Where are you going to find room for four more of those? That single logic board the CPU is sitting on can't hold another CPU. The components you'd have to move out of the way would more likely have to double than be shoved aside somewhere else.

Even if you could "steal" the logic board space from the space one of the GPU cards takes up, I doubt QPI links would traverse logic board boundaries well. So it would likely have to be the same board.


For a dual-CPU logic board you'd need a substantially bigger power supply (if you still want dual GPUs too), a bigger fan (wider and/or more of them), and a taller design (two different DIMM stacks). You'd probably need two different thermal cores, as the GPUs wouldn't be as tall as the extended CPU board. The glut of PCIe lanes would call into question having just one storage device. Overall, it is probably at the tipping point of flipping back to a rectangle and multiple fans to get things to fit well.

Three major power consumers get you naturally to the triangle shape. Since you need to cool both the inner and outer sides of the triangle, it is easy to cover both with a larger circle. Hence the core design dependency flow.

Four elements start to push toward more rectangular shapes and solutions: two thermal cores with circular tops bound by an enclosing rectangle (like a house chimney for a furnace and fireplace).

It is more that it would have required two different cases with different fans, power supplies, thermal core(s), etc. Detached from the single-CPU-package model, the number of duals sold probably wasn't viable to Apple. That isn't particularly "new", since even in the old design Apple didn't float two different cases for the "Mac Pro" class solutions. The CPU+RAM daughterboard was designed to promote reuse between single/dual configs.

If they hadn't reduced the size as much as they did, it would have been easy to include dual processors and memory. If you want dual CPUs now, you have to buy the Mac Pros in pairs. :)
 
You forgot this part: "In my opinion" any of these form factors would have been better than the can form they released instead.

In my opinion you are wrong. Those form factors look very old and dated. They look like something from the 1960's. ;)

Its a workstation. Its main purpose is to do work. Not look pretty, and quite frankly I don't think the new mac pro is very attractive to look at either.
 
Two things:

1. The AMD FirePro D300 and D500, which are what the new Mac Pro uses, are not listed as supported GPUs on that Premiere Pro CC "supported GPUs" page.

The D300 and D500 haven't shipped yet. Those are Apple cards, not AMD ones. I bet you won't find future mainstream products from AMD or Nvidia on the list either. That list gets updated as new cards arrive and are certified. [The odds that Adobe isn't certifying on pre-production Mac Pros are pretty slim. The first blog mentions Adobe did a WWDC session.]


As for likelihood: the D300 is roughly equivalent to the AMD FirePro W7000 and the AMD HD 7870 GHz Edition, both of which are on the list (the list doesn't mention "GHz Edition" or "XT", so presumably that doesn't make a difference). The D500 is somewhat of a mix between the FirePro W8000 and the AMD 7870 XT; again, both are on the list.


2. That only addresses Premiere. There are a lot of applications that use CUDA.

No sign that Adobe is trying to limit OpenCL just to Premiere:

http://blogs.adobe.com/standards/20...s-more-compelling-and-efficient-applications/

It is a matter of time and resources. Adobe is incrementally swapping out and/or matching GPGPU code kernels with OpenCL alternatives to CUDA. It isn't some "big bang" transition approach. Adobe is not alone; there are other vendors swapping/matching with OpenCL.

Some vendors and/or code bases will stay fixed on CUDA forever.

Whether the mix of who does and doesn't have OpenCL support looks hugely different depends on whether your view is fixed on where things are going, or on the rearview mirror.
 
You are comparing Apples to Oranges. An i7 is a desktop processor whereas the Xeon is a workstation/server class processor. That processor coupled with ECC ram gives you high precision calculations and depending on your altitude and amount of local background radiation, that difference can become significant.

You are also comparing a gaming card to dual workstation gfx cards.
:rolleyes:

I hope this is sarcasm!!!

ECC ram doesn't do anything for non-server workloads. It's actually slower than non-ecc ram. ECC RAM adds check bits to the data stored in memory and confirms that what was written is what remains when it is read. All of this comes at a performance penalty.

Also - it's not going to make your calculations more accurate. The worst case scenario is a beach ball. For a personal computer, ECC is stupid. For a server where thousands of people could lose hours due to a crash, it makes sense.

It DOES NOT add any precision to calculations ... wow (LOL)

In terms of the CPU, core for core and clock for clock, an i7 is going to be faster than a Xeon for almost all workloads. Xeons have more memory buses that allow more system RAM - and also more L3 cache, which can help for certain workloads.

Also, dual socket is not 2x single socket. It's more like 1.25x single socket - and for RAM-intensive tasks it's actually going to be slower, because the RAM is divided between the two chips, so they have to talk through each other's RAM bus.
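The "more like 1.25x" figure is the poster's estimate, but the sub-linear shape it describes is what Amdahl's law predicts whenever part of the workload is serial; a quick sketch:

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Amdahl's law: overall speedup from n processors when only
    parallel_fraction of the work can actually be parallelized."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# If only half of a workload parallelizes, doubling the sockets yields
# just a 1.33x speedup -- in the same ballpark as the 1.25x quoted.
```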
 
Yep, sure do ... but focusing on ECC RAM is missing the point (re-read the post I quoted but don’t focus on the hardware specifics...)

:)

[edit]

Just to clarify, I don’t disagree that at least some hardware is easily replicated in a non-Apple product (and for cheaper), and some of that might be what we could agree is “pro grade”; my post was about more _business_ related concerns. My point - since we both have cars in our avatars - is that it’s like comparing a kit car to a factory performance vehicle. :D

The Mac Pro is more like a concept car that has too many compromises, has those really thin seats, and no door handles. And at the same time it's not even one of the good-looking concept cars. It's one of the ones the press thought was ugly.

The competitors workstation products are simply faster, more flexible and better priced.
 
Actually, not guessing at all.

Check out this chart:
https://docs.google.com/spreadsheet...GamhiUkIySTUteGlzeG9xMEE&oid=6&zx=4b87cbodocm

This is an After Effects raytracing benchmark chart. See the time at the top? That's the nVidia Titan using the Mercury Engine in a 12-core Mac Pro getting a time of 0:09:09. The score at the bottom is a 12-core Mac Pro using the CPU - 6:55:00. Huge difference.

CUDA is several times faster than using CPU and CUDA does not work with AMD, which is what the new Mac Pro has.

You see how an iMac using CUDA with its nVidia GTX780M blows away the 12-core Mac Pro using the CPU? See that? The iMac is around 18 times faster!

So a $4,000 2013 Mac Pro is many times slower than an iMac in CUDA software.

Octane Render is an app that brings CUDA rendering to applications like Maya, Lightwave, Cinema 4D, etc. Adobe's software uses CUDA.

Now, will Octane Render be updated for OpenCL (which supports AMD)? Will all of Adobe's apps be updated for OpenCL? If they are updated, will their implementation of OpenCL be as fast as CUDA?

There is a guy on this forum with a 2009 or 2010 Mac Pro with dual Nvidia GTX 680s. Prior to switching from the stock AMD cards Apple throws in, render time was something like 20 hours for one 3D model; it went down to an hour and twenty minutes with the dual GTX 680s. If you don't believe me I can dig up the post, but I think the user's name is Dr. Stealth. Point being, if your workstation is your money maker, do you want it essentially out of commission for 20 hours, or an hour and a half?

I mean, really, did AMD throw these cards at Apple for free? AMD has the Xbox and the PS4; give Nvidia a chance, Apple, because they make cards that are proven with CUDA. Even on my mere mortal rMBP I've tested render times with the CPU and integrated graphics vs. using the Nvidia 650M (which is by no means a heavy lifter), and the render times with the 650M are so much faster. CUDA cores mean a hell of a lot more than any AMD offering, at least for now, and unless Apple and AMD are going to release some magical parallel GPU rendering support for every program that matters on release day, I can't see it being good. When an iMac with a single Nvidia card is wiping the floor with a 12-core Mac Pro, you know something is backwards.
 
The competitors workstation products are simply faster, more flexible and better priced.

This is my last reply, I don’t do endless back and forth like some folks on here who are so desperate to prove something. With a 5 year old, track [car] events, two tech companies, tri-training, and busting up the occasional surf, my life is way too busy (in the most fantastic ways :) )

You originally quoted me [out of the blue] in the context of build-your-own-hackint0sh - my original post and _all_ my subsequent posts had nothing to do with turnkey “competitors workstation products” (also implying they’re running appropriate/fully supported OSs).

On topic, to my original point, without you changing the original context: a DIY Hackint0sh build is not a replacement for an OEM Mac Pro regardless of what might seem like some level of hardware parity. See originally quoted post.

Cheers. :cool:
 
That was beautiful! I don't even know 1/8th of what you know about audio work, but the guy you just elegantly tore up seems to be full of himself in all the wrong ways. Anyway, I would like to see Logic, Pro Tools, etc. start to leverage the GPU for more power. In fact I'd like to see all sorts of apps, whether audio or video, be able to take advantage of the GPU; it gives some hope for speeding up machines that couldn't otherwise be upgraded.

You don't know 1/8th of what he does yet you so wilfully claim I'm full of myself in wrong ways?

My friend makes a living on a 2008 macbook white and dual core windows PC both with 4GB ram, doing exclusively audio. And no, he's not having any issues and his work is as smooth as it gets.

My laptop (and previously tower) is a rocket compared to that, yet apparently you guys are doing something else with audio that is beyond my comprehension. (Yeah yeah, sample libraries. Need a lot of CPU. not)
 
Though I could marginally justify the entry-level model, I remain somewhat skeptical of its cost/value vs. a PCIe SSD for my aging 2009 Mac Pro.

The $700 480gb SSD could really breathe new life into what I already have, while allowing me to shift my discretionary funds elsewhere. As it stands now, I'm in a wait-state as to which way to go.
 
I'm still puzzled why the new Mac Pro will launch with AMD FirePro GPUs and not an nVidia Quadro GPU, thinking of the CUDA support that many of the higher-end applications are built around.

I'm not saying the FirePro is bad in any way; rather, for a Pro machine I would have hoped for at least an nVidia BTO option. The only thing putting me off the new Mac Pro is that there's currently no nVidia BTO...
 
You know it's not just a question of CPU performance, yes? The new Mac Pro has 6 Thunderbolt 2 and 4 USB 3.0 connectors; the old one doesn't. The base GPUs can drive 4K displays, not so on the old one.

Good point. That is something to consider for sure.
 
Hahaha... yes, you are right, Win 7 is a solid OS... It's not as if I've never used Win 7 in my life: I've been using Win 7 in the office since its release, with all legal software, and a Mac Pro with OS X at home. Do you know how many times IT has had to format and reinstall Win 7, versus how many times I've had to clean up OS X?

Again, it's not 2/3 the price if the specs are the same... you must have skipped some components (maybe ones you don't need).

Let's see: 2x X5670 2.93 GHz Westmere processors, 48 GB of UDIMMs (most folks don't need ECC RAM), an ATI 7950, a Blu-ray ROM/DVD drive, one 256 GB Samsung Pro SSD, 3x 2 TB Hitachi hard drives, and a Corsair 800D case with 4 hot-swappable drive bays (Apple can't do that). Video editing runs smooth. I think I did pretty well compared to the Apple Mac Pro. Oh yeah - Win 7 Pro 64. Never reinstalled once. Running smooth since the day it was installed.
 
Two things:
1. The AMD FirePro D300 and D500, which are what the new Mac Pro uses, are not listed as supported GPUs on that Premiere Pro CC "supported GPUs" page.

It's not listed, but it will be supported...

'This means that – you guessed it – Premiere Pro will utilize the dual-GPUs in the new Mac Pro when exporting to an output file. Indeed, our very own David McGavran will be talking about our OpenCL improvements at WWDC on Thursday.'

...they probably haven't listed it as the MP isn't available to buy yet.

2. That only addresses Premiere. There are a lot of applications that use CUDA.

Sure, but many devs are using OpenCL too:

"We have been testing with DaVinci Resolve 10 builds and this screams. Its amazing and those GPUs are incredible powerful." - See more at: http://www.philiphodgetts.com/2013/...-fine-on-the-new-macpro/#sthash.qH0DngaO.dpuf

Historically, when kit like the MP is out there en masse, there is a lean towards utilising it - e.g., the death of Flash due to non-support on the iPhone... OK, slightly different, but you get my point. I suppose time will tell how large the uptake of the MP is, but Apple obviously seems to be backing OpenCL for the Pro market.

Older, but relevant. OpenCL Gains Ground On CUDA http://archive.hpcwire.com/hpcwire/2012-02-28/opencl_gains_ground_on_cuda.html

This is a good read which speculates the cost justification / spec of the MP graphics cards: http://architosh.com/2013/10/the-mac-pro-so-whats-a-d300-d500-and-d700-anyway-we-have-answers/
 
I wish they would release them without the fire GPUs. Just a baseline and allow users to upgrade it. Keep the price down. As a programmer, I can appreciate a beefy box, but have no use for a workstation class GPU, let alone two.

Sure, today you don't. But by setting a standard baseline of all machines of this calibre coming with the cards, hopefully things like OpenCL might finally take off on high end programs. Which will speed their trickling down to us mere mortals that much quicker...

----------

Is it really true that anyone can build a hackintosh that has twice the processing power for half the price?

Not at all. It's the netbook debate all over again. When outfitted with the same high-end components, PCs actually tend to cost more (just like with the previous Mac Pro).

Now, unlike with netbooks, there is a legitimate need for a minitower Mac with a single-socket, lower-end Xeon - so I can still have a full-length slot for a graphics card. Just like with the netbooks, I don't expect Apple to make one :(
 
ECC ram doesn't do anything for non-server workloads.

Apart from ensuring that your data really is what you think it should be. No biggie if you are just gaming, but if you are doing anything you really care about - especially anything with financial data, scientific data, programming, or anything worth backing up - then having your data be accurate is obviously a concern.

It's actually slower than non-ecc ram.

And backing up a hard drive takes time away from other tasks too. And the last time I remember reading about ECC, the checksumming was performed in parallel with writes/reads and didn't introduce any delays. Now you've got me curious...

Also - it's not going to make your calculations more accurate.

Ha! If the data in RAM that your calculations are based on is wrong, then your calculations are wrong. Last time I checked, "wrong" has a strong influence on accuracy.

Also, dual socket is not 2x single socket. It's more like 1.25x single socket - and for RAM-intensive tasks it's actually going to be slower, because the RAM is divided between the two chips, so they have to talk through each other's RAM bus.

Er, this generation Mac Pro is single socket. On previous generations each CPU had its own bank of RAM, and OS X was smart enough to prioritize threads on the CPU where the memory is local. I used to have a great article that went into the details of this, but I can't find the bookmark right now.

Anyway, your assertion that ECC is worthless for home use is laughable - almost as naive as dismissing the importance of checksumming in the file system. Random bit errors cause unforeseen problems all the time, and as machines with 8GB and more of RAM become commonplace, the issue of random bit errors gets more serious. Similar to the idiocy of Nvidia crippling double-precision performance to prop up their workstation cards, Intel restricting ECC to Xeons will start to become more of an issue. The only thing that has kept it from really boiling to the surface is the relative cap on memory clock frequency, but with each generation of DDR the frequencies go up...
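Real ECC DIMMs use a SECDED code over 64-bit words in hardware; the single-bit-correction idea behind them can be sketched with the classic Hamming(7,4) code (a teaching toy, not what actual DIMMs implement):

```python
def hamming74_encode(d):
    """Encode 4 data bits as 7 bits with 3 parity bits (positions 1,2,4)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # covers positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(code):
    """Recompute parities; the syndrome names the flipped bit (if any).
    Returns the 4 corrected data bits."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3     # 1-based position of the error
    if pos:
        c[pos - 1] ^= 1            # flip it back
    return [c[2], c[4], c[5], c[6]]
```

Flip any single bit of an encoded word and `hamming74_correct` recovers the original data, which is exactly the "what was written is what you read back" guarantee being argued about above.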
 