
OpNora

macrumors newbie
Original poster
Sep 8, 2013
2
0
I'm going to be ordering one of the new Mac Pros very soon, and I was wondering: with the specs I am getting (6-core, 1TB SSD, and 32GB of RAM), would the D700 be overkill or not?

My main uses for the machine will be running Photoshop CS6, Logic Pro X, Premiere Pro CS6, After Effects CS6 (everything that I will be editing will be at a 1080p resolution), Maya 2014 (mostly going to be used to check my employees' 3D models), and Parallels 9 running Windows 7 and Linux (just so I can run test servers for about 20 people at most when I am running betas of services, and so the programmer and I can be more compatible with his native software). I also plan to play some games on it occasionally (mostly some casual Phantasy Star Online 2 through Parallels and a few games on Steam, like Left 4 Dead, Portal 2, Bioshock Infinite and so on).

So what is everyone's opinion on this? I know I probably should upgrade the CPU, but at the moment that's not possible, because the CPU upgrade is such a big price jump compared to everything else.
 

thekev

macrumors 604
Aug 5, 2010
7,005
3,343
I'm not sure here. Some of those things have gpu functionality but only with CUDA. For example After Effects has an okay raytracer. It is based partly off research from NVidia, so it runs on CUDA. OpenCL still has a long way to go. I kind of wonder if it will allow for pointers to virtual memory at any point.
 

pertusis1

macrumors 6502
Jul 25, 2010
455
161
Texas
My ignorant opinion is that if you're really going to be limited to 1080 video, it would be hard to imagine that you would need the D700 *2. I'd buy the D500s, put the $1000 in the bank, and use it to upgrade in 4 years ;)
 

jasonvp

macrumors 6502a
Jun 29, 2007
604
0
Northern VA
My main uses for the machine will be running Photoshop CS6, Logic Pro X, Premiere Pro CS6, After Effects CS6 (everything that I will be editing will be at a 1080p resolution)

I'd recommend that if you're considering a new Mac Pro, you also entertain the idea of upgrading from the CS6 suite to the CC suite. Yes, the subscription model does suck, but... Premiere, at least, can access multiple GPUs, and it scales linearly with them. The CS6 version can only address one of the cards.
 

ZnU

macrumors regular
May 24, 2006
171
0
OpenCL still has a long way to go. I kind of wonder if it will allow for pointers to virtual memory at any point.

In OpenCL 2.0, if I recall correctly. Whether Apple will make us wait for OS X 10.10 (that's a lot of tens) or quietly slip support in before then is hard to guess.
 

thekev

macrumors 604
Aug 5, 2010
7,005
3,343
In OpenCL 2.0, if I recall correctly. Whether Apple will make us wait for OS X 10.10 (that's a lot of tens) or quietly slip support in before then is hard to guess.

That made it into 2.0? A lot of recent model gpus will never support 2.0, but that is excellent news for some of the bleeding edge hardware, assuming developers are quick to leverage it. Realistically that may be great downstream when enough people can make use of it to motivate the aforementioned developers. The issue of having to pass all data to the framebuffer is really quite limiting, which is why I say that.
 

goMac

Contributor
Apr 15, 2004
7,662
1,694
I'm not sure here. Some of those things have gpu functionality but only with CUDA. For example After Effects has an okay raytracer. It is based partly off research from NVidia, so it runs on CUDA. OpenCL still has a long way to go. I kind of wonder if it will allow for pointers to virtual memory at any point.

Pointers to virtual memory are nifty, but you can't use them with a discrete GPU. They'll help on the Iris Pro, but they won't help on the Mac Pro.

I also don't think it works with a discrete GPU for CUDA. It really makes no sense with a discrete GPU.
 

ZnU

macrumors regular
May 24, 2006
171
0
Here's how it's described in the announcement:

Shared Virtual Memory
Host and device kernels can directly share complex, pointer-containing data structures such as trees and linked lists, providing significant programming flexibility and eliminating costly data transfers between host and devices.

That sounds like it works with discrete GPUs.
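If it works the way that reads, host code along these lines ought to be possible on any OpenCL 2.0 device (a rough, untested sketch using the clSVMAlloc/clSetKernelArgSVMPointer calls from the 2.0 announcement; the context, queue and kernel are assumed to be set up elsewhere):

Code:
/* Coarse-grained SVM sketch: the host builds a pointer-containing
   structure in shared virtual memory and hands the kernel the head
   pointer, with no clEnqueueWriteBuffer staging copy in between. */
#include <CL/cl.h>

typedef struct Node { float value; struct Node *next; } Node;

void run_svm_list(cl_context ctx, cl_command_queue queue, cl_kernel kernel)
{
    /* Allocate the nodes in shared virtual memory. */
    Node *nodes = (Node *)clSVMAlloc(ctx, CL_MEM_READ_WRITE, 3 * sizeof(Node), 0);

    /* With coarse-grained SVM the host maps the region before writing to it. */
    clEnqueueSVMMap(queue, CL_TRUE, CL_MAP_WRITE, nodes, 3 * sizeof(Node), 0, NULL, NULL);
    nodes[0].value = 1.0f; nodes[0].next = &nodes[1];
    nodes[1].value = 2.0f; nodes[1].next = &nodes[2];
    nodes[2].value = 3.0f; nodes[2].next = NULL;
    clEnqueueSVMUnmap(queue, nodes, 0, NULL, NULL);

    /* The kernel receives the raw pointer; the links stay valid on the device. */
    clSetKernelArgSVMPointer(kernel, 0, nodes);

    size_t global = 1;
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, NULL, 0, NULL, NULL);
    clFinish(queue);

    clSVMFree(ctx, nodes);
}

Whether the runtime does that by migrating pages behind the scenes on a discrete card, or only exposes it on integrated parts, is the part the announcement doesn't spell out.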
 

thekev

macrumors 604
Aug 5, 2010
7,005
3,343
It really makes no sense with a discrete GPU.

How so? My concern was that you're still limited on vram for some highly computationally intensive stuff. The trend toward 2GB being basically the norm helps quite a bit, but is there any other way of dealing with large chunks of data on the gpu?
 

goMac

Contributor
Apr 15, 2004
7,662
1,694
How so? My concern was that you're still limited on vram for some highly computationally intensive stuff. The trend toward 2GB being basically the norm helps quite a bit, but is there any other way of dealing with large chunks of data on the gpu?

A discrete GPU can't perform computations with data unless it's in VRAM. So the concept of having a pointer to data off VRAM makes no sense. You can point to it, but you can't do anything with it. At some point you've still got to do the copy to VRAM.

An integrated GPU does its computations from RAM. So in that case VM pointers make a lot of sense. Rather than copy something redundantly to its own dedicated buffer, just refer to its location in RAM.
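To put it concretely, the discrete path today looks roughly like this (an OpenCL 1.x-style buffer sketch; ctx, queue and kernel are assumed to already exist, and error checking is left out):

Code:
/* On a discrete card the data is staged into a device buffer (VRAM),
   crunched, and copied back out. SVM pointers can't remove that
   staging step; the bytes still have to cross the bus. */
#include <CL/cl.h>

#define N 1024

void run_buffered(cl_context ctx, cl_command_queue queue, cl_kernel kernel,
                  const float *input, float *output)
{
    cl_mem in_buf  = clCreateBuffer(ctx, CL_MEM_READ_ONLY,  N * sizeof(float), NULL, NULL);
    cl_mem out_buf = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, N * sizeof(float), NULL, NULL);

    /* Host RAM -> VRAM. */
    clEnqueueWriteBuffer(queue, in_buf, CL_TRUE, 0, N * sizeof(float), input, 0, NULL, NULL);

    clSetKernelArg(kernel, 0, sizeof(cl_mem), &in_buf);
    clSetKernelArg(kernel, 1, sizeof(cl_mem), &out_buf);

    size_t global = N;
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, NULL, 0, NULL, NULL);

    /* VRAM -> host RAM for the result. */
    clEnqueueReadBuffer(queue, out_buf, CL_TRUE, 0, N * sizeof(float), output, 0, NULL, NULL);

    clReleaseMemObject(in_buf);
    clReleaseMemObject(out_buf);
}

On an integrated part with shared memory, those two enqueue copies are what can go away; the kernel just works on the host allocation.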
 

thekev

macrumors 604
Aug 5, 2010
7,005
3,343
A discrete GPU can't perform computations with data unless it's in VRAM. So the concept of having a pointer to data off VRAM makes no sense. You can point to it, but you can't do anything with it. At some point you've still got to do the copy to VRAM.

Yeah I actually got that part. Many of these applications including those mentioned by the OP are limited by the need to load everything into the framebuffer prior to calculations. This means your scene size and textures must fit in the framebuffer for After Effects and its raytracer. I can think of a few things they could do to work around that to a degree, but nothing substantial without at least some method to dynamically load data as needed. I may have phrased it poorly, but I considered the possibility that the language would eventually allow some better method of loading and flushing data from the framebuffer as needed.
 

goMac

Contributor
Apr 15, 2004
7,662
1,694
Yeah I actually got that part. Many of these applications including those mentioned by the OP are limited by the need to load everything into the framebuffer prior to calculations. This means your scene size and textures must fit in the framebuffer for After Effects and its raytracer. I can think of a few things they could do to work around that to a degree, but nothing substantial without at least some method to dynamically load data as needed. I may have phrased it poorly, but I considered the possibility that the language would eventually allow some better method of loading and flushing data from the framebuffer as needed.

Sorry, didn't mean any offense. It's hard to tell what everyone's technical level is on the forums. :)

It seems possible to me that the API could abstract that (I haven't looked at the OpenCL 2.0 spec). You won't get any performance gain, since it's the same work you'd have to do by hand.

The only risk I can think of is that if your output is based on input spread throughout the data you're giving as an argument, it may have to do multiple loads into VRAM on a single pass of the OpenCL kernel, which could be bad. Imagine if you were doing a blur, and one pixel of data you needed was in the portion that had been flushed, so the kernel has to wait for another load from RAM. Bleh.

That's the sort of situation where I could imagine that OpenCL couldn't have the intelligence necessary to do all the loading for you, and might continue pushing that off on the developer.
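For what it's worth, the access pattern I have in mind looks something like this (a rough OpenCL C sketch of a box blur, not lifted from any shipping app, just to show the neighborhood reads):

Code:
/* Every output pixel reads a small neighborhood of input pixels, so a
   work-item can touch data an automatic pager might already have
   evicted from VRAM, forcing another load from host RAM mid-kernel. */
__kernel void box_blur(__global const float *src,
                       __global float *dst,
                       const int width,
                       const int height)
{
    int x = get_global_id(0);
    int y = get_global_id(1);
    if (x >= width || y >= height)
        return;

    float sum = 0.0f;
    int count = 0;

    /* 3x3 neighborhood: the reads scatter across adjacent rows. */
    for (int dy = -1; dy <= 1; dy++) {
        for (int dx = -1; dx <= 1; dx++) {
            int nx = x + dx;
            int ny = y + dy;
            if (nx >= 0 && nx < width && ny >= 0 && ny < height) {
                sum += src[ny * width + nx];
                count++;
            }
        }
    }

    dst[y * width + x] = sum / (float)count;
}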

As far as I can tell, OpenCL 2.0 is only targeting virtual memory pointers at integrated GPUs.

http://www.anandtech.com/show/7161/...pengl-44-opencl-20-opencl-12-spir-announced/3

The biggest addition here is that OpenCL 2.0 introduces support for shared virtual memory, the basis of exploiting GPU/CPU integrated processors.

That said, the performance win when you're working with data that is far bigger than a discrete GPU's frame buffer is obvious. Integrated GPUs are looking better and better.
 

thekev

macrumors 604
Aug 5, 2010
7,005
3,343
Sorry, didn't mean any offense. It's hard to tell what everyone's technical level is on the forums. :)

It seems possible to me that the API could abstract that (I haven't looked at the OpenCL 2.0 spec). You won't get any performance gain, since it's the same work you'd have to do by hand.

My knowledge is actually somewhat sparse when it comes to the inner workings of GPGPU, but I didn't interpret that as hostility. I know some Python and C++ (C too). I have years of experience with Maya (I remember Maya Complete) and Photoshop, and I'm decent with After Effects. It's kind of why I comment on threads like this and ones that involve color management or display questions.


The only risk I can think of is that if your output is based on input spread throughout the data you're giving as an argument, it may have to do multiple loads into VRAM on a single pass of the OpenCL kernel, which could be bad. Imagine if you were doing a blur, and one pixel of data you needed was in the portion that had been flushed, so the kernel has to wait for another load from RAM. Bleh.

That's the sort of situation where I could imagine that OpenCL couldn't have the intelligence necessary to do all the loading for you, and might continue pushing that off on the developer.

As far as I can tell, OpenCL 2.0 is only targeting virtual memory pointers at integrated GPUs.

http://www.anandtech.com/show/7161/...pengl-44-opencl-20-opencl-12-spir-announced/3

That wouldn't really help the OP. I'm pretty sure most of his use is going to be OpenGL, as there's no guarantee that anything else will ever leverage it. Regarding the issue of blurring, it depends on what kind of blur. A motion blur typically requires a vector obtained from the difference in point position between frames. A blur based on z-depth obviously needs to know what occludes what, plus either the standard distance from the camera or, where applicable, a polar-coordinate distance. Displacement would be a bigger issue, as it can affect position and necessary subdivision. Where I'm confused is that I haven't seen much effort from Adobe or any of the others to do simple raw lighting calculations and then pass that data back. Given the parallel nature of stochastic ray tracing or GI, it seems like a good fit if they could get away with loading only geometry and displacement channels.


That said, the performance win when you're working with data that is far bigger than a discrete GPU's frame buffer is obvious. Integrated GPUs are looking better and better.

I could see that, given the time lost to loading and unloading data. This is part of why I'm slightly puzzled by the memory configuration. I kind of understand the concept, but the D300s are somewhat limited for certain things. 2GB is certainly not bleeding edge, and while it may be a base model, it's a base model that starts at $3k. I'm sure someone will claim that the word Xeon adds 40% in spite of it being a $300 base CPU :rolleyes:. Since the D700 upgrade wasn't priced into the stratosphere, it could always be an option. It's sort of interesting to me to debate whether the OP could in fact leverage it. Unfortunately, I don't know enough about where software development will be in a couple of years to guess.
 