
elmarjazz

macrumors regular
Original poster
May 26, 2010
Since I'm trying to economise... but need this to work well at the same time...

Is it worth getting the top-end iMac with the AMD Radeon Pro 580, or is the lesser (and less expensive) 575 enough? Add an SSD and self-installed RAM on both.

The CPU boost from 3.5 to 3.8 GHz is a plus, but does it make that much of a difference?

Photo, video editing
Thanks
 
The whole one-TFLOP jump in GPU compute plus 2x the VRAM is definitely a lot bigger difference than the 0.3 GHz in CPU.

http://creators.radeon.com/radeon-pro/
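For a rough sanity check, here is the back-of-envelope math behind that one-TFLOP gap, using the commonly cited shader counts and clocks for the two chips (assumed figures for illustration, not taken from the page above):

```swift
// Back-of-envelope TFLOPS estimate: shaders x clock x 2 FLOPs/cycle (FMA).
// Shader counts and clocks below are commonly cited figures for the
// Radeon Pro 575/580, assumed here for illustration.
func teraflops(shaders: Double, clockGHz: Double) -> Double {
    return shaders * clockGHz * 2 / 1000
}

print(teraflops(shaders: 2048, clockGHz: 1.1))  // Pro 575: ~4.5 TF
print(teraflops(shaders: 2304, clockGHz: 1.2))  // Pro 580: ~5.5 TF
```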
 
Since I'm trying to economise... but need this to work well at the same time...

Is it worth getting the top-end iMac with the AMD Radeon Pro 580, or is the lesser (and less expensive) 575 enough? Add an SSD and self-installed RAM on both.

The CPU boost from 3.5 to 3.8 GHz is a plus, but does it make that much of a difference?

Photo, video editing
Thanks
Video editing is more dependent on the CPU, not the GPU.
 
AMD 580... hmm. Sorry, but it is a lot slower than what would be possible with an Nvidia GPU. Why can't Apple offer both (obviously an Nvidia 1070/1080 would be more expensive), so there would be more options?
 
Well, in terms of raw teraflops the 580 is close to the 1070 (6.2 TF vs 6.5 TF), though Apple's woefully out-of-date OpenCL support means that in the real world it's not going to be anywhere near as good as under Windows OpenCL (to say nothing of Windows CUDA).
 
Well, in terms of raw teraflops the 580 is close to the 1070 (6.2 TF vs 6.5 TF), though Apple's woefully out-of-date OpenGL support means that in the real world it's not going to be anywhere near as good as under Windows OpenGL (to say nothing of Windows CUDA).

Actually, Apple uses Metal and OpenCL; OpenGL is just there for compatibility, and I can see it gone next year...
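For anyone curious what "Apple uses Metal" means in practice, here is a minimal sketch of a Metal compute dispatch in Swift; the kernel name and buffer contents are made up for illustration:

```swift
import Metal

// Compile a trivial compute kernel from source and run it on the GPU.
let source = """
#include <metal_stdlib>
using namespace metal;
kernel void doubler(device float *data [[buffer(0)]],
                    uint id [[thread_position_in_grid]]) {
    data[id] *= 2.0;
}
"""

let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!
let library = try! device.makeLibrary(source: source, options: nil)
let pipeline = try! device.makeComputePipelineState(
    function: library.makeFunction(name: "doubler")!)

var values: [Float] = [1, 2, 3, 4]
let buffer = device.makeBuffer(bytes: &values,
                               length: values.count * MemoryLayout<Float>.stride)!

let commands = queue.makeCommandBuffer()!
let encoder = commands.makeeComputeCommandEncoder ?? commands.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(buffer, offset: 0, index: 0)
encoder.dispatchThreadgroups(MTLSize(width: 1, height: 1, depth: 1),
                             threadsPerThreadgroup: MTLSize(width: values.count,
                                                            height: 1, depth: 1))
encoder.endEncoding()
commands.commit()
commands.waitUntilCompleted()

// Read the doubled values back from the shared buffer.
let result = buffer.contents().bindMemory(to: Float.self, capacity: values.count)
print((0..<values.count).map { result[$0] })  // [2.0, 4.0, 6.0, 8.0]
```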
 
The Radeon Pro 580 is the only GPU for the iMac that can be used for virtual reality; the 570 and 575 are not recommended for that. The Radeon Pro 580 is not the Radeon RX 580: the Pro version is more power efficient (around 33%), but also a bit slower (~10%) than the RX version. The RX 580 has a TDP of 185W (OC models even more), the Pro 580 ~125W.
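Those two figures are consistent with each other; a quick check of the arithmetic, taking the poster's numbers as given:

```swift
// Perf-per-watt check using the figures above (~10% slower, 125 W vs 185 W).
let rxPerfPerWatt  = 1.0 / 185.0   // RX 580 as the baseline
let proPerfPerWatt = 0.9 / 125.0   // Pro 580: ~90% of the speed at 125 W
print(proPerfPerWatt / rxPerfPerWatt)  // ~1.33, i.e. roughly 33% more efficient
```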
 
I wonder if anyone has chosen the i7+580 combination, and if we are going to see the heating problems of the M295X all over again.
 
I wonder if anyone has chosen the i7+580 combination, and if we are going to see the heating problems of the M295X all over again.
The Radeon Pro 580 should have a thermal envelope similar to the M395X's, and that one didn't have the heating problems the previous model had.

But we really need to see more reviews, and they've been scarce so far...
 
I wonder if anyone has chosen the i7+580 combination, and if we are going to see the heating problems of the M295X all over again.

So far, no heating issues. I got mine today and it runs significantly cooler than both previous-gen iMacs. I have only played a game on it twice so far to test, and the fan was no more than 1800 RPM.

My previous iMac with the M295X would have the fan at max almost right away after getting into a game. The M395X wasn't much better than that.
 
If I had to choose, I would definitely pick the CPU over the GPU. They're going to officially support eGPUs, and you can spend the money on something that will be upgradable and way better than a Pro 580. Official support for eGPUs was one of the most unexpected and awesome announcements from Apple.
 
AMD 580... hmm. Sorry, but it is a lot slower than what would be possible with an Nvidia GPU. Why can't Apple offer both (obviously an Nvidia 1070/1080 would be more expensive), so there would be more options?
Agreed 100% that we should have NVIDIA options, but I don't think the 580 is "a lot slower" than a 1070. In fact, the 580 cards sit between the 1060 and the 1070 cards. The 580s are respectable cards from AMD.
 
If I had to choose, I would definitely pick the CPU over the GPU. They're going to officially support eGPUs, and you can spend the money on something that will be upgradable and way better than a Pro 580. Official support for eGPUs was one of the most unexpected and awesome announcements from Apple.
On an iMac I am not sure an eGPU is going to be such a good deal. You end up having to use an external monitor, thus negating the beautiful built-in screen. Unless Apple allows the eGPU to drive the internal monitor...
 
AMD 580... hmm. Sorry, but it is a lot slower than what would be possible with an Nvidia GPU. Why can't Apple offer both (obviously an Nvidia 1070/1080 would be more expensive), so there would be more options?
Agreed 100% that we should have NVIDIA options, but I don't think the 580 is "a lot slower" than a 1070. In fact, the 580 cards sit between the 1060 and the 1070 cards. The 580s are respectable cards from AMD.
I think this video will be very helpful for anyone who is wondering whether the 580 is worth the money.


I suggest EVERYONE watches this video. The RX 480 is the SAME chip that is in the Radeon Pro 580.

And here is a comparison of CUDA (GTX 1060) vs OpenCL (RX 480):
https://wiki.blender.org/index.php/Dev:Source/Render/Cycles/OpenCL

Guys, please give credit where it is due ;).

Software is catching up ;).
 
On an iMac I am not sure if an eGPU is going to be such good deal. You end up having to use an external monitor thus negating the beautiful built in screen. Unless Apple allows the eGPU to drive the internal monitor...

Yeah, it seems acceleration of the internal display is not an option at the moment.
 
I'm looking into an iMac and will probably end up buying the 580, but now that macOS supports external graphics I might look into that route in the future. There looks to be a substantial performance jump, so I think the 580's worth it.
 
Video editing is more dependent on the CPU, not the GPU.

Using what program? It's all GPU these days. If you mean encoding and exporting using multithreading, yes, that is CPU-dependent, but the actual editing process is very GPU-dependent.

Which editing program are you specifically referring to?
Actually, Apple uses Metal and OpenCL; OpenGL is just there for compatibility, and I can see it gone next year...
OpenGL is a legacy product, a way to teach people horrible methods of coding. OpenGL will soon be gone forever. It's a dinosaur.
On an iMac I am not sure an eGPU is going to be such a good deal. You end up having to use an external monitor, thus negating the beautiful built-in screen. Unless Apple allows the eGPU to drive the internal monitor...

Using the eGPU to drive the internal monitor will be supported in High Sierra.
 

I don't code anymore, but I have a friend familiar with how Apple is rolling out the eGPU and the Metal 2 framework; he is very close to these developments. He will be coding using the eGPU dev kit from Apple, among other people on his team.

Basically, the only way Apple can do their "component" eGPU in the iMac Pro and future Mac Pro is by having the eGPU drive an internal or specific graphics window, i.e., a headless eGPU.

How this will roll out with Metal 2 and High Sierra remains to be seen, whether it's already in the current Metal 2 framework or still being figured out.

As people start getting the eGPU dev boxes Apple is releasing, we will know for sure whether it's happening now or later. I do know for certain they have been working on it; when it reaches us is up to Apple.

I can try to find out more information, but it probably won't be until Metal 2 comes out, because even though the eGPU dev kits are starting to ship, I think they are being delivered with Metal, not Metal 2.
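As a concrete example of the hooks Apple is shipping here, High Sierra's Metal API lets an app enumerate every GPU, including removable (eGPU) ones. A minimal sketch, assuming the macOS 10.13 MTLDevice properties:

```swift
import Metal

// List every GPU macOS can see and flag the removable (eGPU) ones.
// isRemovable is the macOS 10.13 property Apple added for the eGPU case.
for device in MTLCopyAllDevices() {
    let placement = device.isRemovable ? "eGPU" : "built-in"
    let kind = device.isLowPower ? "integrated" : "discrete"
    print("\(device.name): \(placement), \(kind)")
}
```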
 
Using what program? It's all GPU these days. If you mean encoding and exporting using multithreading, yes, that is CPU-dependent, but the actual editing process is very GPU-dependent....

In general, video editing is very CPU-dependent. This should be obvious -- that's why the Mac Pro is available with 12 cores and the upcoming iMac Pro with 18 cores. If it were "all GPU" they wouldn't need all those cores.

Anybody can see this for themselves in Premiere by simply importing some H.264 4K content and using the J-K-L keys to scrub forward and back on the timeline. All CPU cores will often be pegged.

Likewise, here are two videos where Dave Dugdale (learningvideo.com) complains about how slow Resolve 12 is with 4K -- even using a GTX 1080 Ti on a Windows PC. If it were "all GPU", surely the 1080 Ti would be fast enough.


FCPX is more efficient than either Premiere or Resolve, but even it requires a lot of multi-core CPU horsepower. On my top-spec 2015 iMac 27, H.264 4K content can be sluggish to skim through in FCPX. That's with no effects whatsoever, and it is likely more a CPU than a GPU limitation.
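One way to measure this yourself is to decode every frame of a clip with AVAssetReader, which exercises the same CPU-side decode an NLE does when skimming. A rough sketch; the file path and pixel-format choice are placeholders:

```swift
import AVFoundation

// Decode all frames of an H.264 clip and time it. Watch Activity Monitor
// while this runs: the work lands on the CPU cores.
let asset = AVURLAsset(url: URL(fileURLWithPath: "/path/to/clip-4k.mp4"))
guard let track = asset.tracks(withMediaType: .video).first else { exit(1) }

let reader = try! AVAssetReader(asset: asset)
let output = AVAssetReaderTrackOutput(
    track: track,
    outputSettings: [kCVPixelBufferPixelFormatTypeKey as String:
                     Int(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange)])
reader.add(output)
guard reader.startReading() else { exit(1) }

var frames = 0
let start = Date()
while output.copyNextSampleBuffer() != nil { frames += 1 }
let seconds = Date().timeIntervalSince(start)
print("\(frames) frames in \(seconds)s => \(Double(frames) / seconds) fps")
```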
 
In general, video editing is very CPU-dependent. This should be obvious -- that's why the Mac Pro is available with 12 cores and the upcoming iMac Pro with 18 cores. If it were "all GPU" they wouldn't need all those cores.

Yeah. Not true. I'm a professional colorist. I have worked in TV and film for almost 20 years. I work in a studio with 20+ edit bays and we pump out TV commercials and spots on a daily basis. Our technical expertise in this field is light years beyond these YouTubers.

This Dave Dugdale is an amateur wedding video guy who probably clickbaits you to get Amazon discounts on gear. Looking through his website, he is not an authority on anything but being a wedding DP.

First, he's comparing apples and oranges when he compares Premiere to DaVinci to Final Cut Pro X as it relates to rendering on the CPU and GPU.

Resolve holds the entire image and any color nodes in its GPU to play back and then create the desired effect, whereas Premiere uses the CPU, then the GPU, to decode the frame and hold it in memory for you to see. Once you see it in Premiere the process is done, while Resolve is still holding all the images and nodes in memory. A different technology. Apple's Final Cut Pro X uses proxies and background rendering to achieve most of its speed. None of these technologies are one-to-one. A YouTube novice can say "hey, option A exports in 1 minute and option B in 1.5 minutes, option A is better!" But it's not. They're totally different technologies.

For example: when I grade feature films and am using, say, a RAW 4K R3D file, the entire image is held in RAM, and when I add nodes and effects, if I have enough GPU power it plays back at real time or better. The CPU is only used the way a CPU is used to run any computer, or if you have a CPU-dependent source codec. Those are usually consumer codecs like AVCHD or H.264 variants, or codecs with interframe compression. Say you are using DPX files: there is barely any compression, so your CPU is not a factor in your speed. In Resolve, for example, the CPU is not used to hold the image in memory; you decode the image, do the mathematical operations to see your results, and do your final playback and/or render there. A higher-core-count or faster CPU will not make that process any faster, since in Resolve it is all done on the GPU.

If you take my example of the feature films I grade, bring that same R3D file into Premiere, and add the same level of effects and color, you cannot play that back in real time; it has to be rendered, and then, if you change anything, rendered again. When Premiere added Mercury Playback and you're using CUDA, that is a similar technology to CUDA in Resolve, but it's not program-wide and doesn't work for all effects and operations inside the program.

A novice video editor like these YouTubers doesn't work heavily enough in this program to have a valid opinion. He or she is not using the program the way a professional colorist is using it, and they aren't even pushing it close to the level I would.

I usually have 40+ color operations on a complicated project and it plays back in real time on the GPU. Put 40+ color operations on an R3D file in a timeline in Premiere and you'll be waiting forever for the CPU to render it.

Also, I work directly with Blackmagic a lot, and the drivers for Pascal GPUs are not as fast on macOS as the drivers for Windows or even Linux. So if you're comparing Pascal on macOS, it's not using its full power.

Also, Final Cut Pro X is kind of a toy, and it cheats. For you to do one-to-one comparisons with DaVinci and Premiere, you have to disable all the background rendering and proxy features. Rendering a "4K" image that is actually a 2K or HD file being held in proxy is not a real comparison.

Simply timing a process in these tests is not a fair portrayal of these programs. A quick little YouTube snippet of an export or a process shows nothing. Ask real experts who use these programs all day, every day to produce TV and commercial content under extreme pressure and real deadlines, and you will know the truth.

To say video editing is more CPU-dependent than GPU-dependent is just silly and not at all accurate. Also, I don't know who the YouTube guy is or what his angle is, but he obviously doesn't use any of this stuff professionally.

My take is: don't trust YouTube clickbait.
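To make the "color operations" point concrete: a single grading node boils down to per-pixel arithmetic like the lift/gamma/gain sketch below (one common textbook formulation, not Resolve's actual internals), which is exactly the kind of embarrassingly parallel work a GPU eats up:

```swift
import Foundation

// One common lift/gamma/gain formulation, applied per channel per pixel.
// Illustrative math only, not DaVinci Resolve's actual implementation.
func liftGammaGain(_ x: Double, lift: Double, gamma: Double, gain: Double) -> Double {
    let lifted = x + lift * (1 - x)          // lift raises the shadows
    let gained = lifted * gain               // gain scales the whole range
    return pow(max(gained, 0), 1.0 / gamma)  // gamma bends the midtones
}

// A 4K frame is 3840 x 2160 x 3 channels, roughly 25M evaluations per node
// per frame; 40 nodes at 24 fps is on the order of 24 billion per second.
print(liftGammaGain(0.5, lift: 0.05, gamma: 1.2, gain: 1.1))
```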
 
Yeah. Not true. I'm a professional colorist. I have worked in TV and film for almost 20 years. I work in a studio with 20+ edit bays and we pump out TV commercials and spots on a daily basis. Our technical expertise in this field is light years beyond these YouTubers....

My take is: don't trust YouTube clickbait.

You nuked most of your credibility when you called FCP X a toy, tbh. You can turn the proxy function off and work with the footage natively, and it's still GPU-accelerated and fast as hell. I agree with your industry views on YouTubers, and of course your description of Resolve's GPU activity is fairly accurate.

What you're missing here is that all of these compressed codecs (H.264, ProRes, etc.) are CPU-dependent because they need to be decompressed and compressed (a cycle) during playback. To get real-time playback a strong CPU is needed, especially when the files are 4K+ resolution. This is a universal phenomenon across all NLEs/apps because this CPU cycle is intrinsic to the codec's behavior.

So that's why CPUs are important when working with digital media files, and why the Mac Pro and iMac Pro give you 8-, 12-, 18-core, etc. options.
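The arithmetic behind that codec cycle makes the point; a sketch assuming 8-bit 4:2:0 UHD at 60 fps and a generous 100 Mbps H.264 stream:

```swift
// What the CPU has to expand every second during playback:
let uncompressed = 3840.0 * 2160.0 * 1.5 * 60  // 8-bit 4:2:0 = 1.5 bytes/px
let fromDisk = 100_000_000.0 / 8               // 100 Mbps H.264 off disk

print(uncompressed / 1_000_000)                // ~746 MB/s of decoded pixels
print(uncompressed / fromDisk)                 // ~60x expansion -- CPU work
```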
 
You nuked most of your credibility when you called FCP X a toy, tbh. You can turn the proxy function off and work with the footage natively, and it's still GPU-accelerated and fast as hell....

So that's why CPUs are important when working with digital media files, and why the Mac Pro and iMac Pro give you 8-, 12-, 18-core, etc. options.

Most of the CPU-dependent codecs are prosumer codecs; MXF, DPX, and ProRes are not as CPU-dependent. Most of the professional codecs use less CPU power. And yes, Final Cut Pro X is a toy. No one in any professional edit or TV environment, or film for that matter, uses Final Cut Pro X. It is a toy.

I could name a million reasons why Final Cut Pro X is a toy. Just ask why we can't, and never will, use Final Cut Pro X for professional TV work. It can't be done.
 