Hello,

Which kinds of apps will see the most differences between these two GPUs?

(I do some photoshop work and some gaming (WoW)).

Thanks.

Loa
I’d like to know the answer to this as well. Is the 580X considered a high-end, powerful video card? I ask because that’s the card Apple includes in the expensive $2,299 27-inch iMac configuration.
 
  • Like
Reactions: orbital~debris
I’d like to know the answer to this as well. Is the 580X considered a high-end, powerful video card? I ask because that’s the card Apple includes in the expensive $2,299 27-inch iMac configuration.

The 580 is far from being a high-end, powerful video card. At most I would say it's mid-tier, but I would even venture to say its performance is lower mid-tier, below a 1060, with the Vega 48 sitting between the 1060 and the 1070.

I could be wrong, as I'm far from an expert.
 
Hello,
Which kinds of apps will see the most differences between these two GPUs?
(I do some photoshop work and some gaming (WoW)).
Thanks.
Loa

Adobe Lightroom can use a neural network to enhance details.

https://www.engadget.com/2019/02/12/adobe-lightroom-cc-ai-enhance-details/

https://www.pugetsystems.com/labs/a...C-2019-Enhanced-Details-GPU-Performance-1366/

Note that the Vega 64 is up there with the RTX 2080Ti.

AMD says the Radeon Pro 580 is theoretically capable of 5.5 TFLOPS (FP32). But the Vega cards can use half-precision floats (FP16), which happen to be the datatype of choice in AI.

No firm numbers on the Vega 48 used in the Macs, unfortunately.

But TechPowerUp lists a PC Vega 48 card that performs like this:

FP16 (half) performance: 15.97 TFLOPS
FP32 (float) performance: 7.987 TFLOPS
FP64 (double) performance: 499.2 GFLOPS

So, even if the floating-point performance of an iMac's Vega 48 is merely comparable to a 580(X), and if Adobe uses FP16 instructions, it'll be twice as fast when enhancing details using AI.

If you don't use Lightroom, I can't point to a Photoshop-specific use case at the moment.
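
To make the speedup arithmetic concrete, here's a quick sketch (Python; it only uses the theoretical peak numbers quoted above, and actual Lightroom performance depends on far more than peak TFLOPS):

```python
# Back-of-the-envelope throughput comparison using the numbers quoted
# above: AMD's Radeon Pro 580 spec and TechPowerUp's PC Vega 48 listing.
# Real-world Lightroom speedups will differ.

pro_580_fp32 = 5.5     # TFLOPS, FP32 (AMD spec)
vega_48_fp32 = 7.987   # TFLOPS, FP32 (TechPowerUp)
vega_48_fp16 = 15.97   # TFLOPS, FP16 (TechPowerUp; double rate)

print(f"Vega 48 FP32 vs Pro 580 FP32: {vega_48_fp32 / pro_580_fp32:.2f}x")
print(f"Vega 48 FP16 vs Pro 580 FP32: {vega_48_fp16 / pro_580_fp32:.2f}x")
# -> about 1.45x on FP32 alone, and about 2.9x if the workload can run
#    in FP16, since Polaris (the 580) gains nothing from half precision.
```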
 
  • Like
Reactions: Returnoftheimac
Adobe Lightroom can use a neural network to enhance details.
[...]
If you don't use Lightroom, I can't point to a Photoshop-specific use case at the moment.

This I find really interesting, and it's kind of the first test that's tipping me toward a Vega. Conventional wisdom has been that PS and LR use GPUs in a pretty limited way, and that once you have a decent GPU, higher-end GPUs add little or nothing to the mix. One could imagine we'll see much more of this kind of thing appearing in photo-editing software.
 
Adobe Lightroom can use a neural network to enhance details.
[...]
If you don't use Lightroom, I can't point to a Photoshop-specific use case at the moment.

I'm eager to see real-world results pitting two iMacs against each other.

And, yes, I'm only using PS, not LR...

Loa
 
https://macperformanceguide.com/blog-2019-04.html#20190401_1214-ReaderComment-iMac5K2019 offers a different perspective:

In the past, similar GPU upgrades have had no perceptible gains in performance for what I do, and were generally around 20%, which could be shown on benchmarks but, in the context of real work, were not meaningful. Would you, for example, pay $450 to take the time from 5 seconds to 4 seconds for an operation done 10 times a day?

Claims of speed improvements should be discounted heavily unless there is continuous GPU usage. Yet much, if not nearly all, of what I do as a photographer in Photoshop uses the GPU in brief bursts lasting only a fraction of the time of the total operation. Even if operations like Adobe Camera Raw Enhance Details (GPU intensive) are sped up by 25%, saving even 100 seconds a day (infrequently) does not impress. I’d far, far rather have 128GB of memory; my budget is not wide open.
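
The arithmetic behind that argument is easy to check (a quick sketch; the $450 figure and the 5-to-4-second example come from the quote above, while the 250 working days is my assumption):

```python
# Sanity check on the quoted cost/benefit example: a $450 upgrade,
# one operation going from 5 s to 4 s, done 10 times a day.

upgrade_cost_usd = 450        # from the quote
seconds_saved_per_op = 5 - 4  # from the quote
ops_per_day = 10              # from the quote
work_days_per_year = 250      # assumption, not from the quote

saved = seconds_saved_per_op * ops_per_day * work_days_per_year
print(f"{saved} s/year saved, about {saved / 60:.0f} min, for ${upgrade_cost_usd}")
# -> 2500 s/year, roughly 42 minutes a year, which is the point the
#    quoted author makes about bursty GPU workloads.
```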
 
Adobe Lightroom can use a neural network to enhance details.
[...]
Note that the Vega 64 is up there with the RTX 2080Ti.

To be clear, while the Vega 64 is not too far behind the RTX 2080Ti in the test you link to, the two cards are not in the same league overall. The 2080Ti is a far better and more capable card. The Vega 64 competes more closely with a vanilla GTX 1080 or RTX 2070.
 
The 580 is far from being a high-end, powerful video card. At most I would say it's mid-tier, but I would even venture to say its performance is lower mid-tier, below a 1060, with the Vega 48 sitting between the 1060 and the 1070.

I could be wrong, as I'm far from an expert.

I read that the Vega 48 is basically a slightly more powerful 1060... not even halfway between the 1060 and the 1070.
 
Based on Geekbench OpenCL results:

580X: 119K
1060: 127K
Vega 48: 141K
1070: 158K

For me, that sits right in the middle of those two, maybe 5% closer to the 1060, but I would call it the middle point. There are other numbers out there, and they also seem to point to the middle ground.
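
A quick way to see where the Vega 48 lands in that range (using the Geekbench OpenCL scores above):

```python
# Where does the Vega 48's OpenCL score sit between the 1060 and 1070?
gtx_1060, vega_48, gtx_1070 = 127_000, 141_000, 158_000

position = (vega_48 - gtx_1060) / (gtx_1070 - gtx_1060)
print(f"{position:.0%} of the way from the 1060 to the 1070")
# -> about 45%, i.e. just shy of the midpoint and slightly closer
#    to the 1060, as described above.
```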
 
Based on Geekbench OpenCL results:
[...]
For me, that sits right in the middle of those two, maybe 5% closer to the 1060, but I would call it the middle point.
From the benchmarks I’ve seen in games so far on my Vega 48 vs. a 1060 and a 1070, I would say this is accurate. It’s okay. I could get about 48fps in older games like The Division at 1440p on ultimate settings. For newer games like Shadow of the Tomb Raider I got about the same, 47fps at 1440p, but on the high setting (not highest/ultimate). Oddly, dropping to the medium setting only raised the FPS by 1, to 48, so I don’t know what is going on with that. Maybe it’s thermally limited beyond a certain point?
 
From the benchmarks I’ve seen in games so far on my Vega 48 vs. a 1060 and a 1070, I would say this is accurate. [...] Maybe it’s thermally limited beyond a certain point?

That's quite helpful. In other games you can actually see the change when reducing the quality.

To be honest, 48fps on high for a modern game is quite decent, knowing the limitations of what you are buying and that it's not a gaming PC. It does seem that it won't age well and that three years down the line we will most probably need an eGPU or just have to live with it. But taking into account that I was gaming until last year on my 2011 6970M, I think I should be able to manage as long as the Vega 48 holds up for a while.

eGPUs are damn big, take up a lot of space I don't have, and mean more cables running around, which is exactly what I'm trying to avoid by getting an AIO. At that point I might as well buy a PC, as while I prefer macOS, I'm not restricted to it, so... most probably the Vega 48 it is.
 
  • Like
Reactions: macduke
That's quite helpful. In other games you can actually see the change when reducing the quality.
[...]
At that point I might as well buy a PC, as while I prefer macOS, I'm not restricted to it, so... most probably the Vega 48 it is.
Yeah, if I'm running into issues in 2-3 years I might just build a more compact (relatively speaking) gaming PC: something that can fit a full-size Nvidia card but is otherwise fairly compact, without a bunch of drive bays, using blade SSDs. I might tuck it away in a large storage closet that I have so it's not noisy. I've seen those Ethernet display extenders, and I think they have Cat 7 ones now that support 4K over 10Gbps, so I could run that over my ceiling and down the wall and cut out a box for it behind my desk. I kind of wish I had wired up Ethernet when I built my studio last summer, but I didn't think I needed it. Also, I bet it won't be long before I'll have 4K 120Hz and 10Gbps won't cut it anyway.
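
For what it's worth, the 10Gbps worry checks out on paper (a rough sketch; it assumes uncompressed 24-bit color and ignores blanking overhead, which would push the numbers even higher):

```python
# Rough uncompressed bandwidth for a 4K stream at various refresh rates.
# Assumes 8 bits per channel, 3 channels (24 bpp); real display links
# also carry blanking intervals, so actual requirements are higher.

width, height, bpp = 3840, 2160, 24

for hz in (60, 120):
    gbps = width * height * bpp * hz / 1e9
    print(f"4K @ {hz} Hz: {gbps:.1f} Gbps uncompressed")
# -> ~11.9 Gbps at 60 Hz and ~23.9 Gbps at 120 Hz, so 4K 120Hz is well
#    beyond a 10Gbps link unless the extender compresses the stream.
```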
 
Does anybody know of any gaming benchmarks directly comparing the 580X to the Vega 48 in the new iMacs? I would love to see these and have been patiently waiting...
 
Unboxed my i9 and ran benchmarks, alongside my old 2017.

2017: i5 3.8GHz, 32GB, 2TB Fusion, Radeon 580
2019: i9 3.6GHz, 40GB, 512GB SSD, Radeon 580X

Geekbench (single-core/multi-core):
2017: 5136/16587
2019: 6366/34308
OpenCL score (2017/2019): 119722/118202

Heaven Benchmark:
See below.

Have only had the system powered up for 20 minutes, but basically the same GPU performance.
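
For context, those two OpenCL scores differ by barely more than a percent; a quick check:

```python
# How close are the 580 (2017) and 580X (2019) OpenCL scores?
score_2017, score_2019 = 119722, 118202
print(f"{(score_2017 - score_2019) / score_2019:.1%} apart")
# -> 1.3%, small enough to call the two GPUs effectively equal here.
```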
 

Attachments

  • Screen Shot 2019-04-11 at 1.32.11 PM.png
  • Screen Shot 2019-04-10 at 10.02.41 PM.png
Unboxed my i9 and ran benchmarks, alongside my old 2017.
[...]
Have only had the system powered up for 20 minutes, but basically the same GPU performance.

Thanks for this. Could you please run Heaven on Windows? I'm collecting data and creating a topic with guidelines. If you could do one run with OpenGL and one with Direct3D 11, that would be great.
 