Adobe Lightroom can now use a neural network, the new "Enhance Details" feature, to pull more detail out of raw files.
https://www.engadget.com/2019/02/12/adobe-lightroom-cc-ai-enhance-details/
https://www.pugetsystems.com/labs/a...C-2019-Enhanced-Details-GPU-Performance-1366/
Note that in Puget's benchmarks the Vega 64 is up there with the RTX 2080 Ti.
AMD rates the Radeon Pro 580 at a theoretical 5.5 TFLOPS (FP32). But the Vega cards can run half-precision floats (FP16) at twice their FP32 rate ("Rapid Packed Math"), and FP16 happens to be the datatype of choice in AI inference.
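The appeal of FP16 for inference is that it halves the memory and bandwidth cost of every weight and activation, on top of the doubled math rate. A quick illustrative snippet (the array is just a stand-in for model weights, not anything Adobe actually ships):

import numpy as np

# A stand-in for one layer's weights; any large float array shows the point.
weights_fp32 = np.random.rand(1000, 1000).astype(np.float32)
weights_fp16 = weights_fp32.astype(np.float16)

print(weights_fp32.nbytes)  # 4000000 bytes
print(weights_fp16.nbytes)  # 2000000 bytes -- half the memory traffic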
No firm numbers on the Vega 48 used in the iMacs, unfortunately. But TechPowerUp lists a PC Vega 48 card that performs like this:
FP16 (half) performance: 15.97 TFLOPS
FP32 (float) performance: 7.987 TFLOPS
FP64 (double) performance: 499.2 GFLOPS
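For what it's worth, those numbers fall straight out of the card's shader count and clock. A back-of-the-envelope check in Python (the ~1300 MHz boost clock is my assumption, worked backwards from TechPowerUp's figure):

shaders = 48 * 64                  # 48 compute units x 64 shaders = 3072
boost_clock_hz = 1.3e9             # ~1300 MHz boost clock (assumed)
flops_per_shader_per_clock = 2     # one fused multiply-add = 2 FLOPs

fp32_tflops = shaders * flops_per_shader_per_clock * boost_clock_hz / 1e12
fp16_tflops = fp32_tflops * 2      # Rapid Packed Math: two FP16 ops per FP32 lane

print(round(fp32_tflops, 3))       # 7.987
print(round(fp16_tflops, 3))       # 15.974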
So even if the FP32 performance of an iMac's Vega 48 were merely comparable to a 580(X)'s, and if Adobe's Enhance Details uses FP16 instructions, the Vega would be roughly twice as fast at the AI enhancement, because it runs FP16 at double rate while Polaris does not.
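Spelling that arithmetic out (taking the worst case where the Vega 48 only matches the Pro 580's 5.5 TFLOPS FP32):

fp32_tflops = 5.5                # assumed equal for both cards (worst case)
polaris_fp16 = fp32_tflops       # 580/580X: FP16 runs at the FP32 rate (1:1)
vega_fp16 = fp32_tflops * 2      # Vega: Rapid Packed Math doubles FP16 throughput

print(vega_fp16 / polaris_fp16)  # 2.0 -> roughly twice as fast, on paper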
If you don't use Lightroom, I can't point to a Photoshop-specific use case at the moment.