If an application or process shows up in Activity Monitor as using the GPU, can one safely assume that the app or process is coded in a way that is optimized for the GPU (e.g., CUDA support for the GeForce GT 750M in the current 15" model), and that the same task would have taken longer if it had run on the integrated Iris Pro instead?
(For example, Adobe Acrobat Pro seems to use the discrete GPU when I OCR my PDFs. Is it operating more efficiently by using the GPU for this, or should I force it to use the integrated Intel GPU?)
Is the inference pattern "if an app or process automatically uses the GPU, it must be performing the given task in the most efficient way" valid?
If it isn't valid, how do you know if/when apps are making efficient/wise use of the dGPU?
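
For reference, the most I've figured out how to do myself is enumerate the two GPUs. Here's a minimal Swift sketch (assuming a macOS version with Metal available); the `isLowPower` flag distinguishes the integrated Iris Pro from the discrete 750M, but it only lists the devices and doesn't show which GPU a given process is actually running work on:

```swift
import Metal

// Minimal sketch: list the GPUs macOS exposes. On a dual-GPU 15" MacBook Pro
// this should print both the integrated Iris Pro (isLowPower == true)
// and the discrete GeForce GT 750M (isLowPower == false).
// Note: this only enumerates devices; it does not tell you which GPU
// a particular app or process is using for its work.
for device in MTLCopyAllDevices() {
    print("\(device.name), low power (integrated): \(device.isLowPower)")
}
```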