Yes, totally meant to type "CPU". As I mentioned in the prior post, LR does not use the GPU.
I just read the prior ones. I wrote the other post in chunks, so when I responded before, I didn't see that one until after hitting submit on my very lengthy post. I hadn't heard that claim about the supposed 70% speedup, but I've noticed with other Adobe applications that in actual use it takes a very significant difference between GPUs to produce a noticeable performance difference; the big gap is between a function running entirely on the CPU and that same function with extensive OpenCL calls.
In that sense the big difference comes down to whether a GPU is supported at all. I think with the current generation everything, with the possible exception of the MacBook Air, would make that cut, although the dedicated VRAM is nice to have. I'm still skeptical because you can never tell with Adobe, but it could be more significant for Lightroom than it was for Photoshop. Lightroom has more in the way of highly parallel floating point computation. It uses a gamma 1.0 (linear) version of the original ProPhoto RGB primaries for raw images, presumably transformed from camera-specific input profiles, and I'm inclined to assume the values are stored in floating point given the extended range of the data.
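To make the "highly parallel floating point" point concrete, here's a rough sketch of the kind of per-pixel work a raw conversion into linear ProPhoto involves. This is not Adobe's actual pipeline and the matrix coefficients are just placeholders, but the shape of the problem is the point: every pixel is independent, which is exactly what maps well onto OpenCL.

```c
#include <stddef.h>

/* Placeholder 3x3 matrix taking camera-native RGB to linear (gamma 1.0)
 * ProPhoto RGB. Real coefficients would come from the camera-specific
 * input profile; these values are purely illustrative. */
static const float cam_to_prophoto[3][3] = {
    {0.7977f, 0.1352f, 0.0313f},
    {0.2880f, 0.7119f, 0.0001f},
    {0.0000f, 0.0000f, 0.8249f},
};

/* Apply the matrix to every pixel. Each pixel is independent floating
 * point work, so in an OpenCL version each pixel (or tile) would simply
 * become its own work item. */
void to_linear_prophoto(const float *in, float *out, size_t pixel_count)
{
    for (size_t i = 0; i < pixel_count; ++i) {
        const float r = in[i * 3 + 0];
        const float g = in[i * 3 + 1];
        const float b = in[i * 3 + 2];
        for (int c = 0; c < 3; ++c) {
            out[i * 3 + c] = cam_to_prophoto[c][0] * r
                           + cam_to_prophoto[c][1] * g
                           + cam_to_prophoto[c][2] * b;
        }
    }
}
```

Multiply that by tens of megapixels per image and a chain of adjustment stages, and you can see why there's more to push onto the GPU than there was in Photoshop.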
I don't know the details, since Adobe hasn't published them and I haven't attempted to reverse engineer anything, but either way it would seem well suited to GPU-based computation. I suspect on Adobe's end they don't want all of this decided at runtime. OpenCL had some stability issues, and they obviously don't want slightly different output depending on whether the color-correction math runs on the GPU or the CPU, so they may have been waiting for the lowest system they intend to support to be capable of handling those functions on the GPU. That's assuming they don't limit it to things like filters, where the level of user-controlled precision is a bit coarser.
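The consistency worry is easy to illustrate, by the way: floating point addition isn't associative, so the same expression evaluated in a different order (a serial CPU loop versus a parallel GPU reduction, say) can round to slightly different values. A trivial, self-contained example:

```c
#include <stdio.h>

int main(void)
{
    /* Floating point addition is not associative: evaluating the same
     * sum in a different order changes the rounded result, which is
     * why identical source math on CPU and GPU can disagree in the
     * last bits. */
    float big = 1e20f, small = 1.0f;

    float left_to_right = (big + small) - big;  /* small is absorbed: result is 0 */
    float reassociated  = (big - big) + small;  /* result is 1 */

    printf("left to right: %g\n", (double)left_to_right);
    printf("reassociated:  %g\n", (double)reassociated);
    return 0;
}
```

That's an extreme case, but even tiny per-pixel discrepancies would be a problem if the same image could render differently on two machines.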
My responses are longer than intended today.