The article is basically accurate. A custom-built, liquid-cooled overclocked PC with an 8-core CPU will be much faster on Lightroom than a top-spec factory iMac 27. To a large degree this is because of how slow and inefficient the Lightroom code is.
I imported the same set of 42-megapixel raw stills from my A7RII into both Apple Photos 1.5 and Lightroom CC. With LR using previously built 1:1 previews, the full-screen browsing rate in the Library module is about 0.81 photos per second, i.e., roughly 1.2 seconds per photo.
In Apple Photos, the full-screen browsing rate is 6.4 photos per second, or about 7.9 times faster. This is on the exact same machine, a 2015 top-spec iMac 27 with a 1TB SSD and an M395X. It is roughly in line with the performance differences between Premiere CC and FCPX: the FCPX frame update rate when scrubbing an H.264 4K timeline is much faster, and FCPX exports H.264 about five times faster than Premiere.
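For anyone who wants to sanity-check those rates, here is a rough back-of-the-envelope sketch of the throughput math. The photo count and timings are placeholders standing in for an informal stopwatch test, not a formal benchmark:

```python
# Rough throughput math for a full-screen browsing test.
# The counts/timings below are placeholder values for an informal test,
# not an official benchmark.
photos = 100

lr_seconds = 123.0                      # Lightroom Library module, 1:1 previews pre-built
lr_rate = photos / lr_seconds           # ~0.81 photos/sec, i.e. ~1.2 sec per photo

photos_seconds = 15.6                   # Apple Photos, same raw files, same iMac
photos_rate = photos / photos_seconds   # ~6.4 photos/sec

speedup = photos_rate / lr_rate         # ~7.9x faster in Apple Photos
print(f"LR: {lr_rate:.2f}/s  Photos: {photos_rate:.2f}/s  speedup: {speedup:.1f}x")
```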
The Adobe Lightroom product manager recently publicly apologized for the poor quality of the software. It is so bad that *disabling* the GPU in LR preferences will often speed up operations. That is not a slow GPU; it is poorly written, poorly optimized code.
If you don't use high-megapixel raw stills you may not notice the difference. But if you are a professional wedding or event photographer shooting thousands of high-megapixel raw stills per session, even a top-spec 2015 iMac 27 can periodically feel a bit pokey in Lightroom. The bottleneck is not the disk subsystem, and even a Thunderbolt SSD RAID 0 array will not help; it is largely inefficient code in Lightroom.
In this demonstration Adobe showed 8x performance improvements in After Effects using Apple's Metal API and committed to bringing the same approach to Photoshop and other Adobe products. They have since backpedaled, and it is unclear whether these Metal-based enhancements will ever appear:
You can compensate with a hugely powerful custom-built PC, an option that does not exist on the Mac side unless you build a Hackintosh.