Who claims that? Isn't Cinebench R23 a good predictor of multiprocessing (at least on PC CPUs)?
Only for a very narrow subset of what "multiprocessing" covers.
When trying to use multiple cores, there are two important patterns in software. One is called "embarrassingly parallel", and the other is basically everything else.
Embarrassingly parallel problems are those where the workload can easily be split up between an arbitrary number of cores, and there are no dependencies between the worker threads running on each core.
The harder problem is accelerating algorithms which have dependencies between threads. This arises when multiple threads need to edit the same data structure (it's very important that they don't both try to mess with the same thing at the same time), when one thread needs an interim result from another, and so forth. Thread synchronization points are both difficult to get right and a source of scaling problems.
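To make the contrast concrete, here's a minimal sketch in C++. It isn't taken from any particular program and the names (scale_all, Histogram, count_all) are made up; it just shows an independent-chunks workload next to one where every thread has to funnel through a shared synchronization point.

```cpp
#include <cstddef>
#include <mutex>
#include <thread>
#include <vector>

// Embarrassingly parallel: each thread owns a disjoint slice of the data,
// so there is nothing to coordinate between threads.
void scale_all(std::vector<float>& data, unsigned nthreads) {
    std::vector<std::thread> workers;
    std::size_t chunk = data.size() / nthreads;
    for (unsigned t = 0; t < nthreads; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = (t + 1 == nthreads) ? data.size() : begin + chunk;
        workers.emplace_back([&data, begin, end] {
            for (std::size_t i = begin; i < end; ++i)
                data[i] *= 2.0f;              // touches no shared state
        });
    }
    for (auto& w : workers) w.join();
}

// Dependent threads: everyone updates the same structure, so each update
// must pass through a synchronization point (the mutex). That point is
// where the correctness bugs hide and where scaling flattens out.
struct Histogram {
    std::mutex lock;
    std::vector<int> bins = std::vector<int>(256, 0);
};

void count_all(const std::vector<unsigned char>& bytes, Histogram& h,
               unsigned nthreads) {
    std::vector<std::thread> workers;
    std::size_t chunk = bytes.size() / nthreads;
    for (unsigned t = 0; t < nthreads; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = (t + 1 == nthreads) ? bytes.size() : begin + chunk;
        workers.emplace_back([&bytes, &h, begin, end] {
            for (std::size_t i = begin; i < end; ++i) {
                std::lock_guard<std::mutex> g(h.lock);  // serializes all threads
                ++h.bins[bytes[i]];
            }
        });
    }
    for (auto& w : workers) w.join();
}
```

The first function keeps getting faster as you add cores; the second one spends most of its time with threads queued up on that one lock, no matter how many cores you throw at it.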
If the entire software world consisted of nothing but embarrassingly parallel throughput algorithms, that would be great! CPU manufacturers could pack big chips full of hundreds of efficiency cores, and these would provide far more compute throughput than today's CPUs. (The colloquial name for such a chip is a "flock-of-chickens" design. Do you want to pull your plow with a flock of chickens, or a big strong ox?)
Unfortunately, in the real world, embarrassingly parallel is not the only thing. It's still very important to provide high single thread performance. There's lots of software which either doesn't scale at all, or scales poorly beyond three or four cores, so you want at least some of the cores in the system to be as fast at single-threaded work as is practical.
Another factor is that the rise of GPGPU allows us to move lots of embarrassingly parallel work off to GPUs, which actually are flock-of-chickens designs. The availability of GPU compute is an argument for CPUs to remain things which are mostly optimized for serial computation. (This is even more true with unified memory SoC designs like Apple Silicon, where the penalty for moving data between CPU and GPU essentially doesn't exist.)
Cinebench is an embarrassingly parallel CPU benchmark. It would be perfectly happy with a flock-of-chickens CPU design. CPU raytracing algorithms usually cast rays backwards from the screen - you fire rays from pixel coordinates and trace bounces back until you find a light source. So you just divide the workload up into blocks of pixels and toss those work units at CPU cores. It scales extremely well, and the only thread-to-thread dependency is work assignment, which is one of the easiest multithreaded programming problems.
Cinebench even visualizes its workload splitting for you. It assigns each hardware thread a square block of pixels to render and draws a box around it. You should have as many boxes visible as you have hardware threads, and as each CPU core (or thread) finishes one box it gets assigned a new one.
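Cinebench's internals aren't public, but the general shape of that tile-based work assignment looks something like the sketch below. Everything here is hypothetical (the image and tile sizes, the render_tile stand-in); the point is that the only shared state the threads ever touch is a single "next unclaimed tile" counter.

```cpp
#include <algorithm>
#include <atomic>
#include <thread>
#include <vector>

constexpr int kImageW = 1280, kImageH = 720, kTile = 64;  // assumed sizes

// Stand-in for the actual per-pixel ray tracing. In a real renderer each
// tile traces rays for its own block of pixels and writes its own output,
// so tiles never depend on each other.
void render_tile(int x0, int y0, int x1, int y1) {
    (void)x0; (void)y0; (void)x1; (void)y1;  // placeholder body
}

void render_frame(unsigned nthreads) {
    const int tilesX = (kImageW + kTile - 1) / kTile;
    const int tilesY = (kImageH + kTile - 1) / kTile;
    const int total  = tilesX * tilesY;

    // The only synchronization point: "which tile hasn't been claimed yet?"
    std::atomic<int> next{0};

    std::vector<std::thread> workers;
    for (unsigned t = 0; t < nthreads; ++t) {
        workers.emplace_back([&] {
            // Each thread grabs a fresh tile as soon as it finishes one,
            // exactly like the boxes you see appear and disappear in Cinebench.
            for (int tile = next.fetch_add(1); tile < total;
                 tile = next.fetch_add(1)) {
                int tx = tile % tilesX, ty = tile / tilesX;
                render_tile(tx * kTile, ty * kTile,
                            std::min((tx + 1) * kTile, kImageW),
                            std::min((ty + 1) * kTile, kImageH));
            }
        });
    }
    for (auto& w : workers) w.join();
}
```

Because the only contended operation is one atomic increment per tile, and a tile takes many milliseconds to render, contention is effectively zero - which is exactly why this kind of benchmark scales almost linearly with core count and tells you very little about the messier, dependency-heavy multithreaded code most software actually runs.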