No, I’m saying hardware encoding is very fast, and getting a faster CPU doesn’t necessarily help, as the design of the hardware encoder block might be similar.
Software encoding is much slower, but its speed depends on how fast the CPU is.
E.g. a slow CPU might software encode in 4 minutes whereas a fast CPU can do it in 2 minutes, but a hardware encoder might do it in 1 minute (even on that first, slow CPU).
Apple’s SoCs have had hardware h.264 encoders for a very long time now, and h.265 8-bit hardware encoders since the A10. (Actually earlier, but they didn't allow anyone to use it.) There is no h.265 10-bit hardware encoder though, so if you had to encode that, it would be all software-based and slow.
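To make the hardware-vs-software distinction concrete, this is roughly how you'd choose between the two paths in ffmpeg on a Mac. This is just a sketch: `hevc_videotoolbox` is ffmpeg's wrapper around Apple's hardware HEVC encoder, `libx265` is the software encoder, and the input filename and bitrate are placeholders.

```shell
# Hardware HEVC encode via Apple's VideoToolbox block (fast, low power).
# The hardware encoder is bitrate-driven, so we give it a target with -b:v.
ffmpeg -i input.mov -c:v hevc_videotoolbox -b:v 8M -tag:v hvc1 out_hw.mp4

# Software HEVC encode via libx265 (slow, CPU-bound, scales with core count).
# 10-bit output like this has to take the software path on chips
# without a 10-bit hardware encoder.
ffmpeg -i input.mov -c:v libx265 -crf 22 -pix_fmt yuv420p10le -tag:v hvc1 out_sw.mp4
```

The `-tag:v hvc1` part just makes the resulting files play nicely in QuickTime.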
For example, I found this benchmark that compares a number of Intel and AMD CPUs with and without hardware encoding, in Adobe Premiere.
Of interest is the inclusion of the i9-7980XE and the i7-8700K. The i9 has 18 cores (!) at 2.6 GHz with Turbo Boost to 4.4 GHz. The i7 has 6 cores at 3.7 GHz with Turbo Boost to 4.7 GHz. One key difference though is that the i9 has no hardware video encoder at all, whereas the i7 has a robust hardware video encoder. For general CPU work, the i9 is much, much faster than the i7... but the same is not necessarily true with video encoding:
i9-7980XE software encoder: 30 minutes
i7-8700K hardware encoder: 33 minutes
i7-8700K software encoder: 46 minutes
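To put those three results in perspective, here's a quick back-of-the-envelope comparison (the times are the ones quoted above; nothing else is assumed):

```python
# Encode times from the benchmark above, in minutes.
times = {
    "i9-7980XE software": 30,
    "i7-8700K hardware": 33,
    "i7-8700K software": 46,
}

# Compare everything against the i7's own software encode.
baseline = times["i7-8700K software"]
for name, minutes in times.items():
    print(f"{name}: {minutes} min ({baseline / minutes:.2f}x vs i7 software)")
```

The takeaway: the 6-core i7's hardware encoder (1.39x) gets within striking distance of the 18-core i9's software encode (1.53x), despite the enormous gap in raw CPU horsepower.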
The other benefit is that hardware decoding and encoding use much, much less power, so you get better battery life.
BTW, this picture illustrates why I waited until 2017 to update my MacBook and iMac. While I didn't do it for VP9, it applies to various aspects of h.265 HEVC as well.
Software decoding with this chip uses 6 Watts of CPU+GPU power with 70-90% CPU utilization. Hardware decoding on the chip that replaces it uses only 0.8 Watts of CPU+GPU power with 5-25% CPU utilization, even though general CPU performance of these two chips is otherwise relatively similar.
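For a sense of scale, here's what that power gap means over a feature-length film. The 6 W and 0.8 W figures are the ones above; the 2-hour runtime is just an assumption for illustration:

```python
# Decode power draw from the comparison above, in watts.
sw_watts = 6.0   # software decode, older chip
hw_watts = 0.8   # hardware decode, newer chip
hours = 2.0      # assumed movie length (illustrative)

# Energy drained from the battery over the whole movie, in watt-hours.
sw_energy = sw_watts * hours
hw_energy = hw_watts * hours

print(f"software decode: {sw_energy:.1f} Wh")
print(f"hardware decode: {hw_energy:.1f} Wh")
print(f"hardware draws {sw_watts / hw_watts:.1f}x less power")
```

On a MacBook with a ~40 Wh battery, that's the difference between a movie eating a third of your charge and barely denting it.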
I mentioned before that my 2017 Core m3 MacBook (Geekbench score 7000) can decode one example 4K 10-bit HEVC video with 25% CPU usage (hardware decode), whereas a 2015 Core i7-6700K (Geekbench score 18000) can't do it cleanly even with 100% CPU usage (software decode). IOW, with this very specific test, my Core m3 MacBook beats all 2015 iMacs and all 2016 MacBook Pros, because my little MacBook has a hardware decoder to handle the job whereas all the pre-2017 models do not.