
MuckSavage70
macrumors member, original poster
Figured I would post these results, as some folks are looking for some "real world" Intel vs. Apple Silicon results.

Encoding an MKV file using Handbrake, Super HQ 2160 4K HEVC Surround preset:

Windows - 32 GB RAM / i5-12400F, 6 cores at 4 GHz - 5 hours 59 minutes, average speed 8.57 fps
Apple Mac mini M2 Pro - 16 GB RAM / 10 cores, 6 performance at 3.5 GHz, 4 efficiency at 2.5 GHz - 6 hours 1 minute, average speed 8.51 fps

No other apps running on either machine.
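
For anyone who wants to reproduce a run like this from the command line, here is a rough Python sketch that times a HandBrakeCLI software encode and works out the average fps. The paths, preset name, and frame count are placeholders, not the exact values from the run above; check the preset names on your build with HandBrakeCLI --preset-list (HandBrake's own log also reports an average speed when a job finishes).

# Rough sketch: time a HandBrake software (x265) encode and report average fps.
# Assumes HandBrakeCLI is on the PATH; paths, preset name, and frame count are
# placeholders -- adjust them for your own source file and build.
import subprocess, time

INPUT = "movie.mkv"                          # placeholder source
OUTPUT = "movie-x265.mkv"                    # placeholder destination
PRESET = "Super HQ 2160 4K HEVC Surround"    # verify exact name with --preset-list
TOTAL_FRAMES = 180_000                       # placeholder: source fps x duration

start = time.monotonic()
subprocess.run(["HandBrakeCLI", "-i", INPUT, "-o", OUTPUT, "--preset", PRESET],
               check=True)
elapsed = time.monotonic() - start
print(f"elapsed: {elapsed / 3600:.2f} h, average speed: {TOTAL_FRAMES / elapsed:.2f} fps")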
 
Back when I got my M1 MM, I posted some real world comparisons between the 2nd, 3rd, and 4th gen i7s and the M1 Mac mini.

I did multiple encodes, varying in difficulty, and the M1 beat all the Intel chips on every test, but I was a little disappointed in how well the old Intel chips did against a new AS chip.

Worse yet, as the difficulty increased, the gap between the Intels and the M1 closed. I never tested it, but I wanted to do something crazy difficult and see if the added stress would have allowed the 3rd or 4th gen i7 to outperform the M1.

When I did HW encoding using Handbrake, the M1 destroyed the Intels, but I don't like the HW encodes.

I ordered the 12c M2 Pro MM today and will be doing some Handbrake comparisons again with all my 2011 to 2023 Macs; maybe I will post the results here.
 
Did you use VideoToolbox option for M2 testing?
 
Depends on how you want your finished file. Anything with VideoToolbox in the name will be using hardware encoding.

My base model M1 mini was getting 20-24 frames per second encoding 4K using the VideoToolbox option.
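
If you want to be explicit about which path you are testing rather than relying on preset names, you can select the encoder directly. A hedged sketch: the encoder names x265 and vt_h265 are what recent macOS Handbrake builds expose (verify with HandBrakeCLI --help), and the input path is a placeholder.

# Sketch: run the same source through the software (x265) and hardware
# (VideoToolbox) HEVC encoders so the two paths can be compared explicitly.
import subprocess

INPUT = "movie.mkv"  # placeholder
for label, encoder in [("software", "x265"), ("videotoolbox", "vt_h265")]:
    subprocess.run(["HandBrakeCLI", "-i", INPUT, "-o", f"movie-{label}.mkv",
                    "--encoder", encoder],
                   check=True)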
 
Be careful: if you enable hardware encoding, the comparison becomes sort of meaningless, because the encode then depends on the GPU or dedicated media hardware rather than the CPU (the M2 also has a dedicated video encoder). If you want to compare CPU performance, make sure to use pure software encoders.
 
If the point is to compare how fast you can get something done (real world performance), then yeah I think it should be enabled.
 
Keep in mind that Nvidia or AMD GPUs are usually installed in a video editor’s Intel machine to improve performance, but you can’t do that with an M1 or M2. I don’t think it’s fair to compare an SoC to a CPU. Nothing will stop you from doing the comparison, though.
 
If you want to compare CPU performance, make sure to use pure software encoders.
That's not exactly fair. On Intel, the encoders have been optimized for SSE instructions for years; I doubt they did the same with NEON on ARM. It might even be that Handbrake runs faster under Rosetta than as a native ARM build ;) In that sense it is fairer to use the M1's dedicated HW for video encoding.
 
There is at least some NEON code in Handbrake. For example, the x264 codec uses NEON on ARM, but those optimizations go back as far as the A4, so it likely isn't taking advantage of much of the performance that M1/M2-specific tuning would bring.

It sounds like the x265 project has added more extensive optimizations. From the x265 blog on June 15, 2020:
x265 contains a significant amount of assembly optimization for its compute kernels, which enables speed-ups on the order of 5x when compared to running the pure C code. While the support for the x86 architecture is extensive (there exist kernels right from SSSE3 all the way up to AVX512), support for other architectures such as ARM is limited. Up until now, x265's support for the ARM architecture, for example, has been limited to the ARMv7 architecture (32-bit).

In the recently released v3.4 of x265, a fresh set of hand-tuned assembly implementations of some compute-intensive kernels for the 64-bit ARM architecture (aarch64) has been introduced. On average, the kernels speed up the encode by 10%, with up to 21% acceleration for the default medium preset.

So anyone wanting to really test CPU based compression should be using newer codecs like x265 and not x264 with its older NEON optimizations.
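
If anyone wants to see what that hand-written assembly is actually worth on their own machine, the standalone x265 CLI can be told to ignore the detected CPU capabilities. A rough sketch, assuming an x265 binary on the PATH and a short y4m test clip (clip.y4m is a placeholder):

# Sketch: time x265 with its assembly kernels (NEON on Apple Silicon, SSE/AVX
# on Intel) and again with --no-asm, which falls back to the plain C code.
import subprocess, time

def encode(extra_args):
    start = time.monotonic()
    subprocess.run(["x265", "--input", "clip.y4m", "--preset", "medium",
                    "--frames", "500", "--output", "/dev/null", *extra_args],
                   check=True)
    return time.monotonic() - start

with_asm = encode([])               # CPU capabilities auto-detected
without_asm = encode(["--no-asm"])  # pure C kernels only
print(f"assembly speed-up: {without_asm / with_asm:.2f}x")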
 
I don’t think comparing HW encoding to SW encoding is necessarily "fair" or "unfair", as long as the context is given and there is an understanding that there are big differences between the two.

For me, and I imagine for most people using Handbrake, SW encoding is what is important.

HW encoding is very fast, but the encode will have a larger file size than even really fast SW encode settings produce.

If you use slower preset settings for a SW encode, the file size can be a fraction of the size of a HW encode at similar quality.

Another problem with comparing SW encoding to HW encoding in Handbrake is that it is nearly impossible to match settings closely enough to get consistent quality between the encodes.

Now, it does make sense to compare HW encoding on one machine to HW encoding on another machine.

Still, unless you want to encode something fast and file size isn’t important at all, HW encoding isn’t worth it on Handbrake.

I think transcoding is where HW encoding is really useful.
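
To put rough numbers on the file size difference, you can run the same source through both encoders and compare the outputs. This is only illustrative: as noted above, the quality scales are not equivalent between x265 (RF) and VideoToolbox, so the quality values below are placeholders you would have to tune by eye.

# Rough sketch: encode the same source with a slow software preset and with the
# VideoToolbox hardware encoder, then compare output sizes. Paths, quality
# values, and encoder names are placeholders/assumptions -- verify against
# your own Handbrake build.
import os, subprocess

INPUT = "movie.mkv"  # placeholder
jobs = {
    "sw-x265.mkv": ["--encoder", "x265", "-q", "22", "--encoder-preset", "slow"],
    "hw-vt.mkv":   ["--encoder", "vt_h265", "-q", "55"],
}
for out, opts in jobs.items():
    subprocess.run(["HandBrakeCLI", "-i", INPUT, "-o", out, *opts], check=True)
    print(f"{out}: {os.path.getsize(out) / 2**30:.2f} GiB")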
 
So anyone wanting to really test CPU based compression should be using newer codecs like x265 and not x264 with its older NEON optimizations.
The x265 ARM optimization's 10-20% speed-up is a joke. A bad joke, particularly if you compare it to 5x on Intel. I would expect at least 2x from NEON. We have to wait for ARMv9 (M3 maybe?), where vectors can be really wide, for a nice speed-up. However, software devs must spend some time on it. In that sense Qualcomm's new CPUs could help a lot, since the Windows world also gets power-efficient CPUs, and that pushes devs to spend far more time on ARM.
 
I'm pretty sure that 10%-20% improvement is comparing the hand-tuned assembly to the compiled C version. NEON was designed to work well with compiled code, so that speed-up is actually pretty good.

I'm also guessing that the 5x is coming from AVX-512, which isn't widely available.
 
Yeah, got to say my next desktop is going to be an i7-13700. I can build a decent quality 32 GB / 1 TB desktop for 40% of the cost of an M2 Pro box.
 
I'm not sure I watched any performance benchmarks of the 13th gen chips when they were released; I was travelling at the time. How do they perform?

I was surprised to watch this video the other day and see one of the 13th gen mobile chips offer over TWICE the performance of the M2 Max in Cinebench. I know Cinebench =/= Handbrake performance, but surely that'd be some kind of indicator?

I thought about making a thread about it for discussion, but I wasn't sure where to post it.
 
I have a 12850HX and wouldn't want to cram an Intel chip with that sort of power into a laptop, to be honest. They get hot. Also, mine gets very noisy. I'm sure the M2 Max does as well, but this thing sounds like it's going to take off.

If I want compute, it's going to be in a desktop machine where I can have very large, very slow fans doing the same job.

But yeah, the 13th gen stuff from Intel is impressive. The power consumption is of course an issue, but realistically that's well offset by the reduction in cost.
 