If you render a 2-minute 3D animation (3,600 frames at 30 fps) at 1 minute per frame on a PC and roughly 10 minutes per frame on a Mac, that's:
PC: 60 Hour Render
Mac: 600 Hour Render
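For anyone checking the arithmetic, here's where those numbers come from (the 30 fps frame rate and the ~10x per-frame gap are assumptions chosen to match the figures above, not benchmarks):

```python
# Back-of-the-envelope render-time math.
# Assumed: 30 fps animation, Mac roughly 10x slower per frame than the PC.
fps = 30
animation_minutes = 2
frames = animation_minutes * 60 * fps            # 3,600 frames

pc_minutes_per_frame = 1
mac_minutes_per_frame = 10

pc_hours = frames * pc_minutes_per_frame / 60    # 60 hours
mac_hours = frames * mac_minutes_per_frame / 60  # 600 hours

print(f"{frames} frames -> PC: {pc_hours:.0f} h, Mac: {mac_hours:.0f} h")
```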
It's the difference between being able to render locally and being forced onto a render farm. In that respect, it's definitely not fast enough.
The mistake you're making is explaining a point intended for business users to enthusiast fanbois.
I'm using my Mac Mini for work, and for the minor work I do in Blender (mostly verifying work I start in another app) it's adequate; an upgrade isn't really necessary.
However, for heavy baking and the like we have a couple of machines with Nvidia GPUs running. A 10x difference, or even a 2x difference, is worth thousands of dollars in a given month.
If I were just rendering 3D objects for fun and ran Blender overnight, the difference between 2 hours with an M1 and 12 minutes with an Nvidia GPU wouldn't matter much.
However, if the difference is doing 5 $20k jobs a quarter or 50 $20k jobs a quarter, we're talking about losing millions of dollars a year.
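To put a rough number on "millions of dollars a year" (using only the hypothetical job count and price from the sentence above):

```python
# Hypothetical figures: $20k per job, 5 jobs vs 50 jobs per quarter.
job_value = 20_000
quarters_per_year = 4
jobs_slow = 5
jobs_fast = 50

lost_per_year = (jobs_fast - jobs_slow) * job_value * quarters_per_year
print(f"Revenue left on the table: ${lost_per_year:,} per year")  # $3,600,000
```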
Do we know the reasons behind the big gap in rendering performance between the M1 Max and the Nvidia GPUs in its weight class? It seems crazy to me that we'd see such a big difference in GPU rendering performance when other GPU tasks put the M1 Max in mobile RTX 3070 territory.
Seems like the big gap could be due to a couple different things:
1. Hardware ray tracing: if the rendering engine makes use of it, it can drastically reduce render times. IMO it was a big miss for Apple not to include ray-tracing hardware in a pro-level chip. If this is the cause, the gap may persist until the M2 or perhaps M3 generation.
2. Rendering engines are just more optimized for Nvidia chipsets. If this is the case, there's some hope that the rendering performance gap will shrink as software creators optimize for Apple Silicon (see the sketch below for how Cycles picks its compute backend).
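On point 2, part of the "optimized for Nvidia" story is simply which Cycles backend gets used: OptiX leans on Nvidia's RT cores, while the Metal backend (added for Apple Silicon in Blender 3.1) has no hardware ray-tracing path on M1-class chips. Here's a minimal sketch of how a render script selects the backend, assuming the Blender 3.x Python API; treat the exact property names as version-dependent:

```python
import bpy

def enable_gpu(backend: str = "OPTIX") -> None:
    """backend: 'OPTIX' or 'CUDA' on Nvidia hardware, 'METAL' on Apple Silicon (Blender 3.1+)."""
    prefs = bpy.context.preferences.addons["cycles"].preferences
    prefs.compute_device_type = backend
    prefs.get_devices()              # refresh the list of detected devices
    for device in prefs.devices:
        device.use = True            # enable every detected device
    bpy.context.scene.cycles.device = "GPU"

# e.g. enable_gpu("METAL") on an M1 Max, enable_gpu("OPTIX") on an RTX card
```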
Great questions. Hopefully Apple's involvement in Blender development will answer them.
1. True. Hopefully not, but very likely. If they honestly missed it and are only starting now, it likely won't show up in a chip for a couple of years at the earliest. If they have the hardware but don't know how to implement it on the software side, we might be in luck. Not impossible, but not likely.
2. Exactly my hope. With Apple making $2 billion every week, you'd think they'd have a few million left over for software optimization.