How can a native Mac M1 render engine be bad? How can a reduction in render time from 26 minutes to less than a minute be bad? I agree, this MR article is poor on details, but does that really matter? Absolute render time is much, much lower on a Mac Pro.

Any link to the article?
Reduction in render times is always good, but there’s no way this is down to Apple hardware, especially when it can’t compete with 3090 cards. In the article they’re comparing CPU rendering to GPU rendering. This article is hugely misleading and makes it sound like Redshift on a Mac is the reason for such an insane improvement in render times when it’s not at all. MacRumors’ article is a joke.
 
I'll tell you the same as I told the other fellow. Just go to Lunar's website and read the entire article. They're completely transparent, and never did they say it was a comparison of Redshift to Redshift. This is the FIRST time Redshift is native on a Mac. As very clearly demonstrated in the article on their website, they were comparing their render times via CPU before Redshift and Octane were available options vs. Redshift and Octane. They have separate examples for both cases. All ya have to do is go read the actual blog on their website lol. You're welcome.
It’s funny how this article doesn’t link to their article. MacRumors is trying to make it sound like it’s down to how amazing Apple hardware is, when it’s purely down to switching to GPU rendering. It’s like me using the Physical renderer and then firing up my 3090s and saying how much faster it is on Windows. It’s nothing to do with the platform; you’re just using a faster render engine.
 
Funny, I can tell a Redshift render because it’s not using physics like Octane. Big difference. Octane is easier to use too. But the bloody thing crashes like a MF
On a recent project I explored using ACES and looked at using Octane. It is ever so slightly nicer looking, but if you learn how to use Redshift you can get really incredible results, and it rarely crashes.
 
On a recent project I explored using ACES and looked at using Octane. It is ever so slightly nicer looking, but if you learn how to use Redshift you can get really incredible results, and it rarely crashes.
I hear what you say and you couldn’t be more right - Octane is a bitch for crashing. There are some glitches, for example the node editor (it’s crap when compared to Maxon’s beautiful UX).

But! It’s unbiased, so it’s not “pretending” - you’re getting physically accurate renders. That means you can play with the real physics of light rather than approximating it.

That also means it’s easier to use, as the Redshift learning curve is artificially steep because it’s an artificial process.

I find it fast as F too.

If only OTOY would stop releasing beta software and calling it stable. What a laugh - it’s getting a little better but you’d better have auto save set to two minutes cause this bitch crashes like a drunk on a hyperbike in a hurry to get home for a piss.
 
It's not Apple's "amazing" hardware at work, it's code optimization. People here think the M1s have some legit graphics chips, lmao; it's about as good as the old-as-heck GTX 1050, which is pathetic in 2021.

For a 10-watt GPU, it’s fairly “legit” :) Nvidia or AMD need like 3 times more power to deliver comparable performance.
 
For a 10-watt GPU, it’s fairly “legit” :) Nvidia or AMD need like 3 times more power to deliver comparable performance.
That’s all fine, and yes, we all think the M1 GPU is great for an entry-level laptop, but I want to see if Apple delivers something similar to a 3080/90.

I highly doubt they will [even if they can] based on their previous incarnations. What they will do is create mid-range solutions that most people think are awesome because Safari is smooth and FCP is awesome. TBH I couldn’t care less what wattage my chips use in a desktop - it’s there to crank out the work.
For a laptop, yes, minimum wattage for the best performance is great.
 
That’s all fine, and yes, we all think the M1 GPU is great for an entry-level laptop, but I want to see if Apple delivers something similar to a 3080/90.

I guess it depends on how far Apple wants to go... they have the technology and the money. There is no practical reason why Apple would not be able to build a GPU the size of, say, Ampere... and since Apple has an edge in both the process and perf/watt, they could potentially extract more performance from it. E.g. a 128-core GPU should fit on a 700mm2 or smaller die, consume under 250W of power (including RAM) and deliver a peak of more than 40 TFLOPS...

Given the niche status of the Mac Pro, I doubt that building a really large GPU would in itself be profitable for Apple. But then again, they could do it just for the bragging rights anyway. The recently announced Nvidia Grace gives a good idea of where such systems are headed: large CPUs and GPUs with a very fast interconnect and high-bandwidth unified memory.
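Roughly how I get to those numbers: a back-of-envelope sketch in Python, assuming the M1’s 8-core GPU peaks around 2.6 FP32 TFLOPS at roughly 10W and that throughput and power scale linearly with core count, which is optimistic and not a real spec.

```python
# Hypothetical scaling exercise: extrapolate the M1's 8-core GPU linearly to a
# 128-core part. The inputs are approximations, not official figures.
M1_CORES = 8
M1_TFLOPS = 2.6    # approximate FP32 peak of the M1 GPU
M1_WATTS = 10      # approximate GPU power draw mentioned earlier in the thread

TARGET_CORES = 128
scale = TARGET_CORES / M1_CORES            # 16x more cores

print(f"peak throughput: ~{M1_TFLOPS * scale:.1f} TFLOPS")   # ~41.6 TFLOPS
print(f"GPU power:       ~{M1_WATTS * scale:.0f} W")         # ~160 W, before RAM/interconnect
```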
 
I hear what you say and you couldn’t be more right - Octane is a bitch for crashing. There are some glitches, for example the node editor (it’s crap when compared to Maxon’s beautiful UX).

But! It’s unbiased, so it’s not “pretending” - you’re getting physically accurate renders. That means you can play with the real physics of light rather than approximating it.

That also means it’s easier to use, as the Redshift learning curve is artificially steep because it’s an artificial process.

I find it fast as F too.

If only OTOY would stop releasing beta software and calling it stable. What a laugh - it’s getting a little better but you’d better have auto save set to two minutes cause this bitch crashes like a drunk on a hyperbike in a hurry to get home for a piss.
Haha that’s a great metaphor for Octane’s reliability.

I’m about to use Octane (potentially) for an automotive project, as there are loads of close-ups of cars and I want them to look as realistic as possible. One great way of explaining Redshift’s approach is that it aims for the most accurate-looking renders but isn’t unbiased, so it can cut a few corners to speed up the render while still delivering an awesome result.

I’ve yet to use Octane with my new 3090s, so I’ll be able to see what it’s like in terms of speed; it’s been a few years since I’ve used it on a client project.
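To put the biased/unbiased point above in concrete terms, here’s a toy sketch (plain Python, made-up numbers, nothing to do with either engine’s actual code): an unbiased estimator averages raw light samples and converges on the true value but stays noisy at low sample counts, while a biased shortcut like clamping bright “firefly” samples gives a much cleaner result that is systematically a touch too dark. That trade is roughly the kind of corner a biased renderer is allowed to cut.

```python
import random

# Toy "pixel": most light paths return a dim value, a few return a very bright one
# (the classic source of firefly noise in path tracing).
def sample_radiance():
    return 10.0 if random.random() < 0.02 else 0.5

TRUE_VALUE = 0.98 * 0.5 + 0.02 * 10.0   # = 0.69

def unbiased_estimate(n):
    # Plain Monte Carlo average: converges to TRUE_VALUE, but noisy at low n.
    return sum(sample_radiance() for _ in range(n)) / n

def biased_estimate(n, clamp=2.0):
    # Clamp bright outliers before averaging: much less noise at the same n,
    # but systematically a little too dark -- bias traded for a cleaner image.
    return sum(min(sample_radiance(), clamp) for _ in range(n)) / n

random.seed(0)
print("true value:", TRUE_VALUE)
for run in range(3):
    print(f"run {run}: unbiased={unbiased_estimate(64):.3f}  biased={biased_estimate(64):.3f}")
```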
 
Reduction in render times is always good, but there’s no way this is down to Apple hardware, especially when it can’t compete with 3090 cards. In the article they’re comparing CPU rendering to GPU rendering. This article is hugely misleading and makes it sound like Redshift on a Mac is the reason for such an insane improvement in render times when it’s not at all. MacRumors’ article is a joke.
“Can’t compete with 3090”? Even in this test, where the Mac Pro gets a sub-minute render? Everyone knows there is no Apple CPU/GPU in the Mac Pro. Who cares if it is GPU or CPU? You use the fastest option, given that all functions are supported, so it comes down to software. Agreed, this MR article is trash.
 
On the bright side: the 3D render software houses are now at least trying to optimize for the hardware in a Mac. That is novel. Even the M1 gets some love.
 
“Can’t compete with 3090”? Even in this test, where the Mac Pro gets a sub-minute render? Everyone knows there is no Apple CPU/GPU. Who cares if it is GPU or CPU? You use the fastest option, given that all functions are supported, so it comes down to software. Agreed, this MR article is trash.
The issue with this article is that it makes out like using a Mac has suddenly brought render times down to a ridiculously quick level. It hasn’t. They changed render engines from a CPU-based one to a GPU-based one. This has nothing to do with the macOS platform, but with the tech inside the render engines. If the article made this clear, it wouldn’t be an issue, but they’re misleading people by heavily implying it’s down to switching to Macs. When I switched from Cinema 4D’s native CPU-based render engine to Octane, which is GPU-based, I had similar drops in render times, but that had nothing to do with the OS.
 
I'll tell you the same as I told the other fellow. Just go to Lunar's website and read the entire article. They're completely transparent, and never did they say it was a comparison of Redshift to Redshift. This is the FIRST time Redshift is native on a Mac. As very clearly demonstrated in the article on their website, they were comparing their render times via CPU before Redshift and Octane were available options vs. Redshift and Octane. They have separate examples for both cases. All ya have to do is go read the actual blog on their website lol. You're welcome.

What's crazy to me is that a production studio would be fine with wasting 25 minutes per frame rendering content that they could have done in a fraction of the time if they'd just used a PC with an Nvidia GPU installed instead of using CPU rendering on a Mac Pro. I understand why people like macOS and Mac hardware, but that is ridiculous.
 
The issue with this article is that it makes out like using a Mac has suddenly brought render times down to a ridiculously quick level. It hasn’t. They changed render engines from a CPU-based one to a GPU-based one. This has nothing to do with the macOS platform, but with the tech inside the render engines. If the article made this clear, it wouldn’t be an issue, but they’re misleading people by heavily implying it’s down to switching to Macs. When I switched from Cinema 4D’s native CPU-based render engine to Octane, which is GPU-based, I had similar drops in render times, but that had nothing to do with the OS.
Not for me; it just tells me that the Mac Pro has a fast hardware/software combination for rendering. How fast would a 3090 be for that same scene?
 
Not for me; it just tells me that the Mac Pro has a fast hardware/software combination for rendering. How fast would a 3090 be for that same scene?
I know this isn't quite an apples-to-apples comparison, but:
On my iMac Pro, rendering the BMW test scene in Blender using CPU rendering (Blender removed OpenCL rendering support, which was the only way you could GPU render on a Mac with Blender's Cycles renderer) took somewhere around 3.5 minutes. On my PC with a 3080 using CUDA/OptiX, that same scene takes 12 seconds to render.
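If anyone wants to reproduce that comparison, the device switch is scriptable; here's a minimal sketch of how I'd flip Cycles between CPU and CUDA/OptiX from Blender's Python console (assuming a Blender build of that era where these preference names apply; there's no Metal backend, which is why the Mac run is CPU-only):

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'

cycles_prefs = bpy.context.preferences.addons['cycles'].preferences
# 'OPTIX' or 'CUDA' on an Nvidia PC; neither backend exists on a Mac,
# which is why the iMac Pro numbers above are CPU-only.
cycles_prefs.compute_device_type = 'OPTIX'
cycles_prefs.get_devices()            # refresh the detected device list
for dev in cycles_prefs.devices:
    dev.use = True

scene.cycles.device = 'GPU'           # set to 'CPU' for the Mac-side run
bpy.ops.render.render(write_still=True)
```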
 
I know this isn't quite an apples-to-apples comparison, but:
On my iMac Pro, rendering the BMW test scene in Blender using OpenCL GPU rendering took somewhere around 3.5 minutes. On my PC using a 3080, that same scene takes 12 seconds to render.

So, unless Metal ends up being around 17X faster than OpenCL, I wouldn't expect miracles.
Awesome - I should have hit ‘refresh’ :)

But yep, we run Macs for the fun stuff, and PCs to do the heavy GPU rendering [and we need to upgrade those to 3090s, as we’re running out of memory on our 2080 Supers]
 
Awesome - I should have hit ‘refresh’ :)

But yep, we run Macs for the fun stuff, and PCs to do the heavy GPU rendering [and we need to upgrade those to 3090s, as we’re running out of memory on our 2080 Supers]
I had to edit my post because I forgot that Blender removed OpenCL rendering. Prior to that, the BMW benchmark was even slower on the iMac Pro GPU... like 20-30 minutes, if I remember right. But all the later benchmarks I did were on the iMac Pro CPU (8-core).
 
I know this isn't quite an apples-to-apples comparison, but:
On my iMac Pro, rendering the BMW test scene in Blender using CPU rendering (Blender removed OpenCL rendering support, which was the only way you could GPU render on a Mac with Blender's Cycles renderer) took somewhere around 3.5 minutes. On my PC with a 3080 using CUDA/OptiX, that same scene takes 12 seconds to render.

I had to edit my post because I forgot that Blender removed OpenCL rendering. Prior to that, the BMW benchmark was even slower on the iMac Pro GPU... like 20-30 minutes, if I remember right. But all the later benchmarks I did were on the iMac Pro CPU (8-core).
Jesus, the difference in speed is just... wow, ridiculous. On the OpenCL part, do you mean the OpenCL version, when it was still supported, was even slower than today’s Cycles CPU mode?

Anyway, I really hope Cycles and co. find their way to Metal GPU support somehow and bring that time gap closer. I don’t care if it is twice as long for now, say 25 seconds for that BMW scene, because as of now it’s like 20x slower...
 
I mean, if you just go to Lunar’s website, they did an incredibly extensive dive into it, with a case study and full screen-recorded videos showing this exact thing in real time. There’s nothing suspect or misleading about it. Go to their website, click the news section and read the article. I’m not sure why people just say stuff without doing this, but you’ll be much more educated on the subject once you’ve done so than you are now, which will help with future opinions you may share :)
Can you link the article you mention? I would love to take a look at what it is about. I did a quick search on my phone (“Lunar Redshift”), but it gave some results and even a Rene Ritchie YouTube video on this.
Is it the “Lunar Animation” case study about GPU rendering with a Mac Pro across many DCC tools and renderers?
 
Not for me; it just tells me that the Mac Pro has a fast hardware/software combination for rendering. How fast would a 3090 be for that same scene?
But it’s not the Apple hardware doing the rendering; it’s just the GPUs, and the GPUs listed are available on PC. They’re also slower than Nvidia’s 3090 cards. It’s impossible to know how quickly they would render on a 3090, but they would definitely be quicker. Here’s a comparison

The article shouldn’t make people think this is down to the Apple hardware when it’s down to the fact that developers have finally released a much faster type of render engine on the Mac, one that Redshift has offered elsewhere for years. It’s hilarious that they were still using CPU rendering on Macs when they could have joined the GPU revolution years ago. I prefer Macs, but I only use mine for pre-production and animation. As soon as it comes to texturing, lighting and rendering, I move to my PC.
 