Well, it must depend on what and how you compare.

Comparing GPUs is difficult, as different architectures have different strengths and weaknesses. But here are a few generally accepted data points:
  1. The Apple M1 GPU is generally a bit faster than the ancient GTX 1050 Ti.
  2. RTX 3090 is approximately seven times as fast as GTX 1050 Ti.
  3. M1 GPU maximum power consumption is approximately 10 W.
  4. RTX 3090 maximum power consumption is approximately 350 W.
Now, it is evident that the RTX 3090 is much faster than the M1 GPU. But if you look at the numbers, the M1 is much more power-efficient: the RTX 3090 offers 7-fold performance at 35-fold power consumption. From a technological point of view, I would call Apple's chip more advanced, while Nvidia is simply throwing more power and more parallelism at the problem.
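To put numbers on that efficiency gap, here is a back-of-the-envelope sketch in Swift using the four data points above, treating the M1 (≈ GTX 1050 Ti) as the performance baseline:

```swift
// Back-of-the-envelope perf-per-watt from the figures quoted above.
// These are rough forum numbers, not measurements.
let m1Perf = 1.0          // baseline (M1 ≈ GTX 1050 Ti)
let m1Power = 10.0        // watts, approximate maximum
let rtxPerf = 7.0         // RTX 3090 ≈ 7x the baseline
let rtxPower = 350.0      // watts, approximate maximum

let efficiencyRatio = (m1Perf / m1Power) / (rtxPerf / rtxPower)
print(efficiencyRatio)    // 5.0 -- the M1 does ~5x the work per watt
```

So "7x the performance at 35x the power" works out to roughly a 5x perf-per-watt advantage for the M1.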

So, I would not say Nvidia or AMD are wildly ahead. They have been concentrating on raw processing power whereas Apple has concentrated on making a very power-efficient GPU.
Nice summary. I'd like to add that real-world performance on the M1 is also higher than what a synthetic score might suggest... in one of the recent Max Tech comparisons, an LG Gram was compared to an M1 Mac (at around minute 10):
- LG Gram GPU Geekbench: ~15,000
- M1 MBP: ~21,000
- LG Gram, gaming-like rendering tests: ~45 fps
- M1 Mac: close to 90 fps

So for roughly 40% more in the synthetic score, the M1 delivered double the output in the real-world tests.

I would be curious to see benchmarks like the Cinebench one (on which the Mac scores over 2x the LG Gram) but with these GPU renderers.
 
Competing with high-end NVIDIA and AMD? That is not so certain, and it will at least be very interesting to see how they do it.

I think the only question is how far Apple wants to push it. Their GPU design is around 1.5-2x more efficient than the most efficient Nvidia or AMD GPUs (just looking at pure compute performance, not rasterization, where TBDR makes it even more efficient). The main reasons for this are probably the relative simplicity of Apple GPUs compared to their big brethren, as well as a superior process node. Moreover, Apple has already proved that they have all the necessary tech to build fast, large caches and high-performance memory subsystems. I see no major reason why they wouldn't be able to put together larger and larger GPUs if they wanted. Yields could be an issue, but then again 5nm is maturing quickly, and Apple can afford to pay more for the chips now that they don't buy them from a third party. Ordering their own chips even at inferior yields is still going to be cheaper for them than buying top-of-the-line binned and die-thinned chips from AMD...
 
Octane X is the world's first and fastest unbiased, spectrally-correct GPU production renderer...
I assume these words mean something to savvy people? o_O
 
So, the news say that it's "free" for Mac Pro/MacBook Pro/ iMac Pro users.

But it seems like it's actually a subscription where the first year is free.

I signed up for their subscription with the free tier, reading through a ton of ToS and creating two accounts in the process, but the app still says there's no license even after logging in with the same account I created the license with. (They seem to have a "forum account" that also manages the license, plus a separate general account.)

Did anyone get this to work?

This is a terrible user experience. Subscriptions like this should be managed through the App Store, not third-party billing and user accounts... I don't know why Apple even allows this. It looks like a clear violation of the Apple App Store agreement to me, since they effectively bill outside of the App Store.

EDIT: OK, got it to work. The trick is not to use the "Mac Pro" offer but to just click "Free trial" once signed up.
EDIT 2: They also lie about their data collection practices. It says "data not collected" in the App Store, but their user agreement states otherwise.
 
Octane Server does not work on M1 yet. Only the standalone app works at the moment. Of course it sucks.
Thanks for the info! I'm curious, is this posted anywhere I can follow for updates on when it will work?

I never said it sucks; I said Cycles sucks! That's why I'm trying this supposedly awesome Metal-based renderer. Too bad it's not available for my computer yet. I will be patient!
 
Nice summary. I'd like to add that real-world performance on the M1 is also higher than what a synthetic score might suggest...
That is an interesting observation. But when comparing CPUs or GPUs there are no simple metrics. Back in the old days CPU manufacturers competed in MIPS numbers, and at that time it was realized that the acronym comes from Misleading Information Provided by Sales...

Exactly the same applies to TFLOPS and other similar metrics. The only real metrics are actual performance figures on real-world tasks: the number of seconds and the number of joules used to carry out a specific task. The best systems are those for which no other available system is both faster and consumes less energy.
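As a toy illustration of that criterion (made-up numbers, purely a sketch): a system is worth considering only if no other system beats it on both seconds and joules for the same task, i.e. it sits on the Pareto frontier.

```swift
// Sketch: a system is Pareto-optimal if no other system is both
// faster (fewer seconds) and more frugal (fewer joules) on the task.
// The measurements below are hypothetical, for illustration only.
struct System { let name: String; let seconds: Double; let joules: Double }

let systems = [
    System(name: "A", seconds: 10, joules: 100),
    System(name: "B", seconds: 5,  joules: 400),
    System(name: "C", seconds: 12, joules: 500),  // dominated by A
]

let paretoOptimal = systems.filter { s in
    !systems.contains { $0.seconds < s.seconds && $0.joules < s.joules }
}
print(paretoOptimal.map(\.name))  // ["A", "B"]
```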

And here the problem is the vast number of possible tasks. You can easily pick (or craft) tasks which favor some architecture. If you want to make the M1 look good, pick a suitably small dataset and a suitable ML segmentation algorithm. With this combination, the M1 CPU+GPU+ANE puts up a good fight in speed against high-end GPUs and beats them hands down in energy consumption.

But is that a realistic scenario? Not for me, at least, but I do not know about you. Certainly, it is not comparing GPU to GPU.

There are worse and better chips in general. M1 is a beast when it comes to energy efficiency, and it is interesting to see how it scales. I am much more interested in ML inference or video encoding than gaming, so Apple likes me.

To my eye it seems that Apple has been able to make a magnificent small SoC. This includes all the infrastructure around it. What I am saying is that Apple is not behind AMD or Nvidia, it may even be a bit ahead. But I am not sure if they play the same game.

(And, yes, I am very curious about high end MBPs with Apple Silicon. How they turn out to be tells a lot about the scalability of Apple’s technology.)
 
You have a Star Trek ship on your desk and a Star Wars icon on your handle.

The Sith in me calls this "a disturbance in the Force".

;)
Yeah, I like both. I just finished a Fine Molds Millennium Falcon at 1/72 scale too. Both franchises have lots of merit.

Not disturbing the force, just enhancing it. :)
 
May I ask why you had to have an Nvidia card? Because of CUDA? Is that a requirement for the software you use or does it just speed things up?
I dip my toes into VFX and 3D rendering from time to time using After Effects and Cinema 4D on my iMac 2019 with an AMD Radeon Pro Vega 48. It works great for motion graphics; more advanced stuff requires some patience, hehe.
Yep, up until recently CUDA was required. It will still be a while before these render engines work well enough with Metal, but we're on our way, it seems.
 
What matters is that OTOY bothered to make a Metal port in the midst of the ASi transition. Sounds like a bad decision unless they have some inside information. I mean, how can OTOY expect to make their money back on the few Mac Pro users that use Octane?
Metal is supported on Apple Silicon and has been mandated since Mojave; that's part of the reason you need to upgrade the graphics card to get those machines up to that release. I think they're expecting Apple to start delivering pretty compelling rendering options on the M1, and the way to access those is via Metal. Since it's developed by Apple, each new GPU Apple ships should have Metal support in a way that potentially expands capabilities without having to update code. It's not dissimilar to how Metal handles the differences in rendering architecture between the AMD GPUs and the M1, or how the Accelerate framework automatically tries to give you the most optimised solution for the platform the code runs on.
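To illustrate that last point about Accelerate, a minimal sketch (the array values are made up): the same vDSP call compiles once and dispatches to whatever vector hardware the host machine has, Intel or Apple Silicon alike.

```swift
import Accelerate

// The same high-level call is routed to the fastest vectorised
// implementation available on the machine it runs on.
// Input values are arbitrary, for illustration only.
let a: [Float] = [1, 2, 3, 4]
let b: [Float] = [10, 20, 30, 40]

let sum = vDSP.add(a, b)  // elementwise add, hardware-tuned
print(sum)                // [11.0, 22.0, 33.0, 44.0]
```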

What they're doing is getting to market early and trying to capture attention, so that when the next Mac Pro launches they've already got a ready-to-go solution. Who knows, they might even be angling for an on-stage presentation as well. "Octane X on the new Mac Pro with M1X is faster than any of the comparable options on our other platforms" is just the sort of line Apple likes to use at launch.
 
So…. Is Octane currently the best rendering option for the new MacBook Pro with M1 Max?
 
[Screenshot: Octane X benchmark chart showing render times for the procedural chess demo]


So, that's a 15 second render time for the procedural chess demo on the M1 Ultra. That ties it for 3rd place with the Radeon Pro W6800 X and beats both the Radeon VII and Radeon Pro Vega II.
 
Thank you for this info. My 16” Intel MBP with 2 x 6900 XT eGPUs and 2 x 5700 XT (in a cMP node) renders the chess scene in 3 seconds. Hopefully Apple will enable discrete graphics for Octane users.
 