Photorealistic 3D rendering, super fast. I happily pay for the Cinema 4D version for the PC. In the past you had to have an Nvidia card, and since Apple won't work with them anymore, we VFX people had to run to Windows machines. This offers big hope to those who would like to return to Macs some day. Not me though; I actually like Windows now for work, and my Mac is just a toy I like to play with from time to time.
May I ask why you had to have an Nvidia card? Because of CUDA? Is that a requirement for the software you use or does it just speed things up?
I dip my toes in VFX and 3D rendering from time to time using After Effects and Cinema 4D on my 2019 iMac with an AMD Radeon Pro Vega 48. It works great for motion graphics; more advanced stuff requires some patience, hehe.
 
Lol yeah people don’t realize how far ahead Nvidia and AMD are when it comes to graphics. Like wildly ahead.

Well, it must depend on what and how you compare.

Comparing GPUs is difficult, as different architectures have different strengths and weaknesses. But a few generally rather acceptable data points:
  1. Apple M1 GPU is generally a bit faster than ancient GTX 1050 Ti.
  2. RTX 3090 is approximately seven times as fast as GTX 1050 Ti.
  3. M1 GPU maximum power consumption is approximately 10 W.
  4. RTX 3090 maximum power consumption is approximately 350 W.
Now, it is evident that the RTX 3090 is much faster than the M1 GPU. But if you look at the numbers, the M1 is much more power-efficient: the RTX 3090 offers 7-fold performance at 35-fold power consumption. From a technological point of view, I would call Apple's chip more advanced, while Nvidia is just throwing more power and more parallelism at the problem.

So, I would not say Nvidia or AMD are wildly ahead. They have been concentrating on raw processing power whereas Apple has concentrated on making a very power-efficient GPU.
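To spell out the ratio (rough figures only, taking the GTX 1050 Ti as the 1.0x baseline):

```python
# Rough performance-per-watt check using the approximate figures above
# (GTX 1050 Ti taken as the 1.0x performance baseline).
m1_perf, m1_watts = 1.0, 10             # M1 GPU: roughly 1050 Ti class, ~10 W
rtx3090_perf, rtx3090_watts = 7.0, 350  # RTX 3090: ~7x a 1050 Ti, ~350 W

m1_eff = m1_perf / m1_watts                  # 0.100 perf units per watt
rtx3090_eff = rtx3090_perf / rtx3090_watts   # 0.020 perf units per watt
print(f"M1 efficiency advantage: ~{m1_eff / rtx3090_eff:.0f}x")  # ~5x
```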
 
Never said it was. It is others overestimating them.

Can you show us the charts with the M1 competing with the 30-series cards?
From the ones I have seen, it doesn’t get close to my current 16” MBP, let alone a 30-series Nvidia.
You're missing the point. Of course a 200 watt chip (3060 Ti) is going to beat a 10 watt chip. More power is more performance. But if you scale up the M1 linearly, or scale down the 30-series cards, the performance per watt is awesome. Future Apple GPUs have a ton of potential, and discounting that is like discounting the Nvidia Titan because the AMD Radeon 6900 XT beats an Nvidia 3060 handily. Of course it does; it is not a fair comparison of the underlying technology.
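Just to put numbers on that naive linear scaling (a pure thought experiment; real chips do not scale like this, and the wattages are only ballpark):

```python
# Naive linear-scaling thought experiment: project the ~10 W M1 GPU to a
# 200 W desktop power budget. Real silicon does NOT scale this cleanly
# (clocks, memory bandwidth and interconnect all intervene); this is only
# the arithmetic behind the "scale up / scale down" comparison.
m1_watts, m1_relative_perf = 10, 1.0   # treat the 8-core M1 GPU as 1.0x
target_watts = 200                     # roughly a 3060 Ti class power budget

scaled_perf = m1_relative_perf * target_watts / m1_watts
print(f"Hypothetical {target_watts} W M1-style GPU: ~{scaled_perf:.0f}x an M1 GPU")  # ~20x
```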
 
I like the Enterprise Refit so much, I have a 3 foot one that I built next to my desk. It's awesome!

You have a Star Trek ship on your desk and a Star Wars icon on your handle.

The Sith in me calls this "a disturbance in the Force".

;)
 
You're missing the point. Of course a 200 watt chip (3060 Ti) is going to beat a 10 watt chip. More power is more performance. But if you scale up the M1 linearly, or scale down the 30-series cards, the performance per watt is awesome. Future Apple GPUs have a ton of potential, and discounting that is like discounting the Nvidia Titan because the AMD Radeon 6900 XT beats an Nvidia 3060 handily. Of course it does; it is not a fair comparison of the underlying technology.
I am not missing the point at all, thanks, and am fully aware of the power usage.

We will wait and see what Apple releases. But for now, I won’t predict that they will compete with a 3080/90, as I’d bet they won’t bother making cards of that speed.

I couldn’t care less about the power usage when it is in a desktop sat under my desk.

If the M1X or whatever is in the 16” MBP doesn’t deliver the same as a mobile 3080, I will say Apple failed on the GPU.
 
Constitution class refit, IMO, the best looking Enterprise of them all. Enterprise refit = Enterprise-A > Enterprise-E (Sovereign) > Enterprise-D (Galaxy) > Enterprise-B (Excelsior) > Enterprise (Constitution) > Enterprise-C (Ambassador) > Enterprise (NX).
What about the Enterprise J?

And I agree with you, except I’d flip the top two. The Sovereign class is just beautiful!
 
Well, it must depend on what and how you compare.

Comparing GPUs is difficult, as different architectures have different strengths and weaknesses. But a few generally rather acceptable data points:
  1. Apple M1 GPU is generally a bit faster than ancient GTX 1050 Ti.
  2. RTX 3090 is approximately seven times as fast as GTX 1050 Ti.
  3. M1 GPU maximum power consumption is approximately 10 W.
  4. RTX 3090 maximum power consumption is approximately 350 W.
Now, it is evident that the RTX 3090 is much faster than the M1 GPU. But if you look at the numbers, the M1 is much more power-efficient: the RTX 3090 offers 7-fold performance at 35-fold power consumption. From a technological point of view, I would call Apple's chip more advanced, while Nvidia is just throwing more power and more parallelism at the problem.

So, I would not say Nvidia or AMD are wildly ahead. They have been concentrating on raw processing power whereas Apple has concentrated on making a very power-efficient GPU.
There is no comparison.
The M1 does not use off-chip memory; it uses on-chip memory shared with the CPU.
External cards must drive PCIe lanes and access DDR. Those alone require quite a bit of power.
You also assume that scaling performance causes a linear increase in power consumption, and that is definitely not the case with silicon.

Wildly different.
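To give a rough sense of scale for that overhead, assuming a ~32 GB/s PCIe 4.0 x16 link and a made-up 2 GiB scene:

```python
# Rough cost of shuttling data to a discrete card before the GPU can touch it.
# A unified-memory SoC like the M1 skips this copy (and its power cost) entirely.
scene_bytes = 2 * 1024**3   # hypothetical 2 GiB of scene data
pcie4_x16_bw = 32e9         # ~32 GB/s, roughly PCIe 4.0 x16 in one direction

copy_ms = scene_bytes / pcie4_x16_bw * 1000
print(f"Host -> VRAM upload: ~{copy_ms:.0f} ms per full transfer")  # ~67 ms
```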
 
Been trying to set this up for hours with Blender (Cycles is awful on M1) and I can't for the life of me get OctaneRender Server to open. I double-click it, nothing. Without it open, Blender says it can't find the 127.x.x.x server. Anyone smarter than me know what I need to do next? 😂

Edit: Just tried it on my older Intel Mac, worked like a charm. Must be a bug in the Server app, hope they fix it soon!
Octane Server does not work on M1 yet. Only the standalone works at the moment. Of course it sucks.
 
There is no comparison.
The M1 does not use off-chip memory; it uses on-chip memory shared with the CPU.
External cards must drive PCIe lanes and access DDR. Those alone require quite a bit of power.
You also assume that scaling performance causes a linear increase in power consumption, and that is definitely not the case with silicon.

Wildly different.
You have pointed out the issues with the performance of a modular system quite well. The overheads.

In fact, multi-GPU setups scale very well in terms of power draw versus performance. This is well known for rendering jobs. Apple could make a cluster of multiple M1 GPUs (not the CPUs) for rendering jobs, as these jobs are so perfect for parallelism.

I/O will be a challenge, but it is premature to write off Apple GPUs by looking at the M1, which, after all, is designed for portables and long battery life, not for gaming and rendering.
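A minimal sketch of why rendering parallelises so well across GPUs: the frame splits into independent tiles, and nothing needs to talk to anything else until composite time (render_tile below is just a stand-in for whatever a real GPU renderer would do):

```python
# Minimal sketch of embarrassingly parallel rendering: split the frame into
# tiles and hand each tile to an independent worker (one per GPU, conceptually).
from concurrent.futures import ProcessPoolExecutor

WIDTH, HEIGHT, TILE = 1920, 1080, 128

def tiles(width, height, tile):
    # Yield (x, y, w, h) rectangles covering the frame.
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            yield (x, y, min(tile, width - x), min(tile, height - y))

def render_tile(rect):
    x, y, w, h = rect
    # Placeholder "work": a real renderer would trace rays for these pixels.
    return rect, [[0.0] * w for _ in range(h)]

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(render_tile, tiles(WIDTH, HEIGHT, TILE)))
    print(f"Rendered {len(results)} independent tiles")
```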
 
I am not missing the point at all, thanks, and am fully aware of the power usage.

We will wait and see what Apple releases. But for now, I won’t predict that they will compete with a 3080/90, as I’d bet they won’t bother making cards of that speed.

I couldn’t care less about the power usage when it is in a desktop sat under my desk.

If the M1X or whatever is in the 16” MBP doesn’t deliver the same as a mobile 3080, I will say Apple failed on the GPU.
Strange reasoning. So all people with a 16-inch need a 3080 mobile GPU? Said GPU draws something like 150W. In a portable? DOA.

There are also others with lesser rendering needs who already have a 5700, and Octane is usable for smaller projects. So why not be happy? No, as usual there are those with an Nvidia (gaming) card who need to show off. You guys have been around for a while and are tiresome.

There is also a rumour that Octane X will come to iOS devices, which is interesting considering the latest rumour of VR/AR glasses. I can see a good match there.
 
You also assume that scaling performance causes a linear increase in power consumption, and that is definitely not the case with silicon.
It entirely depends on the code you run on that silicon.
3D rendering is highly parallel. Performance mostly reflects the number of compute units, and power consumption scales relatively linearly with that number (at a given core frequency). On CPUs this may not be the case, because code is generally not multithreaded in a way that allows it (with a few exceptions, like Cinebench, which BTW is also 3D rendering).

Dividing the performance of an RTX 3090 by its wattage and comparing it to the same ratio obtained for the M1 doesn't mean much, I agree.
What's relevant, however, is that the M1 soundly outclasses competing GPUs (i.e. those with similar wattage that are meant to equip thin laptops) and even certain dGPUs with much higher power ratings. And it's not as if competitors didn't want to achieve the highest possible power efficiency in that segment. The Intel Xe is no match for the M1, and it's even worse for AMD APUs (Vega graphics).
Apple's approach to GPU design (Tile-Based Deferred Rendering) is the key factor here. It uses the caches and the compute units more efficiently, and it avoids unnecessary computations (via hidden surface removal) far better than an Immediate Mode Renderer.
TBDR GPUs have traditionally been restricted to phones and tablets, which is why some think this design can't scale. But the M1 already proves it can to some extent, and there is no obvious reason why TBDR shouldn't scale beyond that. Apple clearly hinted at WWDC that all future Mac GPUs will use that design. More memory bandwidth will be required, but that is nothing that couldn't be achieved with HBM.

What's more, the TBDR design may be particularly well suited for multi-GPU realtime rendering, with different GPUs working on different tiles. This is much harder to achieve on IMR GPUs without spikes in frame pacing (hence Nvidia dropping SLI).
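To make the tile idea concrete, here is a toy version of the binning step at the heart of a tile-based GPU (grossly simplified; the tile size and helper below are illustrative, and real hardware also does per-tile hidden surface removal):

```python
# Toy illustration of tile binning: each triangle is assigned only to the screen
# tiles its bounding box touches, so every tile can later be shaded independently
# out of fast on-chip memory.
from collections import defaultdict

TILE = 32  # pixels per tile edge (illustrative)

def bin_triangles(triangles, width, height):
    bins = defaultdict(list)  # (tile_x, tile_y) -> triangle indices
    for i, tri in enumerate(triangles):
        xs = [v[0] for v in tri]
        ys = [v[1] for v in tri]
        for ty in range(int(min(ys)) // TILE, int(max(ys)) // TILE + 1):
            for tx in range(int(min(xs)) // TILE, int(max(xs)) // TILE + 1):
                if 0 <= tx < width // TILE and 0 <= ty < height // TILE:
                    bins[(tx, ty)].append(i)
    return bins

tris = [((10, 10), (50, 12), (30, 40)), ((100, 100), (140, 120), (120, 160))]
print(dict(bin_triangles(tris, 640, 480)))
```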
 
M1 GPU doesn’t do Raytracing.

The M1 with 8 GPU cores does 2,600 GFLOPS.
An Nvidia RTX 3090 does 35,580 GFLOPS.
The M1 would need roughly 110 GPU cores to match an RTX 3090 in GFLOPS.
I know GFLOPS isn’t the only metric to compare, maybe not even the most important, but it is at least one metric available.

8 M1 GPU cores use about 10W of power, which would be approximately (if scaled linearly) 137W total for ~110 GPU cores.

Apple has mentioned they hope to do a Mac Pro with up to 128 GPU cores; whether they can or not, we’ll have to wait and see.
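The arithmetic behind those figures, spelled out (peak numbers only):

```python
# Back-of-the-envelope scaling from the peak figures above.
m1_gflops, m1_cores, m1_watts = 2600, 8, 10
rtx3090_gflops = 35580

gflops_per_core = m1_gflops / m1_cores           # 325 GFLOPS per GPU core
watts_per_core = m1_watts / m1_cores             # ~1.25 W per GPU core

cores_needed = rtx3090_gflops / gflops_per_core  # ~109.5 cores
print(f"~{cores_needed:.1f} M1-class GPU cores to match an RTX 3090 on paper")
print(f"~{cores_needed * watts_per_core:.0f} W if power scaled linearly")  # ~137 W
```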
 
What matters is that OTOY bothered to make a Metal port in the midst of the ASi transition. Sounds like a bad decision unless they have some inside information. I mean, how can OTOY expect to get their money back on the few MP users that use Octane?

Why would it be a bad decision? I think the timing couldn't be better: with new powerful Apple GPUs on the horizon, Octane will be the first renderer that can take full advantage of them, likely netting them a big chunk of the market share.
 
The M1 with 8 GPU cores does 2,600 GFLOPS.
An Nvidia RTX 3090 does 35,580 GFLOPS.
The M1 would need roughly 110 GPU cores to match an RTX 3090 in GFLOPS.
With Apple GPUs employing TBDR, they probably don't need as much grunt to achieve the same performance, as I assume a lot of the computations done by IMR GPUs would not be needed on TBDR GPUs. Probably the more complex a scene is to render, the more TBDR will win out compared to IMR. It could very well be that a 64-core Apple GPU will be performant enough for their Mac Pros, as I'm sure the higher-end desktops will have quite a big bandwidth pipe to feed all the hungry processing cores of their next Silicon. Of course, for pure compute tasks, FLOPS are FLOPS, but feeding the cores with timely data is important too, though Apple has other IP like the NPU to pick up the slack.

Existing high-end GPUs are all hobbled by the ~32GB/s throughput of the PCIe bus, even though they have VRAM with monstrous bandwidth. The M1, on the other hand, already has over 60GB/s of throughput at its disposal with its dual-channel LPDDR4X memory. Later versions of Apple Silicon will definitely have more bandwidth to gobble up data.

I don't think Apple is aiming to get ahead of Nvidia or AMD in the GPU race though. They likely have their own target they are shooting for with each class of their products. If they come out ahead, that'll probably be a bonus.
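Roughly where those bandwidth figures come from (per direction, ignoring protocol overhead beyond line encoding):

```python
# Approximate derivation of the bandwidth figures mentioned above.
# PCIe 4.0: 16 GT/s per lane, 128b/130b encoding, 16 lanes, per direction.
pcie4_x16 = 16e9 * (128 / 130) * 16 / 8 / 1e9   # ~31.5 GB/s
# M1: LPDDR4X-4266 on a 128-bit (16-byte) bus.
m1_lpddr4x = 4266e6 * 16 / 1e9                  # ~68.3 GB/s

print(f"PCIe 4.0 x16 : ~{pcie4_x16:.1f} GB/s")
print(f"M1 LPDDR4X   : ~{m1_lpddr4x:.1f} GB/s")
```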
 
M1 GPU doesn’t do Raytracing.

Metal has state-of-the-art ray tracing support. It is currently implemented via compute shaders on all GPUs, including the M1, so you can definitely use it for content creation workflows. I do not know whether Nvidia's RT hardware can be used for rendering, since it is a gaming-related technology and likely includes some tradeoffs to make it faster at the cost of accuracy.
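To illustrate what "ray tracing via compute shaders" means in practice, here is the core idea in toy form: each ray simply runs a small intersection kernel, no dedicated RT hardware required. (Plain Python on the CPU purely for illustration; this is not Metal code and not how Octane is implemented.)

```python
# Toy ray-sphere intersection: the kind of per-ray kernel a compute-shader
# ray tracer runs millions of times in parallel on the GPU.
import math

def ray_sphere(origin, direction, center, radius):
    """Return the nearest hit distance t, or None. `direction` must be unit length."""
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return None                      # ray misses the sphere
    t = -b - math.sqrt(disc)             # nearest intersection along the ray
    return t if t > 0 else None

print(ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```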

The M1 with 8 GPU cores does 2,600 GFLOPS.
An Nvidia RTX 3090 does 35,580 GFLOPS.
The M1 would need roughly 110 GPU cores to match an RTX 3090 in GFLOPS.
I know GFLOPS isn’t the only metric to compare, maybe not even the most important, but it is at least one metric available.

8 M1 GPU cores use about 10W of power, which would be approximately (if scaled linearly) 137W total for ~110 GPU cores.

First of all, the 35,000 GFLOPS of Ampere only exist in the fantasies of Nvidia's marketing department. Yes, you could reach those figures by carefully crafting compute shaders that just do very long sequences of multiplications and additions and nothing else, but that's not how real stuff works. If you have any kind of memory access or integer computation, Ampere's FLOPS throughput is effectively halved. You can also see this clearly in the benchmarks: Ampere GPUs generally deliver around 120-160% of the compute performance of their Turing counterparts, but there is also a ~30% increase in power consumption. So the overall (per-watt) improvement is "only" around 20% (which is not bad at all, but a far cry from the 100% implied by Nvidia's advertising).

Basically, the "amortized" throughput of an RTX 3090 is somewhere close to 20,000 GFLOPS, while that of the M1 (provided it's not memory bound, which it almost always is) is around 2,000.

A hypothetical 128-core Apple GPU (with fast enough memory) should be more than enough to match and even outclass the 3090, at around half the power consumption.
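Spelling that projection out (both "amortized" throughput figures are the estimates above, not official specs):

```python
# Rough projection based on the estimated sustained throughputs above.
rtx3090_amortized = 20_000   # GFLOPS, effective in real mixed workloads (estimate)
m1_amortized = 2_000         # GFLOPS, 8-core M1 when not memory bound (estimate)
m1_cores = 8

per_core = m1_amortized / m1_cores      # ~250 GFLOPS per core, sustained
apple_128 = per_core * 128              # ~32,000 GFLOPS for a hypothetical 128-core GPU
print(f"Hypothetical 128-core Apple GPU: ~{apple_128:,.0f} GFLOPS "
      f"vs ~{rtx3090_amortized:,} GFLOPS (RTX 3090)")
```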
 
Why would it be a bad decision? I think the timing couldn't be better: with new powerful Apple GPUs on the horizon, Octane will be the first renderer that can take full advantage of them, likely netting them a big chunk of the market share.
Sorry, I did not express myself clearly. That is what I meant. I see this as a sign that powerful GPUs will follow from Apple's side. Otherwise, this effort does not make any sense to me.
 
With Apple GPUs employing TBDR, they probably don't need as much grunt to achieve the same performance, as I assume a lot of the computations done by IMR GPUs would not be needed on TBDR GPUs. Probably the more complex a scene is to render, the more TBDR will win out compared to IMR.

TBDR works for games (rasterization), not for compute work. If you are doing ray tracing, you could theoretically utilize persistent shader memory (an exclusive Apple GPU feature enabled by their TBDR architecture) to your advantage. There are some optimization opportunities, but it's difficult to judge how and even whether they will be effective.

For general compute tasks, however, TBDR does not matter; in fact, it can prove indirectly harmful, as TBDR GPUs tend to have less memory bandwidth.

I don't think Apple is aiming to get ahead of Nvidia or AMD in the GPU race though. They likely have their own target they are shooting for with each class of their products. If they come out ahead, that'll probably be a bonus.

I agree. Apple GPUs are very different, and they provide some benefits that others simply don't have. I have little doubt that they will punch way above their nominal weight in tasks like content creation and gaming (although the latter is hobbled by the lack of titles and the requirement to use Metal for maximal performance). I have doubts about their viability as an HPC GPGPU solution, but then again Apple has other technologies to offset these limitations for domains like ML (their AMX accelerators are crazy fast, for example).
 
Sorry, I did not express myself clearly. That is what I meant. I see this as a sign that powerful GPUs will follow from Apple's side. Otherwise, this effort does not make any sense to me.

Ah, yes, but it's a fairly safe bet. It would be nonsensical for Apple to even attempt the transition if the M1 was the best they could do. They have certainly planned a viable answer across all their product ranges, and if the M1 is anything to go by, we should expect at least 50-100% more performance for each AS model that replaces the old Intel one.
 
Wait!!! Stop all this dick measuring for a minute - are they giving Octane Enterprise away FREE for a year to new Mac users?? That's worth hundreds of dollars!! Is this true?
 
Ah, yes, but it's a fairly safe bet. It would be nonsensical for Apple to even attempt the transition if the M1 was the best they could do. They have certainly planned a viable answer across all their product ranges, and if the M1 is anything to go by, we should expect at least 50-100% more performance for each AS model that replaces the old Intel one.
Better than the M1, of course. Competing with high-end Nvidia and AMD? That is not so sure, and it will at least be very interesting to see how they do that.
 
Apple M1 GPU is generally a bit faster than ancient GTX 1050 Ti.
I didn't know that, and as a GTX 1050 Ti owner, I'm quite impressed with the M1 GPU now. Yeah, I'm going to upgrade my 1050 Ti next winter, but it's honestly still OK for casual Rocket League and Warframe.
 