RenderMan.

Ok, but what can we do with this?
May I ask why you had to have an Nvidia card? Because of CUDA? Is that a requirement for the software you use or does it just speed things up?

Photorealistic 3D rendering, super fast. I happily pay for the Cinema 4D version for the PC. In the past you had to have an Nvidia card, and since Apple won't work with them anymore, we VFX people had to run to Windows machines. This offers big hope to those who would like to return to Macs some day. Not me though, I actually like Windows now for work, and my Mac is just a toy I like to play with from time to time.
Lol yeah people don’t realize how far ahead Nvidia and AMD are when it comes to graphics. Like wildly ahead.
You're missing the point. Of course a 200 watt chip (3060 Ti) is going to beat a 10 watt chip. More power is more performance. But if you scale up the M1 linearly, or scale down the 30 series cards, the performance per watt is awesome. Future Apple GPUs have a ton of potential, and discounting that is like discounting the NVIDIA Titan because the AMD RADEON 6900 XT beats an NVIDIA 3060 handily. Of course it does, it is not a fair comparison of the underlying technology.

Never said it was. It is others overestimating them.
Can you show us the charts with the M1 competing with the 30 series cards?
From the ones I have seen, it doesn’t get close to my current 16” MBP, let alone a 30 series Nvidia.
You have a Star Trek ship on your desk and a Star Wars icon on your handle.

I like the Enterprise Refit so much, I have a 3 foot one that I built next to my desk. It's awesome!
I am not missing the point at all thanks, and am fully aware of the power usage.

You're missing the point. Of course a 200 watt chip (3060 Ti) is going to beat a 10 watt chip. More power is more performance. But if you scale up the M1 linearly, or scale down the 30 series cards, the performance per watt is awesome. Future Apple GPUs have a ton of potential, and discounting that is like discounting the NVIDIA Titan because the AMD RADEON 6900 XT beats an NVIDIA 3060 handily. Of course it does, it is not a fair comparison of the underlying technology.
What about the Enterprise J?

Constitution class refit, IMO, the best looking Enterprise of them all. Enterprise refit = Enterprise-A > Enterprise-E (Sovereign) > Enterprise-D (Galaxy) > Enterprise-B (Excelsior) > Enterprise (Constitution) > Enterprise-C (Ambassador) > Enterprise (NX).
There is no comparison.

Well, it must depend on what and how you compare.
Comparing GPUs is difficult, as different architectures have different strengths and weaknesses. But a few generally rather acceptable data points:

- The Apple M1 GPU is generally a bit faster than the ancient GTX 1050 Ti.
- The RTX 3090 is approximately seven times as fast as the GTX 1050 Ti.
- The M1 GPU's maximum power consumption is approximately 10 W.
- The RTX 3090's maximum power consumption is approximately 350 W.

Now, it is evident that the RTX 3090 is much faster than the M1 GPU. But if you look at the numbers, the M1 is much more power-efficient: the RTX 3090 offers 7-fold performance at 35-fold power consumption. From a technological point of view, I would call Apple's chip more advanced, while Nvidia is simply using more power and more parallelism.

So, I would not say Nvidia or AMD are wildly ahead. They have been concentrating on raw processing power, whereas Apple has concentrated on making a very power-efficient GPU.
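To put rough numbers on the perf-per-watt point, here's a quick back-of-the-envelope sketch using the figures above (plus the 1050 Ti's commonly quoted 75 W TDP, which isn't from this thread). It's a sketch, not a benchmark:

```python
# Back-of-the-envelope perf-per-watt from the rough figures in this post.
# "perf" is relative to a GTX 1050 Ti = 1.0; watts are approximate peak GPU power.
gpus = {
    "GTX 1050 Ti": {"perf": 1.0, "watts": 75},   # 75 W TDP is the usual spec-sheet value, not from this thread
    "Apple M1":    {"perf": 1.1, "watts": 10},   # "a bit faster" than the 1050 Ti
    "RTX 3090":    {"perf": 7.0, "watts": 350},
}

for name, g in gpus.items():
    print(f"{name:12s} perf={g['perf']:.1f}x  power={g['watts']:3d} W  perf/W={g['perf'] / g['watts']:.3f}")
```

With these numbers the 3090 comes out roughly 6-7x faster than the M1 GPU while drawing about 35x the power, i.e. around 5x worse performance per watt.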
Octane Server does not work for M1 yet. Only standalone works at the moment. Of course it sucks.

Been trying to set this up for hours with Blender (Cycles is awful on M1) and I can't for the life of me get OctaneRender Server to open. I double click it, nothing. Without it open, Blender says it can't find the 127.x.x.x server. Anyone smarter than me know what I need to do next? 😂

Edit: Just tried it on my older Intel Mac, worked like a charm. Must be a bug in the Server app, hope they fix it soon!
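For anyone else stuck at the "can't find the 127.x.x.x server" stage: a quick way to tell whether the Server app is actually running is a plain TCP connect test. This is only a generic sketch; the host and port below are placeholders you'd replace with whatever your Octane plugin settings show, not OTOY-documented values.

```python
# Generic "is anything listening there?" check for the render-server address
# the Blender plugin is trying to reach. HOST/PORT are placeholders - copy
# the values from the plugin's OctaneRender Server settings.
import socket

HOST = "127.0.0.1"
PORT = 12345  # placeholder, not Octane's actual default

try:
    with socket.create_connection((HOST, PORT), timeout=2):
        print(f"Something is listening on {HOST}:{PORT} - the Server app started.")
except OSError as err:
    print(f"Nothing listening on {HOST}:{PORT} ({err}) - the Server process never came up.")
```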
You have pointed out the issues with the performance of a modular system quite well. The overheads.

There is no comparison.
The M1 does not use separate off-chip graphics memory; it shares memory on the same package as the CPU.
The external cards must drive PCIe lanes and access GDDR. Those alone require quite a bit of power.
You also assume scaling performance causes a linear increase in power consumption, and that is definitely not the case with silicon.
Wildly different.
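On the "power does not scale linearly" point, the usual first-order model for dynamic power is C·V²·f, and because higher clocks generally need higher voltage, power climbs much faster than performance. A toy illustration with made-up frequency/voltage points (not measurements of any real GPU):

```python
# Toy illustration of super-linear power scaling: dynamic power ~ C * V^2 * f.
# The frequency/voltage pairs are invented operating points, not real GPU data.
points = [
    (1.0, 0.70),   # (relative frequency, core voltage in volts)
    (1.5, 0.85),
    (2.0, 1.00),
    (2.5, 1.15),
]

base_f, base_v = points[0]
base_power = base_v ** 2 * base_f   # the capacitance term cancels out in the ratios

for f, v in points:
    rel_perf = f / base_f               # performance roughly tracks frequency
    rel_power = (v ** 2 * f) / base_power
    print(f"{rel_perf:.1f}x perf -> {rel_power:.1f}x power")
```

Chasing the last bit of clock speed is disproportionately expensive, which is one reason a wide, lower-clocked design can look so good on perf per watt.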
Strange reasoning. So all people with a 16 inch need a 3080 mobile GPU? Said GPU draws something like 150W? In a portable. DOA.

I am not missing the point at all thanks, and am fully aware of the power usage. We will wait and see what Apple releases, but for now I won't predict that they will compete with a 3080/90, as I can bet they won't bother making cards to that speed. I couldn't care less about the power usage when it is in a desktop sat under my desk. If the M1X or whatever is in the 16” MBP doesn't deliver the same as a mobile 3080, I will say Apple failed on the GPU.
It entirely depends on the code you run on that silicon.

You also assume scaling performance causes a linear increase in power consumption, and that is definitely not the case with silicon.
What matters is that OTOY bothered to make a Metal port in the midst of an ASi transition. Sounds like a bad decision unless they have some inside information. I mean, how can OTOY expect to get their money back on the few MP users that use Octane?
With Apple GPUs employing TBDR, it probably does not need as much grunt to achieve the same performance, as I assume a lot of the computations done by IMR GPUs would not be needed for TBDR GPUs. Probably the more complex a scene is to render, the more TBDR will win out compared to IMR. It could very well be that a 64-core Apple GPU will be performant enough for their Mac Pros, as I'm sure the higher end desktops will have quite a big bandwidth pipe to feed all the hungry processing cores of their next Silicon. Of course, for pure compute tasks, FLOPS are FLOPS, but feeding the cores with timely data is important too, tho Apple have other IPs like the NPU to pick up the slack.

M1 with its 8-core GPU does 2,600 GFLOPS.
Nvidia RTX 3090 does 35,580 GFLOPS.
M1 would need 114 GPU cores to match an RTX 3090 in GFLOPS.
The M1 GPU doesn't do hardware ray tracing.
M1 with its 8-core GPU does 2,600 GFLOPS.
Nvidia RTX 3090 does 35,580 GFLOPS.
M1 would need 114 GPU cores to match an RTX 3090 in GFLOPS.

I know GFLOPS isn't the only metric to compare, maybe not even the most important, but it is at least one metric available. 8 x M1 GPU cores use about 10 W of power, which would be approximately (if scaled linearly) 137 W total for 114 GPU cores.
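Making the arithmetic behind those figures explicit (taking the quoted GFLOPS and ~10 W numbers at face value): with 2,600 GFLOPS you land nearer 110 cores, so the 114 in the post presumably comes from a slightly lower M1 figure of roughly 2.5 TFLOPS.

```python
# Reproducing the scaling arithmetic above from the quoted figures.
m1_gflops = 2_600         # M1, 8-core GPU (as quoted)
rtx3090_gflops = 35_580   # RTX 3090 FP32 (as quoted)
m1_cores = 8
m1_gpu_watts = 10         # rough M1 GPU power figure used in this thread

gflops_per_core = m1_gflops / m1_cores
cores_needed = rtx3090_gflops / gflops_per_core
watts_if_linear = cores_needed * (m1_gpu_watts / m1_cores)

print(f"GFLOPS per M1 GPU core: {gflops_per_core:.0f}")               # 325
print(f"M1 GPU cores to match an RTX 3090: ~{cores_needed:.0f}")      # ~109
print(f"Naive linear power estimate: ~{watts_if_linear:.0f} W vs ~350 W for the 3090")  # ~137 W
```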
Sorry, I did not express myself clearly. That is what I meant. I see this as a sign that powerful GPUs will follow from Apple's side. Otherwise, this effort makes no sense to me.

Why would it be a bad decision? I think the timing couldn't be better: with new, powerful Apple GPUs on the horizon, Octane will be the first renderer that can take full advantage of them, likely netting them a big chunk of the market share.
With Apple GPUs employing TBDR, it probably does not need as much grunt to achieve the same performance, as I assume a lot of the computations done by IMR GPUs would not be needed for TBDR GPUs. Probably the more complex a scene is to render, the more TBDR will win out compared to IMR.
I don't think Apple is aiming to get ahead of Nvidia or AMD in the GPU race though. They likely have their own targets they are shooting for with each class of their products. If they come out ahead, that'll probably be a bonus.
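To illustrate the TBDR point above in the crudest possible way: an immediate-mode renderer can end up shading fragments that are later overwritten (overdraw), while a tile-based deferred renderer resolves visibility per tile first and only shades what is visible. Real IMR GPUs mitigate this with early depth testing, so this is the pessimistic case, and the numbers are invented:

```python
# Toy overdraw comparison: fragment-shader invocations for a stack of opaque
# layers covering the same tile. Purely illustrative numbers, not a GPU model.
TILE_PIXELS = 32 * 32      # pixels in one tile
OVERLAPPING_LAYERS = 5     # opaque triangles stacked over the same tile

# Immediate-mode worst case (no early depth rejection): every layer is shaded,
# later layers overwrite earlier ones.
imr_shaded = TILE_PIXELS * OVERLAPPING_LAYERS

# TBDR: visibility is resolved per tile before shading, so only the
# front-most opaque layer gets shaded.
tbdr_shaded = TILE_PIXELS

print(f"IMR worst case: {imr_shaded} fragment invocations")
print(f"TBDR          : {tbdr_shaded} fragment invocations")
print(f"Shading work avoided: {1 - tbdr_shaded / imr_shaded:.0%}")
```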
Better than M1, of course. Competing with high-end NVIDIA and AMD? That is not so sure, and it will at least be very interesting to see how they do that.

Ah, yes, but it's a fairly safe bet. It would be nonsensical for Apple to even attempt the transition if M1 was the best they could do. They have certainly planned a viable answer across all their product ranges, and if M1 is anything to go by, we should expect at least 50-100% more performance for each AS model that replaces the old Intel one.
I didn't know that, and as a GTX 1050 Ti owner, I'm quite impressed with the M1 GPU now. Yeah, I'm going to upgrade my 1050 Ti next winter, but it's honestly still OK for casual Rocket League and Warframe.

The Apple M1 GPU is generally a bit faster than the ancient GTX 1050 Ti.