Real “my use cases don’t work well on a Mac and Apple has personally wronged me, so people who defend them must be lying” vibes.

And no, nobody else has ever claimed Bluetooth “sounds better”.
But plenty have claimed that the 3.5mm headphone jack is "outdated", "unnecessary", or "another point of failure".

Funny that the headphone jack is present in all Mac machines. It's even present in all the Pro machines, including the Mac Studio.
 
I agree this is the issue.
This is basically a rerun of the argument a few years ago about whether Apple was *really* faster than Intel. Obsessing over minor benchmarking details and whether a particular comparison was "fair" or not missed the more important points:
- that Apple was providing competitive performance -- in a PHONE chip...
- that Apple was on a very different annual improvement trajectory than Intel.

We see the exact same thing here. This is Apple's *first* run at a desktop-class GPU and already it's "competitive" with Nvidia. Soon (end of this year? this time next year?) we'll be seeing M2s with the GPU that succeeds the A15 GPU. And that A15 GPU was a substantial boost over the A14 GPU...

Most of these benchmarks are on code that was converted to Metal sometime in the past year and has not yet had the full optimization treatment applied. That, and Apple's annual trajectory, are the issue, not whether one can find a benchmark showing this curve goes higher than that curve.
Nvidia is not Intel. They are not resting on their laurels.

I for one am interested in their upcoming RTX 4xxx GPUs that are built on TSMC 5nm.

IMO, as long as they keep pace with TSMC node improvements like Apple does, I doubt they will be getting left behind anytime soon.
 
Nvidia is not Intel. They are not resting on their laurels.

I for one am interested in their upcoming RTX 4xxx GPUs that are built on TSMC 5nm.

IMO, as long as they keep pace with TSMC node improvements like Apple does, I doubt they will be getting left behind anytime soon.
Apple has already surpassed Nvidia…all you have to do is scale Apple silicon to the same TDP as what Nvidia is putting out to see this. If they really wanted to, Apple could make a discrete GPU with 256 cores @ 400W. It’d have over 80 TFLOPs of raw compute vs the 35 TFLOPs of the 3090. If Nvidia was so far ahead, how come it was Apple that built the first working chiplet GPU…?
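For what it's worth, here's the back-of-the-envelope math behind that kind of claim, as a quick Swift sketch. It assumes perfectly linear scaling from Apple's published M1 Max figure (32 GPU cores ≈ 10.4 TFLOPS FP32); the 256-core part is purely hypothetical, and real silicon never scales perfectly.

```swift
// Hypothetical scaling exercise, not a real product: assumes Apple's published
// M1 Max figure (~10.4 TFLOPS FP32 for 32 GPU cores) scales perfectly linearly.
let tflopsPerCore = 10.4 / 32.0            // ≈ 0.325 TFLOPS per GPU core
let hypotheticalCores = 256.0              // the imagined 400 W discrete part
let scaledTflops = tflopsPerCore * hypotheticalCores
print(scaledTflops)                        // ≈ 83 TFLOPS, vs ~35.6 for an RTX 3090
```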
 
Compute benchmarks should scale pretty linearly with the number of GPU cores, so it's concerning that the Ultra is getting way less than twice the score of the Max.
If all cores were interconnected equally, it might scale linearly, but the current "dual-chip" design has its limits. I mean, the M1 Ultra is basically a dual M1 Max. Besides... coordinating so many cores probably eats into the total processing power.
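One way to put a number on that concern is a scaling-efficiency ratio against an ideal doubling. A quick Swift sketch; the scores below are placeholders, not real benchmark results:

```swift
// Scaling efficiency of the Ultra versus an ideal 2x of the Max.
// The scores are illustrative placeholders, not measured results.
func scalingEfficiency(ultraScore: Double, maxScore: Double) -> Double {
    ultraScore / (2.0 * maxScore)          // 1.0 would be perfectly linear scaling
}
print(scalingEfficiency(ultraScore: 90_000, maxScore: 60_000))   // 0.75, i.e. 75%
```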


Forget the actual results. The worrisome part for Nvidia is that Apple has a chip that is quite good and might overtake them at some point, even though it is not a discrete graphics card. I mean, this stuff is built into the chip. Everyone used to laugh at built-in graphics. Not so much anymore.
Indeed. From a technological standpoint, it has always been the goal to try to integrate everything onto one chip - it's just the efficient/economical thing to do. Too bad for all those companies producing the huge PC cases with all the lights and cooling fans. ;)
 
If Apple Silicon is overall cheaper as you imply, what is stopping cloud providers from using it themselves and providing a cheaper service (by economies of scale) than owning the hardware outright?
At the moment, software support and rack space requirements. Give it time…
 
In the latest Max Tech video, you can see that the M1 Ultra has not shown its true potential yet. It is still limited by macOS/software. They could not push the GPU beyond 60 watts, while at full power the GPU should draw 100 watts or more. There is still a lot of headroom, and the software should be updated to optimize performance.
 
Apple has already surpassed Nvidia…all you have to do is scale Apple silicon to the same TDP as what Nvidia is putting out to see this. If they really wanted to, Apple could make a discrete GPU with 256 cores @ 400W. It’d have over 80 TFLOPs of raw compute vs the 35 TFLOPs of the 3090. If Nvidia was so far ahead, how come it was Apple that built the first working chiplet GPU…?
AFAICT the AMD MI200 series is the first MCM “GPU”. Nvidia isn’t going to have MCM until Hopper as far as anyone can tell.
 
If Apple Silicon is overall cheaper as you imply, what is stopping cloud providers from using it themselves and providing a cheaper service (by economies of scale) than owning the hardware outright?
Apple is a "cloud provider"...
You don't think this is part of their ULTIMATE roadmap...?
 
But … but… he knows a guy who worked for the government!
DISA hasn’t made STIGs for macOS 12 yet, though on the face of it you could probably use the ones from 11. Aside from that, DoD tends not to want storage to go back to the vendor even in unclassified systems, let alone classified systems. Not sure if Apple has contracts in place to account for not getting hardware back when it is replaced due to being broken.
 
I wonder how he was able to get a higher score on his 3090 at a higher resolution than The Verge did.

EDIT: oh I see, the M1 Ultra doesn't have 100% GPU utilization, only 83%, hmm
Yeah, poking around in the settings for this game, it's clear that GPU Bound needs to read 99 or 100 percent. Otherwise you're showing how CPU-limited you are in the benchmark.
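Put differently: a rough ceiling for a fully GPU-bound run is the measured frame rate divided by the reported GPU utilization. A tiny sketch with illustrative numbers, not actual results:

```swift
// Rough ceiling estimate for a benchmark that is not fully GPU bound.
// The measured frame rate here is an illustrative placeholder, not a real result.
let measuredFPS = 100.0
let gpuUtilization = 0.83                  // what the overlay reported on the Ultra
let gpuBoundCeiling = measuredFPS / gpuUtilization
print(gpuBoundCeiling)                     // ≈ 120 FPS if the GPU were the only limiter
```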
 
Apparently the MacRumors editors don't understand the difference between "performance" and "performance per watt".

Alright then.
 
Seems to indicate it shows as 1 GPU to the host system, just like how Apple does it. Do you have something showing otherwise?
[screenshot attachment]

It’s different…the second GPU isn’t accessible directly
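For anyone who wants to check what the host actually sees on the Apple side, Metal can enumerate the devices directly; on an M1 Ultra it should report a single GPU rather than two dies. A minimal sketch, assuming macOS:

```swift
import Metal

// Enumerate the GPUs macOS exposes to applications (MTLCopyAllDevices is macOS-only).
// On an M1 Ultra this should print a single "Apple M1 Ultra" device, i.e. the two
// dies are presented to software as one GPU.
let devices = MTLCopyAllDevices()
for device in devices {
    print(device.name)
}
print("Metal sees \(devices.count) GPU(s)")
```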
 
Yes, and IT departments rarely waste time installing upgrades. They buy machines on a lease and replace them on a cycle.
You have to be careful when you try to categorize all IT departments. The organization I worked for had hundreds of thousands of machines. The vast majority were owned, not leased. We couldn't afford (not money, but lost productivity) to have too many machines down. We did line-item (modular) repairs in house, installed licensed software, and performed security updates every other night.

None of what I just wrote is possible with Apple machines. You would have to rely solely on Apple support, which my previous organization would NEVER agree to. Plus Apple doesn't go everywhere we went. Apple is a fair weather company when it comes to their machines and support.
 
I'm aware of that entirely.

We don't do anything without extensive qualification but the point is you throw an idea on the table and test it to destruction.
I mentioned this because you seemed a little too starry-eyed. As you know, things are very different when you're dealing with someone else's money and resources. Plus, things that work well for the consumer on a one- or two-machine basis don't necessarily scale well to thousands of machines.

This is also the same company that couldn't understand that some people/families had multiple iPods. Their iTunes would try to upload the same Playlist to multiple iPods. That doesn't breed confidence in their ability to handle large data centers with multiple connected machines.
 
I mentioned this because you seemed a little too starry-eyed. As you know, things are very different when you're dealing with someone else's money and resources. Plus, things that work well for the consumer on a one- or two-machine basis don't necessarily scale well to thousands of machines.

This is also the same company that couldn't understand that some people/families had multiple iPods. Their iTunes would try to upload the same Playlist to multiple iPods. That doesn't breed confidence in their ability to handle large data centers with multiple connected machines.

Completely agree. My point is more that we'd be able to scrap some of the local cost and give our developers what they want (which is basically "anything but ****ing windows please"). We would still run workloads in the cloud on AWS GPU instances.

As for their ability to run DC infra, they have improved in that space recently. They are running a lot of stuff on Kubernetes under an SRE working model. They learned from their early mistakes, which were mostly related to trying to scale up heaps of excrement on WebObjects or things that looked like it. I work with people who regularly cross pollinate with them...
 
Apple has already surpassed Nvidia…all you have to do is scale Apple silicon to the same TDP as what Nvidia is putting out to see this. If they really wanted to, Apple could make a discrete GPU with 256 cores @ 400W.
Yeah, yeah, and Nvidia can also scale with TSMC to lots of TFLOPs. If you can speculate, I can too.

If Nvidia was so far ahead, how come it was Apple that built the first working chiplet GPU…?
If your goalpost is making a working chiplet GPU, sure.

If you are actually doing industrial work like 3D modeling, data crunching, machine learning, etc., pray tell me, how is Apple ahead?

Come to think of it, your reasoning is just like Apple's vague "X times better than Nvidia 3090".

Better in what?
 