Well, they are benchmarking against old-generation AMD and Nvidia cards.

Those cards were released back in 2017.

Nowadays both AMD and Nvidia play in a muuuch higher league.

Amazing chipsets for a mobile phone, no doubt.

Xbox Series S = 4 teraflops at 300 euros (and that includes other hardware apart from the graphics card)

Xbox Series X = 12 teraflops

Custom PC = 20 teraflops (and that isn't the highest you can get with a single video card)
They are comparing integrated GPU performance and running at 10x lower power!
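A quick back-of-envelope on that point, assuming approximate public figures (the M1 GPU power draw in particular is an estimate, not an official spec):

```python
# Rough FP32 throughput-per-watt comparison.
# All numbers are approximate public figures; the M1 GPU power draw
# especially is an estimate rather than an official spec.
gpus = {
    "Apple M1 (iGPU)":     {"tflops": 2.6,  "watts": 10},   # ~10 W estimated GPU power
    "GTX 1050 Ti (dGPU)":  {"tflops": 2.1,  "watts": 75},   # 75 W board power
    "Xbox Series S (APU)": {"tflops": 4.0,  "watts": 100},  # whole-console ballpark
    "RTX 3080 (dGPU)":     {"tflops": 29.8, "watts": 320},  # 320 W board power
}

for name, spec in gpus.items():
    gflops_per_watt = spec["tflops"] * 1000 / spec["watts"]
    print(f"{name:20s} {spec['tflops']:5.1f} TFLOPS / {spec['watts']:3d} W "
          f"-> {gflops_per_watt:6.1f} GFLOPS per watt")
```

Even if those figures are off by a fair margin, the performance-per-watt gap, not the absolute TFLOPS number, is what the comparison is about.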
 
Let's be honest.

First, Apple has put crappy GPUs in Airs and even iMacs and Mac Pros.

Second, the M1 is good enough for the majority of Air and Mini users, even MacBook Pro 13 users.

Third, the 1050 Ti is an old graphics card, and so is the RX 560. We are currently at the 3080 Ti from Nvidia and the 6900 XT from AMD, which both obliterate the M1's GPU power in comparison; however, neither of them is intended for mobile computing and Air-like computers.

So in truth you can game on a MacBook Air, just not in 8K and/or at high frame rates, as that is reserved for 3080 Ti and 6900 XT GPUs. Plus, if you really want to game, you are better off with a PS5, Xbox Series X, or a PC with a high-end GPU.

However, a MacBook Air will run Fortnite (I know about the legal battle between Epic and Apple) or PUBG without any problems, and that is fine for most people.

PS: It will be interesting to see what the future M chips bring to the Macs (Pro and iMac models) and how quickly, or even if, they will try to close this gap.
"we are currently at 3080ti from Nvidia and 6900xt from AMD, which both obliterate M1 GPU power in comparison, however both of them are not intended for mobile computing and Air-like computers."
As with any comparison in life, you need common ground to compare on. Whoever brings up the not-yet-released AMD 6900 XT (it should start rolling out soon, though) for any comparison is being snarky (or.....). You don't say that the 5600M is a ****** GPU (the best discrete mobile GPU AMD has) because the 6900 XT exists, the same way you don't say the new Zen 3 CPUs suck in multithread because a Milan server part exists.
But you can say that Zen 3 is the best-in-class CPU for its MARKET and the 5600M is best in class in its MARKET.
And now you can say that the M1 is best in class in its MARKET. Will people try to see how far above its weight it can punch? Sure, that's always the case with best-in-class products. It's like boxing: you have the MW champ; can he go up one category and take on the best LHW? It's usually "no", but sometimes we have special boxers who can.

Let's try and stop throwing around the names of 350W, $700 GPUs in a 10W machine's performance analysis, even if you say "I know it's not intended to be in a laptop", as if that isn't so obvious that you could skip the entire claim.
 
Their laptops as a whole. They've been stuck with what Intel can offer them (although not really, since Ryzen has been available for a while now) and stuck with Intel onboard graphics for most of their machines (although, again, not really, as there are plenty of thin-and-light laptops out there making use of dedicated graphics), and yet they've still barely updated their laptop designs in 4 years, even when it's become clear that they run pretty hot.

But to be clear, the takeaway here is a positive one - I'm happy they're trying again.
Fair enough, I guess they probably could have made more of an effort to design a more suitable enclosure sooner given it was quite obvious Intel's 10nm roadmap was belly-up by at least 2017. At least they're now back to doing what they do best, designing the whole system hand in glove :)
 
All it means for me is that, besides Intel, Apple will also ditch AMD's GPUs. Until now, I could not have imagined Apple's GPUs being faster than the ones from AMD/Nvidia.

Good news!
Keep imagining... because it's not true now, or likely ever. The fastest Nvidia cards are 50x more powerful than the iGPU on the M1. For an iGPU it's impressive, but that's where it ends for now.
 
A lot of people seem to miss that this smokes the Intel Iris and Intel UHD 630 graphics that ship in all MacBooks to this day. It also seems to come close to the discrete graphics in my MBP16 when you compare to the 5500M, if you are looking at onscreen performance. It doesn't do as well offscreen; without a reference I'm not sure exactly what that's referring to... I'm assuming offscreen is the additional rendering done to create reflections and other such effects.
 
If the M1 can surpass a low-end desktop GPU with merely 5W of power, that's an astonishing feat. Imagine the future variant of the M1, or whatever it's called; it might reach Vega 56~64 / 2070 tier (rough numbers sketched below).

But it's still a shame that I can't put all that power toward playing my favorite MSFS...
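Purely speculative, but as a rough sketch: if GPU throughput scaled about linearly with core count at similar clocks (a big assumption that ignores memory bandwidth and thermals), a hypothetical bigger M-series GPU would land roughly here. The larger core counts below are made up for illustration; only the 8-core M1 figure is a published number.

```python
# Naive linear-scaling guess for a hypothetical larger M-series GPU.
# Only the 8-core M1 figure (~2.6 TFLOPS FP32) is a published number;
# the bigger configurations are purely illustrative assumptions.
m1_gpu_cores = 8
m1_gpu_tflops = 2.6
tflops_per_core = m1_gpu_tflops / m1_gpu_cores

for cores in (16, 32):
    estimate = cores * tflops_per_core
    print(f"hypothetical {cores}-core GPU: ~{estimate:.1f} TFLOPS")

# Reference points (FP32): Radeon Vega 56 ~10.5 TFLOPS, RTX 2070 ~7.5 TFLOPS.
```

A 32-core guess lands in that Vega 56 / 2070 neighbourhood, which is why the speculation doesn't seem crazy, even if real-world scaling would fall short of linear.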
 
No, it's not what you seem to think.
Apple's license allows them to make major changes to the chip design; it's just the base instruction set that Apple licenses.

To explain it for the majority of people who don't understand it correctly:

Intel and AMD CPUs both use the x86 and x64 instruction sets. But as you may know, the chip designs from Intel and AMD are quite different. They are both designed to support the same instruction set, but they deliver it via different core architecture designs.
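A loose analogy in code, just to make the interface-versus-implementation point (this is an illustration, not how any real CPU core works): the licensed instruction set is the shared contract, and each vendor's core is its own implementation of it.

```python
# Illustrative only: the ISA as a shared contract, with two different
# "core designs" implementing the same behaviour in their own way.
from abc import ABC, abstractmethod

class ISA(ABC):
    """The licensed contract: what an 'add' instruction must do."""
    @abstractmethod
    def add(self, a: int, b: int) -> int: ...

class VendorACore(ISA):
    # One microarchitecture: a straightforward implementation.
    def add(self, a: int, b: int) -> int:
        return a + b

class VendorBCore(ISA):
    # A different microarchitecture: different internal path, same observable result.
    def add(self, a: int, b: int) -> int:
        total = 0
        for operand in (a, b):
            total += operand
        return total

# Software only cares that both honour the same instruction set.
for core in (VendorACore(), VendorBCore()):
    assert core.add(2, 3) == 5
```

That's the sense in which licensing the Arm instruction set still leaves the core design entirely to Apple.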
I am sorry, but I did not quite understand your comment. I wrote "instruction set architecture", not "core". Does that not specify quite accurately what Apple licensed? (Of course, I have not seen the license, so I rely on publicly available information.)

If — and that is a big if — Nvidia is allowed to acquire ARM, Nvidia holds the most important processor core and ISA IPR in the world ranging from Cortex-M0 to high end processor cores and ISAs. If Apple manages to demonstrate that ARM ISA is superior to x86/x64 in personal computing, and that the transition is not too painful, it is very bad news to Intel and good news to ARM.

So, at the moment the Apple M1 may be Nvidia's best friend. Apple is not a real competitor in GPUs, and it is unlikely Apple will release a discrete GPU. GPU architectures are highly optimized modern architectures, and there is not much room for making something radically better. At the moment the M1 benefits from the integrated approach (an iGPU is more power-efficient as there is no need to drive the bus) and TSMC's great chip process. The first one does not scale to discrete GPUs, and the second one is not Apple's technology.
 
Keep imagining... because it's not true now, or likely ever. The fastest Nvidia cards are 50x more powerful than the iGPU on the M1. For an iGPU it's impressive, but that's where it ends for now.
The good news is that Apple also has a "discrete" GPU coming shortly. Additional cooling and more die area should allow it to compete with even much higher-end GPUs.
 
This is impressive performance, but these articles lack so much context. It's disappointing when they just shill for Apple. Readers might take this much more seriously if it came across a little more honestly. These chips seem to show so much promise, but why compare them with old, budget cards while pretending they're beating "desktop graphics"? The M1 is faster than an Apple II desktop and uses less energy... whoa!

What does this article assume about its audience?
I guess they're trying to compare this to machines it could replace - nobody will buy a MacBook Air to replace a real gaming computer, so it doesn't need to be compared to an RTX 2070, but a lot of productivity machines still use variants of the GTX 1050 so it's a relevant comparison.
 
I guess they're trying to compare this to machines it could replace - nobody will buy a MacBook Air to replace a real gaming computer, so it doesn't need to be compared to an RTX 2070, but a lot of productivity machines still use variants of the GTX 1050 so it's a relevant comparison.
Actually, I was wrong in my post. I overlooked that they did give context.
 
For me, the big question isn't how good the M1 is (it's clearly very, very good), but rather where the ceiling is. In other words, just how far can Apple go with the fully integrated SoC, and will they offer alternatives when that ceiling has been reached?

The big test for Apple will be in the coming months when they start rolling out their desktop machines, and I'm intrigued to see it play out.

I'm also ridiculously excited to see what the next Mac Pro will offer: I expect my current Mac Pro to last another 3 years, which gives them plenty of time to iron out any wrinkles :)
 
Quite impressive for an iGPU. Though we need to see if Apple can keep up with things like shaders, ray tracing, and other graphical niceties. 300 FPS is great, but not if it means the game has to miss out on tessellation.
So your presumption is that professional testers would overlook this in a graphics test? The only time I actually heard of anything like this was a guy comparing the battery life of a Samsung (in power save mode) to an iPhone. Pretty dumb, right? But most professional testers know what they are doing. To see how it is done wrong, look at the Geekbench database and spot the obvious lowballs - probably someone running HandBrake on their Dell while doing a Geekbench test. Not very professional - duh.
 
I wonder how much of Apple’s stellar performance is due to their near monopoly on 5nm silicon.
TSMC certainly enables what Apple is doing with their silicon, but I think it's also down to Apple being able to work through every bottleneck to squeeze out more potential than a more 'off the shelf' part can manage with the same resources.
 
The real test is the 27-inch iMac. The AMD 6700 XT is out already. Can Apple hit a moving target and somehow surmount AMD’s future GPUs? That would be incredible, but I doubt it.
 
It is mind-boggling how many people are trying to downplay the performance by comparing it to a 3080 or other dedicated cards. This is an integrated GPU. I don't know of another integrated GPU that is that good.
I think you'll find it's the article comparing the M1 with dedicated GPUs rather than the people.... I guess people are saying: if you want to compare like that, then compare it with cards that are seriously much faster, rather than ones from five years ago that are not.
 
So your presumption is that professional testers would overlook this in a graphics test? The only time I actually heard of anything like this was a guy comparing the battery life of a Samsung (in power save mode) to an iPhone. Pretty dumb, right? But most professional testers know what they are doing. To see how it is done wrong, look at the Geekbench database and spot the obvious lowballs - probably someone running HandBrake on their Dell while doing a Geekbench test. Not very professional - duh.

You presume a lot.
 
This is great news for laptop products. It will need to do a ton better to compete with the current cards from Nvidia and AMD.
 