They won't. Why would you want them to use slower GPUs?
So we will be stuck with a useless GPU.
The Apple M1 is equivalent to Intel's 2020 Xe GPU.
The equivalent of a GTX 570… released 10 years ago as a low-end card.

Why should I not be able to use a 1080, 2080 or 3080?
A 3080 is 1,000% faster than the M1 GPU.
A 1080 is 500% faster.
If Apple releases a GPU equal to an RTX 3060, it would be amazing.
 
So we will be stuck with a useless GPU.
The Apple M1 is equivalent to Intel's 2020 Xe GPU.
The equivalent of a GTX 570… released 10 years ago as a low-end card.

Why should I not be able to use a 1080, 2080 or 3080?
A 3080 is 1,000% faster than the M1 GPU.
A 1080 is 500% faster.
If Apple releases a GPU equal to an RTX 3060, it would be amazing.
As long as Apple ignores the gaming industry, they will not invest in a GPU like the RTX series.
 
Why should I not be able to use a 1080, 2080 or 3080?
A 3080 is 1,000% faster than the M1 GPU.
A 1080 is 500% faster.
If Apple releases a GPU equal to an RTX 3060, it would be amazing.

A desktop 3080 is 10 times faster and uses over 30x as much power. A mobile 3080 is about 6 times faster and consumes 5x as much power.

Apple currently holds a massive efficiency advantage over Nvidia and AMD. A carefully binned Navi2 might be able to get within 30-40% of Apple G13 efficiency, but AMD can't ship millions of such chips. Apple can. Of course, you won't be getting as many GPU cores (they will never give you a 3080-class GPU in the MacBook Air, for example), but you will be getting the best choice in its class. You can complain all you want about the M1, but the simple fact is that it's significantly better than anything you get in the premium segment for that price.

For more professionally-oriented laptops expected this fall, my guess is that Apple will offer up to 30-40W GPUs, which should be somewhere in the ballpark of mobile RTX 3060-3070.
 
The problem hasn't been hardware or performance for a while now. It's about utilizing it. iOS, or even iPadOS, is simply not using all that performance. No pro apps. A locked-down OS. At this point, all these advances in performance are almost a mockery, as if they want to say, "Yes, we have the best chip, but you can't really use it to its max potential."
The faster the chip, the less time it spends working, and the more power it saves.
Performance in the service of battery life.
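To put made-up numbers on that (a rough illustration of the race-to-sleep idea, not measured figures):

```swift
// Energy = power x time: finishing sooner can save energy even at a higher draw.
// These numbers are invented purely to illustrate the idea.
let fastChip = (watts: 5.0, seconds: 2.0)   // sprints through the task, then idles
let slowChip = (watts: 3.5, seconds: 4.0)   // draws less, but stays busy twice as long

let fastEnergy = fastChip.watts * fastChip.seconds   // 10 J
let slowEnergy = slowChip.watts * slowChip.seconds   // 14 J
print("Fast chip: \(fastEnergy) J, slow chip: \(slowEnergy) J")
```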
 
Making the next Mac sound even more appealing. My little M1 Mac mini is chugging along, but it was always a placeholder, albeit a very competent one. Starting to save up now because Apple Canada dumped their payment plans.
 
It’s a great phone but I definitely noticed it gets hot easily.
My 13 Pro Max did on day 1, while restoring my XS Max backup (this past Friday). Since then I’ve been pushing the phone fairly hard (games and other intensive tasks), and even in a case it’s been cool. I’ve heard others say it gets really warm.

Maybe it’ll happen to me eventually but so far so good.
 
We are getting to a point where a majority of us do not need to concern ourselves with the processor in our devices. Apple knows this, hence why they didn’t make a big deal out of the A15. Who cares what it can do? Just know that it can do everything you need and more. I bought a Mac Mini in the spring and it hasn’t missed a beat from my Intel Mac. Yes, it does some things faster, but what is most impressive is that Rosetta runs Intel apps faster than they did on my Intel machine. That’s incredible and offers a (mostly) seamless experience from my Intel MacBook Pro.

This is why I don’t get too eager for an M2 or M1X. The M1 is already pretty good. What’s more interesting to me is the graphics performance. It will be awesome when the day comes that we don’t need to concern ourselves with graphics cards. We’re still a few years away from that, but the day is coming.

(Emphasis added above) It may be more accurate to say that, at least in the realm of mobile, it's a matter of the chip being able to do everything Apple needs, and more. The SoC's overall capability is about the whole computing environment, not just traditional CPU/GPU benchmarking based on desktop computing models. If Apple wants more on-device AI, it boosts neural network capability, locks down hardware security with the Secure Enclave, etc. Power efficiency drives battery life, and so on.

There's far more power in today's Apple devices than I personally can find ways to use. I think the overall thrust for Apple on the power side is a smooth, friction-free experience for the user: trying to avoid anything that would divert the user's attention from the task to the machine that's trying to execute it. That brings overall customer satisfaction.
 
I can develop a Java program that will run everywhere, but a powerful technical iOS app only runs on iOS.
Yes and no. While not as seamlessly portable as a Java program, a powerful app developed for iOS/iPadOS can also run on macOS. I don’t even mean Catalyst or the run-iOS-apps-on-M1-Mac gimmick. The core code base can be the same across platforms with the interface customized for the target platform. Lightroom is a good example - the iOS/iPad version shares much of its code base with the desktop non-Classic version.
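Something like this split is all it takes, in principle (a purely illustrative Swift sketch; the types here are hypothetical, not Lightroom's actual code):

```swift
import Foundation

// Hypothetical shared core: the same editing model compiles for iOS, iPadOS and macOS.
struct ExposureAdjustment {
    var stops: Double
    func apply(to value: Double) -> Double { value * pow(2.0, stops) }
}

// Only the interface layer is platform-specific.
#if os(iOS)
import UIKit
typealias PlatformColor = UIColor   // touch-first UI sits on top of the shared core
#elseif os(macOS)
import AppKit
typealias PlatformColor = NSColor   // desktop UI reuses the identical editing model
#endif
```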
 
Very impressive. The improvement from the A15 to the A16 in next year’s iPhone 14/14 Pro should be even more impressive. That, along with a redesign and significant camera improvements, should make it a compelling upgrade even for iPhone 13 Pro owners.
 
Just a small nitpick on something I’ve seen reported everywhere: no, this year’s event is not the first time Apple compared its processors to the competition instead of its own previous generation. They did the same thing last year during the iPhone 12 event. I just want to point this out because the doom and gloom about such comparisons has been annoying to read and hear, especially now that it’s unfounded, when it’s such an easy bit of information to fact-check.
If I remember correctly Apple also significantly understated the performance gains of the A14 when they did discuss it (relative to the A12 in the prior generation iPad Air).

I’d imagine part of it is because they don’t want to send the message “Look how lousy last year’s chip is compared to this year’s chip! Now let’s turn it over to Tim for a new product based on last year’s said chip!”
 
So we will be stuck with a useless GPU.
The Apple M1 is equivalent to Intel's 2020 Xe GPU.
The equivalent of a GTX 570… released 10 years ago as a low-end card.

Why should I not be able to use a 1080, 2080 or 3080?
A 3080 is 1,000% faster than the M1 GPU.
A 1080 is 500% faster.
If Apple releases a GPU equal to an RTX 3060, it would be amazing.
The M1 GPU uses tile-based deferred rendering (TBDR), whereas most mainstream PC GPUs use immediate-mode rendering (IMR). The two rendering modes are fundamentally incompatible, which is likely why eGPU support is not an option on M1 Macs even though the Mac can “see” the GPU.

AMD and Nvidia GPUs will never be available on M-series Macs due to this incompatibility. Apple will more likely release their own higher-powered GPU options which, given the inherent efficiencies of TBDR over IMR, should keep pace with Nvidia’s offerings.
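For what it's worth, here's roughly how an app can tell the two apart at runtime; a minimal Metal sketch in Swift (the .apple7 family check is just one way to gate TBDR-specific paths such as tile shaders):

```swift
import Metal

guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device available")
}

// Apple-family GPUs (M1 / A14 class) report the .apple* GPU families; Intel and AMD
// GPUs in Intel Macs report only the .mac families, so TBDR-only features can be gated here.
let hasAppleTBDRFeatures = device.supportsFamily(.apple7)
print("\(device.name) exposes Apple-family (TBDR) features: \(hasAppleTBDRFeatures)")
```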
 
A desktop 3080 is 10 times faster and uses over 30x as much power. A mobile 3080 is about 6 times faster and consumes 5x as much power.

Apple currently holds a massive efficiency advantage over Nvidia and AMD. A carefully binned Navi2 might be able to get within 30-40% of Apple G13 efficiency, but AMD can't ship millions of such chips. Apple can. Of course, you won't be getting as many GPU cores (they will never give you a 3080-class GPU in the MacBook Air, for example), but you will be getting the best choice in its class. You can complain all you want about the M1, but the simple fact is that it's significantly better than anything you get in the premium segment for that price.

For more professionally-oriented laptops expected this fall, my guess is that Apple will offer up to 30-40W GPUs, which should be somewhere in the ballpark of mobile RTX 3060-3070.
No, no, of course I'm happy with Apple's GPU power in their slim laptops. I would love perhaps 50% more GPU power and dedicated ray tracing (this is the future, after all) for better light rendering. Perhaps the equivalent of a GTX 1060 in a MacBook Air (Apple GPU…).

I'm okay with a dedicated GPU drawing more power; it's a dedicated thing, after all. The problem is that Apple's M1 chips do not support dedicated or external GPUs; I wouldn't be able to plug a GPU into an M1 iMac over Thunderbolt even if I wanted to.
I just hope their next update fixes this.

A MacBook Air, iMac and Mac mini are all the same internally. Extremely good for what you get; unfortunately a dealbreaker when it comes to gaming ability (might change in the future) and the lack of external GPU support.

But I welcome Apple's CPUs and GPUs as a market breaker against the stagnant status quo.
 
The M1 GPU uses tile-based deferred rendering (TBDR), whereas most mainstream PC GPUs use immediate-mode rendering (IMR). The two rendering modes are fundamentally incompatible, which is likely why eGPU support is not an option on M1 Macs even though the Mac can “see” the GPU.

AMD and Nvidia GPUs will never be available on M-series Macs due to this incompatibility. Apple will more likely release their own higher-powered GPU options which, given the inherent efficiencies of TBDR over IMR, should keep pace with Nvidia’s offerings.
For things like compute and rasterization, I think Apple will catch up. They haven’t started on ML AA/SS or hardware RT, so how well they can compete there remains to be seen.
 
Good to hear. I'm on the fence for a new iPad mini. How do you like it in general, i.e. screen quality/size, usability, sound, weight, etc.?

Does it hit the "a joy to use" threshold?
So far so good on all. I did a mini review of sorts in the iPad forum, and I am more than pleased with the capabilities this device has out in the field for my workflow. Since then, I have learned to enjoy using it for basic media consumption and some of the more graphics-intensive iOS games. The ”jelly” issue just isn’t something that impacts my photo or video editing. I do see it (when swiping through photos); however, the M1 MacBook Pro does it even more and still doesn’t bother me. It doesn’t impact my video editing one bit.

Never had an iPad this small before, but with the full iPadOS and the speed of this system, multiple applications and full drag-and-drop support are awesome for a portable workflow. Since I often shoot events with a photo vest or cargo pants on, the pockets on both offer plenty of space for this device. I am honestly considering an iPhone 13 mini for my main phone now; the Pro models not having USB-C kills any pro productivity for me (my camera won’t connect via Lightning).

biggest benefits
- USB-C
- screen size
- processor speed
- build quality
- full iPadOS support
- screen image quality
- on screen keyboard is nice (typed this post using it)

biggest drawbacks
- screen could be brighter
- 512GB should be a storage option
- Touch ID button location is not ideal.
- could use a touch more battery life
 
A company that downplays its product's performance and doesn't advertise theoretical, perfect-lab-conditions results? That has my respect…
 
I think there's more to it than flat out perf, though. Race-to-sleep is a big part of power efficiency, and the better the peak perf, the more time the SoC will spend in a lower power state. Perf-per-watt is another measure, and the A15 does better there as well. Sure, many people would like to see software be able to flex the A15 flat out, but like automobiles, most people don't use their things that way. Most folks don't buy a 300hp car and then drive with their foot to the floor nonstop, which is why auto manufacturers try to tune for partial throttle efficiency.
Interesting point. It got me to thinking that consumer and pro chips are likely optimized to maximize efficiency at different points in the load curve. For instance, chips used in computer clusters do indeed spend most of their time running at 100% (assuming the cluster is heavily used, which has been the case for every cluster I've ever used). They should thus be optimized to maximize their efficiency at maximum load, while chips made for consumer devices could give up some efficiency at max load in order to optimize it at average load.
 
In terms of GPU and ML performance, Apple leads the industry by a wide margin. Unfortunately, none of this performance will help Siri. Apple has lost an edge there and they are unlikely to recover it any time soon. But their computational photography etc. is not bad.
I recall reading that a big reason the digital assistants from Google, Amazon, etc. had outpaced Siri was that weaker internal user-privacy protections enabled those companies to make extensive use of user data, giving them far larger training sets. By contrast, Apple's privacy guidelines disallowed that.
 
Man, oh man, you shoulda heard Linus Tech Tips whine and carry on about how Apple made “vague claims” in its iPhone 13 keynote. As is often the case for Linus: so confident, so wrong.
 
A company that downplays its product's performance and doesn't advertise theoretical, perfect-lab-conditions results? That has my respect…
The best type of obsolescence is when a vendor improves a product by so much, you're compelled to upgrade. I'm hoping we get a decade of Apple Silicon advancements that fit this category.
 
I recall reading that a big reason the digital assistants from Google, Amazon, etc. had outpaced Siri was that weaker internal user-privacy protections enabled those companies to make extensive use of user data, giving them far larger training sets. By contrast, Apple's privacy guidelines disallowed that.

I am sure the large bodies of training data available to less privacy-focused companies have contributed to it. But there might also be an issue of talent. Google, for example, has always had a very strong focus on language technology and general AI. Apple's recent excellence in ML seems to be more associated with topics such as photography and audio processing.
 
Does anyone know a way to contact AnandTech? I’d like to ask them to cover the video hardware encoders and decoders, to confirm whether the A15 really has VP9 and AV1 hardware decoding.

Also, I’d like them to do a little research into whether the NAND used for the new iPhone 13’s storage is TLC or QLC, because QLC memory has a much shorter lifespan.
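On the Mac side at least, you can ask VideoToolbox what it exposes; a rough Swift sketch (it reflects what the OS surfaces rather than the die itself, so it's only a proxy for the teardown-style answer):

```swift
import CoreMedia
import VideoToolbox

// macOS sketch: asks VideoToolbox whether a hardware VP9 decoder is exposed.
// This reports OS-level support, not the silicon directly, so it can't fully
// settle the A15 question the way an AnandTech analysis would.
let vp9Hardware = VTIsHardwareDecodeSupported(kCMVideoCodecType_VP9)
print("Hardware VP9 decode reported by VideoToolbox: \(vp9Hardware)")
```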
 