If it is in fact M2 and not M1X, I can see that:

2021 - M1 Low end, M2 High end
2022 - M2 Low end, M3 High end etc…

I still think M1X makes more sense though.
Why not just have variations within each generation though? M1 with 4+4 CPU / 8 GPU, and M1 with 8 + 4 / 16 GPU cores etc.

I read some comments from people with industry connections that suggest this is how it will roll.

Ultimately, the name is irrelevant. As long as the performance moves up in big leaps, it doesn't matter. The naming pattern will become clear within the next generation or two.

Personally, I am looking forward to a second generation "M2" that has some architectural improvements over the M1 (e.g. faster single core speed, higher frequency etc.) plus more CPU cores and much better GPU performance.
 
Almost pulled the trigger on the current Mac Mini M1 😂
I've been using one very happily since December and am impressed by its performance. I doubt another Mini will come out this year (I expect a 2-3 year cycle as previously), so I'm happy with my purchase. For the price, it is a phenomenally good computer.

My next Apple computer will be the smaller MacBook Pro...let's see if this year's offerings are a big leap over the previous top-end Intel models. If so, I may be tempted. If not, I'll wait.
 
The way I think this is gonna go is the same way the iPhones used to go.

M series every 2 years, X series in the in-between years.

M1 Macs right now; the M1X releasing this summer in the 14" and 16" MacBook Pros, and in the iMac Pro in the winter. Then 2022 will see the M2 introduced with the Mac Pro, and that chip will also go into the M-series Pro line, while the M1X ships in the 2022 MacBook Air, 13" MacBook Pro, and iMac...then 2023 will see the M2X release in the 14" and 16" MacBook Pros and, that winter, in the iMac Pro. 2024 will then see the M3 in the Mac Pro.

After that I don't think we will see another refresh until 2026.

In the meantime the A-series chips will be relegated to iPhones only, and the iPad Pro will follow suit with the M-series lineup throughout all of this, with the budget-end iPads getting the previous year's M chip.
 
I would not go crazy about the processor until we know the real specs of the chip. I am hoping the M2 will be paired with a better graphics chip. If not, the current MacBook Pro 16 with the AMD 5600M mobile GPU could beat it in head-to-head testing, including multithreaded applications.
This is my purchase parameter too. If the next-gen chip (whatever it's called) can get very close or beat the Radeon 5600M in the top-end MBP16, and have approx 50% better CPU performance than the 8-core i9, but in a 14" body, I'll probably buy it.

This is quite a tall order I think - making an integrated GPU match or beat the best available dedicated mobile GPU - but if they do it, it would be a great achievement.
 
Well, I doubt that. The M1 is an A14 variant, which was released 2 months after the iPhone.
The M2 will be an A15 variant, so it will most likely be released around the same time.

Apple will want to keep it on a similar schedule to the iPhone, as there would be no logic in the iPhone getting the A15 while the Mac has to wait a long time. Nah, I don't see it like that.
In fact, if anything, as the Mac is more demanding, I could see the Mac getting it first around summer (WWDC or so) and the iPhone just 2 or so months later.
I imagine that the CPU/GPU core design of the A15 and the next Apple computer SoC (tentatively "M2") is the same, so their releases will be quite close. The computer chip may come before the phone chip, or vice versa, depending on the volumes and complexity required to make them.
 
I doubt SD cards are a common enough feature for the iMac to need one at this point. Why waste the space on something a small fraction of users need?
I wonder what the fraction is? How many Mac users have cameras (or drones) with SD cards? I have several cameras, but I expect most people just use a phone these days...the quality is getting seriously good even with those tiny sensors.
 
Why do you think an M2 can't also have a 4+4 CPU core and 7/8 GPU core variant, but with more performant cores?

Similarly, the M1 cores could be scaled up as well.
I agree. The "M<x>" will be the generation of the chip, with incremental architecture improvements and single-core performance increases each generation (once per year?).

Within each generation there will be variations with more or fewer cores, differing GPU capability, and different power envelopes. Pretty similar to Intel & AMD.
 
I hope the bigger iMac will have more ports. I mean, a desktop shouldn't have to compromise on ports. An SD card slot at least?
Same. I mean, it's a desktop - I don't care how thin it is, I want tons of ports for versatility and an internal power supply. Hopefully the new 27" iMac replacement will be drastically different from the new 24" iMac, but I'm not optimistic. Standing by to order a top-spec 2020 iMac if the new one is as bad as the new 24".
 
My 2015 MacBook Air has a black bezel. Self-adhesive plastic; I bought it off eBay for about £5. Best thing I ever did. I'm sure they will be available for the iMac as soon as someone has one to measure up.
That assumes the next iMac has bezels at all. It would not surprise me if the bezels are gone on the former 27in. Putting a 3-4mm wrap around the front of the case and installing the monitor from the rear of the AIO along with the boards has a lot of benefits for constructing the AIO. It only needs 3-4mm to hold the screen in place, would do away with the argument about black/white bezels, and would also make the iMac far easier to mass-produce without the need for glueing in the screen, which can be a nightmare if you want any upgrades.

Having the unit just set into a wrapped frame is much easier for automated production. The M1 iMac, in my opinion, is like all first attempts...a learning curve.

So I'd expect changes in the next iMac, especially if it's a larger former-27in. device, as changes to construction there on a new M2 platform would be more distinguishable.
 
I use *only* Safari and have had no difficulties with it. I suspect external monitor support improvements are coming soon. The Thunderbolt port on the iPad Pro is a hint of things to come.
So many comments about external monitor support, but the iMac is an AIO? You might need external monitor support on an MBP etc., but so many seem so keen on buying an AIO and then wanting another monitor?
 
It's quite possible that M1, M2, M3 will simply be the generation of each processor, similar to A13,14, 15 or Intel's 9th,10th, 11th generation core-i chips.

Within each generation there will be different specifications with differing capabilities in terms of CPU & GPU cores and other features of the SoC.

So we might get a 12- or 16-CPU-core M1 (8 performance/4 efficiency, or 12 performance/4 efficiency) that would be more powerful than an entry-level M2, etc.

We would expect each generation to have incremental improvements in architecture, so an M2 core might be 10-20% faster than an M1 core, but 8 M1 cores would have better multi-core performance than 4 M2 cores.
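
To make that trade-off concrete, here is a toy calculation in Swift; the 15% per-core generational uplift is just an assumed figure for illustration, not a known spec:

```swift
// Toy model of the generational trade-off described above.
// The 15% uplift is a hypothetical figure, not a known M2 spec.
let m1CorePerf = 1.0
let m2CorePerf = m1CorePerf * 1.15      // assumed ~15% faster per core

let eightM1Cores = 8.0 * m1CorePerf     // 8.0 units of throughput
let fourM2Cores  = 4.0 * m2CorePerf     // 4.6 units of throughput

// More older cores still win on parallel workloads:
print(eightM1Cores > fourM2Cores)       // true
```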

Whether Apple uses generation-name suffixes like M1X, M1Z, etc. remains to be seen. Some people who may have inside information say that this is unlikely.
Suspicions are that Apple will have a minimal number of chips in a generation, and instead use past generations for some devices (same as with their non-Mac devices today). The rest of their differentiation will be binning (e.g. 7 vs. 8 GPU cores active on the M1, or the A12X vs. A12Z).

E.g. we might not see an M2 MacBook Air until late 2022, when M3 MacBook Pros are shipping.
 
I don't see how the M2 could beat the discrete 5600M graphics in the 16" MacBook Pro. I think it will take a discrete GPU or an M3 to match the speed, and the same goes for replacing the 5700XT in the iMac...Apple is at least 1-2 years behind in terms of graphics performance.
Discrete desktop graphics has followed an evolutionary trajectory that is at odds with most of the industry.
The latest batch of desktop chips from AMD and Nvidia (the 5700XT belongs to the previous one) pulls 250-300+W.
Performance in this segment has to a large extent been bought with increased power draw, which is why the iMacs have down-clocked their higher-end GPU options to bring them to a more reasonable point on the frequency vs. power curve.
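
As a rough illustration of why down-clocking pays off, here is a sketch using the standard dynamic-power approximation P ∝ f·V², with the crude assumption that voltage scales with frequency; the numbers are illustrative only:

```swift
// Rough dynamic-power model: P ∝ f · V². Since voltage must rise
// roughly with frequency, power grows close to cubically with clock.
func relativePower(clockRatio f: Double) -> Double {
    let voltageRatio = f                     // crude assumption: V scales with f
    return f * voltageRatio * voltageRatio   // P ∝ f * V^2 ≈ f^3
}

print(relativePower(clockRatio: 0.8))  // ~0.51: a 20% downclock ≈ half the power
```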

The M1 GPU draws some 10W or so in the fan cooled implementations.

Even if Apple is going to provide a discrete GPU (which I'll doubt until I see it), where will it end up in terms of power draw? Will they provide a solution in the 300-500W span? Or will they offer something that can be cooled quietly in a somewhat larger iMac enclosure, say 50W or so?

(Unless they go for the higher power-draw solution, I can't see why they would go discrete. The Xbox Series X has a die size of 360mm² and the PS5 310mm², manufactured on TSMC 7nm and sold for $499 in boxes with 16GB of GDDR6, fast 1TB SSDs, and Blu-ray drives. Given what TSMC has said about 5nm process yields, Apple should have no issues getting good yields on SoCs that would be Very Performant Indeed, and laugh all the way to the bank when people pay their margins.)

I don't see Apple responding to the scaling problems of lithography by increasing power draw indiscriminately, so I doubt that they will produce GPUs that compete directly with Nvidia's 3090, for instance. If that is the "graphics performance" you refer to, it seems likely that they will stay behind. And for people doing desk-side molecular dynamics, that's a bit of a pity. But on the whole, it may be the saner path.
We'll see how the chips fall within a year or so.
 
On the contrary, I'd bet that the next generation will have RAM outside the processor, so it's basically unlimited. 8GB or 16GB on chip will be used for a huuuge cache, which gives >90% of the speed advantage. The M1 has no RAM outside the processor because that makes the design simpler; that's fine for low-end Macs but not for the high end.
One reason the M1 proves to be a very efficient processor is because it does not have RAM outside the processor. Unified memory. Add external RAM and that efficiency may reduce. So many comments about 8GB and 16GB in comparison to upgradeable RAM, but the efficiency of the RAM is important.
 
Maybe, or maybe the M2 IS enough of a jump for the pro users. For all anyone here knows, it's a 16- or 32-core monster. No one in the industry expected the A7 to be a 64-bit monster or to be released as soon as it was.

There's absolutely nothing that says Apple couldn't double (or quadruple) the performance of the M1 and call it an M2. Just like there isn't anything that says they won't do the same thing again and call it an M3.

They already sell the M1 in a configuration with a core disabled. They can do the same thing, and potentially reduce the clock speed, in a year when they're ready to move the processor down to the lower-end machines where they need better thermals and battery life.
Tend to agree, although the jumps you mention are major. The M1 was the start of this particular evolution into Apple-designed silicon, and it's likely that testbed will germinate fast improvements in the next generation of chip, as no amount of workshop testing gives the sort of feedback that consumers provide. Whether it's called M1X, M2, or Pinocchio1 doesn't really affect what the chip does, and I doubt any of us know yet.

I do, though, expect that the testbed M1 will provide a platform for a bigger proportional jump in the next chip, as Apple will have learned so much from their transition from Intel with the M1.
 
Why not just have variations within each generation though? M1 with 4+4 CPU / 8 GPU, and M1 with 8 + 4 / 16 GPU cores etc.
They do. They have for example the A14 and M1. These chips do have some differences, but surprisingly there are a lot of "M1" features left in the A14. For example, the iPhone and iPad Air chips supposedly have virtualization and paravirtualization hardware support, as well as the alternate memory ordering mode used to accelerate Rosetta 2. These are all disabled on boot.

There are limitations to the M1 outside of cores: 16GB max memory, a limited number of thunderbolt ports, max two displays (e.g. internal and external). These are normally generational changes.

You can also only throw so many items on a memory or I/O bus before it can't keep up, and designing a bus for say 32 cores when you are also shipping a 4 core design is extremely wasteful/inefficient (and may impact things like required memory architecture and thus cost).

This becomes even more of an architectural impact for chips built with a unified memory architecture like Apple's.
 
I just heard the Apple SoC won’t have anywhere near the number of processors as even one of the lowest end supercomputers. What kind of snake oil are those guys selling???
So many posters seem only interested in the number of cores matching Threadripper, or the amount of GB of RAM. These, whilst important, are secondary: it's performance that counts.
 
The way I think this is gonna go is the same way the iPhones used to go.

M series every 2 years, X series in the in-between years.

M1 Macs right now; the M1X releasing this summer in the 14" and 16" MacBook Pros, and in the iMac Pro in the winter. Then 2022 will see the M2 introduced with the Mac Pro, and that chip will also go into the M-series Pro line, while the M1X ships in the 2022 MacBook Air, 13" MacBook Pro, and iMac...then 2023 will see the M2X release in the 14" and 16" MacBook Pros and, that winter, in the iMac Pro. 2024 will then see the M3 in the Mac Pro.

After that I don't think we will see another refresh until 2026.

In the meantime the A-series chips will be relegated to iPhones only, and the iPad Pro will follow suit with the M-series lineup throughout all of this, with the budget-end iPads getting the previous year's M chip.

I think the budget-end iPads will stick with the A-series chips going forward, with the M series being strictly in the Pros; it's a way of differentiating the 11" Pro from the Air.

It's great for marketing: if a customer in an Apple shop sees the Air and the 11" Pro next to each other, it sounds a lot better from their perspective that the Air has an iPhone chip in it and the Pro has a Mac chip in it, along with the other extra bells and whistles.
 
One reason the M1 proves to be a very efficient processor is because it does not have RAM outside the processor. Unified memory. Add external RAM and that efficiency may reduce. So many comments about 8Gb and 16GB in comparison to upgradeable RAM, but the efficiency of the RAM is important.
The M1's RAM actually sits outside of the SoC/processors. There are images posted online showing the LPDDR4X RAM ICs sitting next to the M1 SoC package.
 
I don't see how the M2 could beat the discrete 5600M graphics in the 16" MacBook Pro. I think it will take a discrete GPU or an M3 to match the speed, and the same goes for replacing the 5700XT in the iMac...Apple is at least 1-2 years behind in terms of graphics performance.
Well, the current 8-core M1 GPU is only very slightly behind the AMD Radeon 5300M in Metal benchmarks. If it had twice the GPU cores, and scaled linearly, it would just beat the 5600M.

I think it possible that an M2 could have 16 GPU cores, which might even be slightly improved on the current ones. If the memory bandwidth were also increased, it is certainly conceivable that the next Apple Silicon generation could beat the AMD 5600M.

Bear in mind that the PS5 and Xbox Series X both use integrated CPU/GPUs (admittedly with >180W TDP) and are a lot faster than any laptop discrete GPU (and a lot of desktop GPUs), so there is no inherent reason why an integrated GPU can't scale to similar performance, within the thermal limits of the platform.
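
As a back-of-the-envelope sketch of that scaling argument (the benchmark scores below are assumed placeholders for illustration, not measured results):

```swift
// Back-of-the-envelope GPU scaling, assuming perfectly linear scaling
// with core count. Both scores are assumed placeholders, not real data.
let m1GpuScore8Core  = 20_000.0   // hypothetical 8-core M1 Metal score
let radeon5600MScore = 38_000.0   // hypothetical Radeon Pro 5600M score

let perCore = m1GpuScore8Core / 8.0
let hypothetical16Core = perCore * 16.0   // 40_000 in this toy model

// Linear scaling would just edge past the 5600M -- in practice memory
// bandwidth has to scale too, or the extra cores sit starved.
print(hypothetical16Core > radeon5600MScore)  // true
```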
 
I don't see how the M2 could beat the discrete 5600M graphics in the 16" MacBook Pro. I think it will take a discrete GPU or an M3 to match the speed, and the same goes for replacing the 5700XT in the iMac...Apple is at least 1-2 years behind in terms of graphics performance.
For one, Apple will likely optimize for specific tasks. A GPU shader version vs. a native hardware version of, say, an AV1 encoder will likely be no contest.

That aside, the reason you have 300W GPU cards is that the cores do not do much cross-communication and are thus far more parallel than, say, CPU cores. Outside of protecting against GPU monopolization in their unified memory architecture, I imagine the number of GPU cores will be based on how much power they want to draw and how much heat they want to give off - i.e. very similar to discrete cards.

The difference is that now Apple, not Intel, is setting that performance target.

Apple may also have deferred some performance optimizations this generation, as Metal support for M1 Macs required new hardware features for backward compatibility.

I have no delusions that Apple is going to have something competitive to gamer-class GPUs in their designs. I also have no expectation of Apple designs being capable of dealing with the heat and power draw of such GPUs even if they could source the parts for free ;-)
 
Some professionals want/need the best multicore performance they can get. It's not hard to saturate even 32 cores in many professional workflows.

If Apple released a top-of-the-line iMac in 2021 that didn't soundly beat the top-end Intel iMac Pro from 2017 in every metric, especially in multicore performance, then the Apple Silicon endeavor could rightly be called a fiasco by the professionals who want the fastest Mac they can get.
NO. Real professionals would not stipulate any number of cores. Real professionals are interested in performance, not in how many cores or how much RAM, in a 'my thing is bigger than yours' ego trip. Real professionals have multiple uses for computers; some will be more than happy with the new iMac if it serves the professional use they put it to. Other professionals might require more powerful machines, but I doubt any REAL professional stipulates it has to have more cores, more RAM, etc., because real professionals are interested in whether any new computer will serve their needs.
 
Maybe Apple is rewriting the rules of the game by making it so that I don’t have to eat so much to begin with?

I am still trying to figure out how exactly the M1 chip works, but it seems that it allows PCs to make do with less RAM than we would normally be accustomed to. Apple seems to be pretty good when it comes to managing these sorts of bottlenecks.
Unified RAM
 
Maybe Apple is rewriting the rules of the game by making it so that I don’t have to eat so much to begin with?

I am still trying to figure out how exactly the M1 chip works, but it seems that it allows PCs to make do with less RAM than we would normally be accustomed to. Apple seems to be pretty good when it comes to managing these sorts of bottlenecks.
You may have seen this one, but I think this has a lot to do with it. If you need 32 gigs, then you need 32 gigs, no doubt about it. But for the vast majority using 1-2 gig chunks at a time, they’d be able to do more with 8 gigs than a comparable Intel system.

Retain and release are tiny actions that almost all software, on all Apple platforms, does all the time. ….. The Apple Silicon system architecture is designed to make these operations as fast as possible. It’s not so much that Intel’s x86 architecture is a bad fit for Apple’s software frameworks, as that Apple Silicon is designed to be a bespoke fit for it …. retaining and releasing NSObjects is so common on MacOS (and iOS), that making it 5 times faster on Apple Silicon than on Intel has profound implications on everything from performance to battery life.
Broadly speaking, this is a significant reason why M1 Macs are more efficient with less RAM than Intel Macs. This, in a nutshell, helps explain why iPhones run rings around even flagship Android phones, even though iPhones have significantly less RAM. iOS software uses reference counting for memory management, running on silicon optimized to make reference counting as efficient as possible; Android software uses garbage collection for memory management, a technique that requires more RAM to achieve equivalent performance.
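
For the curious, here is a minimal Swift sketch of what deterministic reference counting buys you; the class name and sizes are made up for illustration, and the actual retain/release calls are inserted by the compiler:

```swift
// Minimal illustration of reference counting (ARC). Names and sizes
// are made up for the example. deinit fires the instant the last
// strong reference goes away -- no garbage collector, no heap headroom.
final class PixelBuffer {
    let bytes: [UInt8]
    init(size: Int) {
        bytes = [UInt8](repeating: 0, count: size)
        print("allocated \(size) bytes")
    }
    deinit {
        // Memory is reclaimed here, deterministically, so the app
        // needs less spare RAM than a GC'd runtime would.
        print("freed \(bytes.count) bytes")
    }
}

func render() {
    let buffer = PixelBuffer(size: 1_000_000)  // retain: refcount = 1
    _ = buffer.bytes.count
}                                              // release: refcount = 0 -> deinit

render()
print("buffer already freed here")
```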
 
NO. Real professionals would not stipulate any number of cores. Real professionals are interested in performance, not in how many cores or how much RAM, in a 'my thing is bigger than yours' ego trip.
I'd go farther and say real professionals are interested in productivity and not in performance metrics. Performance metrics are what marketing uses because it is hard to convince someone the new hardware will for example "save them 2 hours a week".
 