When I buy Apple products, I expect the very best and fastest hardware possible. The 5 months the M1 has been out have already "eaten" into the longevity I expect. I buy Macs because I can use them for YEARS and still get fast performance; otherwise I could just buy a Windows PC that will only last 3-4 years. Therefore the M1 is already a step back for the new iMac.
And what chip should Apple have put in it? The one that doesn't exist yet???

Apple had two options:

1) Release an entry-level iMac with the best Apple Silicon that they have available now (which is very capable for the intended market segment) -> and get the benefits of those sales starting in May

2) Wait until they have a better chip in mass production (July-October?), lose all those sales, and put in a chip that is more powerful (and more expensive) than 90% of entry-level iMac users will ever need.

If you are in the business of making money for Apple, the answer should be obvious.

This iMac is clearly not for you... wait a few more months and buy the next one, which will have the newer, faster chip.

Expect it to cost quite a lot more than the iMac 24, and it may be available only in the larger iMac, for which you will pay even more.
 
The M1's RAM actually sits outside of the SoC/processors. There are images posted online showing the LPDDR4X RAM ICs sitting next to the M1 SoC package.
It’s sitting next to a chip that has M1 written on it. BUT that RAM is all a part of the package as seen in Apple’s Mission Implausible :)
 
So many posters seem interested only in whether the number of cores matches a Threadripper, or in the number of GB of RAM. These, whilst important, are secondary; it's performance that counts.
Because a lot of people seem weirdly obsessed with benchmarks and d*ck-swinging contests instead of actually benefiting from this performance when running their applications, and actually "doing stuff".
 
Did you read this article?


They are turning the PC sales model on its head, yet again.
Yes, and there’s really nothing Intel or AMD can do about it without cratering their bottom lines. As the industry shifts more and more to mobile (Apple’s customers aren’t the only ones preferring laptops to desktops), Apple will be able to continue to extend their lead as Intel and AMD continue to offer crippled mobile processors just to keep propping up their high dollar range on desktops (and those “laptops” that have to be plugged in to work at full power).
 
Broadly speaking, this is a significant reason why M1 Macs are more efficient with less RAM than Intel Macs. This, in a nutshell, helps explain why iPhones run rings around even flagship Android phones, even though iPhones have significantly less RAM. iOS software uses reference counting for memory management, running on silicon optimized to make reference counting as efficient as possible; Android software uses garbage collection for memory management, a technique that requires more RAM to achieve equivalent performance.
People copy this around like crazy as justification for the M1 needing less memory than Intel Macs, and it is complete hogwash.

Reference-counted memory management saves memory versus systems which use tracing garbage collection, like Java, which historically uses a generational collector (a copying collector plus mark-and-sweep). This is a significant reason why native iOS apps require less memory than equivalent Android apps.

However, the same data takes approximately the same amount of memory on Intel and M1-based Macs. Reference counting is typically immediate; the M1 may speed up the operation of a program, but its optimizations do not let it free any additional memory once it is no longer needed, or free it any sooner within program execution.
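To illustrate the "immediate" part, here's a minimal Swift sketch (class name and sizes are mine, purely illustrative): under ARC, deinit runs the instant the last strong reference goes away, so the memory is returned right away instead of waiting for a collector pass.

```swift
final class ImageBuffer {
    let bytes: [UInt8]
    init(size: Int) { bytes = [UInt8](repeating: 0, count: size) }
    deinit { print("freed \(bytes.count) bytes") }  // runs the moment the refcount hits 0
}

func render() {
    let scratch = ImageBuffer(size: 64 * 1024 * 1024)  // 64 MB working buffer
    _ = scratch.bytes.count
}   // last reference gone -> "freed 67108864 bytes" prints here, immediately

render()
// A tracing GC (as on Android) reclaims the same buffer only at some later
// collection pass, so the heap needs extra headroom in the meantime.
```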

Potential examples which _would_ impact the need for memory (note: many of these are speculation):
- faster flash storage access enabling more efficient use as virtual memory
- changes in page size managed by the MMU and backed by virtual memory
- the unified memory model eliminating duplication of resources between the GPU and CPU memory sets (see the sketch after this list)
- the GPU design allowing for expensive resources such as textures to be swapped out as virtual memory or to be treated as compressed data through usage.
- convincing early adopters that they should use the native Safari browser rather than Chrome
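To make the unified-memory bullet above concrete, here's a minimal Metal sketch (my own illustration of the general idea, not a description of Apple's internal design): with a shared-storage buffer, the CPU and GPU address one allocation, so nothing is duplicated between "CPU memory" and "GPU memory".

```swift
import Metal

guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device available")
}

// One allocation visible to both CPU and GPU -- no duplicate copy needed.
var samples = [Float](repeating: 0.0, count: 4096)
let shared = device.makeBuffer(bytes: &samples,
                               length: samples.count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

// The CPU writes directly into memory the GPU will read. With a discrete GPU
// you'd typically allocate .storageModePrivate and blit a copy across, paying
// for the data twice (once in system RAM, once in VRAM).
shared.contents().assumingMemoryBound(to: Float.self)[0] = 42.0
```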
 
Yes, and there’s really nothing Intel or AMD can do about it without cratering their bottom lines. As the industry shifts more and more to mobile (Apple’s customers aren’t the only ones preferring laptops to desktops), Apple will be able to continue to extend their lead as Intel and AMD continue to offer crippled mobile processors just to keep propping up their high dollar range on desktops (and those “laptops” that have to be plugged in to work at full power).
This is a classic example of "The Innovator's Dilemma". When Intel came out with the Celeron line they couldn't sell it - not that there weren't people interested in buying a lower cost Intel chip, but that their existing sales people refused to sell something that had a lower commission, and most of their sales process was geared toward the higher margin server market anyway.

They basically had to spin up a good portion of a second company focused on design and sales of the chip to make it work (which likely meant some interesting internal competition for recognition, for resources, etc).

Intel and AMD _can_ solve the problem, but it would almost require creating another company to focus on the mobile and ultraportable space. Otherwise, it is simply too hard to overcome the incentives toward the status quo.

Maybe Intel's new leadership is smart enough to recognize the importance of that gamble, albeit over 10 years too late. AMD's leadership likely is perfectly happy trying to eat away at Intel's slice of the revenue pie rather than take that sort of initiative.
 
NO. Real professionals would not stipulate any number of cores. Real professionals are interested in performance, not how many cores or RAM in a 'my thing is bigger than yours' ego trip. Real professionals have multiple uses for computers, some real professionals will be more than happy for the new iMac if it serves the professional use they put it to. Other professionals might require more powerful machines, but I doubt any REAL professional stipulates it has to have more cores, more ram, etc. because real professionals are interested in whether any new computer will serve their needs.
To add to these excellent points, "real professionals", i.e. businesses, are very concerned with cost, and performance/$.

My clients are always looking for bang for buck, and to size compute resources efficiently. I've just saved a customer over $6500/month by making their cloud database smaller, i.e. fewer cores, less memory. Their database was too large for their workload, and sometimes "less is more"!

ARM-based servers are going to see a sharp increase in usage once older x86 hardware is phased out during refresh cycles in data centers. The performance of modern ARM processor cores is getting close to (or sometimes exceeding) Intel & AMD for many workloads, and they cost significantly less, whether as capital hardware costs or as cloud service costs (because their purchase and running costs are lower).

AWS Graviton2 instances are a good example. Some of them are half the cost of Intel Xeon instances of a similar size.

You choose the smallest and cheapest option that will do the job well enough for the task at hand.
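As a toy illustration of that sizing rule (instance names, specs, and prices below are illustrative approximations, not live quotes), the selection logic is simply "cheapest option that clears the measured requirement":

```swift
struct Instance {
    let name: String
    let vCPUs: Int
    let memoryGB: Int
    let hourlyUSD: Double
}

// Illustrative candidates; check real pricing before deciding anything.
let candidates = [
    Instance(name: "m5.2xlarge (x86)",        vCPUs: 8,  memoryGB: 32, hourlyUSD: 0.384),
    Instance(name: "m6g.2xlarge (Graviton2)", vCPUs: 8,  memoryGB: 32, hourlyUSD: 0.308),
    Instance(name: "m6g.4xlarge (Graviton2)", vCPUs: 16, memoryGB: 64, hourlyUSD: 0.616),
]

// Requirements come from measuring the workload, not from "bigger is better".
let neededCPUs = 8
let neededMemoryGB = 24

let best = candidates
    .filter { $0.vCPUs >= neededCPUs && $0.memoryGB >= neededMemoryGB }
    .min { $0.hourlyUSD < $1.hourlyUSD }

print(best?.name ?? "nothing fits")  // the cheapest adequate option wins
```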
 
It’s sitting next to a chip that has M1 written on it. BUT that RAM is all a part of the package as seen in Apple’s Mission Implausible :)
Correct. It's part of the package, but not part of the silicon die on which the CPU/GPU cores and other gubbins live.

You can see why the M1 is limited to 16GB. Until the individual chips with >8GB capacity are widely (and cheaply) available, the maximum RAM is limited to 16GB by the size of the SoC package.

It follows that the next-gen M2 will need a significantly larger package in order to offer 32GB or more RAM, or will need to use a newer, denser generation of RAM chips.
 
So many comments about external monitor support, but the iMac is an AIO? You might have a need for more external monitor support on a MBP etc., but so many seem so keen on buying an AIO and then wanting another monitor?
You give up a lot of versatility when you get a desktop instead of a laptop. Why would someone do that? One common reason is that you're a power user and want more performance for the money than you can get with a laptop. And many of these power users like to run multiple monitors. E.g., suppose you'd like to run three 27" screens, and you want the central one to be full Retina to be easier on the eyes, which means 5K. Since 5K monitors are expensive, maybe you'll put a 27" 4K on either side.

Doing that with a 27" iMac costs you about $2000 less than doing it with a 16" MBP. With the 27" iMac, you don't need to shell out an extra $1300 for the 27" 5K LG monitor (current B&H price). And a lower-end iMac is as powerful as a top-end 16" MBP, yet will cost you (very roughly) about $1k less. Alternately, if you really need the power, then you will definitely want to go with an iMac instead of a MBP. Hence you can see why there is certainly a subset of AIO Mac buyers who would plan on running multiple monitors.
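For what it's worth, here's the rough arithmetic behind that "about $2000" (a sketch reusing the post's own approximate figures, not current prices):

```swift
// Both figures are the post's approximations, not live quotes.
let lg5kMonitor = 1300.0   // 27" 5K LG UltraFine, per the B&H price cited above
let machineDelta = 1000.0  // "very roughly" the 16" MBP vs lower-end 27" iMac gap

print("iMac route saves about $\(Int(lg5kMonitor + machineDelta))")
// prints: iMac route saves about $2300 -- i.e. "about $2000 less"
```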
 
I never said that a 12-core (8+4) or a 16-core (12+4) A14-based SOC won't beat the 18-core Skylake.

My original point (it got lost after so many quotes and replies) is that this year Apple is going to launch the M1X, which is going to be a different class of SOC made specifically for the "pro" Macs.

The M2 cannot be such a processor. Why? Because a 12-core or a 16-core SOC based on the A14 (or even A15) architecture won't scale low enough to be put inside an iPad or a fanless MacBook Air. They just do not have the thermals or the battery to support such a beast.

I agree with this premise, but I think you're making assumptions (that I'm inclined to agree with) about their naming scheme.

It could also be that the M2 is the higher-end SoC, and other Macs stay on the M1 for now, and only in the third or fourth year does Apple actually segment that line.

 
Suspicions are that Apple will have a minimal number of chips in a generation, and instead use past generations for some devices (same as with their non-Mac devices today). The rest of their differentiation will be binning (e.g. 7 vs 8 active GPU cores on the M1 and A12X/A12Z).

E.g. we might not see an M2 MacBook Air until late 2022, when M3 MacBook Pros are shipping.
Quite probably. Assuming the MBP14 and MBP16 will be released at the same time (or very close to each other), it will be interesting to see what variants of the "M2" are produced.

If they kept an approx 25-30W TDP for the MBP14, then it could well have twice the number of performance CPU cores and twice the GPU cores, which would make it a formidable machine. It would be interesting to see whether a MBP16 with a 50-80W TDP would use the same-sized SoC with more cores crammed in, or whether there could be different-sized SoCs within the same generation, e.g. an M2 with up to 32GB RAM and a larger M2 Pro with up to 64GB and more CPU/GPU cores.
 
The whole PC world is holding its breath again waiting for Apple to release the next “big thing”
I can affirm that nothing since the iPhone will change the whole future of an industry the way the M2 will.

Microsoft will scramble to announce its next Windows-on-ARM version right after Apple releases the M2.

Every OEM in the world will want to start the transition to ARM, and the ARM market will reach Apple's performance in 2-3 years.

In 5 years, nobody will be selling x86.

“We are not afraid of Apple coming to the mobile business”
Nokia dixit...
“We are not afraid of Apple coming to the car business”
Volkswagen dixit...
“We are not afraid of Apple coming to the chip business”
Intel didn’t dixit XD
 
“We are not afraid of Apple coming to the mobile business”
Nokia dixit...
“We are not afraid of Apple coming to the car business”
Volkswagen dixit...
“We are not afraid of Apple coming to the chip business”
Intel didn’t dixit XD
"It's a f***ing computer"
Blackberry said when they examined the first iPhone.
 
So many comments about external monitor support, but the iMac is an AIO? You might have a need for more external monitor support on a MBP etc., but so many seem so keen on buying an AIO and then wanting another monitor?
Of course. Why would I want only one monitor? I would actually love it if Apple sold a monitor that looks exactly like an iMac, to put them side by side. A 4.5K monitor at a reasonable price would be great. It's not just an all-in-one; it's also just a Mac that comes with a keyboard, a mouse and an excellent monitor.
 
One reason the M1 proves to be a very efficient processor is that it has no non-unified memory outside the processor: unified memory. Add external RAM and that efficiency may be reduced. So many comments compare the 8GB and 16GB options to upgradeable RAM, but the efficiency of the RAM is important.
You get the speed gains from a limited amount of RAM packaged with the processor. If you want 64 GB of RAM, and there are people who want that, or if you want 1.5 TB of RAM, and Apple has actual customers who bought that, you still get most of the speed gains with a small (8 to 16 GB) amount of RAM on the chip. It's called a "cache". The 8 or 16 GB that is the complete RAM on an M1 will be an enormous L3 cache on the high-end chip.

What makes M1 fast is that it has unified memory on the chip, not that it has no non-unified memory outside the chip. Apple decided that M1 is low-end, so they could get away with excluding the hardware to access memory outside the chip, and live with a 16 GB limit instead. For a high-end chip, limiting RAM to what fits on the chip is an absolute NO. Doubling the on-chip memory to 32 GB wouldn't be enough for a high-end chip.
 
I never said that a 12-core (8+4) or a 16-core (12+4) A14-based SOC won't beat the 18-core Skylake.

My original point (it got lost after so many quotes and replies) is that this year Apple is going to launch the M1X, which is going to be a different class of SOC made specifically for the "pro" Macs.

The M2 cannot be such a processor. Why? Because a 12-core or a 16-core SOC based on the A14 (or even A15) architecture won't scale low enough to be put inside an iPad or a fanless MacBook Air. They just do not have the thermals or the battery to support such a beast.
This just shows again that there is nothing but confusion about naming. Just forget about "M1X" or "M2" and call it a "high-end processor", no matter what Apple calls it. We have this year's low-end M1 and hopefully will get this year's high-end chip, whatever it is called, which will be HUGELY faster than the M1. Twice or three times faster. And you can choose to buy a Mac with the low-end chip or the high-end chip, and the high-end one will cost you a lot more. Next year we will get an improved low-end and an improved high-end chip. They will be 10 or 15 percent faster than this year's chips, and there will still be a low-end and a high-end chip.

Unlike the iPhone, where you had one chip which got better every year, you will have two chips with HUGELY different performance and cost, both getting better every year.
 
I just downloaded the source code to Kubernetes onto both my fully maxed-out 2019 16" MBP (which has the 2.4 GHz i9, 64GB RAM, etc.) and my M1 13" MBP (16GB RAM). The only thing not maxed on the Intel box is the SSD, which is 2TB (rather than the potential 8TB max).

Fresh source code trees on both machines. I killed every app running, waited for the CPUs to level off, and built Kubernetes from scratch on both machines.

i9: make 1736.99s user 445.69s system 928% cpu 3:54.97 total
M1: make 819.16s user 156.12s system 434% cpu 3:44.71 total

On both machines, all cores were heavily utilized. Not only was this entry-level, low-powered M1 faster than a maxed-out i9 MBP, it was significantly more responsive to window movements. The cost of that 16" was multiple times the cost of the M1.

When the M2 machines come out later this year? They're going to absolutely smoke the Intel MBP.
In my experience with open-source/Linux-based build systems, there's a lot of single-threaded code involved. For example, the build system takes ages figuring out what to build, all single-threaded, with my old MacBook reporting 25% CPU usage, and then it starts building at 80% CPU usage for a very short time. I don't build Kubernetes, but I do build some libraries, and they would build a lot faster with higher single-threaded performance. (They would be HUGELY faster if the build system itself were multi-threaded.)
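A toy Amdahl's-law calculation makes the point (the phase durations are assumptions for illustration, not measurements from my builds): the single-threaded phase puts a hard ceiling on how much extra cores can help.

```swift
import Foundation

let serialSeconds = 60.0     // assumed single-threaded configure/dependency scan
let parallelSeconds = 300.0  // assumed perfectly parallel compile phase

for cores in [1, 4, 8, 16] {
    let wall = serialSeconds + parallelSeconds / Double(cores)
    let speedup = (serialSeconds + parallelSeconds) / wall
    print("\(cores) cores: \(Int(wall))s wall time, "
          + String(format: "%.1fx speedup", speedup))
}
// Even with 16 cores the speedup stalls near 4.6x -- faster single-core
// performance is the only way to shrink the 60s serial phase.
```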
 
I need an M-series Mac mini server with more than 16GB of RAM! I know it's very efficient for normal Mac computing, but I always run 2 or 3 VMs for testing, and 16GB is just too little. ;( Yes, I know Windows machines can take more RAM, but Windows is nowhere near as stable as macOS!
 
I'm very happy with my M1 and it does everything I ask of it. I don't see a reason to need an M2 or M3 at this time, in the same way I probably don't need a new iPhone every year, because they are all so fast now.
It’s funny how some people dismiss the M1 as low-end while it delivers up to 3.5x faster system performance, up to 6x faster graphics performance, and up to 15x faster machine learning, while enabling battery life up to 2x longer compared to previous-generation Macs.
 
Who said the M2 is destined for iPads?
If it is not... then what will you put inside the next iPad Pro?

Can you define a coherent roadmap for future Apple products if you call the "pro" processor "M2"? What is going to happen when the M3 is released?

No sarcasm or anything. I'm genuinely asking these questions. I'll give you my version of what I think the roadmap will be for the next 3 years:

2021
M1 iPad Pro
A15 iPhone
M1X (12-core M1 + strong GPU) MacBook Pro and iMac Pro

2022
A16 iPhone
M2 (A16X) MacBook Air, Mac mini and iMac

2023
M2 iPad Pro
A17 iPhone
M2X (12-core M2 + strong GPU) MacBook Pro and iMac Pro

Can you imagine a different roadmap that still makes sense based on what we know so far? If so, please elaborate, I would love to explore different scenarios.
 
You get the speed gains from a limited amount of RAM packaged with the processor. If you want 64 GB of RAM, and there are people who want that, or if you want 1.5 TB of RAM, and Apple has actual customers who bought that, you still get most of the speed gains with a small (8 to 16 GB) amount of RAM on the chip. It's called a "cache". The 8 or 16 GB that is the complete RAM on an M1 will be an enormous L3 cache on the high-end chip.

What makes M1 fast is that it has unified memory on the chip, not that it has no non-unified memory outside the chip. Apple decided that M1 is low-end, so they could get away with excluding the hardware to access memory outside the chip, and live with a 16 GB limit instead. For a high-end chip, limiting RAM to what fits on the chip is an absolute NO. Doubling the on-chip memory to 32 GB wouldn't be enough for a high-end chip.
The M1 RAM is only "on the chip" in the sense that it is on the same package. It is not on the same die as the CPU & GPU cores in the way that L2 (& usually L3) cache is. I'm not sure it is correct to call the LPDDR4X RAM "L3 cache"; AFAIK it behaves exactly like ordinary DDR4 RAM, interfacing with memory controllers on the silicon die, but with both in the same package, which shortens the connection paths and no doubt speeds things up.

Whether Apple Silicon will support a combination of on-package and standard (soldered or DIMM) RAM remains to be seen. I would agree that it would be a hard sell to get future Mac Pro users to shell out Apple prices for 1.5TB of soldered RAM, so it's certainly possible that this hybrid RAM solution would be available on the Mac Pro...but maybe only on this machine. I can see them limiting a future iMac to 64GB of on-package RAM.
 