And yet, 99% of people buying Macs either won’t know it’s in there or won’t care. And 90% of the remaining 1% won’t be using their Macs for anything that couldn’t already be done efficiently on the last generation of Macs.
Apple has a tendency to keep newer features to newer hardware (which is extremely obvious). You just can’t see it now. But maybe 4 years from now, Intel Macs will be treated as second-class citizens, and that’s when you’ll notice the value of having Apple Silicon.
 
Totally average bit of journalism by MacRumors. For example,

What's Different About the M1​

Unlike Intel chips built on the x86 architecture, the Apple Silicon M1 uses an Arm-based architecture

Is that it? Is that really the best you can do?

EDIT:
I see a couple of people have posted negative reactions. I can only assume they either haven't read my post properly or work for MacRumors. I am not being critical of the M1 chip or Apple. I am being critical of the missed opportunity to explain how ARM's RISC design will always triumph over the complex instruction set used by Intel. As a (now retired) computer journalist, I followed the development of RISC during the 1980s and 1990s, when Joel Birnbaum worked at IBM's T.J. Watson Research Center and then joined Hewlett-Packard to create PA-RISC. Bio: https://ethw.org/Joel_S._Birnbaum
I have to agree that the article has a bit of an Apple marketing spin to it.
While we are all well aware of and excited about the Mx's potential, are there really no downsides that need to be looked into?
 
I feel something very interesting will be revealed now.

Basically the question is: can Intel actually produce much better (define better how you wish) chips if it really needs/wants to?
One could easily argue, and many have, that Intel, simply due to lack of competition, dragged their heels over the past decade or two.
And, whilst it's something I don't like, who can blame them?

Think about it. If you sell a product, have no real competition, can put in minimal effort each year, and still sell your product and make lots of money, then why would you as a business do anything else?
You'd be stupid to do so. Why spend billions to develop new things to destroy your current products when you won't make THAT much more money?

We all know there have been and are companies that do exactly this.
Even Apple has been accused of holding back something it could easily do today, so they can add it in next year and use it to sell more products.

So, with all that said, Apple making the M1 is going to be excellent for everyone, I'm sure.
Apple will produce amazing designs, so that keeps Apple folks happy.
And we will hopefully see what Intel can actually do when it's faced with either MAJOR change or losing out.

The only sad part (good for Intel) is that Apple is such a tiny fraction of the world's computer market that, unless someone does something like an M1 for Windows (a W1 ;) ), I'm not sure it will have enough of a negative effect on Intel for them to really care.
 
I feel something very interesting will be revealed now.

Basically the question is: can Intel actually produce much better (define better how you wish) chips if it really needs/wants to?
One could easily argue, and many have, that Intel, simply due to lack of competition, dragged their heels over the past decade or two.
And, whilst it's something I don't like, who can blame them?
Complacency is indeed a factor. The thing is, these decisions take many years, and if they're wrong, they take all those years to finally come back to bite you.

Around 2005-2015, they were hard to compete with in the laptop, desktop and server areas. It wasn't really clear until the early 2010s that their choice to reject Apple's contract to make the iPhone CPU may have been a major mistake.

As I understand it, they let go of some engineering staff thinking they were doing well enough without them.

Then, on top of that, they ran into issues moving from 14nm to 10nm, having to postpone over and over. Despite that, they (originally) made the choice to not come up with a newer microarchitecture (after 2015's Skylake) for their 14nm CPUs, thinking a few minor changes would be enough to buy time. Those minor changes resulted in the many weird families of Kaby Lake, Amber Lake, Comet Lake, Whiskey Lake, Cascade Lake, Cooper Lake.

They didn't ship any 10nm at all until the 2018 weird one-off Cannon Lake 8121U.

Things have actually started looking a lot better since 2019: Ice Lake in 10nm started ramping up (indeed, the early-2020 MacBooks Air and Pro use it). Then they shipped Tiger Lake (which Apple hasn't yet used, and maybe never will), which comes with quite a performance boost. Tiger Lake is way ahead of AMD in single-core performance per watt. It's only when you use a much higher-wattage AMD CPU that they beat Intel on single-core performance.

However, Tiger Lake still doesn't scale up to higher-end laptops, much less desktops and servers. Desktops will soon instead get the weird retrofit Rocket Lake architecture, which is still 14nm but inherits many improvements Tiger Lake has had, such as PCIe 4.0. It's only for the generation after that that things become more interesting: Alder Lake is slated to unify the line-up, bringing it to 10nm, and also adding a heterogeneous setup where some cores are high-performance and others are low-power.

I'm guessing even with Alder Lake, they'll be slower than the Apple M1. AMD will be, too. And Qualcomm? Not even playing the same game.

TL;DR: Intel made strategic mistakes out of complacency, because they were pretty damn good for many years. On top of that, they had engineering issues with 10nm, which lost them several years. They're slowly recovering, and it's unclear if that will be enough.
 
Great post, thanks.
One thing I wish would be made clearer in headlines is this track (process node) size topic.
Many know it's not as clear as headlines make out, but vastly more don't understand.
I'll admit, I would like it explained more to me.
It's not as simple as Intel using 10nm tracks, someone else using 7, and someone else using 5.
The reality of chip design is simply not like that.
Headlines, even on these forums, will say something like "Apple is using 5nm while Intel is having trouble with even 10nm", and readers lap this up.
Different parts/aspects of a chip's tracks are at one size, other aspects at another.
One company's 10nm may be equivalent in some ways to someone else's 7.
But this does not make good headlines.
I'm not defending Intel, but it's been said their track size is, in some areas, the same as the lower number from someone else.
This is never explained properly, and I wish it were, as we're being misled by headlines.

To put it another way, Intel could not simply take a 14nm chip design, hand it over to, say, TSMC, and have them bang out a 5nm version of it just because Intel can't do it themselves.
 
Can't wait to ditch this ****** lemon of a computer (2018 Mac mini) and get one of those M1 models. Video has problems recognizing my monitor, bluetooth sucks, thermal throttling... etc...etc... Worst Mac I have ever owned. I hope this new mini is better.
If you think yours is a lemon, I’m still on the 2011 MacBook Pro.
 
One thing I wish would be made clearer in headlines is this track (process node) size topic.
Many know it's not as clear as headlines make out, but vastly more don't understand.
I'll admit, I would like it explained more to me.

Well, I'm not an electrical engineer.

I would say the absolute numbers have become more of a marketing "we have the next generation" thing than an objective fact. So in recent years, they're more of a placeholder for how far behind/ahead a manufacturer is.

That said, a smaller number enables more transistors to fit in the same area, which in turn means you can either do more processing in the same area, or use less area for the same amount of processing. But on top of that, due to physics, you also produce less heat.
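
As a rough, purely hypothetical illustration of that area scaling (the node numbers below are treated as if they were real linear feature sizes, which, as discussed above, they aren't):

```swift
import Foundation

// Back-of-the-envelope sketch only: node names are marketing labels, so the
// numbers here are hypothetical. If a linear feature size really shrank from
// 10 "nm" to 7 "nm", area per transistor would scale with the square of the ratio.
let oldNode = 10.0
let newNode = 7.0
let densityGain = pow(oldNode / newNode, 2)
print(String(format: "≈ %.2fx more transistors in the same area", densityGain))
// Prints ≈ 2.04x; real-world density gains differ per foundry and per design.
```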

One company's 10nm may be equivalent in some ways to someone else's 7.

Yes, that's correct.

However, in the 14nm era, Intel was still in their tick-tock cycle, where one CPU generation would shrink the process, and another would then add a new microarchitecture. With Intel's 10nm being late (according to their own slides), this assumption stopped working. They shipped the 22nm Haswell microarchitecture in 2013, the 14nm Broadwell shrink in 2014 (incidentally, the basis of the ill-fated Core M in the 2015 12-inch MacBook), and the Skylake microarchitecture in 2015, but failed to deliver 10nm in 2016. Instead, they had to keep shipping small improvements to Skylake, still on 14nm, and will continue to do so until at least 2021, even on high-end CPUs.

It doesn't really matter that other manufacturers not only did ship 10nm (and have since gotten around to 7nm and 5nm); it matters that Intel's roadmap was too optimistic and rigid.

To put it another way, Intel could not simply take a 14nm chip design, hand it over to, say, TSMC, and have them bang out a 5nm version of it just because Intel can't do it themselves.

It's an interesting scenario that likely wouldn't happen. ;)
 
Basically the question is: can Intel actually produce much better (define better how you wish) chips if it really needs/wants to?
One could easily argue, and many have, that Intel, simply due to lack of competition, dragged their heels over the past decade or two.
And, whilst it's something I don't like, who can blame them?
Just having faster processors isn't going to cut it anymore. Apple moving the CPU, GPU, RAM, etc. into the SoC made it more efficient and quicker than anything currently on the market.

IMHO, in order to compete with Apple's SoC, Intel and AMD will have to create their own SoCs, which will completely upend the entire x86 business model and how x86 customers think about and upgrade their computers.
 
Ordered my base Mini yesterday. Couldn't bear my gradually slowing 2015 MBA any longer. Still interested in where Apple takes the redesign of the MBP, so using this little machine in the meantime. Really looking forward to the speed boost over my dual core i5 with 4 gigs of RAM ;)

Me too.

Really looking forward to more powerful SOCs from Apple as well as a redesigned MBP.

No need to keep TDP at 10W... how high will Apple go for the next round? 20-25W? Or all the way to 30-40W?
 
Timing-wise, does anyone have a guess as to when Apple will tape out the M2?

Will Apple use N5 or N5P for their next Apple Silicon? N5P should enter production in 2021.

“[TSMC] is preparing a new N5P node that’s based on the current N5 process that extends its performance and power efficiency with a 5% speed gain and a 10% power reduction.”

 
What Apple came out with is impressive, to say the least. But it is only one side of the coin. Without 3rd-party developers utilizing all that these SoCs offer, the benchmarks only tell half the story. I would really like to see Adobe and ON1 squeeze every last drop out of that system. Otherwise, it doesn't make sense to buy just yet.
 
All the devs will eventually come to the M1... in a year or two, there will be no new Intel machines any more.
Sooner or later, they'll have to adopt the newer silicon chips.
 
What Apple came out with is impressive, to say the least. But it is only one side of the coin. Without 3rd-party developers utilizing all that these SoCs offer, the benchmarks only tell half the story. I would really like to see Adobe and ON1 squeeze every last drop out of that system. Otherwise, it doesn't make sense to buy just yet.

Adobe and Microsoft are coming soon.

Docker, Fusion and Parallels are being updated.

The software transition seems well underway.
 
Perhaps the article should have been titled "Everything We Know" rather than "Everything You Need to Know". In particular, it doesn't explain how the M1's unified memory works (because that's not yet known).

Sure, it's supposed to share RAM between the CPU and GPU. But how does that differ from how Intel/AMD chips with integrated graphics (IACs) work?

Is the only advantage that the RAM is in closer proximity to the CPU and GPU, giving lower latency? Or are there other things going on that allow better coordination (i.e., passing of data) between the CPU and GPU than is seen in IACs?

For instance, is it the case that, in IACs, while the CPU and GPU share RAM, the RAM is partitioned such that the CPU and GPU don't simultaneously have access to the same RAM addresses, while they do have such access in the M1? And/or does the M1 also offer a shared cache for both the CPU and GPU, while IACs do not? [AnandTech suspects the latter: "The M1 also contains a large SLC cache which should be accessible by all IP blocks on the chip."] Etc., etc.
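
For context, here's roughly what "sharing" looks like from the macOS API side. This is only a minimal sketch using a Metal shared-storage buffer, and it deliberately doesn't answer how the M1 differs from IACs, since the same API also works on Intel Macs with integrated graphics:

```swift
import Metal

// Minimal sketch (assumes any Metal-capable Mac): a buffer created with
// .storageModeShared is visible to both the CPU and the GPU with no explicit
// upload or blit step.
guard let device = MTLCreateSystemDefaultDevice() else { fatalError("No Metal device") }
let count = 1024
let buffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

// The CPU writes straight into the buffer...
let ptr = buffer.contents().bindMemory(to: Float.self, capacity: count)
for i in 0..<count { ptr[i] = Float(i) }
// ...and a compute kernel dispatched on the same device reads those same bytes.
// Whatever the M1 does differently (proximity, bandwidth, the shared SLC) sits
// below this API surface, which is exactly the part that hasn't been explained.
```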
 
It is not worth purchasing an Intel Mac mini or 13-inch MacBook Air at this point because the M1 chip outperforms all of the chips used in Apple's portable devices and it is only outclassed by desktop processors.
Unless you have a piece of software that won’t be rewritten for Apple Silicon, and you don’t feel like waiting for Rosetta 2, or you run Windows natively?
 
I have my eyes set on the MacBook Pro M1 with 512 GB. Currently, I have an Early 2015 13-inch MacBook Pro. Honestly, I could keep it going for a few more years, save the money, and get the Mac laptop I truly want, which is the 16-inch. The waiting is what I can't deal with right now. LTT did a review of the Pro and, to be honest, the lines have been blurred with the Air, but for me it feels like the logical choice to upgrade to: brighter screen (not by much), better speakers (I heard the difference) and Touch Bar (lol).
I suspect the jump now would be like moving from a G3 PowerBook to the G4 Titanium PowerBook you have in your signature.
 
I suspect the jump now would be like moving from a G3 PowerBook to the G4 Titanium PowerBook you have in your signature.
For what I do, the Early 2015 is still quite speedy. It's just that I keep reading all these reviews and I can't stand it anymore, lol! I want some of the fun too!
 
Hi, I've seen that the 'entry-level' M1 MacBook Air has a 7 Core GPU, whereas the other model has an 8 Core GPU. What is the difference between the two? Is there an article explaining the difference? Thanks
 
Can't wait to ditch this ****** lemon of a computer (2018 Mac mini) and get one of those M1 models. Video has problems recognizing my monitor, bluetooth sucks, thermal throttling... etc...etc... Worst Mac I have ever owned. I hope this new mini is better.
My M1 Mini has known problems with Bluetooth and with recognizing my 27" LG monitor, according to Apple Support and my own experience. The problems might be addressed in the upcoming Big Sur update. Not impressed so far, but I had squeezed all of the life I could out of my late 2013 Pro.
This Mini is a huge frustration. But my experience with my Motorola-based Mac, which I bought just before the Intel machines launched, told me that software developers will stop supporting the Intel machines soon enough. So, I'm stuck for now with a very glitchy machine that doesn't find its own Apple Bluetooth keyboard unless it's plugged in, and is burning through Apple mouse batteries because of all of the connection/reconnection issues.
 
Hi, I've seen that the 'entry-level' M1 MacBook Air has a 7 Core GPU, whereas the other model has an 8 Core GPU. What is the difference between the two? Is there an article explaining the difference? Thanks
Not a big deal, depending on what you are using it for.
If you've got some time, go through some of the MaxTech YouTube videos with MacBook Air M1 reviews and tests; they show games, rendering benchmarks, video benchmarks, etc.
Compared to the Air with the 8-core GPU (and all the others, for that matter), the performance penalty in GPU-only scores is, for practical purposes, almost linear: about 12.5% (1/8) lower synthetic GPU benchmark scores, as the quick arithmetic below shows.
The Mac mini and MBP M1s are not that far ahead overall; their benefit is in sustained high-stress sessions because of their active cooling system (which some say happens to be overkill for the most part, with the fans spinning far down even in a sustained session).
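
To put the "almost linear" bit in numbers (the score is made up, and linear scaling with core count is an assumption):

```swift
// Purely illustrative: a hypothetical 8-core GPU benchmark score, scaled down
// by one core under a linear-scaling assumption.
let eightCoreScore = 1000.0
let sevenCoreScore = eightCoreScore * 7.0 / 8.0          // 875.0
let penaltyPercent = (1.0 - sevenCoreScore / eightCoreScore) * 100.0
print("\(penaltyPercent)% lower")                        // 12.5% lower
```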

If you were looking to save some dollars, I don't think 7 vs 8 GPU cores is that much of a deal; however, 16GB of RAM (I use 64GB on an iMac) and storage, I do think, will have more weight on the long-term usability of it.

My .02 cents
 