I see a lot of people selling their 16" to pick up a 13"
Having had a 13-inch and then upgraded to a 16-inch, I really don't think I would want to make the trip back to the 13-inch. The screen has spoiled me, and the speakers are something else.

I think I will be waiting until next year, when we have a 16-inch M-series machine, before I consider anything. My current machine is great, if a bit noisy when I do anything graphically intensive, but hey, I have silence to look forward to.
 
Here is what Apple didn't discuss at the November event, and what happens if Intel is ready to launch a CPU built on 7nm.



View attachment 1668982

You're going to need to provide a link to the original Intel page on which this graphic appeared. I would be astonished if Intel were ready to launch a 7nm competitor to the M1 (with 3x the performance) within the next 18-24 months. Anyone with basic graphic design skills could produce this image.
 
How the heck is Apple so far ahead in performance? It's incredible how much of a lead they have; it's like alien technology.
 
I think it's far more than that, actually. A LOT of companies use Macs for development, and a lot of them are dependent on VMs and virtualization like Docker. Losing those customers is probably more significant than you think.
Apple has consistently sold 50% of Macs each year to folks that are entirely new to the platform. That’s 9 million new people, and 9 million upgraders. If there are 9 million devs that don’t upgrade, that “might” be significant. Otherwise, just part of the yearly churn.
 
For quite some time, Apple was ahead, but only in single-core. That started with the iPhone 5S, which had the first 64-bit ARM chip in a phone. Before that, Apple was behind in both single-core and multi-core. It took a number of years until it started beating other ARM chips in both single-threaded and multi-threaded workloads. Although the experience was better, in more intensive workloads Apple was behind. OK, that's several years ago, but don't say always...
Eight years ago: the iPad 2 vs. some Android with double the cores.

Didn't matter that the Android's specs wiped the iPad's off the map. The Android simply couldn't keep up. More RAM. More cores. More GHz per core. **** OS that couldn't use anything efficiently, even if we want to pretend that the Android's processor was better at the time.
 
I think it's far more than that, actually. A LOT of companies use Macs for development, and a lot of them are dependent on VMs and virtualization like Docker. Losing those customers is probably more significant than you think.

I see no reason why the M1 would prevent Docker or VMs in general working.

Docker runs fine on ARM processors already -- I've used it myself on a Raspberry Pi 4. Parallels are apparently working on a release to support virtualisation of ARM-compatible operating systems. Debian, Arch, Ubuntu Server, etc. all have ARM releases.
 
It’s safe to say that the rumored 12-core version of the M1 will easily reach 16-core Intel Xeon territory.
 
At this point, the difference between the M1 and traditional x86 is so insanely wide it's not even funny.

And this is the M1! Their first crack at it. The stuff they have in their R&D lab must be terrifying.
 
Something I'm puzzled about.
It was always the cheap, low-end machines whose graphics systems had to share graphics memory with main memory.
Of course, you then also had the problem that memory you wanted for programs was taken to be used by the graphics chip.
We then moved on to higher-end graphics cards with their own super-fast dedicated memory, so they stopped hitting the processor's memory.
The processor could get on with what it was good at, with its own memory, and the graphics card could power ahead with its own dedicated memory too.
This allowed graphics performance to storm ahead.
Now we seem to be going back to shared memory again.
Can anyone explain why this is not a step backwards?
A few reasons that I can think of:

1) The balance between power consumption and performance - how much CPU performance do "most people" need, and how much battery life (or thermal envelope) do "most people" want. Apple has decided that the SoC with integrated GPU provides enough GPU performance while keeping the power consumption (and heat) low - which means long battery life and fast CPUs that aren't thermally throttled.

2) Memory bandwidth on the SoC is now good enough to provide good graphical performance without requiring a separate pool of fast (and expensive) Video-RAM.

3) Architectural efficiency - not having to copy graphical data from CPU to GPU memory and back again saves time and memory usage (no need to keep two copies, even if only temporarily)

4) Cost - it is cheaper to put GPU on the same chip than to have an entirely separate GPU with interconnections.

We used to have "co-processors" for some mathematical operations. These disappeared as CPUs became more capable. Integrating the GPU onto the same silicon die is just an extension of this trend. Quite possibly the discrete GPU will become a specialized peripheral for desktops.

Bear in mind that consoles like the PS5 and Xbox Series X have GPUs similar in performance to an NVIDIA RTX 2080, and these use single SoCs similar in concept to the Apple M1 (but running at 180-200W TDP).
 
I see no reason why the M1 would prevent Docker or VMs in general working.

Docker runs fine on ARM processors already -- I've used it myself on a Raspberry Pi 4. Parallels are apparently working on a release to support virtualisation of ARM-compatible operating systems. Debian, Arch, Ubuntu Server, etc. all have ARM releases.
This is true, but it is undeniable that there is a much larger ecosystem of Docker images for x86 than there is for ARM. This may or may not be relevant for building net-new images, but Docker's strength lies in being able to extend existing images from a repository. You *can* create multi-arch images (see the rough sketch at the end of this post), but it's a hassle that will put many people off.

Same story with VMs - unless you are already deploying to ARM, most existing VMs will be for x86/64.

It would be a big shift for a lot of developers.

One saving grace might be the availability of ARM-based cloud services (e.g. AWS Graviton-based m6g instances). The availability of ARM client machines might accelerate the adoption of ARM servers.

I expect that ARM & x64 will co-exist for a long time, and many development houses will need both.
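
For anyone who hasn't tried it, here is a minimal sketch of that multi-arch workflow, assuming a recent Docker install with the buildx plugin and a registry you can push to (the image name example/myapp is hypothetical):

# Create and select a builder that can target multiple platforms
docker buildx create --name multiarch --use

# Build the same Dockerfile for x86-64 and ARM64 and push a combined manifest
docker buildx build --platform linux/amd64,linux/arm64 -t example/myapp:latest --push .

# Check which architectures the pushed image actually contains
docker buildx imagetools inspect example/myapp:latest

As far as I know, whichever platform isn't native to the build host goes through QEMU emulation, so cross-builds are slower and occasionally flaky - that's a big part of the hassle mentioned above.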
 
I'm not sure you can necessarily ascribe the trajectory of Mac sales figures so directly to their underlying platform choice - there are likely many other factors contributing to those too, including wider macroeconomic factors - but overall, yes, I agree. Apple needs to do something to trigger another wave of Mac sales growth and get out of this rut. When I watched the event one of my first thoughts was that this is going to have wider impact on the industry, as the performance characteristics (especially as Apple announce the higher-end options in due course) may be so far ahead of the competition that it drives greater adoption of Mac in market segments that are still largely Wintel-oriented.
Apple just had one of their best quarters of Mac sales in history. There is no rut.

I do agree with you that this will have a profound impact on the industry. The issue for other manufacturers is that it is not a quick pivot to change your underlying architecture - it's a multi-year program. It will be interesting to see if Microsoft ramps up their ARM-based Surface computers.
 
The iPad Pro runs off 6GB of RAM. What's the obsession with RAM on non-Intel processors?

Depends on what you do with the device. As an example, on my MBP I occasionally need a lot of RAM for development, since it can involve running multiple virtualized components. My current MBP has 16GB, and for my next one I definitely want at least 32GB.
 
The report on the Cinebench R23 benchmark doesn't state which Mac was tested. If it was the MacBook Air, then this would be pretty promising, because it hints that the single-core results for an actively cooled MBP or Mini would likely equal the Ryzen 5 or 7 with Zen 3. In multi-core, of course, the AMD chips beat it due to their larger number of full-speed cores, but at 3-9x the TDP.
 

Thanks.
This is what everyone needs to see.
The cold, hard reality, as opposed to glossy marketing, one synthetic benchmark, and graphs with no actual details on them.

Just to note:
I am fully aware, and must accept, that Apple has some of the best chip designers in the industry working for them.
However, I will also say I don't believe all the other chip designers working for Intel, AMD, Nvidia, etc. are stupid in comparison to Apple's designers.

I worry too many people are starting to think all other chip designers are dumb in comparison to Apple's and don't know what they are doing.

Having REAL results from a range of real-world, general software is what we want to see to be able to answer this for us all.
 
At this point, the difference between the M1 and traditional x86 is so insanely wide it's not even funny.

And this is the M1! Their first crack at it. The stuff they have in their R&D lab must be terrifying.
Definitely. It’s the first time I’m actually interested in an Apple laptop. :)

Hopefully Apple doesn't stumble on their own (like underestimating the power efficiency of their chips and ending up designing laptops with even smaller batteries).
 
If c. 1750 vs. c. 1100 (single-core Geekbench 5) is "marginally faster", then I guess a 59% increase in your salary would only be "marginal".
I'm talking about multicore score / number of cores. Multicore is the only measure of relevance here.

EDIT: Like this:

Mac mini (Late 2020)
Apple M1 @ 3.2 GHz (8 cores) 7643

Mac mini (Late 2018)
Intel Core i7-8700B @ 3.2 GHz (6 cores) 5476

7643 / 8 = 955
5476 / 6 = 913

A 5% difference for a machine that is 2 years newer. So what is the hoopla about, again??
 
I know this is only one data point, but I'm impressed by Apple!

But it's very embarrassing for Intel that its pace of innovation has slowed so dramatically that an ARM chip can now run emulated x86 code faster than a native x86 processor.

Too bad Apple won't let the processor run x86 code outside of macOS, like in Boot Camp, for example.

Besides support for legacy code, what is the point of x86 anymore? I wonder if the instruction set is what is really holding Intel back in terms of processor redesign.
 
Well, I don't know if the memory was shared or not...

but the second-generation MacBook Air had a dedicated NVIDIA GPU in addition to the Intel integrated GPU... and it was great.
Ahh, you're absolutely right. I'd forgotten about that oddity. However, according to Everymac, that NVIDIA GPU shared 256MB of RAM with main memory, so while it wasn't integrated with the CPU, it also didn't use dedicated graphics memory.
 
Depends on what you do with the device. As an example, on my MBP I occasionally need a lot of RAM for development, since it can involve running multiple virtualized components. My current MBP has 16GB, and for my next one I definitely want at least 32GB.

Understood, but that won't provide you double the efficacy - I can render video on my iPad Pro with no choppiness, while my Intel MacBook can only manage it under severe duress, and it has three times the RAM of my iPad Pro.
 