Something interesting to me is that all the Intel processors score about evenly regardless of generation or clock speed. A 27" iMac with an i9 @ 3.6 GHz performs about the same as a MacBook Pro with an i7 @ 2.3 GHz. There might be a software limitation affecting the different chips. I'd like to see performance comparisons with Adobe products or something to really understand what's happening.
This is because Intel's single-core architecture and performance haven't advanced in several years, so we've had essentially the same architecture at almost the same frequencies across several processor iterations. The only differences have been in multicore. For example, my new Mac has the same single-core speed as my 2017 Mac, but it's twice as fast in multicore. More importantly for me, the video card is twice as fast and has 16GB of VRAM, as I need 11-12GB of VRAM for a couple of apps. Since those apps also need 10-15GB of RAM, I'll of course need an Apple Silicon Mac with 32GB or more of RAM to replace my current setup. I'm expecting to replace my new Mac in 2-3 years, so I'm sure I'll get that and even more by then.
 
I’m not very qualified to guess. I don’t know a lot about GPU architecture. It does seem like if you are dedicating some RAM that would otherwise be available for the CPU to GPU usage, then that’s less RAM for the CPU to use. On the other hand, the CPU might have been using an equivalent amount of memory (or more) anyway, for use in communicating back and forth with the GPUs, so UMA would be an improvement.

Anyway, the tl;dr is that it's outside my experience, so I would only be guessing.
I think I can give a bit of an idea on this (albeit as a dev and user, not as an architecture expert).

At least in the realm of games, resources are sometimes duplicated if the CPU wants access to them, because querying results or assets back from the GPU stalls the whole system. The two sides run basically staggered and mostly one way: from application-side logic, results are sent to the GPU to be rendered for viewing (player position, camera position, render that).

For static data, say a texture that won't change, if the application wants to read a specific pixel of a texture (a terrain height map where the player could be), or better yet, to do collision detection at the mesh level, that texture and those vertices will live twice: in system RAM, for the CPU to read and compute with; and in GPU memory, for the graphics card to read and draw those meshes with their textures.

With UMA, my guess is that those potentially duplicated resources could exist just once. I'm not too sure about the actual benefit for collision meshes (since sometimes a simpler, lower-resolution representation is computed and the original duplicate discarded).

On a similar note, a GPU ray tracer could just build its acceleration structure, mesh splitting, etc. on the CPU and... done, it's there ready for the GPU, with no need to send data back and forth.
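
To make the duplication vs. UMA point concrete, here's a rough Metal sketch of the unified-memory case. This is my own illustration, not anything from the posts above: the height-map data and names are made up, and only standard Metal calls (makeBuffer, contents) are used.

```swift
import Metal

// Assumed example: one buffer backs both CPU-side reads (height-map lookups
// for gameplay/collision logic) and GPU-side sampling when the terrain is drawn.
guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device available")
}

// Placeholder terrain height map (would normally be loaded from an asset).
let heights = [Float](repeating: 0, count: 1024 * 1024)

// .storageModeShared: a single allocation visible to both CPU and GPU.
let heightBuffer = heights.withUnsafeBytes { bytes in
    device.makeBuffer(bytes: bytes.baseAddress!,
                      length: bytes.count,
                      options: .storageModeShared)!
}

// CPU-side read, straight out of the same memory the GPU will sample from.
let ptr = heightBuffer.contents().bindMemory(to: Float.self, capacity: 1024 * 1024)
let heightUnderPlayer = ptr[512 * 1024 + 512]
print("Height under player: \(heightUnderPlayer)")
```

On a machine with a discrete GPU, the usual pattern would instead be a .storageModePrivate buffer for rendering plus a separate CPU-side copy for the gameplay logic, which is exactly the duplication described above.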
 
I think I can give a bit of an idea on this (albeit as a dev and user, not as an architecture expert).

At least in the realm of games, resources are sometimes duplicated if the CPU wants access to them, because querying results or assets back from the GPU stalls the whole system. The two sides run basically staggered and mostly one way: from application-side logic, results are sent to the GPU to be rendered for viewing (player position, camera position, render that).

For static data, say a texture that won't change, if the application wants to read a specific pixel of a texture (a terrain height map where the player could be), or better yet, to do collision detection at the mesh level, that texture and those vertices will live twice: in system RAM, for the CPU to read and compute with; and in GPU memory, for the graphics card to read and draw those meshes with their textures.

With UMA, my guess is that those potentially duplicated resources could exist just once. I'm not too sure about the actual benefit for collision meshes (since sometimes a simpler, lower-resolution representation is computed and the original duplicate discarded).

On a similar note, a GPU ray tracer could just build its acceleration structure, mesh splitting, etc. on the CPU and... done, it's there ready for the GPU, with no need to send data back and forth.

Sounds reasonable.
 
Great comment. However, it remains useful to remember that for 99% of the world's computing population, myself included, these machines are more than adequate.

People often overestimate what hardware they require.
I tend to buy the best I can afford, not because I need the latest with the most, but because 5 years (or more) later it still runs pretty well. Buy only what you need, and a couple of update cycles later you've got a slow machine, no matter whose OS and chipsets are in it.
 
Without a full version of Windows

I dare say they have trustworthy stats telling them what percentage of Mac users actually use Windows on their Macs. And they probably judged that potentially losing them as customers is worth getting all the other perks.

I for one don't care for Windows, and I don't personally (in my friends circle) know any Mac users who do (and I know quite a few).
 
The new Intel i9 is said to be very fast!
I wish Apple would make an iMac with it! I would buy it!
This could be the M1 killer!

It might be faster compared to what they have, but at what price in terms of money and energy? As far as Apple is concerned Intel is on borrowed time.
 
That's great, though I'm still not sure I've heard anyone say Microsoft will actually license it and provide it for download.
They will license it. MSFT sells a lot of Windows licenses to Mac users, and those users go on to license things like Access and Visual Studio that are Windows-only. No reason to forgo that revenue.
 
How the heck is Apple so far ahead in performance? It's incredible how much of a lead they have; it's like alien technology.
I think it's because their engineers were told to sit in a room and figure this all out. Where are the performance bottlenecks? Oh, there, let's get rid of that. How? I don't know, figure it out staying inside the box, but you get a bigger box.

It seems everyone else just followed the same old design with the same bottlenecks. Apple also spent who knows how many years learning from iOS devices to perfect their SoCs.

I’m very impressed and very tempted to buy one. I’m almost done with some other financial goals though and I’m tempted to wait until they go on the refurbished store or the “real” MacBook Pros come out. I don’t think I’d need that much power but I do want to see what my options are.
 
Great comment. However, it remains useful to remember that for 99% of the world's computing population, myself included, these machines are more than adequate.

People often overestimate what hardware they require.
Indeed people tend to obsess about the number when they should just estimate their needs better.
 
Here are a few nuggets to think about: Steve Jobs died in 2011. The first iPhone with Apple silicon was released in 2013. Which means Steve was part of the decision to develop their own chips, and that this was part of the roadmap all along...
 
How the heck is Apple so far ahead in performance? It's incredible how much of a lead they have; it's like alien technology.
The iPhone/iPad has quietly had that level of performance for the past 2-3 years already. Just take a 4K video, apply a filter to it in the Photos app, and tap Done. See how fast it edits a 4K video.
 
Here are a few nuggets to think about: Steve Jobs died in 2011. The first iPhone with Apple silicon was released in 2013. Which means Steve was part of the decision to develop their own chips, and that this was part of the roadmap all along...
Apple acquired P.A. Semi in 2008, well within the Steve Jobs era. The first Apple silicon chip was the A4, first used in the original iPad.
 
I think it's because their engineers were told to sit in a room and figure this all out. Where are the performance bottlenecks? Oh, there, let's get rid of that. How? I don't know, figure it out staying inside the box, but you get a bigger box.

It seems everyone else just followed the same old design with the same bottlenecks. Apple also spent who knows how many years learning from iOS devices to perfect their SoCs.

As the CPU guys said in an interview, they were focused on what moved the needle for performance, not what the industry was doing. I suspect a lot of CPU performance ideas are legacy, where CPUs basically plowed through huge data sets and prefetching was really about making sure that there was enough data for the one running program. That covers a lot of scientific computing, where linear accesses are what you want.

In a multi-threaded multi-process environment life becomes really difficult. Every context switch means that you could run into a big cache miss, and I expect that in a multi-process multi-threaded environment your caches are thrashing constantly. That would explain why the L3s tend to be really large.

I wonder if Apple's performance numbers are because they own the OS and hardware, so they could see at every level what was happening. There's a ton of stuff the OS could do to make things better, but you need communication. For example, if you can tag low-performance threads in Xcode, you can move that information all the way down the food chain to the CPU itself, and on the M series that work could be scheduled on an efficiency core by default. In normal code your hints are basically limited to "this variable should be in a register." But that doesn't work in a multi-process environment because, well, what happens if everyone wants a register?

This is all speculation, but this is the kind of stuff that a lot of the *nix profiler-based feedback/optimization stuff was supposed to do. It didn't quite work because it was only at the application level...but if you can see the whole stack (like Apple) that's where things should start.
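
As a small, concrete example of that kind of tagging as it exists in Swift today (my own sketch, not anything from the interview): Grand Central Dispatch quality-of-service classes are precisely this sort of hint, and on the M1 the scheduler can use them to steer work toward efficiency or performance cores. Placement is ultimately up to the OS, and the queue labels and workloads below are made up.

```swift
import Foundation

// Background QoS: long-running, latency-tolerant work, a natural candidate
// for efficiency cores on Apple silicon.
let indexing = DispatchQueue(label: "indexing", qos: .background)

// User-interactive QoS: latency-sensitive work, a natural candidate for
// performance cores.
let uiWork = DispatchQueue(label: "ui-work", qos: .userInteractive)

indexing.async {
    let sum = (0..<10_000_000).reduce(0, +)
    print("Background sum: \(sum)")
}

uiWork.async {
    print("Responding to the user right away")
}

// Keep this sketch alive long enough for the async work to finish.
Thread.sleep(forTimeInterval: 2)
```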
 
People said the same thing to me in 2012 when I maxed my MBP to 16 GB. But 8.5 years later, I'm on the same machine, as fast as it ever was, running complex fill patterns in Crossfire without missing a beat. Some day, 32 GB of RAM will be the new baseline.
Please don't over-extrapolate what I'm saying. My point is that critiquing these machines for not supporting 32GB of RAM is essentially invalid given their status as entry-level machines (albeit entry-level machines that outperform their Pro equivalents in some respects). Their Intel predecessors didn't support 32GB either, and nobody thought twice about that seven months ago when they were released.

I completely agree that someday 32GB will be the baseline. But today is not that day. I would imagine the percentage of Mac purchasers that could make effective use of more than 16GB of RAM today is very limited. The ability to 'future-proof' is good, though, and that's why I'll be happy when the machines that currently support 32GB+ get updated to ASi and continue to support 32GB+.
 
On-die memory: there is no on-die memory. It's in the package, but not on the die. This is easy to see from the actual die photographs that have appeared on Ars (I addressed this claim in another thread and posted the picture). There are a number of LPDDR4X channels with off-chip drivers, so you can even see how the die connects to the off-die RAM. Here's the photo: https://images.anandtech.com/doci/16226/M1.png
Do you think it’s possible for Apple to move the memory out of the package and allow user replaceable memory?
 
These are some amazing numbers. Very curious to see the real-world stuff and how it shakes out in everyday computing. I'm still very excited about the transition!

very interested as well!

Personally, I'm looking to see Jonathan Morrison's take on either of these products in his hands (he uses Logic Pro decently), but I'd also love to peek at the Logic Pro forum here to see how real-world pros run their software and workflows on these.

Once a few positive confirmations come in, I think sales will jump!

Universal software is starting to uptick quite significantly. A sincere thank you to these amazing developers!!
 
People seem to delude themselves that this SoC is the future when it's a specialized solution.
If I didn't know any better, I'd say AMD is paying you commission. I've never seen anyone bang the AMD drum as hard as you do. Also, Apple specifically stated that the M1 is the FIRST step with respect to the future of the Mac. There are still 2 years for the transition to finish. Relax and grab some popcorn.
 
This is because Intel's single-core architecture and performance haven't advanced in several years, so we've had essentially the same architecture at almost the same frequencies across several processor iterations. The only differences have been in multicore. For example, my new Mac has the same single-core speed as my 2017 Mac, but it's twice as fast in multicore. More importantly for me, the video card is twice as fast and has 16GB of VRAM, as I need 11-12GB of VRAM for a couple of apps. Since those apps also need 10-15GB of RAM, I'll of course need an Apple Silicon Mac with 32GB or more of RAM to replace my current setup. I'm expecting to replace my new Mac in 2-3 years, so I'm sure I'll get that and even more by then.

It still seems a little off, though. I know Intel hasn't made great strides with single-core performance, but the frequencies are significantly different. Something about that chart just doesn't make sense to me.

They're close at the links below as well, but it looks like Geekbench doesn't reliably measure performance on macOS. I haven't looked into it much, but I would really expect different results than the graph in this post when comparing chips with turbo frequencies of 4.1 GHz vs 5.0 GHz.

Also, when not limited to macOS, the Geekbench scores make a bit more sense at 1,346 to 1,243.

Geekbench is inconsistent enough on Macs that it doesn't seem like the best benchmark software for comparing these chips. The M1 certainly seems to have outstanding performance, but I'd still like to see how Adobe apps perform before getting too excited.
 
The new Intel i9 is said to be very fast!
I wish Apple would make an iMac with it! I would buy it!
This could be the M1 killer!

Good for the holidays in the northern hemisphere. You know, with it being capable of warming a whole house.
 