Moore's law is about transistor count, so I'm unsure how you extrapolate anything regarding performance or other specs from that.
Why did he focus on doubling transistor count? Because doubling is the metric at which users notice a meaningful change in the underlying technology. Look into the background of Moore's law; he explains it all clearly.
 
Why did he focus on doubling transistor count? Because doubling is the metric at which users notice a meaningful change in the underlying technology. Look into the background of Moore's law; he explains it all clearly.
Yes, but your corollary is… quite a stretch.
 
Why did he focus on doubling transistor count? Because doubling is the metric at which users notice a meaningful change in the underlying technology. Look into the background of Moore's law; he explains it all clearly.
There's actually a separate rule about perception: The Just Noticeable Difference


For weight, it's around 2%.

I'd expect people to notice a computer speed difference somewhere around 5-10%.

Sure, people should generally hold out on upgrading for a performance improvement of about 50-100%, but that doesn't work for some people, and definitely not for some workloads.

The kicker for "Moore's Law" is: transistor count doubling doesn't directly equate to performance:

- more transistors have limited effect on single-core performance except for efficiencies, branching logic, cache memory, etc., so that has been improving relatively slowly: it took about 3 years to double single-core Geekbench scores (2020 to 2023), and before that it took 8 years (2012 to 2020... those were a sad 8 years).

- now, doubling the transistor count mostly just means a lot more parallelization, with direct performance improvements only for code that can utilize it.
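To put rough numbers on those doubling intervals, here's a quick back-of-the-envelope sketch; this is plain compound-growth arithmetic, nothing vendor-specific:

```python
# Annualized growth rate implied by a doubling over N years:
# (1 + r) ** N = 2  =>  r = 2 ** (1 / N) - 1
for years in (3, 8):
    rate = 2 ** (1 / years) - 1
    print(f"Doubling in {years} years ≈ {rate:.1%} per year")
# Doubling in 3 years ≈ 26.0% per year
# Doubling in 8 years ≈ 9.1% per year
```

So the 2020-2023 stretch implies roughly 26% yearly single-core gains, versus roughly 9% per year during the 2012-2020 slump.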
 
I can’t think of any vendor whose CPU design’s single-core scores have doubled between 2020 and 2023.

Apple’s have gone up about 30-40%.

Benchmarks are always rough and kinda pointless.

Geek Bench 6 scores, ignoring price point and form factor:

MacBook Pro (16-inch, Nov 2023)
Apple M3 Max @ 4.1 GHz (14 CPU cores, 30 GPU cores) -> 3128

vs

iMac (27-inch Retina Mid 2020)
Intel Core i5-10600 @ 3.3 GHz (6 cores) -> 1578


... but just looking at MacBook Pros, it would be 4 years:

MacBook Pro (15-inch Mid 2019)
Intel Core i9-9980HK @ 2.4 GHz (8 cores) -> 1383


... halving again was 7ish years earlier:

MacBook Pro (15-inch Mid 2012)
Intel Core i7-3720QM @ 2.6 GHz (4 cores) -> 677


... halving again was 4 years earlier:

MacBook Pro (Late 2008)
Intel Core 2 Duo T9550 @ 2.7 GHz (2 cores) -> 331



It's all kinda meaningless, except that the mid 2010s sucked.
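If you want to turn those scores into doubling times, a quick sketch using the Geekbench 6 single-core numbers above (the formula is just a log ratio):

```python
import math

# Geekbench 6 single-core scores quoted above
scores = {2008: 331, 2012: 677, 2019: 1383, 2023: 3128}

years = sorted(scores)
for a, b in zip(years, years[1:]):
    ratio = scores[b] / scores[a]
    # time to double at this growth rate: gap * log(2) / log(ratio)
    doubling = (b - a) * math.log(2) / math.log(ratio)
    print(f"{a}->{b}: {ratio:.2f}x over {b - a} years "
          f"(doubling every {doubling:.1f} years)")
```

Running it shows the 2012-2019 stretch doubling only about every 7 years, versus roughly every 3.5-4 years on either side — which is the "mid 2010s sucked" point in numbers.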
 
Benchmarks are always rough and kinda pointless.

Geek Bench 6 scores, ignoring price point and form factor:

MacBook Pro (16-inch, Nov 2023)
Apple M3 Max @ 4.1 GHz (14 CPU cores, 30 GPU cores) -> 3128

vs

iMac (27-inch Retina Mid 2020)
Intel Core i5-10600 @ 3.3 GHz (6 cores) -> 1578

OK, but now you’re comparing a midrange CPU from vendor A with a high-end CPU from vendor B.

... but just looking at MacBook Pros, it would be 4 years:

MacBook Pro (15-inch Mid 2019)
Intel Core i9-9980HK @ 2.4 GHz (8 cores) -> 1383


... halving again was 7ish years earlier:

MacBook Pro (15-inch Mid 2012)
Intel Core i7-3720QM @ 2.6 GHz (4 cores) -> 677


... halving again was 4 years earlier:

MacBook Pro (Late 2008)
Intel Core 2 Duo T9550 @ 2.7 GHz (2 cores) -> 331



It's all kinda meaningless, except that the mid 2010s sucked.

Right.

Post-Skylake Intel had very gradual improvements, as they were stuck on 14nm for quite a while, for a number of reasons (mostly, hubris).
 
I concur.

Especially when some manufacturers specifically tailor improvements to optimize benchmarks. Yes, sometimes “improvements” are just gamed for benchmarks and the resulting press coverage.

This is an issue across a variety of industries, not just IT.
This is true, and some phone makers have been caught doing that. (There’s also, of course, the Volkswagen thing.)

But for all the derision it receives, I’d say Geekbench is pretty good, across OSes and architectures, as a relative “how fast will this feel” score. Two things it doesn’t offer are 1) how long can it sustain that speed (iPhone, iPad, MBAir will struggle here), and 2) what if we select for specific cores (for Apple, that would be p-cores and e-cores; Qualcomm now even has three such tiers).

So yes, I think it’s fair to say that between Skylake (2015) and Ice Lake (ca. 2020, but 2021 on desktop), improvements were very gradual. If Apple had also allowed for AMD CPUs, that might have been less painful, but those aren’t great in terms of efficiency either.
 
Yes, but your corollary is… quite a stretch.
Moore may have been speaking strictly about CPUs, as that's what his employer, Intel, was manufacturing, but the observation has been applied to GPUs, RAM, and storage since I started working in IT in the '90s.

Even before the days of Intel and Moore, this was known, whether they invoked a "law" or not. Commodore didn't make a C72 or a C96. The successor to the Commodore 64 was the C128 (and at that time costs were much higher and had a much larger effect on PC demand).

Why do you think Apple sells memory and storage in factors of 2?
Just coincidence?

Why do you think it's universal among international hardware manufacturers that memory and storage are produced, marketed, and sold in units which double?

Anyway, you don't have to believe it; maybe you have some skill that lets you perceive the difference between an 8GB machine and a 9GB one, but typical users and the mass market don't.
 
I really want a Mac upgrade now. I would buy an M4 Ultra Studio this second, but it seems stupid given the current Studio is only an M2, which is two generations behind. Would anyone here recommend I buy it? I don't see any sales.

I'm seeing M3 Pros for about 25% off, which is tempting. I could probably sacrifice one generation for the discount.
 
I really want a Mac upgrade now. I would buy an M4 Ultra Studio this second, but it seems stupid given the current Studio is only an M2, which is two generations behind. Would anyone here recommend I buy it? I don't see any sales.

I'm seeing M3 Pros for about 25% off, which is tempting. I could probably sacrifice one generation for the discount.

As with all computer purchases: waiting is always better, if possible.

Personally, I would try to wait; but I don't see anything wrong with buying an M2 Ultra, if you really need grunt that the M3 Max can't deliver.

I would definitely only buy the Mac Studio from Apple's Refurbished Store.
 
Moore may have been speaking strictly about CPUs, as that's what his employer, Intel, was manufacturing, but the observation has been applied to GPUs, RAM, and storage since I started working in IT in the '90s.
Moore came up with his "Law" in 1965; Intel was founded in 1968, and the first microprocessor was announced in 1971.

Moore's Law was more about economics and process yield than the maximum number of transistors on a chip.
 
Moore may have been speaking strictly about CPUs, as that's what his employer, Intel, was manufacturing, but the observation has been applied to GPUs, RAM, and storage since I started working in IT in the '90s.

Your assertion was: "the corollary to Moore's law states you won't really notice a bump in specs (whether GPU, CPU, RAM, storage, etc.) until it doubles."

Moore's was: "The complexity for minimum component costs has increased at a rate of roughly a factor of two per year."

There are several steps between complexity and performance implications. And the doubling he spoke of is something completely different from the one you are speaking of.
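For reference, Moore's formulation is just exponential growth in component count. A toy projection makes that concrete; the 2-year doubling period (his later revision) and the starting figures here are purely illustrative, not real product data:

```python
def projected_transistors(start_count, start_year, year, doubling_years=2):
    """Exponential projection per Moore's observation (illustrative only)."""
    return start_count * 2 ** ((year - start_year) / doubling_years)

# Hypothetical chip with 1 billion transistors in 2010, projected to 2020:
print(f"{projected_transistors(1e9, 2010, 2020):.2e}")  # 3.20e+10
```

Note the model says nothing about clocks, IPC, or benchmark scores — it's a statement about component counts at minimum cost per component.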

Even before the days of Intel and Moore, this was known, whether they invoked a "law" or not. Commodore didn't make a C72 or a C96.

Moore's observation predated Commodore's computers.

And your "C72 or a C96" part is funny, because Commodore almost did launch a C65.

If what you're talking about is "RAM always doubles", that isn't true either. The M3 Pro MacBook Pro starts at 18 GiB RAM, not 16 or 32. The current iPhone 15 has 6 GiB RAM, not 4 or 8.

But, sure, in some cases, having RAM be a power of two can be beneficial, due to techniques such as dual-channel.


Why do you think Apple sells memory and storage in factors of 2?

They often do not.

And in any case, even if they did, Moore's Law wouldn't be why.



Anyway you don't have to believe it, maybe you have some skill where you can perceive the difference between an 8GB machine and a 9GB, but the typical users and the mass markets don't.

Powers of two being commonplace in computing is unrelated to consumer perceptions.
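For what it's worth, "is this a power of two" is a one-line bit trick; running it over the GB sizes mentioned in this thread shows how many shipping configurations aren't powers of two:

```python
def is_power_of_two(n: int) -> bool:
    # A power of two has exactly one bit set, so n & (n - 1) clears it to 0.
    return n > 0 and n & (n - 1) == 0

for gb in (4, 6, 8, 16, 18, 32):  # sizes mentioned in the thread
    print(f"{gb} GB: {is_power_of_two(gb)}")
# 6 GB and 18 GB are not powers of two
```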
 