No, it is more like we have finally caught up. Apple was often criticized for poor performance compared to its peer group in the desktop space. Intel's x86 release schedule dramatically constrained Apple's ability to compete in the Mac space: Intel would often release the high-power version of a new chip before the power-optimized version, and since Apple doesn't use the high-power parts, it was always left waiting and fell behind the performance curve.

With the new Apple Silicon, all of Apple's hardware design innovations can really shine, because they will compare and compete more directly without starting from behind the performance curve. This is why the M1 is so game-changing for Apple.

They can more closely tune their silicon for real-life or specialized software. Intel will do this for some customers. I'd love hardware to do Huffman decoding or JPEG zig-zag decoding, and I don't see why you couldn't have transistors to do it, but it's too specialized for general-purpose CPUs.
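
Concretely, the zig-zag step is just undoing JPEG's fixed 8x8 scan order. Here's a minimal Python sketch of what such fixed-function hardware would be replacing (my own illustration, not any particular chip's design):

```python
def zigzag_indices(n=8):
    """(row, col) pairs in JPEG zig-zag scan order for an n x n block."""
    order = []
    for s in range(2 * n - 1):                 # s = row + col: one anti-diagonal
        diag = [(r, s - r) for r in range(n) if 0 <= s - r < n]
        if s % 2 == 0:
            diag.reverse()                     # even diagonals run bottom-left to top-right
        order.extend(diag)
    return order

def dezigzag(coeffs, n=8):
    """Rebuild an n x n coefficient block from its zig-zag ordered values."""
    block = [[0] * n for _ in range(n)]
    for (r, c), v in zip(zigzag_indices(n), coeffs):
        block[r][c] = v
    return block
```

In hardware this is essentially a fixed permutation of wires, which is exactly why it looks so tempting to bake into silicon.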
 
All absolutely true, but those worlds are realising that kicking the can down the road indefinitely and perpetuating technical debt in this way cripples their ability to move forward and be agile, and exposes them to cybersecurity and other risks too. Many of them are moving workloads to the cloud (at varying paces, of course, depending on their competence and ability to invest) and see that as the vehicle for remediating those apps and getting them into an evergreen model (and SaaS-based where possible).

There are two ways this can go. Either Apple decides to start building in better backwards compatibility, as you describe, and then becomes hamstrung in the way that Intel and Microsoft have been for many years, or they say "screw that" and tell their customers that if they want to be a user of Apple technology (and get the benefit of what appears to be the best performance per watt in the industry) then they are going to have to get with the programme and keep their apps and infrastructure current. My bet is on the latter.
That is assuming more security can only be delivered by Apple's yearly release model (as opposed to Microsoft's incremental model) and that users in government, business, engineering and education always benefit when forced to upgrade. They're not "kicking the can down the road". Most of their needs in these sectors have already been amply met by current technology. Their priority is stability: although the tasks don't require a lot of computing power, they're often complex and disjointed and involve multiple programs.

I'm not making a normative statement. I'm just giving reasons as to why Intel and Microsoft are the way they are, and explaining that the inefficiency is intentional because it's actually their clientele who want it (their repeatedly extending support for Windows XP is a case in point). It's a bit naïve to think that there is no downside to an "evergreen" model, or that such a model is feasible in these sectors without massive investment, or that investing simply to stay current with technology, with no significant increase in productivity, should be the goal of an organization at all.

There is a reason why most desktops in this world aren't Macs. I hope the irony isn't lost on you that it is exactly because of this that the Mac can undergo two architectural changes within 15 years while the PC world has stayed put.
 
Apple just had one of their best quarters of Mac sales in history. There is no rut.
I think it's too soon to tell. A lot of that may have been driven by purchases in support of working from home, consistent with what many manufacturers saw. The market as a whole grew in the last quarter.
 
You know that the "VirtualApple" result is no longer on the Geekbench website, right?

Whether it was fake or an unintentional mistake, I don't know, but in any case the test result and the "news" are no longer valid.
Any more info on this?
 
If you need Windows, then you will eventually likely need a Windows machine. Luckily for Apple, only 1% of users use Boot Camp, and something like 5% use VMs, so even if they lose those customers, they will more than make up for it with new buyers who want to run iOS software on their laptop or desktop.
I'm sure you're correct about Apple playing the main chance on new users vs. legacy users. Apple has usually had decent foresight with that and is not afraid to risk it all (FCP X is the only disaster I recall, when they lost half an industry to Adobe and other NLEs overnight).

I know the developer of mimoLive has to use Parallels running macOS Catalina or Mojave to run the Quartz Composer (QC) Editor to make layers for the mimoLive A/V multicasting app. Apple has regressed QC, and now the Editor doesn't even work in Big Sur. (mimoLive won at least one Apple Design Award.) It's a rare case of QC dependency; most of us moved on, but it's hard to replace a visual programming framework that plugs into Apple's video pipelines overnight.

I guess if someone needs Windows or x86 compatibility, they can get a Mac mini, or down the track an M1X Mac Pro (exceedingly expensive for all but hard-core professionals; I really hope something gives there), and share all their peripherals between the Mac and the Windows box.

It's regrettable that eGPUs will not work over TB3 with these new Macs, or I'd have already ordered a Mac mini with 16GB of RAM. These benchmarks are encouraging; I'm waiting on real-world assessments.
 
You seem to know your stuff. How would they scale this up now? Seeing as it's already got 8 cores, for the next step up would they just go to 16 cores, and then 32 cores for the high-end machines? Or would multiple processors be the way forward? When you scale up cores like that, apart from heat, are there other foreseeable problems or reasons why they shouldn't? I assume if they're still running this at 5W, then doubling the cores only makes it 10W, which is still easily cooled, so no issues there for, say, 24" iMacs - and then 32 cores for desktops and large iMacs. Or do you see another way for them to ramp the power up?

They could certainly add more cores, though not necessarily by doubling each time, and not necessarily by adding equal numbers of high-performance and low-power cores each time. They doubtless have run simulations and profiled existing machines to determine what sorts of core configurations make sense.

If I had to guess, I’d think the 16” MBP and higher-end 13” MBPs, and probably the iMacs, will get 4 more cores, but I’m not sure if that means 2+2 or 4+0 or what. Of course, that choice also affects maximum power consumption.

I would also expect more GPU cores.

As for scaling, performance-wise the scaling will not be quite linear. More cores, even if software is optimized to use them, still cause contention for things like buses and caches, and increase latency for communications (since things are further away from each other). This may mean the L2 cache is increased in size (at least for the really high-end chips) or that additional read or write ports are added to it, for example.
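
To put rough numbers on that non-linearity, Amdahl's law is a handy back-of-envelope model (my own illustration with an assumed 5% serial fraction, not anything from Apple): treat contention as a slice of the work that extra cores can't help with.

```python
def amdahl_speedup(cores, serial_fraction):
    """Best-case speedup when serial_fraction of the work is serialized
    (e.g. by bus/cache contention) and the rest scales perfectly."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# With just 5% of the work serialized, core counts pay off sublinearly:
for n in (4, 8, 16, 32):
    print(f"{n:>2} cores -> {amdahl_speedup(n, 0.05):.1f}x")
# 4 -> 3.5x, 8 -> 5.9x, 16 -> 9.1x, 32 -> 12.5x
```

Bigger caches and extra read/write ports are exactly the kinds of changes that shrink that serial fraction.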

Power-wise, heat will scale more or less linearly with the number of cores, but of course that doesn’t mean the whole chip produces double the heat if you double the cores - there are lots of other blocks generating heat, and that heat doesn’t also double.

In any event, they have power well under control, and there will be no problem scaling the M1 to fit into even their highest-end desktops (i.e. something like an iMac Pro or Mac Pro).
 
That is assuming more security can only be delivered by Apple's yearly release model (as opposed to Microsoft's incremental model) and that users in government, business, engineering and education always benefit when forced to upgrade. They're not "kicking the can down the road". Most of their needs in these sectors have already been amply met by current technology. Their priority is stability: although the tasks don't require a lot of computing power, they're often complex and disjointed and involve multiple programs.

I'm not making a normative statement. I'm just giving reasons as to why Intel and Microsoft are the way they are, and explaining that the inefficiency is intentional because it's actually their clientele who want it (their repeatedly extending support for Windows XP is a case in point). It's a bit naïve to think that there is no downside to an "evergreen" model, or that such a model is feasible in these sectors without massive investment, or that investing simply to stay current with technology, with no significant increase in productivity, should be the goal of an organization at all.

There is a reason why most desktops in this world aren't Macs. I hope the irony isn't lost on you that it is exactly because of this that the Mac can undergo two architectural changes within 15 years while the PC world has stayed put.
I'm not arguing that more security can be delivered by Apple's model than Microsoft's or anyone else's. Nor am I arguing that users always benefit when they upgrade. I'm arguing that running out-of-date software - even if it ostensibly meets the organisation's needs for functionality, performance, etc. - is one of the biggest sources of security vulnerability and business risk. And the older something is, the greater the risk and the more people typically want it replaced. I've seen, countless times, legacy systems supporting some critical business process where the people who designed and built them have long since left the organisation, and it's some unsupported version of some app running on some unsupported version of some database on some unsupported OS on some unsupported hardware. Nobody in their right mind says they're okay with that even if it functionally meets their needs.

In my experience, CIOs and CTOs in government, business, etc., are highly unlikely to say that their needs are met by current technology and bemoan the need to upgrade. At least not today. Quite the opposite: such people commonly point at the burden of legacy technology as one of their biggest challenges, and something which prevents them from adopting newer technology. And in most regulated industries, regulators will routinely audit firms for their compliance and procedures around vendor software currency.

I agree that Microsoft's and Intel's position is intentional, or at least it has been historically. Although I think Microsoft is increasingly endeavouring to de-shackle itself from that burden. They don't *want* to be offering these support agreements for years-old technology.

I never suggested that there was no downside to an evergreen model. Clearly it requires considerable effort to keep up, and that's a struggle for many organisations - you can see how Microsoft has been forced to relax its servicing model for Windows 10, for example. But the direction of travel towards that is clear, at least in my view. But yes, it's hard.
 
They could certainly add more cores, though not necessarily by doubling each time, and not necessarily by adding equal numbers of high-performance and low-power cores each time. They doubtless have run simulations and profiled existing machines to determine what sorts of core configurations make sense.

One thing that the Apple guys have said is that they aren't going to do stuff just for kicks. If more cores don't move the needle, they won't add them. Tech enthusiasts will be clamoring for more cores because "more is better." There will be increasing pressure for the hardware to get more "stuff" as time goes on, because more of everything is better.

After all, an ARM chip with 80 cores sounds really impressive. I mean wow, that's 10 times more cores than the M1! It has to perform better! But that number ignores all kinds of other things like memory bandwidth, I/O, etc. And if it's sitting under a web server, waiting on I/O most of the time, then the actual chip performance is somewhat irrelevant.
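
To put illustrative numbers on that (assumed figures, not measurements): if a request spends most of its wall-clock time blocked on I/O, even a dramatically faster CPU barely moves end-to-end latency.

```python
# Hypothetical request: 45 ms blocked on I/O, 5 ms of actual CPU work.
io_ms, cpu_ms = 45.0, 5.0
for speedup in (1, 2, 10):
    total = io_ms + cpu_ms / speedup
    print(f"{speedup:>2}x faster CPU -> {total:.1f} ms per request")
# 1x -> 50.0 ms, 2x -> 47.5 ms, 10x -> 45.5 ms
```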
 
This is all fake news until production units are in the hands of real users and YouTubers. Does anyone *actually* believe the gimped M1 with 8GB or 16GB of RAM is going to process video faster than an Intel blowtorch? Probably not. But the marketing is cute.
It's already faster on iPhones and iPads with A14 chips. With far less RAM.
 
One thing that the Apple guys have said is that they aren't going to do stuff just for kicks. If more cores don't move the needle, they won't add them. Tech enthusiasts will be clamoring for more cores because "more is better." There will be increasing pressure for the hardware to get more "stuff" as time goes on, because more of everything is better.

After all, an ARM chip with 80 cores sounds really impressive. I mean wow, that's 10 times more cores than the M1! It has to perform better! But that number ignores all kinds of other things like memory bandwidth, I/O, etc. And if it's sitting under a web server, waiting on I/O most of the time, then the actual chip performance is somewhat irrelevant.

My daily workload works well with a lot of cores: two big trading programs, two virtual machines, and then all of the usual office stuff. I'm quite happy with a 10700 and it meets my needs, but I'm eyeing a 16-core Zen 3 for my next build. Performance can be addictive.
 
Apple has consistently sold 50% of Macs each year to folks that are entirely new to the platform. That’s 9 million new people, and 9 million upgraders. If there are 9 million devs that don’t upgrade, that “might” be significant. Otherwise, just part of the yearly churn.
Do you have a source for that 50% datapoint? I don't think I've ever seen that stated.
 
Looking forward to seeing sustained performance figures with bigger workloads. Not ready to make the jump yet as I just upgraded to a 16" - next couple of years are going to be interesting.
 
They say the player build itself is going to be working soon (or already is).
The editor itself, yes, that's farther off, but it is my understanding that Rosetta 2 will just launch it on an M1 Mac?
As a Unity tinkerer myself, I'm really curious about the usability of Unity as of next week.
Also, would normal macOS player builds (non-ARM builds) just launch under Rosetta?
I'm not too concerned with the player, as it seems to be pretty stable, but the editor is a mess. I would never trust a Rosetta-translated Unity app. Unity usually takes about 2 years to fully settle down with big changes like this, but I hope to be proven wrong.

EDIT: I'm seeing a few videos of x86 Unity running on M1 and it seems to be working fairly well. I might get an M1 Mini and see how it goes.
 
FYI, the Cinebench numbers for the M1 (the ones that showed it being slower than an AMD CPU) reported by various tech sites were entirely made up. The Twitter account that first published them admitted it.
 
There is a small percentage of customers that are concerned about upgrading the hardware of their Apple computers.
Is that why so many people are still using a cMP?

I really don't understand why the blind cult members are OK with having these limitations imposed on them.

Dude, it affects you as a customer... I... just... can't...
 
If you need Windows, then you will eventually likely need a Windows machine. Luckily for Apple, only 1% of users use Boot Camp, and something like 5% use VMs, so even if they lose those customers, they will more than make up for it with new buyers who want to run iOS software on their laptop or desktop.
Where do you get the 1% and 5% figures from?

I think those numbers are FAR less than that.
 
They could certainly add more cores, though not necessarily by doubling each time, and not necessarily by adding equal numbers of high-performance and low-power cores each time. They doubtless have run simulations and profiled existing machines to determine what sorts of core configurations make sense.

If I had to guess, I’d think the 16” MBP and higher-end 13” MBPs, and probably the iMacs, will get 4 more cores, but I’m not sure if that means 2+2 or 4+0 or what. Of course, that choice also affects maximum power consumption.

I would also expect more GPU cores.

As for scaling, performance-wise the scaling will not be quite linear. More cores, even if software is optimized to use them, still cause contention for things like buses and caches, and increase latency for communications (since things are further away from each other). This may mean the L2 cache is increased in size (at least for the really high-end chips) or that additional read or write ports are added to it, for example.

Power-wise, heat will scale more or less linearly with the number of cores, but of course that doesn’t mean the whole chip produces double the heat if you double the cores - there are lots of other blocks generating heat, and that heat doesn’t also double.

In any event, they have power well under control, and there will be no problem scaling the M1 to fit into even their highest-end desktops (i.e. something like an iMac Pro or Mac Pro).

So do you think it can easily compete with the iMac Pro/Mac Pro? That was my fear at the start of all this - fantastic processors for mobile devices, since performance per watt is where they excel, but that they wouldn't be able to build something that beat Zen 3 or the top-end Xeon processors, for instance - however, that looks different now.

As well as increasing cores, can they increase power to the M1? Is the GHz topped out? Could they use turbo boost like Intel does? Could the same number of cores just use more power to get more out of them? I assume not, as the Mac mini would just do better (although maybe it's the M1's power limit itself, and the fact they don't want an entry-level Mac mini to do better).

I assume they'll have no problem scaling RAM as they go up either? Will they be able to do a Mac Pro that takes 2TB of RAM on the same architecture?

Surely at some point the SoC graphics will struggle to keep up with a dedicated graphics card like the Mac Pro's, and they'll have to at least break it out onto a PCIe board?
 
Where do you get the 1% and 5% figures from?

I think those numbers are FAR less than that.

Far more - I ran an entire business just selling pre-configured Boot Camp installations - a LOT of people are coming over from Windows or want a Windows option on the Mac.
 
Honestly, the way Geekbench is being used almost as a weapon these days...

If I were the head of chip design, I'd tell my team that a top priority was to ensure we build silicon specifically dedicated to smashing Geekbench scores.
Which I'm sure is 100% possible to do.
 
I'm not making a normative statement. I'm just giving reasons as to why Intel and Microsoft are the way they are and explaining that the inefficiency is intentional because it's actually their clientele who wants it (their keeping extending support for Windows XP is a case in point).
As far as I can tell, there has been no support at all for Windows XP since April 2014. Running critical infrastructure on such a system is a bad idea if the device is not net-connected, and is simply malpractice if it is (a net-connected system with no security updates for 6 years is beyond problematic).
 
Far more - I ran an entire business just selling pre-configured Boot Camp installations - a LOT of people are coming over from Windows or want a Windows option on the Mac.
Clearly there are some enthusiasts that do it. But IMHO there is no way that anything like 1 in every 100 Mac users uses Boot Camp. I'd be extremely surprised if it's even as high as 1 in 1000.
 
As far as I can tell, there has been no support at all for Windows XP since April 2014. Running critical infrastructure on such a system is a bad idea if the device is not net-connected, and is simply malpractice if it is (a net-connected system with no security updates for 6 years is beyond problematic).
Never said Microsoft is still supporting Windows XP. :rolleyes:
 
Honestly, the way Geekbench is being used almost as a weapon these days...

If I were the head of chip design, I'd tell my team that a top priority was to ensure we build silicon specifically dedicated to smashing Geekbench scores.
Which I'm sure is 100% possible to do.
There is precedent for it. I forget which manufacturer it was, but one of them was found to be "cheating" at benchmarks.
 
The way I see it, it’s not much different. Jobs was against the original Mac having even any expansion slots. As I see it, the current Apple is just extending that vision, to the annoyance of some, of course. I myself am annoyed. I can understand laptops being less user-upgradeable due to their form factor. But I’m really annoyed that Apple made it so difficult for users to upgrade the RAM and drive on desktops like the Mac mini.
That is my point, my good sir!

Not sure why there are customers who defend this practice.
 
Far more - I ran an entire business just selling already set up Boot Camp installations - a LOT of people are coming over from Windows or want a Windows option on the Mac.
Since we are looking at anecdotal evidence, how many users did your entire business serve every year?
 