G2 isn't optimized for single threaded performance. [..]

A13's 1.5x is also a "sprint burst". If you loaded up 15 users with single-threaded jobs on the A13 for 3-4 hours of sustained work, the lead wouldn't be that high.

Yes, well, A13 is, shocker, optimized towards workloads typical of a phone.

Even on a desktop, a ton of workloads involve short bursts of single-threaded operations, such as running initial JS on a webpage.

Interestingly, Graviton 2 running 64 threads is barely faster than a 32-core EPYC.
But running 16 threads, it is dramatically faster than an 8-core EPYC.

It looks like the design is flawed, or they are targeting burst multi-VM usage (like the T-series instance types).

The latter, I think. I don't know why it's brought up so much here. Yeah, it's an interesting CPU, but no, it's not very interesting for existing Apple products.
 
There is about zero good reason to do that, for two major reasons. First, the card does one and only one thing: decode the various formats of ProRes. Perhaps a later upgrade will add ProRes encoding and/or an updated ProRes format to the mix, but it will still basically be one narrow niche class of work. For video editors dealing with 6-8K footage, or with more than several simultaneous 4K decode streams, it is a value add; otherwise it does nothing for you. The Mac Pro isn't a narrow silo computer (it isn't for A/V use only).

Technically it is not an ASIC card, but pragmatically it is. It is an FPGA, so it could change, but since Apple only calls it from their A/V core libraries it is effectively an ASIC. And not being "end user" programmable means it comes with whatever modes (and switching between modes) Apple ships.

The second major reason is cost. Adding $2K to every Mac Pro would be even worse than the effective $6K floor they have already put on the device. Even if buying in larger bulk got it down to $1-1.5K, that would still make the system increasingly non-competitive with other workstations for many workloads.

If the pitch is that Apple would ship a developer environment so folks could adapt the FPGA to other uses: again, that would add costs.

"Everybody" doesn't need the Afterburner card any more than "everybody" need 20 cores.





If app developers make calls into Apple's core A/V libraries (e.g. AVFoundation) to open and decode ProRes files and Afterburner is present ... then it gets used. Developers would have to go out of their way to avoid Apple's libraries in order not to set up the preconditions to leverage Afterburner. Using those libraries is what Apple had been asking developers to do in the first place, before Afterburner showed up.
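To make that concrete, here is a minimal sketch (the file path is hypothetical) of the kind of ordinary AVFoundation decode loop that already sets those preconditions; the app never addresses Afterburner directly, and whether the frames come back via the card, VideoToolbox, or the CPU is the framework's decision:

```swift
import AVFoundation

// Minimal sketch: read decoded frames from a ProRes clip via AVFoundation.
// "/Movies/prores_clip.mov" is a hypothetical path. The app only asks the
// framework for frames; routing ProRes decode to an Afterburner card
// (when present) happens below this API.
let asset = AVURLAsset(url: URL(fileURLWithPath: "/Movies/prores_clip.mov"))
guard let videoTrack = asset.tracks(withMediaType: .video).first else {
    fatalError("No video track in clip")
}

let reader = try! AVAssetReader(asset: asset)
let output = AVAssetReaderTrackOutput(
    track: videoTrack,
    outputSettings: [kCVPixelBufferPixelFormatTypeKey as String:
                        kCVPixelFormatType_422YpCbCr10]  // 10-bit 4:2:2 frames
)
reader.add(output)
reader.startReading()

var frameCount = 0
while output.copyNextSampleBuffer() != nil {
    frameCount += 1   // each CMSampleBuffer wraps one decoded frame
}
print("Decoded \(frameCount) frames")
```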

However, you are also vastly overselling how much work Afterburner gets done. Afterburner has more of a multiplier effect due to the partial CPU offloading it does. If the CPU isn't decoding ProRes, that leaves more CPU headroom for other stuff (effects, encode, etc.); those other tasks aren't disappearing. Likewise, it keeps the decode process off the GPU (if you're tossing computation there). The user can work well with a 12-16 core system instead of needing a 24-28 core one, while still having a baseline of 8-10 cores for the rest of the workload being thrown at the workstation.


Which leaves Apple free to switch the CPU out for something else without losing as much performance in Afterburner-accelerated applications.

Apple could throw a relatively weak A-series derivative in the Mac Pro because Afterburner was carrying most of the 'water'? That would fail.



Afterburner isn't an exit from Intel at all. Afterburner is a mechanism to get the ProRes format more traction in the video storage format space. That is mainly it. It will help ProRes RAW be adopted by more video cameras as a high-end alternative format, or enable HDMI RAW output to be sent to an external recorder (e.g. Atomos) to be encoded as ProRes RAW. For example:

The Panasonic DC-S1H added support recently.

And Apple is putting ProRes RAW on Windows.

Again, to promote wider adoption of the format (and to draw a bit of an "underline" under the Afterburner+macOS+Mac Pro combo having a performance edge in that expanded ecosystem).

[If some open video RAW format took off in adoption, then Apple could selectively cover that also. But the Afterburner focus would probably always include Apple's solution for this general task.]

That is completely orthogonal to whether Intel (versus AMD or some other vendor) supplies the CPU in the Mac Pro.


As long as RED holds its patents, it will be very difficult to use a compressed RAW video format in-camera. That is why DSLRs and mirrorless cameras are still not able to support ProRes RAW properly, and it is also why Blackmagic made their own RAW format, BRAW.
 
....
As long as RED holds its patents, it will be very difficult to use a compressed RAW video format in-camera. That is why DSLRs and mirrorless cameras are still not able to support ProRes RAW properly, and it is also why Blackmagic made their own RAW format, BRAW.

RED's patents don't simply cover all RAW video formats. They used compression in a relatively straightforward way, and the courts backed up the initial patent examination's view that it was "different enough" to put a patent on. The "RAW"-ness isn't what is at issue; how to losslessly compress the RAW data in an effective fashion is.

It isn't difficult as long as you license the patents. Atomos has a RED license. I suspect we won't see many cameras get one, because they have the option of just dumping the RAW data to an Atomos recorder (and maybe 1-2 other folks) and getting it done without increasing the cost of their camera directly.

First, many are probably loath to give a competitor money, and RED will likely use licensing as a tool to keep its own cameras out of risk. Apple can spread the 'RED tax' over all the Macs sold and doesn't have a direct camera product, so it will grit its teeth and bear it (for now). Similarly, Atomos isn't a camera company, so RED isn't a direct competitor for them either.
Second, Apple wants to put its "two cents" into how things are done (and adds licensing hoops to jump through). In that sense it's not much different from RED meddling in their camera internals. So it has all the pitfalls of the first issue with more baggage piled on top.

The camera vendors would all like to drop the RAW into their own siloed formats, because that gives them leverage long term. Blackmagic's BRAW is "open", but there aren't likely to be many other takers. (Apple could eventually add BRAW to Afterburner's repertoire, but that wouldn't be a priority, and there would have to be a bigger camera base to drive it.)
 
Interestingly, Graviton 2 running 64 threads is barely faster than a 32-core EPYC.
But running 16 threads, it is dramatically faster than an 8-core EPYC.

If we're talking about this benchmark comparison:

https://www.anandtech.com/show/15578/cloud-clash-amazon-graviton2-arm-against-intel-and-amd

It is dated (that is a 32-core EPYC 7571, Zen 1). With 64 virtual CPUs using SMT on the EPYC versus 64 real cores on the G2, that first result shouldn't be all that surprising.

As for the second: the G2 is only deployed in systems running virtual machines. If it wasn't optimized to run VMs well, it would be missing the primary purpose of the systems it is installed in. (And also ..... Zen 1 memory limitations at work again.)



It looks like the design is flawed, or they are targeting burst multi-VM usage (like the T-series instance types).

It isn't like Amazon is selling G2's to third parties. What system with a G2 isn't running multi-VM at some point in time?


Anyway, A13 is just a 6W chip and if we put a fan on it it will 100% keep the boost clock forever.

Theoretically. The RAM is packaged in the chip; there are multiple dies in that package. If you have a huge temperature delta between dies, you'll run into problems. (Intel's Foveros tech isn't magic... it has limitations. I doubt Apple's packaging is even as robust as Foveros in thermal flexibility and validation.)



With this 1.5x performance we do not need a 64-core monster; a 32-core solution may already be faster than Graviton 2 (due to core-to-core communication cost).

You only have 1.5x in single-threaded work. Once you have multiple cores going, you don't have that anymore.
If you swap out the A-series memory interconnect for a fast core-to-core fabric at the 20+ core level, you'll probably end up with something close to what N1 has. It's doubtful you'd get lower memory access latencies, since that is one of the things ARM optimized in the N1 design. Memcache-style edge servers are one of the things they wanted to get some traction on with this chip.

And they largely have:

https://docs.keydb.dev/blog/2020/03/02/blog-post/

edit:
P.S. Another data throughput benchmark.


(Again, illustrative that the G2 doesn't have a core-to-core memory latency "problem".)
 
Yes, well, A13 is, shocker, optimized towards workloads typical of a phone.

Which is exactly what makes it a mismatch as the baseline for an upper-end iMac 27", iMac Pro, or Mac Pro kind of usage. Apple would need something very substantively different, for volumes of systems 2-3 orders of magnitude smaller than the iPhone. Are they really going to spend a substantial amount of money on that small a market in a timely fashion? (Mini 2014 -> 2018, iMac Pro 2017 -> ???, Mac Pro 2013 -> 2019, warmed-over iMac revisions, etc.) A12X -> A12Z: huge expenditure, or Apple economizing on chip development costs?



Even on a desktop, a ton of workloads involve short bursts of single-threaded operations, such as running initial JS on a webpage.

Exceedingly few folks are buying an iMac Pro or Mac Pro to boost web browser JS times.

The latter, I think. I don't know why it's brought up so much here. Yeah, it's an interesting CPU, but no, it's not very interesting for existing Apple products.

The G2 isn't for sale, so it won't ever land in an Apple system (it goes solely into AWS systems, so if it didn't handle multiple VMs extremely well it would be a failure in its primary target market).

It is indicative, though, of what the Ampere 'Quicksilver' could turn in performance-wise at about the same core count. If Ampere is only going to have one single die that maxes out at 80 cores (so maybe 64-, 72-, and 80-core products), then not so much. If there is a smaller variant limited to one socket and around 32 cores with a higher single-thread clock cap, that could be an option if Apple is myopically focused on maximizing ARM deployment in the Mac line-up. But Ampere may not have the funding/time to do a smaller derivative (or to take a chiplet approach, which might sync up).

Or Apple takes the N1 baseline themselves and tweaks it to cut down on the amount of derivative work needed to build something that does the job. (Or Apple jumps in at "N2 / Zeus" (2020) or "N3 / Poseidon" (2021).) [There's a pretty good chance Graviton 2 is followed by Graviton 3 about 12-14 months from now. ARM intends to move this family to 7nm+ and 5nm on about the same transitions as Apple is waiting on... just offset.]
 
The latter, I think. I don't know why it's brought up so much here. Yeah, it's an interesting CPU, but no, it's not very interesting for existing Apple products.

The reason I (and someone else in this thread) mentioned it is because it is proof that ARM can be more powerful than what we see on the iPhone. I think that's what Apple is working towards with their rumored 12-core processor. I'm gonna call it the MC-1 (haha, MaC 1). The MC-1 - having 8 high-performance cores and 4 energy-efficient cores - would be perfect for the 12-inch MacBook. And the second rumored chip, based off next year's A15, will probably be for the Mac Mini or iMac. Or maybe both.

I don't think they will start work on any of their "Pro Macs" until after these consumer-level devices are released, and they will probably give a spec bump to at least one more MacBook Pro or iMac Pro until they are ready for their "Pro ARM chips".

This will give pro app developers time to get their code running on ARM macOS, and hopefully a smoother transition for those who are worried about it. Hopefully, too, it will guarantee longer macOS updates for Intel Macs. I bought this 2017 MacBook Pro when my 2011 iMac stopped getting updates, and I'm not happy it might lose support in three years :-(
 
The reason I (and someone else in this thread) mentioned it is because it is proof that ARM can be more powerful than what we see on the iPhone. I think that's what Apple is working towards with their rumored 12-core processor. I'm gonna call it the MC-1 (haha, MaC 1). The MC-1 - having 8 high-performance cores and 4 energy-efficient cores - would be perfect for the 12-inch MacBook. And the second rumored chip, based off next year's A15, will probably be for the Mac Mini or iMac. Or maybe both.

I don't think they will start work on any of their "Pro Macs" until after these consumer-level devices are released, and they will probably give a spec bump to at least one more MacBook Pro or iMac Pro until they are ready for their "Pro ARM chips".

This will give pro app developers time to get their code running on ARM macOS, and hopefully a smoother transition for those who are worried about it. Hopefully, too, it will guarantee longer macOS updates for Intel Macs. I bought this 2017 MacBook Pro when my 2011 iMac stopped getting updates, and I'm not happy it might lose support in three years :-(

PowerPC can be more powerful than what’s in an iPhone. That isn’t happening, I think, anytime soon.

No one is saying ARM isn’t powerful; it’s just not powerful enough for the workloads thrown at it. For MacBook users, this might be OK; for MBP users, this is likely to be a problem.

It’s great having a multi-core system, but unless the OS and apps use it, it’s useless.
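True, though on macOS "using it" is mostly about how the work gets expressed. A minimal, hypothetical Swift sketch of handing an embarrassingly parallel job to however many cores the machine has:

```swift
import Foundation

// Hypothetical example: sum a large array in chunks, one chunk per core.
let data = (0..<10_000_000).map { Double($0) }
let coreCount = ProcessInfo.processInfo.activeProcessorCount
let chunkSize = (data.count + coreCount - 1) / coreCount

var total = 0.0
let lock = NSLock()

// concurrentPerform fans the iterations out across a thread pool sized to
// the machine and blocks until they all finish; on one core it's just a loop.
DispatchQueue.concurrentPerform(iterations: coreCount) { index in
    let start = index * chunkSize
    guard start < data.count else { return }
    let end = min(start + chunkSize, data.count)
    let partial = data[start..<end].reduce(0, +)
    lock.lock()
    total += partial
    lock.unlock()
}

print("Sum:", total)
```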
 
Well, Apple will make its own version of *something*, but not of neoverse N1. Apple has its own microarchitecture, and will design it all themselves. Assuming they wish to target Mac Pro-style performance,

But that is a pretty big assumption. They didn't even want to put a ton of effort into a Mac Pro with 3rd parties doing the chip work. Why are they going to be keenly interested in doing a Mac Pro system chip - that kind of effort for that kind of low-volume production? What relatively low-volume product does Apple put tons of exclusively custom, expensive work into?

A T-series ARM product that spans the whole Mac line-up isn't as low-volume as any of the individual Mac products. If the whole line-up has it, then it is "one size fits all" for the entire run of Macs for that year (presuming the iMac gets one in the next year or so).

The host CPU could be some x86 part, or perhaps an N2/N3 follow-on capped at around 32 cores (and with higher single-threaded throughput). The T-series handles the "Apple-y" things like boot security, Siri, Touch/Face ID, etc. with a derivative of what they already put in the phones. You don't need tons of those Apple-specific cores for that.



for example, they would likely extend the rumored A14 Mac chip so that instead of just 8 high performance cores and 4 low power cores, it has many more of each. It already must have a good on-chip bus architecture, but they’d likely have to scale it up to handle more cores (say 16 + 8, or 20 + 10, or whatever).

It's a deep mystery why a Mac would ever need anything more than 4 low-power cores. You can tweak the macOS kernel to farm out and pin background service threads there, but those low-demand services aren't going to scale up when the primary use case is a single user as the primary driver of the system.
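That kind of farming-out is essentially what the QoS machinery already exposes to developers; a minimal sketch (the queue label and tasks are hypothetical) of marking work so the scheduler is free to park it on the efficiency cores:

```swift
import Foundation

// Hypothetical housekeeping work tagged with background QoS: the scheduler
// can keep it on efficiency cores and save performance cores for the user.
let maintenanceQueue = DispatchQueue(label: "com.example.maintenance",
                                     qos: .background)

maintenanceQueue.async {
    print("Pruning caches at background QoS")
}

// Latency-sensitive, user-facing work gets a higher QoS class instead.
DispatchQueue.global(qos: .userInitiated).async {
    print("Handling a user action at user-initiated QoS")
}

// Keep the demo process alive long enough for the async blocks to run.
Thread.sleep(forTimeInterval: 1)
```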



They could increase the effective size of caches, and increase bus width in and out of them, for another boost in performance.

In other words, reinvent the wheel that N1 already did. Yeah, Apple could do that. I'm not sure where the "value add" kicks in in any substantive way.


They know what they are doing, and presumably it would blow away Amazon’s thing, since, per core, they already do.

It is not even close to being a matter of whether they know what they are doing. It is whether they are going to allocate the resources. Apple could have built a substantively better Mac Pro in 2016, 2017, or 2018 and didn't. That wasn't really about not knowing the basics of "how to"; it was far more that they didn't want to.

One of the major things at Apple is that they say "No" to a long list of possible things they could do. They may have lots of stockholder money in the "money pit", but there aren't infinite resources applied to as many projects as possible.

One reason Apple has a very highly competitive phone SoC is that they don't try to assign "everything for everybody" to the development team. They build processors not just for phones in general but for the iPhone/iOS specifically, and just one instance for the whole iPhone line-up (so they maximize the volume of whatever they do specially make). It is about economies of scale at least as much as it is about 'control'.


There is little to no business case for Apple making their own "Mac Pro" processor. It has all the characteristics of being largely an ego tech project more than a real business case.
 
Yep. The use of multiple cores requires software that is written to take advantage of it but that still can not mitigate the need for higher CPU frequencies required by some software. AMD had to learn this lesson; Apple will too.

Apple has already exceeded 3 GHz on their latest phones, while none of my Intel laptops have (unless you count a millisecond of turbo.) If you look at benchmark comparisons (which I admit aren't the whole story), only Intel's highest end cores edge out Apple's ARM cores in single thread performance, but that's comparing an Intel CPU to a 1 watt phone CPU. Once you can pipe 45 watts into it, you should be able to run it at substantially higher speeds.

Don't forget how far behind Intel is in single-threaded performance because of their complacent years. It got to the point where almost anyone can make a chip with single-thread performance as good as Intel's, because Intel went almost a decade without significant improvements there. Intel initially thought AMD was only ahead in multithreaded workloads; now they admit AMD is ahead in single-threaded performance too.
 
PowerPC can be more powerful than what’s in an iPhone. That isn’t happening, I think, anytime soon.

No one is saying ARM isn’t powerful; it’s just not powerful enough for the workloads thrown at it. For MacBook users, this might be OK; for MBP users, this is likely to be a problem.

It’s great having a multi-core system, but unless the OS and apps use it, it’s useless.

PowerPC is still a powerful architecture, and the Power ISA is now open source, allowing anyone to make processors with it as a reference design. I don't think IBM is doing anything with it anymore, but they kept developing it well beyond the point when Apple stopped using them.

I have read a lot of this thread, and there do seem to be a lot of people saying ARM isn't good enough, and also stating that ARM Macs will be the end of the Mac for professionals (for a myriad of reasons, but I did pick up some implied belief that developers wouldn't be able to put pro-level apps on the platform). In fact, one such post is quoted below. So yes, it is implied here over and over again that a Mac Pro with ARM could never be as good as one with an x86 processor.


For folks consumed with tech porn (CPU-only benchmark scores), the "low power" cores can largely keep up if you hand them code and data that largely sits in the on-chip cache hierarchy.

They will "benchmark" better on many relatively computationally light workloads. If throw a highly vectorized ( e.g. AVX-512 , AVX-128 ) like workload at them they won't keep up. ( significantly increase the memory pressure, more SMT/Hyperthread friendly , etc. )
 
PowerPC is still a powerful architecture, and the Power ISA is now open source, allowing anyone to make processors with it as a reference design. I don't think IBM is doing anything with it anymore, but they kept developing it well beyond the point when Apple stopped using them.

I have read a lot of this thread, and there do seem to be a lot of people saying ARM isn't good enough, and also stating that ARM Macs will be the end of the Mac for professionals (for a myriad of reasons, but I did pick up some implied belief that developers wouldn't be able to put pro-level apps on the platform). In fact, one such post is quoted below. So yes, it is implied here over and over again that a Mac Pro with ARM could never be as good as one with an x86 processor.

What @deconstruct60 means is that ARM can be useful, just not in as many pro application workflows. I tend to agree with this. I've tried the AWS ARM servers as a DB server and the performance just wasn't there yet. Obviously, the databases I was using weren't inherently designed for ARM; they had only added it as an experimental target.

macOS has always been a pro system, based on its underlying Unix-compatible architecture and the features it provides out of the box.
 
What @deconstruct60 means is that ARM can be useful, just not in as many pro application workflows. I tend to agree with this. I've tried the AWS ARM servers as a DB server and the performance just wasn't there yet. Obviously, the databases I was using weren't inherently designed for ARM; they had only added it as an experimental target.

Blue cars are slower than red cars, because Ferrari hasn’t yet chosen to make a blue car.

Same logic.
 
The reason I (and someone else in this thread) mentioned it is because it is proof that ARM can be more powerful than what we see on the iPhone. I think that's what Apple is working towards with their rumored 12-core processor. I'm gonna call it the MC-1 (haha, MaC 1). The MC-1 - having 8 high-performance cores and 4 energy-efficient cores - would be perfect for the 12-inch MacBook.

There is no 12-inch MacBook in the line-up now. So if they added a 12-inch MacBook one-port wonder back into the line-up, they would have replaced zero Macs that are currently there with an ARM chip. None.

That is pretty much the point: what Apple has (and will likely have in the next 1-2 years) doesn't cover the whole line-up.

Core count alone doesn't really cut it. The A-series isn't even going to handle more than 8GB effectively, let alone 80GB.


And the second rumored chip, based off next year's A15, will probably be for the Mac Mini or iMac. Or maybe both.

Very highly doubtful. Just as there was no A11X or A13X, there pretty probably won't be an A15X (or A15X Plus, or whatever label Apple might throw at a core-supersized A-series implementation).


I don't think they will start work on any of their "Pro Macs" until after these consumer-level devices are released, and they will probably give a spec bump to at least one more MacBook Pro or iMac Pro until they are ready for their "Pro ARM chips".

Which kind of raises the question of whether they can go multiple years with a split Mac line-up cost-effectively. And if they can, why would they ever get off it? The hand-waving answer is "control" and/or OCD uniformity issues.


This will give pro app developers time to get their code running on ARM macOS, and hopefully a smoother transition for those who are worried about it. Hopefully, too, it will guarantee longer macOS updates for Intel Macs. I bought this 2017 MacBook Pro when my 2011 iMac stopped getting updates, and I'm not happy it might lose support in three years :-(

App developers aren't going to be the major hang-up. Low-level stuff like drivers (which are going through other OS API changes now) and emulation/virtualization support are probably going to be the bigger hiccups driving issues. Stuff delivered through the App Store can be "just in time" configured for whatever platform, and web/sneakernet-distributed stuff can ship fat binaries.
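For most app-level code, a fat binary really is just the same source compiled twice into one file; a hypothetical Swift sketch of the rare architecture-specific branch:

```swift
// Hypothetical: both slices of a universal (fat) binary come from the same
// source; arch-conditional code like this is the exception, not the rule.
#if arch(arm64)
let sliceName = "arm64"
#elseif arch(x86_64)
let sliceName = "x86_64"
#else
let sliceName = "unknown"
#endif

print("This process is running the \(sliceName) slice")
```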

The "Pro" app developers ( with high sticker priced software) probably would largely just ignore a MacBook 12 one port wonder. Especially if priced down in the old MBA 11" ( or lower) range. Something in that price range would more so be a mainstream app driver ( as the volume and share of overall Mac ecosystem would probably grow pretty rapidly. )
 
Apple may implement SVE2 on their future A series if they want their processor to cover the entirety of their Mac line up.
 
But that is a pretty big assumption. They didn't even want to put a ton of effort into a Mac Pro with 3rd parties doing the chip work. Why are they going to be keenly interested in doing a Mac Pro system chip - that kind of effort for that kind of low-volume production? What relatively low-volume product does Apple put tons of exclusively custom, expensive work into?

I am going to have to break this post down, because it's full of odd assumptions. I understand this entire thread is all about assumptions and opinions on Apple switching their Macs to ARM, so having assumptions is to be expected.

However ... let's begin. Why do you think the Mac Pro had little effort put into it? The Mac Pro's Intel Xeon was released in 2019. Sure, it's not more powerful than the AMD EPYC and Ryzen processors that came out last year, but it was one of the best Xeon processors of last year -- and the Mac Pro came out last year, so that's saying something.

And the Mac Pro is incredibly well designed. I would love to have a case like that for my next PC build sometime. They thought a lot about it and made sure it was easy to get into the case so pros can swap or add parts. Apparently even the processor can be swapped out. And we know they started development of the Mac Pro in 2017 at the very least, probably before. So that was TWO years of development. How is that not putting a lot of time and effort into the Pro machine they made?

A T-series ARM product that spans the whole Mac line-up isn't as low-volume as any of the individual Mac products. If the whole line-up has it, then it is "one size fits all" for the entire run of Macs for that year (presuming the iMac gets one in the next year or so).

I don't think Apple plans on making one processor for all of their products. I think each line will have its own processor, slightly tweaked, because Apple likes to design for each product based on the needs of that product.



It is not even close to being a matter of whether they know what they are doing. It is whether they are going to allocate the resources. Apple could have built a substantively better Mac Pro in 2016, 2017, or 2018 and didn't. That wasn't really about not knowing the basics of "how to"; it was far more that they didn't want to.

There is little to no business case for Apple making their own "Mac Pro" processor. It has all the characteristics of being largely an ego tech project more than a real business case.

Yeah, Apple could have made a generic tower, thrown some random parts in, and not tried to make a high-quality product. But they didn't. They thoughtfully designed a case that is easy to upgrade and to add parts to.

And there is a pretty good reason to design their own pro processor. They are switching to ARM, and unless Apple completely changes course and supports Intel and ARM Macs forever, they can either never make MacBook Pros, iMac Pros, and Mac Pros ever again, OR they can make an ARM processor. They might take an existing design like the Neoverse N1 and customize it to their liking, or they can do what they have been doing and design one from the ground up. Which is kind of perfect, because the Mac Pro won't be due for an update until 2022 or 2023, so they have several years to test and develop Mac processors leading up to their pro-level processors.

So I am not sure why you believe they have no business reason to continue making Pro machines, but Apple will continue to make them. I seriously doubt they will just forget this segment and never make another Pro machine again.
 
You can’t draw any conclusions about Arm by looking at a particular Arm chip. It’s just as easy (or hard) to make a high performing chip that’s Arm as X86.
Surely that logic goes both ways then? Point is, we don’t have enough data, but the data we do have isn’t promising.
 
PowerPC is still a powerful architecture, and the Power ISA is now open source, allowing anyone to make processors with it as a reference design. I don't think IBM is doing anything with it anymore,

IBM not doing anything?

https://en.wikipedia.org/wiki/POWER10

https://www.anandtech.com/show/13740/ibm-samsungs-7nm-euv-power-z-cpu (if you think Z and Power are highly decoupled, you're smoking something. They aren't.)



Power being open-sourced is mostly about trying to keep up with RISC-V, which probably has a bit more traction, and about building a bit of a moat around Power to stave off the ARM server-class solutions (N1, Ampere, Marvell ThunderX2, etc.) that are coming to market.


I have read a lot of this thread, and there do seem to be a lot of people saying ARM isn't good enough, and also stating that ARM Macs will be the end of the Mac for professionals (for a myriad of reasons, but I did pick up some implied belief that developers wouldn't be able to put pro-level apps on the platform). In fact, one such post is quoted below. So yes, it is implied here over and over again that a Mac Pro with ARM could never be as good as one with an x86 processor.
For folks consumed with tech porn (CPU-only benchmark scores), the "low power" cores can largely keep up if you hand them code and data that largely sits in the on-chip cache hierarchy.

They will "benchmark" better on many relatively computationally light workloads. Throw a highly vectorized (e.g. AVX-512, AVX-128) workload at them, though, and they won't keep up (significantly increased memory pressure, more SMT/Hyper-Threading friendly, etc.).

You are hand-waving, making up what I was talking about there, and it's not particularly accurate at all. There is a huge gap between whether what Apple has implemented is suitable for the whole Mac line-up and what can be done with ARM. They aren't the same.

Some folks are pointing at Apple's ARM implementation hitting a narrow subset of benchmarks (single-threaded performance in the mobile-class processor space) and declaring "that's it ... we can use Apple ARM for the whole Mac line-up". That is a fundamentally flawed argument. There is much more that goes into a CPU replacement for the whole Mac line-up than just single-threaded performance. In fact it goes far past core count and CPU-only benchmarks.

If there isn't sufficient I/O, it is relatively easy to starve the cores (e.g. with only 4 PCIe v3 lanes you're not really going to be able to feed an Afterburner card with data, or get multiple streams of decoded 8K video data off the card).
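A rough back-of-the-envelope version of that claim, with purely illustrative numbers for the decoded format and frame rate:

\[
\underbrace{4 \times 0.985\ \text{GB/s}}_{\text{PCIe 3.0 x4}} \approx 3.9\ \text{GB/s},
\qquad
\underbrace{7680 \times 4320 \times 2.5\ \text{B/px} \times 30\ \text{fps}}_{\text{one decoded 8K, 10-bit 4:2:2 stream}} \approx 2.5\ \text{GB/s}
\]

So two such decoded streams already saturate an x4 Gen3 link, while an x16 slot (~15.8 GB/s) leaves headroom for several.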

macOS users can add a very wide range of kernel extensions whereas iOS/iPadOS users can't. That will present significant changes and challenges to kernel memory security which the A-series doesn't really attempt to solve.

There is no single magic bullet Apple has to hit to replace the CPUs in the entire Mac line-up. There is a long list of things, well outside iPhone-class device constraints, they need to cover. One reason Intel has hung onto the CPU job for the Mac line-up is that there are lots of t's to cross and i's to dot (boot firmware, low-level Thunderbolt support, low-level GPU driver support, I/O chipset support, robust ECC memory support, etc., etc.) that go way past core counts and hot-rod, tech-porn, single-core drag racing.


Nobody, and all the signs from Apple right now included, is trying to build a high-end desktop / mainstream workstation ARM solution at the moment. There are folks in the phone/mobile space, there are embedded solutions (of various power levels), and there are server chips (sometimes hacked into a PC chassis, but server chip+board nonetheless). [There is a corner case of the "insatiable core count" workstation user market that might be happy with the maximum possible cores in a single box, but that isn't the mainstream workstation market. Windows 10 desktop not being on ARM is a major blocking issue.]


That is the big disconnect.
 
I am going to have to break this post down, because it's full of odd assumptions. I understand this entire thread is all about assumptions and opinions on Apple switching their Macs to ARM, so having assumptions is to be expected.

However ... let's begin. Why do you think the Mac Pro had little effort put into it? The Mac Pro's Intel Xeon was released in 2019. Sure, it's not more powerful than the AMD EPYC and Ryzen processors that came out last year, but it was one of the best Xeon processors of last year -- and the Mac Pro came out last year, so that's saying something.

And the Mac Pro is incredibly well designed. I would love to have a case like that for my next PC build sometime. They thought a lot about it and made sure it was easy to get into the case so pros can swap or add parts. Apparently even the processor can be swapped out. And we know they started development of the Mac Pro in 2017 at the very least, probably before. So that was TWO years of development. How is that not putting a lot of time and effort into the Pro machine they made?



I don't think Apple plans on making one processor for all of their products. I think each line will have its own processor, slightly tweaked, because Apple likes to design for each product based on the needs of that product.





Yeah, Apple could have made a generic tower, thrown some random parts in, and not tried to make a high-quality product. But they didn't. They thoughtfully designed a case that is easy to upgrade and to add parts to.

And there is a pretty good reason to design their own pro processor. They are switching to ARM, and unless Apple completely changes course and supports Intel and ARM Macs forever, they can either never make MacBook Pros, iMac Pros, and Mac Pros ever again, OR they can make an ARM processor. They might take an existing design like the Neoverse N1 and customize it to their liking, or they can do what they have been doing and design one from the ground up. Which is kind of perfect, because the Mac Pro won't be due for an update until 2022 or 2023, so they have several years to test and develop Mac processors leading up to their pro-level processors.

So I am not sure why you believe they have no business reason to continue making Pro machines, but Apple will continue to make them. I seriously doubt they will just forget this segment and never make another Pro machine again.

You’re giving the case way more credit than it’s worth. I have the Mac Pro as well.

The CPU is socketed like in most desktop machines, so of course it’s easy to swap out. What we don’t know is how many processors will remain compatible with the socket. The CPU isn’t modular in this case and isn’t officially user-replaceable, unlike the MPX modules, which carry PCI cards. However, let’s face it... how often do owners open a Mac Pro?



PS, no need for the apostrophe :)
 
There is no 12-inch MacBook in the line-up now. So if they added a 12-inch MacBook one-port wonder back into the line-up, they would have replaced zero Macs that are currently there with an ARM chip. None.

That is pretty much the point: what Apple has (and will likely have in the next 1-2 years) doesn't cover the whole line-up.

That's true. That is really just me wanting them to bring back the 12-inch MacBook. It might be called something else.

Core count alone doesn't really cut it. The A-series isn't even going to handle more than 8GB effectively, let alone 80GB.

Then I guess Apple will never be able to make a good computer again. RIP macOS.


Very highly doubtful. Just as there was no A11X or A13X, there pretty probably won't be an A15X (or A15X Plus, or whatever label Apple might throw at a core-supersized A-series implementation).

This whole thread is based off a rumor that they are making a Mac SoC (I'm calling it MC-1 for now) based off of the A14, and that they have already started working on another, the MC-2, based off of next year's A15 processor. True, maybe this whole thread is mumbo jumbo, but that's what the rumor says and what this thread is about.


Which kind of raises the question of whether they can go multiple years with a split Mac line-up cost-effectively. And if they can, why would they ever get off it? The hand-waving answer is "control" and/or OCD uniformity issues.

I'm basing this on the belief that Apple will have to heavily up the ante on their processor designs if they want to replace Intel, and that it would be best to do it over two to three years. I could be wrong, and Apple might magically ramp it 100% in 2021 and make their new Intel Mac Pro irrelevant that same year. But again, that is pretty doubtful.
 
That's true. That is really just me wanting them to bring back the 12-inch MacBook. It might be called something else.



Then I guess Apple will never be able to make a good computer again. RIP macOS.




This whole thread is based off a rumor that they are making a Mac SoC (I'm calling it MC-1 for now) based off of the A14, and that they have already started working on another, the MC-2, based off of next year's A15 processor. True, maybe this whole thread is mumbo jumbo, but that's what the rumor says and what this thread is about.




I'm basing this on the belief that Apple will have to heavily up the ante on their processor designs if they want to replace Intel, and that it would be best to do it over two to three years. I could be wrong, and Apple might magically ramp it 100% in 2021 and make their new Intel Mac Pro irrelevant that same year. But again, that is pretty doubtful.

If they were going to do it, they should have done it with the recently released Mac Pro. The thermals for it do not make sense, and they’d need to redo the mobo, which might come with a design change. I don’t see the Mac Pro changing for a few more generations.
 
IBM not doing anything?

https://en.wikipedia.org/wiki/POWER10

https://www.anandtech.com/show/13740/ibm-samsungs-7nm-euv-power-z-cpu (if you think Z and Power are highly decoupled, you're smoking something. They aren't.)



Power being open-sourced is mostly about trying to keep up with RISC-V, which probably has a bit more traction, and about building a bit of a moat around Power to stave off the ARM server-class solutions (N1, Ampere, Marvell ThunderX2, etc.) that are coming to market.
This doesn't really change what I said. I was saying that PowerPC is more powerful than what the iPhone has. However, I am glad they are still using it :).

You are hand-waving, making up what I was talking about there, and it's not particularly accurate at all. There is a huge gap between whether what Apple has implemented is suitable for the whole Mac line-up and what can be done with ARM. They aren't the same.

Some folks are pointing at Apple's ARM implementation hitting a narrow subset of benchmarks (single-threaded performance in the mobile-class processor space) and declaring "that's it ... we can use Apple ARM for the whole Mac line-up". That is a fundamentally flawed argument. There is much more that goes into a CPU replacement for the whole Mac line-up than just single-threaded performance. In fact it goes far past core count and CPU-only benchmarks.

If there isn't sufficient I/O, it is relatively easy to starve the cores (e.g. with only 4 PCIe v3 lanes you're not really going to be able to feed an Afterburner card with data, or get multiple streams of decoded 8K video data off the card).

macOS users can add a very wide range of kernel extensions whereas iOS/iPadOS users can't. That will present significant changes and challenges to kernel memory security which the A-series doesn't really attempt to solve.

There is no single magic bullet Apple has to hit to replace the CPUs in the entire Mac line-up. There is a long list of things, well outside iPhone-class device constraints, they need to cover. One reason Intel has hung onto the CPU job for the Mac line-up is that there are lots of t's to cross and i's to dot (boot firmware, low-level Thunderbolt support, low-level GPU driver support, I/O chipset support, robust ECC memory support, etc., etc.) that go way past core counts and hot-rod, tech-porn, single-core drag racing.


Nobody, and all the signs from Apple right now included, is trying to build a high-end desktop / mainstream workstation ARM solution at the moment. There are folks in the phone/mobile space, there are embedded solutions (of various power levels), and there are server chips (sometimes hacked into a PC chassis, but server chip+board nonetheless). [There is a corner case of the "insatiable core count" workstation user market that might be happy with the maximum possible cores in a single box, but that isn't the mainstream workstation market.]


That is the big disconnect.

All of this points to exactly why I think this will not be done in one year. They will gradually move their Mac line of products to their own processors and get them on a standard replacement cycle (similar to what they do with the iPad Pro line). There is a lot of work to be done, and although I am sure they have been working on these plans for the last 5 years or longer... it just seems highly doubtful they will magically replace every Mac next year. Hell, the rumor doesn't even state that.
 
All of this points to exactly why I think this will not be done in one year. They will gradually move their Mac line of products to their own processors and get them on a standard replacement cycle (similar to what they do with the iPad Pro line). There is a lot of work to be done, and although I am sure they have been working on these plans for the last 5 years or longer... it just seems highly doubtful they will magically replace every Mac next year. Hell, the rumor doesn't even state that.

IMO they can’t do it in one year; they’d be putting all of their eggs in one basket, and if the plan fails, there’s no fallback.
 
Surely that logic goes both ways then? Point is, we don’t have enough data, but the data we do have isn’t promising.
Sure. The logic applies both ways.

What you *can* do is carefully study the ISA and see if either of them offers an inherent advantage or disadvantage. And as someone who designed both RISC and x86-64 chips, I‘ve done that.

Arm‘s advantages are a much simpler instruction decoder that allows you to reduce the size of the core by a reasonable percentage, the avoidance of terrible addressing modes and, especially, all the gunk that goes into backwards x86 compatibility, and, if you believe that a compiler thinking about a problem for awhile can do a better job than a few million transistors that have to run in real time, then the simpler instruction set is an advantage too. Otherwise, if you feel like compilers do a bad job, x86-64 may have an advantage due to heftier instructions (though a lot of that goes away if you are running in pure 64-bit mode).

Any Intel trick to speed things up (branch prediction algorithms, multithreading, whatever) can be done just as easily on Arm as x86. But since x86 will always have deeper pipelines, it’s actually easier to do a lot of these things on Arm. (The penalty for guessing wrong, flushing the pipeline, and trying again is less when the pipeline is shallower. This means you can get away with less branch prediction accuracy, which means fewer transistors, and achieve the same performance).
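To put illustrative numbers on that last point (the pipeline depths and rates here are made up, not measurements of any real core): the average cycles branches cost per instruction is roughly

\[
\text{CPI}_{\text{branch}} \approx f_{\text{br}} \cdot (1 - a) \cdot P
\]

where \(f_{\text{br}}\) is the branch fraction, \(a\) the predictor accuracy, and \(P\) the flush penalty in cycles. With \(f_{\text{br}} = 0.2\), a 20-cycle flush at 95% accuracy costs \(0.2 \times 0.05 \times 20 = 0.2\) CPI, while a 10-cycle flush reaches the same 0.2 CPI at only 90% accuracy (\(0.2 \times 0.10 \times 10\)); the shallower pipeline buys equal performance from a less accurate, cheaper predictor.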

In the end it’s all probably a wash other than the fact that x86-64 will always need more transistors to do the same job, which means more power consumption, and longer wires (which slow things down). How much of an effect that is will vary depending on lots of factors. But the one thing I can tell you for sure is that x86-64, with equal chip designers using equal fabs, does not have any advantage.
 