Apple can fail some day like any other company. Not anytime soon, though.

Of course nothing lasts forever, not even Apple. But the cynics chanting ‘Apple is doomed’ year after year will have been wrong so many times by the time Apple does go under that, even when they are eventually proven right, in the greater scheme of things they will still have been wrong.

On another note, one does wonder what this means for modularity. What Apple is exposing is the physical inefficiency of buying all these different parts from different vendors to assemble into your own PC. Yeah, it’s probably still cheaper, but the hits to performance may no longer be worth it.

I suspect that moving forward, we may see more and more companies going with integrated SoCs. Building your own PC may eventually become a retro-only hobby.

Another underrated ability of Apple’s is to marginalise entire industries.
 
Will Apple be able to implement hyperthreading on these cores? I seem to remember a report on a 56-core ARM server processor that had 4-way hyperthreading. I can’t find it now, and all the ARM server chips I can find seem to be 1-core/1-thread designs.
They would be able to, but they wouldn’t bother. Notice how AMD doesn’t do any form of hyperthreading, yet is beating Intel at their own x86 game.

I know that these machines will beat Intel, and beat them badly, but I wonder where exactly you improve year over year when you already have 16 cores and the next node shrink is years away. And when Intel finally gets down to 7nm, I wonder whether Apple’s lead will diminish to nothing.
Intel and AMD can ONLY improve on their end. Apple can improve across the board, their OS, their development tools, their hardware. The CPU is only a PART of the puzzle. Apple could determine that all Finder tasks really need to be offloaded to another chip to improve performance and, boom, done in hardware. And EVERY release of macOS will just take advantage of the new infrastructure.
 
The minute they’ve got an Intel x86-64 version of Windows 10 running on a Mac powered by Apple Silicon, I’ll be thrilled to turn over my money. There are only a handful of Windows apps I need to run, and they have very low overhead, so here’s hoping Apple, Parallels, VMware, CrossOver, etc. can find a solution. It’s not like I’m expecting to play resource-intensive games, but I do need some kind of solution, or Apple Silicon Macs aren’t an option, which would suck.

If there's only a handful of Windows apps you need and none are resource-intensive, then you should be fine eventually. I've already seen some YouTube videos showing Windows emulation working, and it seems highly likely a third-party vendor (or several) like VMware will get onto it soon. Now that native x86 dual-boot is no longer a thing on Macs, it would be leaving money on the table not to fill the gap with emulation software, precisely for casual Windows-dependent users like yourself.

Also it's entirely possible that if Apple Silicon really takes off, some Windows-only developers might see a new growing market and decide to port their apps themselves. Wouldn't be the first time that happened.
 
Correct. And many of us will for some time to come.


For Apple's sake, I hope not. The M1 is impressive for what it is, but it has a lot of room for improvement. I see people praising it and saying it can hold its own against certain dedicated GPUs, but the problem is that the cards it can hold its own against are several years old and/or mobile cards.

If I am buying a new desktop from Apple, I want the option of a higher-end card and NOT something comparable to a 3-4 year old, low-range laptop graphics card.

As I said earlier, the M1 looks promising, but the next few chips Apple introduces will need to start taking bigger leaps in power; otherwise they will spend each generation trying to play catch-up with the others.
A lack of upgradability on a pro Mac would be awful.

The two elephants crowding the room are the RAM limit and future-proofing. Pro users need stability more than amateur users do. They need to be able to simply add RAM or a faster GPU instead of replacing the whole machine, with the potential downtime that entails.

A former coworker upgraded to a brand new Mac Pro with merely a point-update difference in macOS from his iMac, and suddenly his software no longer worked. The loss of productivity due to troubleshooting and upgrading cost more than the new machine. Two weeks later the software was updated for the OS update, but the damage was done.

The next Mac Pro has to be a "Mac for Pros" and not just "pro" in name only.
 
I think they will go with a naming convention other than M, because the SoC will be very different to accommodate the Pro series with added expandability.

If they thought that CPU naming was a useful source of marketing, they wouldn’t have called it “Apple silicon” or “M1.”

It’ll be called M.
 
Yes, the M1 is top-notch, and based on that tech we will see a linear upgrade from there. Think about what a 20-core iMac with a custom Apple GPU can do late next year.

Think about it: the pros who want RAW power and are using Windows will switch, because pros are not limited by the OS but by time, and for pros, time is what matters. So Macs, now with impressive battery life and impressive power, can steal a lot of users from Windows and Linux, both pros and casual users.
After 4 years using both Windows OEM machines and Macs, I'm going full Mac starting next year.
Pros are not overwhelmingly OS-agnostic—for many, the OS is very important. It determines how easily and efficiently they can use their computer, and it may determine if the software they prefer is available to them. Thus if a pro is comfortable and efficient in one OS, they're not going to switch to another just because the computer is faster, if it makes them slower. Their personal time is far more valuable than processing time. Indeed, you could say the OS *is* time for many pros. Thus some will switch for the performance, some won't.

For instance, when Apple semi-abandoned the pro market with their non-upgradable Trashcan Mac Pro, some did (reluctantly) switch to Windows/Linux in order to get the graphics processing power they needed. Many, however, refused to switch, sticking with their slower Macs, so they could retain their personal efficiency, even if processing took longer. I expect it would be the same for many Windows users.
 
Will Apple be able to implement hyperthreading on these cores? I seem to remember a report on a 56-core ARM server processor that had 4-way hyperthreading. I can’t find it now, and all the ARM server chips I can find seem to be 1-core/1-thread designs.

As a CPU designer, when you tell me you have implemented hyperthreading in your cores, what I hear is: “I did a crummy job matching my hardware resources to the workload, so the ALUs are sitting idle a lot of the time. To try to hide that, I added a bunch of complexity to my register file, load/store unit, and scheduler so that I can try to keep the ALUs busy by switching threads, instead of just assigning the threads to other cores, which is what I would have done if I had properly allocated the right number of ALUs to the cores in the first place.”
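
To make that concrete, here is a toy model of the trade-off (my own illustrative sketch, not drawn from any real design): assume each thread can issue an ALU op on only some fraction of cycles, stalling the rest, and that resident threads stall independently.

    # Toy model of ALU utilization, single-threaded vs. 2-way SMT (illustrative only).
    # p = probability a thread can issue an ALU op on a given cycle.
    def alu_utilization(p, threads):
        # Chance that at least one resident thread can issue this cycle,
        # treating threads as independent (a big simplification).
        return 1 - (1 - p) ** threads

    for p in (0.5, 0.7, 0.9):
        print(f"issue rate {p:.0%}: 1 thread -> {alu_utilization(p, 1):.0%}, "
              f"2-way SMT -> {alu_utilization(p, 2):.0%}")

The better the core's resources already match the workload (a high issue rate), the less SMT buys you, which is exactly the point: SMT is a patch for under-utilized execution resources.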
 
I know that these machines will beat Intel, and beat them badly, but I wonder where exactly you improve year over year when you already have 16 cores and the next node shrink is years away. And when Intel finally gets down to 7nm, I wonder whether Apple’s lead will diminish to nothing.

There are ways to improve performance other than die shrinks. Apple has so far been able to deliver consistent performance improvements, die shrink or not.

Apple's 5nm currently leads Intel's 10nm by a factor of 4 in efficiency. I can hardly see an Intel die shrink delivering that kind of efficiency jump. Even if Intel manages to cut their power consumption in half, Apple would still be ahead by 100%.

They would be able to, but they wouldn’t bother. Notice how AMD doesn’t do any form of hyperthreading, yet is beating Intel at their own x86 game.

AMD CPUs certainly have SMT (what Intel calls hyperthreading) and have had it for a while. AnandTech just did an in-depth analysis of SMT on the latest Zen 3 CPUs.

Whether Apple will ever implement it is questionable. It can be argued that Apple doesn't need SMT as they can already reach good levels of backend utilization.
 
The M1 is impressive for what it is, but it has a lot of room for improvement. I see people praising it and saying it can hold its own against certain dedicated GPUs, but the problem is that the cards it can hold its own against are several years old and/or mobile cards.

If I am buying a new desktop from Apple, I want the option of a higher-end card and NOT something comparable to a 3-4 year old, low-range laptop graphics card.

As I said earlier, the M1 looks promising, but the next few chips Apple introduces will need to start taking bigger leaps in power; otherwise they will spend each generation trying to play catch-up with the others.
This seems like a good representative comment from the skeptics’ crowd, and it nicely sums up how people seem to be missing the point.

The M1 is in 3 products: An ultralight laptop, the lowest-end part of Apple’s regular laptop range, and the low-end half of Apple’s budget compact desktop.

None of these products has had a dedicated GPU for the better part of a decade. The M1 is only competing with, and is only trying to compete with, ~15-30W CPUs with integrated GPUs. The CPU part of it absolutely dominates its price and performance segment. The GPU part of it appears to be pretty dominant against competing integrated graphics.

If the M1 were the only thing available in a 16” MBP, which currently features fairly powerful mobile GPUs and high-end mobile CPUs, we should be comparing it to a Radeon Pro 5600M with 8GB of dedicated memory. If the M1 were the only thing available in an iMac, we should be comparing it to a 10-core i9 with 128GB of RAM paired with a Radeon Pro 5700 with 16GB of dedicated GDDR6.

But it isn’t. Those computers are all still available, because the M1 isn’t the part that can do that job, nor is it intended to.

How it compares head-to-head with higher-end current-generation GPUs, either laptop or outboard desktop, is irrelevant, because it isn’t even trying to compete with those. The actual takeaway is only that Apple can wring an astounding amount of performance out of a CPU core that draws in the ballpark of 2.5W, and relatively impressive performance out of a GPU core that draws well under 1.5W.

Currently Apple has a chip with 4 of the former and up to 8 of the latter. It is a given at this point that they will have one or more products in the future offering more than 4 of the former and probably more than 8 of the latter. All we can realistically speculate on now is how many Apple will put into a product, and how linearly performance will scale.

There’s much more at work in scaling these things, which means it might not be linear, but oversimplifying: a chip with 16 of those performance cores would be in the running with huge server-grade CPUs at probably a fifth the power consumption. A chip with 128 of those GPU cores could hypothetically be in the running with higher-end dedicated GPUs at half the power draw.
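
As a back-of-the-envelope illustration of that oversimplification, here is the naive linear power arithmetic using the per-core figures above (real chips won't scale this cleanly, and the comparison wattages are my own rough assumptions):

    # Naive linear power scaling from the M1 per-core figures quoted above.
    CPU_CORE_W = 2.5   # ballpark draw per performance core
    GPU_CORE_W = 1.5   # upper bound per GPU core ("well under 1.5 W")

    print(f"16 perf cores: ~{16 * CPU_CORE_W:.0f} W")   # ~40 W, vs. roughly 200 W server CPUs
    print(f"128 GPU cores: ~{128 * GPU_CORE_W:.0f} W")  # ~192 W, vs. roughly 350 W flagship GPUs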

Maybe that won’t pan out--maybe it won’t scale, or maybe Apple will decide they can sell enough computers without competing directly with the ultra-high-end that they just don’t bother, or maybe they’ll keep 3rd party GPUs around at the very high end. Then you can complain, and maybe switch platforms if you don’t like what your options are.

But the bottom line in any individual product line is, if the performance can meet or exceed what the competition would offer, what does it matter what the architecture is? If an "integrated" 16” MBP can beat what it would have with an Intel + AMD option, who cares what the architecture is?

I’m fairly confident extrapolating that Apple can pull it off on the CPU performance side. I’m not at all confident that will be the case on the GPU side, at least for the near future--Apple probably is not going to produce a part that will meet or exceed four Radeon Pro Vega II each with 32GB of HBM2, even if they hypothetically could.

But unless and until a Mac Pro without a dedicated GPU option ships, we don’t really have any idea.
 
Yesss..... give us 16 cores in the 16" MBP, Apple !!!

I'm waiting for that one.
Give us separate screens and processors, connected either wirelessly or with USB-C. That way you could use a small screen with the fastest 16-core processor, and avoid 747 fans and a burned lap.
 
16 cores (12 high-performance and 4 low-power), 64 GB, and all native apps, including development tooling and frameworks, and then a lot of users will consider the Apple M1. At this moment it is just a good premise.
 
In the end, the PPC-Intel transition was also completed much faster than initially stated: Steve Jobs said 24 months, and it turned out to be 18. Considering the transition was announced in June, approximately 7 months of it have already passed. On that basis, I expect it to be completed in the 4th quarter of financial year 2021...
They may beat their stated schedule, like they did last time. In fact, they may be saying 2022 because they are predicting late 2021: saying 2022 gets more people to continue to buy the highest-end Intel Macs rather than holding off, and it makes them look good when they finish ahead of schedule.

But this is a different type of transition. With PPC->Intel, once they had released the initial models (and thus had completed the software portion of the transition), it was just a matter of re-engineering their line to accommodate existing Intel CPUs. Now, by contrast, they're responsible for the entire chip range themselves.
 
TFLOPS is not everything, but 128 cores is crazy. It would be faster than any GPU on the market, including the GF RTX 3090!!

M1 8 GPU cores 2.6 TFLOPS
M? 16 GPU cores 5.2 TFLOPS
M? 32 GPU cores 10.4 TFLOPS
M? 64 GPU cores 20.8 TFLOPS
M? 128 GPU cores 41.6 TFLOPS

Radeon Pro 5700 6.2 TFLOPS
Radeon Pro 5700 XT 7.7 TFLOPS
Radeon Pro Vega II 14.06 TFLOPS
Radeon Pro Vega II Duo 2x14.06 TFLOPS
GF RTX 3060 14.2 TFLOPS
GF RTX 3060 Ti 16.2 TFLOPS
Radeon RX 6800 16.2 TFLOPS
GF RTX 3070 20.3 TFLOPS
Radeon RX 6800 XT 20.7 TFLOPS
Radeon RX 6900 XT 23 TFLOPS
GF RTX 3080 29.8 TFLOPS
GF RTX 3090 35.6 TFLOPS
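
The projected M? numbers above are just linear extrapolation from the M1's per-core throughput; for anyone who wants to check the arithmetic (assuming per-core throughput and clocks hold constant as core counts grow, which is optimistic):

    # Linear extrapolation from the M1 GPU: 8 cores, 2.6 TFLOPS.
    PER_CORE_TFLOPS = 2.6 / 8   # 0.325 TFLOPS per GPU core

    for cores in (8, 16, 32, 64, 128):
        print(f"{cores:3d} cores: {cores * PER_CORE_TFLOPS:4.1f} TFLOPS")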
 
A lack of upgradability on a pro Mac would be awful.

The two elephants crowding the room are the RAM limit and future-proofing. Pro users need stability more than amateur users do. They need to be able to simply add RAM or a faster GPU instead of replacing the whole machine, with the potential downtime that entails.

A former coworker upgraded to a brand new Mac Pro with merely a point-update difference in macOS from his iMac, and suddenly his software no longer worked. The loss of productivity due to troubleshooting and upgrading cost more than the new machine. Two weeks later the software was updated for the OS update, but the damage was done.

The next Mac Pro has to be a "Mac for Pros" and not just "pro" in name only.
Agree completely that the Mac Pro needs to be modular. Apple says they learned that lesson with the Trashcan.

But I don't understand your point about the Mac Pro not being "Pro" because of the software issue. From what you've described, it sounds like Apple did an OS update that broke some of his applications. I recall that happening with Catalina. If so, that would be a general Apple software QC issue that has nothing to do with the Mac Pro per se, right? And if that was the issue, it should have been directly addressable by restoring from a cloned backup of the earlier OS version and continuing to use that until the compatibility issue was resolved by Apple and/or the app developer.
 
I don't think all of us knew about the 32-core CPUs and the 128-core GPUs

You were expecting them to perhaps stick with 8?

It’s always seemed pretty obvious to me that the core count would keep doubling up, as they scaled up to workstation level.

Here’s a prediction they didn’t mention: they won’t stop at 32-core CPUs.
 
The two elephants crowding the room are the RAM limit and future-proofing. Pro users need stability more than amateur users do. They need to be able to simply add RAM or a faster GPU instead of replacing the whole machine, with the potential downtime that entails.

8 or 16GB (and maybe 32GB in the future) of non-expandable low-power RAM is par for the course for ultra-portable systems like the MBA and lowest-end MBP. A 16/32/64 choice would be sufficient for the higher MBPs. The LPDDR4 RAM these systems use isn't even available in module form. Obviously, though, that's not going to cut it as a replacement for a Mac Pro with 1.5TB of RAM...

There's no reason why the Mx chips for a Mac Pro/27" iMac need to use LPDDR (since power is less of an issue), and AFAIK the only thing "special" about having on-package RAM is that the ultra-short memory buses can run a bit faster. You could still have "unified memory" with external RAM modules; DDR4 would just have to run slightly slower... if only there were an even faster DDR5 coming real soon now. Which, of course, there is...
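
To put rough numbers on that, peak bandwidth is just transfer rate times bus width; a quick sketch (the module speeds and 128-bit widths are my assumptions for comparison, not specs for any future Mac):

    # Peak memory bandwidth in GB/s = MT/s * bus width in bytes / 1000.
    def peak_gb_s(mt_per_s, bus_bits):
        return mt_per_s * (bus_bits // 8) / 1000

    print(peak_gb_s(4266, 128))   # M1-style on-package LPDDR4X-4266: ~68 GB/s
    print(peak_gb_s(3200, 128))   # dual-channel DDR4-3200 modules:   ~51 GB/s
    print(peak_gb_s(4800, 128))   # dual-channel DDR5-4800 modules:   ~77 GB/s

So module-based DDR4 gives up a little peak bandwidth versus on-package LPDDR4X, and DDR5 more than closes the gap.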

As for PCIe expansion: likewise, there was no reason to put more than one or two external PCIe lanes on the M1, while for the Intel Mac Pro, Apple went for a new Xeon W chip that featured an insane number of PCIe lanes even by Intel standards. So, for a direct Mac Pro replacement, Apple might have to come up with the M-series equivalent of a Xeon W, and maybe a range of Apple silicon-based PCIe GPU cards (if Apple can make an integrated GPU, they can make a PCIe/MPX GPU card, and it's not like the existing MPX cards they're promoting are off-the-shelf PC components). The only question is whether they make enough money from Mac Pros to justify the development costs of an M-series "Xeon killer".

The cheaper-to-develop alternative might be to have a system with multiple "mid-range" SoCs and/or one that could be expanded by plugging in multiple "compute modules", each providing extra CPU, GPU and RAM. That would be more software-sensitive, but for the sort of high-end jobs we're talking about, farming out work among multiple systems isn't a new idea.

I recall that happening with Catalina. If so, that would be a general Apple software QC issue that has nothing to do with the Mac Pro per se, right?

It's not all QC. Part of the problem was that Catalina officially dropped 32-bit support, which unavoidably broke a lot of software. With every major OS release, a bunch of software stops working, there's a wait of many months for many bits of third-party software to be updated, and there are always some bits of "abandonware" that never make it. Pro users don't like expensive downtime or unscheduled changes, and for anybody doing serious paying work on their Mac, staying one OS version behind is often the prudent thing to do.

Then, you get a brand new Mac and find that it comes with the latest and greatest operating system version released a few weeks ago and can't be downgraded because the old version doesn't recognise the new hardware - so suddenly you've got to solve all the compatibility problems overnight.

Remember: this isn't just your carefully planned 4-year-or-whatever hardware upgrade cycle: throughout that period hardware can fail/get stolen, new employees join and need kitting out... and suddenly you have to deal with a Catalina in a sea of Mojaves...

Apple's secretive habits aren't particularly helpful to organisations trying to plan their hardware budget... Look at the situation now: the cheapest Macs seem to outperform the more expensive Intel models, but we've only been given the vaguest of notions of when the rest of the Intel machines will be replaced or for how long Intel will be supported. Meanwhile, for something like audio production, there's a whole laundry list of key third-party suppliers who don't yet support Big Sur, let alone the M1, and just say "watch this space" on their websites. Logic itself might fly, but whether your virtual instruments, effects, drivers etc. will work properly is a lottery.

...and yes, there's a beta period for every OS, but from what I've seen, Apple are liable to make significant changes right up to the final release, and there's no opportunity for large-scale beta testing until the OS is generally available.
 
Give us separate screens and processors, connected either wirelessly or with USB-C. That way you could use a small screen with the fastest 16-core processor, and avoid 747 fans and a burned lap.

Then obviously what you want is a Mac Pro, or maybe a Mac mini once Apple updates the high-end version.

I still need the portability of the MacBook Pro, so 16 cores in the 16" shell is what I'm dreaming of.
 