People do not really understand ARM vs. Intel. ARM is a RISC chip (Reduced Instruction Set Computer). The ARM CPU is less complex and has to break instructions/code up into small pieces.

Because it is less complex it is easier to shrink to 10/7/5nm. It benefits from using less power, which produces less heat and requires less cooling. This makes it the best choice for mobile devices.

Intel chips are CISC chips, Complex Instruction Set Computer. The Intel CPUs are way more complex and thus harder to shrink. At the same time they are way more powerful in many ways. Being more powerful requires more power, which produces more heat, which requires more cooling. This makes it better for computers in terms of space/power/cooling (minus the throttlebook, of course). CISC CPUs can run way more complex software and do it faster. The IPC of an Intel CPU over an ARM CPU is many times greater. This is why we do not see full-blown Photoshop, CAD apps, games like The Witcher 3, etc. running on ARM. We do see light versions of all of this.

If Apple or Qualcomm (Windows 10 on ARM) keep making their ARM CPUs more powerful to take over for Intel x86, they will eventually run into the same issues with power, heat, and cooling. There is no way around it. Unless, of course, they never want to run anything but light/less complex applications, which might be fine for the majority of people. Powerful, complex applications will require more than ARM can deliver today.
Sorry, this is incorrect from start to finish.

All processors are Turing complete, and as such they can all run the same type of software, regardless of complexity.

Simply speaking, in modern CPUs, the difference between RISC and CISC is the instruction decoder. For x86 this takes up ~20% of the die area. This translates directly into either making the chip smaller and cheaper to manufacture, or into being able to pack more stuff on the die (cores, caches, ...).

This is then also the reason why RISC processors may "shrink better". They can skip the 20% extra complexity of the instruction decoder. But otherwise CISC transistors shrink the same as RISC transistors.

RISC processors were the first superscalar processors, had deeper pipelines, and as such were able to execute more instructions per clock than their CISC counterparts. Over time, CISC processors also got deep pipelines, multiple execution units, and became superscalar. In modern processors it's not clear to me that either technology would fundamentally be capable of higher IPC than the other. There's also no fundamental reason why one should be able to execute software faster than the other.

For modern RISC architectures that are quite competitive vs. x86, you can look at SPARC or POWER, for example. It has been attempted with the ARM architecture as well. That hasn't yet seen the same level of performance, but it's not because of RISC vs. CISC. I'm sure these powerful RISC chips consume just as much power as CISC chips.

The one thing you got right was that both RISC and CISC processors are ultimately limited by the laws of physics.

Edit: While not a chip designer, I do program these chips regularly at a low level, both x86 and ARM. And I guess I technically built a one-instruction CPU for a uni lab once, but it's not clear whether that counts for anything. And I think I have the 20% figure from cmaier in an earlier discussion, any mistakes there would be mine.
 
This is very misleading. I designed many x86 chips. They have RISC cores. They have complicated circuitry on board which breaks up instructions into smaller, more regular instructions. They include microcode ROMs and circuitry for converting variable-length instructions with lots of operands into a sequence of reduced, fixed-length instructions.

You say RISC chips need to break up the instructions into smaller instructions, but you have it backwards. In CISC chips like x86, the processor does this. In RISC chips, the compiler does it. The compiler can often do a better job because it can analyze the full program to determine how best to break instructions up.

The IPC of ARM vs. Intel isn't a thing. You can make ARM chips have higher IPC than Intel if you want ... it just wouldn't do much good on a phone.

Remember, if you strip out the complex instruction decoder and microcode from an Intel chip, what you are left with is a RISC CPU. Many RISC CPUs, by the way, have very high IPCs. Look at SPARC, PowerPC for workstations, etc.

I’ve designed PowerPC, SPARC, and x86 chips, and your explanation is pretty much nonsense.

My explanation was from a simple, high level. The SPARC and PowerPC, in their day, got outclassed by cheaper Intel CISC CPUs. The SPARC and the PowerPC at the time ran into thermal problems trying to compete with Intel CISC chips. Where are the RISC workstations today? Why do we not see high-end applications and games running on RISC?

RISC is great, especially if you completely control the OS and the software so you can optimize for it. Hence small IoT devices running some form of stripped-down Linux running one application can do well. If you need to throw a bunch of different software at it, in real time, CISC is better.



Here is a good explanation of both for the audience.

https://www.allaboutcircuits.com/news/understanding-the-differences-between-arm-and-x86-cores/

"For example, many RISC-based machines perform operations between registers, which commonly requires the program to load variables into registers before performing an operation. A CISC-based machine, however, can (or should) be able to perform operations between registers, between a register and a memory location, and even between memory locations. Other common operations include multiplication with floating point numbers, barrel rolls, single instruction loops, complex memory manipulation, memory searches, and much more. "
 
Because it is less complex it is easier to shrink to 10/7/5nm.

CISC/RISC have nothing to do with the ability to shrink silicon geometry.

The IPC of an Intel CPU over an ARM CPU is many times greater.

RISC IPC is likely higher than CISC, all else being equal. With less instruction complexity, it is easier to execute more instructions. The flip side is that some operations will take more instructions. Also, higher-level CISC instructions could provide more hints to the scheduler.

The real question today regarding RISC v CISC is the power draw of the more complex instruction decoding needed by CISC. Does this extra expense return the investment? Also, what if a RISC team can spin their design every year, while the CISC team needs two years?

The reason it looks like CISC won is that for many years Intel's silicon fabrication technology was a generation ahead of everyone else.
 
My explanation was from a simple, high level. The SPARC and PowerPC, in their day, got outclassed by cheaper Intel CISC CPUs. The SPARC and the PowerPC at the time ran into thermal problems trying to compete with Intel CISC chips. Where are the RISC workstations today? Why do we not see high-end applications and games running on RISC?

RISC is great, especially if you completely control the OS and the software so you can optimize for it. Hence small IoT devices running some form of stripped-down Linux running one application can do well. If you need to throw a bunch of different software at it, in real time, CISC is better.



Here is a good explanation of both for the audience.

https://www.allaboutcircuits.com/news/understanding-the-differences-between-arm-and-x86-cores/

"For example, many RISC-based machines perform operations between registers, which commonly requires the program to load variables into registers before performing an operation. A CISC-based machine, however, can (or should) be able to perform operations between registers, between a register and a memory location, and even between memory locations. Other common operations include multiplication with floating point numbers, barrel rolls, single instruction loops, complex memory manipulation, memory searches, and much more. "
Not just simple and high level, your explanation was incorrect.

If I recall correctly, SPARC M8 claimed 2x Intel performance when it was launched. I haven't used one in a good while, so I don't know exactly what they're comparing, but whether it's 2x or 0.5x, it's still the same order of magnitude as Intel. It's true that RISC workstations got replaced by Intel counterparts, but that's down to economy of scale rather than anything else. Intel sold more chips, so they could put more into R&D, which paid off, and they got faster quicker than the RISC chips, and here we are. Intel x86 was not particularly impressive back in the day, but in the modern day I think they are. The instruction set is still weird, but that's arguably true for SPARC and ARM as well. I did write a.... erm... Lisp-to-SPARC compiler back in the day, which is a weird thing to do, but you do get exposed to the instruction set.

RISC servers are still a thing, though maybe a bit niche these days.
 
My explanation was from a simple, high level. The SPARC and PowerPC, in their day, got outclassed by cheaper Intel CISC CPUs. The SPARC and the PowerPC at the time ran into thermal problems trying to compete with Intel CISC chips. Where are the RISC workstations today? Why do we not see high-end applications and games running on RISC?

RISC is great, especially if you completely control the OS and the software so you can optimize for it. Hence small IoT devices running some form of stripped-down Linux running one application can do well. If you need to throw a bunch of different software at it, in real time, CISC is better.



Here is a good explanation of both for the audience.

https://www.allaboutcircuits.com/news/understanding-the-differences-between-arm-and-x86-cores/

"For example, many RISC-based machines perform operations between registers, which commonly requires the program to load variables into registers before performing an operation. A CISC-based machine, however, can (or should) be able to perform operations between registers, between a register and a memory location, and even between memory locations. Other common operations include multiplication with floating point numbers, barrel rolls, single instruction loops, complex memory manipulation, memory searches, and much more. "
TODAY you can buy PowerPC and SPARC CPUs with higher IPC than Intel's best.

RISC CPUs do floating point. CISC CPUs have to break up instructions that access memory into separate instructions that include a load or store. You have no idea what you are talking about.

x86 has one instruction that adds two numbers from RAM and puts the result in RAM. It breaks it into two instructions to fetch the operands, one to add, and one to put the result in memory. So it takes four instructions and many cycles. Same as RISC.
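To make that split concrete, here is a rough C sketch; the function name and the explicit temporaries are made up for illustration, and they mirror the load/load/add/store sequence described above rather than anything a real decoder actually emits:

```c
#include <stdint.h>

/* Illustrative sketch only: each step below corresponds to one of the
   "reduced" operations a memory-to-memory add gets broken into. */
void add_mem(int32_t *dst, const int32_t *a, const int32_t *b)
{
    int32_t op1 = *a;        /* fetch the first operand from memory  */
    int32_t op2 = *b;        /* fetch the second operand from memory */
    int32_t sum = op1 + op2; /* the add itself, register to register */
    *dst = sum;              /* store the result back to memory      */
}
```

On a load/store (RISC) target the compiler emits essentially that four-step sequence directly; on x86 the decoder performs an equivalent split at run time.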
 
TODAY you can buy PowerPC and SPARC CPUs with higher IPC than Intel's best.

RISC CPUs do floating point. CISC CPUs have to break up instructions that access memory into separate instructions that include a load or store. You have no idea what you are talking about.

x86 has one instruction that adds two numbers from RAM and puts the result in RAM. It breaks it into two instructions to fetch the operands, one to add, and one to put the result in memory. So it takes four instructions and many cycles. Same as RISC.
To make things more complex and interesting, it's possible to build a chip that supports multiple ISAs, a bit like POWER can switch endianness. So a chip could actually be built that is both RISC and CISC in the same silicon. AMD were presumably working on such a chip a few years back, but I don't know what came of it.

You wouldn't happen to know more about any of that, would you?
 
To make things more complex and interesting, it's possible to build a chip that supports multiple ISAs, a bit like POWER can switch endianness. So a chip could actually be built that is both RISC and CISC in the same silicon. AMD were presumably working on such a chip a few years back, but I don't know what came of it.

You wouldn't happen to know more about any of that, would you?
We weren’t doing that at AMD. Our chips all used RISC cores, but there was never a plan to allow direct access to it as a separate architecture.

The original plan at Exponential Technology was to do exactly that.

What this dude seems to believe is that Intel has more “powerful” instructions that magically are just as fast as reduced instructions. It just doesn’t work that way. If an instruction is three times as powerful, it takes three times longer to run.
 
We weren’t doing that at AMD. Our chips all used RISC cores, but there was never a plan to allow direct access to it as a separate architecture.

The original plan at Exponential Technology was to do exactly that.

What this dude seems to believe is that Intel has more “powerful” instructions that magically are just as fast as reduced instructions. It just doesn’t work that way. If an instruction is three times as powerful, it takes three times longer to run.
Yeah, that's clearly not the case, though with a naive understanding of processors I can understand how one might get that idea. When you have an x86 instruction that does multiply+add+move in one go, it might appear to do more work than 2-3 separate instructions. But if that was ever a good mental model for how processors work, it certainly isn't anymore.
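For what it's worth, C has exposed a fused multiply-add since C99, and whether it ends up as one instruction or several is entirely up to the compiler and the target, which is sort of the point:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    double a = 1.5, b = 2.0, c = 0.25;

    /* fma() computes a*b + c with a single rounding. Depending on the
       target CPU and the build flags this may become one fused
       multiply-add instruction or an ordinary multiply followed by an
       add; the source code is the same either way. */
    printf("%f\n", fma(a, b, c));
    return 0;
}
```

(On most Unix-like systems you'd build it with something like `cc fma_demo.c -lm`; the file name is just a placeholder.)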

RAM also doesn't work the way people generally think (don't know about dude). Neither do caches. Or probably just about anything in a modern computer :)

And neither does macOS. It's weird when people say macOS and iOS will eventually merge, when they have been merged from the very start. At the core it's the same OS, and this is why it doesn't take any leaks at all to know that macOS has been running on ARM for at least half a decade, probably more. Making ARM Macs is about performance and compatibility, and of course most of all whether it makes financial sense (which it probably does).

The press reported Skybridge as dead in May 2015, though maybe that was just socket compatibility. Doesn't matter, multi-ISA is a cool idea in theory, but the fact that it isn't done is evidence that it doesn't make sense in practice.
 
The SPARC and the PowerPC at the time ran into thermal problems trying to compete with Intel CISC chips.

PPC "died" because the AIM alliance failed. Each of the three members needed to take PPC in a different direction. Apple had neither the money nor expertise to continue developing PPC for the desktop market. The architecture is still alive as IBM's POWER and (former)-Motorola embedded chips. The lack of development funds also meant that Apple was forever one DRAM generation behind all the way from the PC33 era through to PC2-4200.
 
Well it will be interesting if it does happen.

A great opportunity for smaller and faster moving software companies.

The behemoths like Adobe might find it similar to the meteor that did for the dinosaurs.

The software world is in many ways very different today compared to 2005.

As long as Apple don't lose their collective heads and make the desktop OS useless for actual production of content, they should be fine… otherwise the exodus to Windows will be massive.
 
Here is a good explanation of both for the audience.

That's actually a lousy explanation.

x86 has one instruction that adds two numbers from RAM and puts the result in RAM. It breaks it into two instructions to fetch the operands, one to add, and one to put the result in memory. So it takes four instructions and many cycles. Same as RISC.

An assembly language programmer may be very thrilled to have this type of CISC instruction. However, it is not all that useful. There could be hundreds of lines of code between instances of this instruction class. That's because usually you'll want to immediately do something else with that sum, and you'd like it to be in a CPU register. Therefore you'll usually code it almost like you're on a RISC processor.

So - this instruction adds lots of complexity to a CPU while providing very little usefulness.
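A hedged example of what I mean (the function and variable names are invented): the intermediate value is needed again immediately, so the compiler keeps it in a register anyway, and a memory-to-memory add instruction would only force an extra load to bring the sum back for the very next operation.

```c
/* Illustrative only: 'subtotal' is reused right away, so it lives in a
   register; an add that wrote the sum straight to memory would just add
   a round trip. */
int invoice_total(int net, int tax, int discount)
{
    int subtotal = net + tax;   /* you want this in a register...      */
    return subtotal - discount; /* ...because you use it straight away */
}
```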
 
And neither does macOS. It's weird when people say macOS and iOS will eventually merge, when they have been merged from the very start. At the core it's the same OS, and this is why it doesn't take any leaks at all to know that macOS has been running on ARM for at least half a decade, probably more. Making ARM Macs is about performance and compatibility, and of course most of all whether it makes financial sense (which it probably does).

I know we're probably drifting way off course here, but you're talking about Darwin, right?
So in practice, how easy/difficult is cross-compatibility? Performance constraints aside.
 
That's actually a lousy explanation.



An assembly language programmer may be very thrilled to have this type of CISC instruction. However, it is not all that useful. There could be hundreds of lines of code between instances of this instruction class. That's because usually you'll want to immediately do something else with that sum, and you'd like it to be in a CPU register. Therefore you'll usually code it almost like you're on a RISC processor.

So - this instruction adds lots of complexity to a CPU while providing very little usefulness.

Exactly. CISC was great when we were hand-compiling code. There’s no advantage now that we aren’t. You still convert to RISC, but you do it less efficiently, and you do it every time you run instead of once when you compile.
 
I know we're probably drifting way off course here, but you're talking about Darwin, right?
So in practice, how easy/difficult is cross-compatibility? Performance constraints aside.
It's most definitely not just Darwin. Yes, there's the whole UIKit vs. AppKit thing, but some of the frameworks and APIs are either the same or very similar across both OSes. Also, don't forget that Apple is coming up with an official way to easily port iPhone apps to the Mac using a macOS-compatible implementation of UIKit, which they've already used internally to port Home, Stocks and Voice Memos to Mojave; should they take the plunge and release ARM-based Macs a few years from now, ARM-compatible versions of those very apps would be just a recompile away. In fact, cross-platform game engines would be easier to make compatible with both the Mac and the iPhone/iPad, which might somewhat offset the disadvantages of deprecating OpenGL in favor of Metal. The writing is on the wall, as all of Apple's recent moves seem to be part of a strategy to prime developers for that very scenario.

Also, you'd be crazy to think Apple doesn't have a full version of macOS running on an ARM-based machine (like, say, an Apple TV, which, being kind of a “shrunken Mac mini” of sorts, complete with native 4K HDMI output and all, is the most obvious candidate) hidden in an R&D office somewhere, ever since iPhone OS 1, just like they maintained their x86 Mac OS X branch from the NeXTSTEP/Rhapsody days in secret all the way to the first public release of Tiger for x86.

Sure, they deprecated PowerPC and I don't believe they have a secret build compatible with whatever POWER-based processors are available these days, but ARM on Macs? They would be foolish not to have it in their pipeline or at least as a plan B; they are a full-blown ARM licensee, they develop their own A-series custom chips and, as such, they control the whole stack, as per Jobs' and Cook's philosophy.

After developing their own integrated graphics and M-series chips for the iPhone, and switching to a T-series chipset and a modern filesystem tuned for flash drives on the Mac as well, swapping the x86 processor for an ARM-based one and stacking a desktop OS on top of it seems to be the next obvious step. They are doing the transition right in front of our eyes, one small piece at a time, and they're down to the last physical one… My guess is: they will release a round of new Intel-based Macs with T-series chips (perhaps a lower-power T3 chip, even? Don't forget that, to this moment, only the iMac Pro and the 2018 MacBook Pros have those…), just to iron out the kinks and be sure they work fine, and only then start a new architecture transition (which fits in perfectly with the rumoured 2020 date, if I may add).

If I had to guess, there will come a time when there are two Macs very similar to one another (sorry, guys, no case redesigns until *after* the transition, save for the new Mac Pro, which may remain as an Intel-based machine for a loooooong time), except for their processor architecture, kind of like the Rev. C iMac G5 and the Early 2006 Intel iMac. Remember those? They were so similar on the inside that it seemed as if they were developed simultaneously and the former was just released as a stopgap, or as if the latter was just a minor redesign (check the image attachment to see what I mean… If it wasn't for the key I've added and the Intel chips on the Intel board, you could easily confuse which was which; even the screws are in similar places!).

And if I had to bet on the model, the transition would start from the bottom up, with the 12'' Retina MacBook; it might even take a while, just to “test the waters”. That's not a professional computer (in the sense of raw computational power) anyway, so it's not like the software its target market would want to run (say, productivity suites like Office and said apps converted from iOS) wouldn't be available at launch. In fact, most of the software for that machine is already available for the iPad Pro, which one could argue is just a keyboard-less MacBook, or vice-versa.

By the way, for the sake of comparison, let's see when each of Apple's big transitions started and finished: 68k to PowerPC lasted from 1994 to 1998; Classic Mac OS to Mac OS X/OS X/macOS lasted from 2000 to 2007 (yes, Tiger for PowerPC still ran the Classic environment, and it is patently obvious why such an OS transition would last longer; it was much harder to port software from classic Mac OS, and much of it just had to be replaced with, well, alternative software from different companies); PowerPC to Intel lasted from 2006 to 2009. So, if we consider these transitions inevitable (and survivable, as Apple proved time and time again!), the iPhone was both a “distraction” of sorts and the catalyst for the next one (and in and of itself it amounted to sort of a shadow “OS transition”, as indeed many of the technologies developed for iOS ended up being used for an internal revamp of macOS), which seems to be reaching its logical time, so to speak… It will likely be announced next year and take place over 2-3 years, so, from 2020 to 2022-3, possibly with an exception made for higher-end machines, depending on how the market and Intel's development evolve (I mean, a dual-architecture lineup could be workable; even though Apple likes to chuck away as much legacy cruft as they can, they are still the biggest company in the world and they could certainly manage that if they wanted).
 

Attachments: iMac comparison.jpg
It's truly heartening that you think that MacOSX is worth fifteen hundred dollars.

Please explain how your takeaway from what I said is that macOS is worth $1500?
2) Are you joking? Why the hell would Apple support eGPU? Just put in one USB-C port for charging and hope Bluetooth and Wi-Fi solve everything?
3) People dual boot to get performance. A BIT rude saying just get a $300 Windows machine.

Suggesting that people who want to dual boot could easily buy one of many affordable laptops is not rude. Certainly no more rude than the minority of people who dual boot demanding Apple bend over backwards, at the expense of the majority of its users, to give them exactly what THEY want. That’s selfish.
 
I know we're probably drifting way off course here, but you're talking about Darwin, right?
So in practice, how easy/difficult is cross-compatibility? Performance constraints aside.
Yes, Darwin is, to the best of my understanding, the underlying core operating system for both macOS and iOS. It's a Unix-like OS that is a mix of Mach (the kernel), BSD, NeXTSTEP, other free software, and some Apple code. It covers things like the file system, networking, the core OS, threading, memory management, security, and basic inter-process communication. (I'm less sure whether Cocoa [Touch] counts as part of it; strictly speaking, those frameworks sit a layer above.) On top of this, both macOS and iOS add custom parts like UIKit, and I'm not sure what the counterpart is called on macOS since I don't usually program it directly. What's interesting about all this is that Darwin is an open-source OS.

In terms of cross-compatibility, apps (in general) are written to APIs. For iPhone/iPad apps, I think a lot of the code is written to UIKit and similar, and that is effectively why you can't run iOS apps on macOS. Today this means that you have to write separate user-interface code for macOS and iOS apps, even though code that targets lower-level APIs can stay mostly the same. However, Apple are moving the iOS APIs to macOS, thereby making it possible for developers to target both platforms with the same code. This wouldn't necessarily have to include ARM emulation on x86 Macs, but presumably it would. This would mean that you could run iOS apps under macOS seamlessly.

I assume by cross-compatibility you meant app compatibility, but if you mean Darwin, then it is also designed to be easy to port to other architectures. This has been done with PPC, 32-bit ARM, and x86, and if they wanted to put it on any other architecture in the future, that would be relatively easy (to whatever extent writing low-level kernel code can ever be thought of as easy). But as already stated, for 64-bit ARM they don't need to do this, because they already have it. If I remember correctly, iOS has been 64-bit since the iPhone 5s and its A7 chip in 2013, though Apple would have had Darwin running on 64-bit ARM for some time before then.
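To illustrate the "same source, different architecture" point in the smallest possible way, here's plain C using the standard Clang/GCC architecture macros; nothing here is Apple-specific, and the universal-binary note after the code is my understanding of Apple's toolchain rather than gospel.

```c
#include <stdio.h>

int main(void)
{
    /* The same source file compiles unchanged for either architecture;
       only the compiler's predefined macro tells you what it was built for. */
#if defined(__x86_64__)
    puts("built for x86_64");
#elif defined(__aarch64__) || defined(__arm64__)
    puts("built for 64-bit ARM");
#else
    puts("built for some other architecture");
#endif
    return 0;
}
```

As far as I know, Apple's toolchain can even build both slices into one fat binary with something like `clang -arch x86_64 -arch arm64 demo.c`, and `lipo -info a.out` will then list the architectures it contains.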

Note, this is from my understanding, and I'm not really an Apple developer. If others can improve on this, then please be welcome.
 
Please explain how your takeaway from what I said is that macOS is worth $1500?
You were the one suggesting that we all use a $300 computer from Walmart for our Windows needs. That suggests to me that you think there is little to no value associated with Apple hardware.

Personally, I think that a 5K display is useful, even when I have to use Windows. It's not as if I downgrade my expectations of what's possible when I use a Windows computer. CAD is CAD.
 
Exactly. CISC was great when we were hand-compiling code. There’s no advantage now that we aren’t. You still convert to RISC, but you do it less efficiently, and you do it every time you run instead of once when you compile.
The main CISC architectures that I'm familiar with are the DEC PDP-11 and VAX, x86, Motorola 68k and MOS 6502. These are designs from the '60s and '70s. From the '80s onward, to my knowledge, all new CPU architectures have been RISC. Even Intel would probably do a RISC architecture if they were to redesign one today, and in fact they already tried something along those lines two decades ago with Itanium. Itanium didn't get much of anywhere for whatever reasons, but I think everyone agrees that CISC isn't necessarily the best way to go anymore; x86 is going to stay the way it is because of its massive market share.
In terms of running iOS apps on macOS, this has technically been possible (for developers) since forever in the Xcode simulator. I think this is actually iOS compiled for x86, and apps are compiled to x86 code. This is of course the exact opposite of what we're discussing in this thread, but conceptually the same thing. It's also not streamlined for end users at all, but it shows that, tech-wise, both operating systems are interoperable.

Anyway, from a tech perspective, Apple could have had ARM Macs a long long time ago. Whether it ends up happening, completely or partially, is going to depend on other things entirely. The tech is already there, at least for a low end laptop like the 12" MacBook.
 
Anyway, from a tech perspective, Apple could have had ARM Macs a long long time ago. Whether it ends up happening, completely or partially, is going to depend on other things entirely. The tech is already there, at least for a low end laptop like the 12" MacBook.

Sure, they could. But they would still be dependent on Intel or other companies for their chipset…

That's why I believe they waited until they had all their ducks in a row (meaning, all the main custom chips) to take that leap. The only reason they haven't done it earlier, I believe, is that they want to thoroughly test the newer components before they fully commit their whole production chain to the new architecture… And all those issues with the T2 firmware are a clear indication that the people in the small niche that is their professional market are, rather unfortunately, being used a bit like public beta testers for the main event, the consumer machines.

In a sense, the transition has started already, with the ancillary components and technologies, from the top down, and will start officially and visibly, with the processor itself, from the bottom up. If you really stop and think about it, it makes huge sense and explains a lot of Apple's recent actions (those used to become obvious only in hindsight, but Apple has done so many transitions already that, by now, they are becoming a bit predictable; remember when they started pushing heavily for devs to switch to Xcode and Cocoa? Yes, they pulled the rug out from under Adobe when they deprecated Carbon 64 at the last minute, and that wasn't very cool and all, but those lazy frenemy bastards should've known better, since the x86-transition writing was indeed on the wall; and do you know who's ready for an ARM transition this time? Their competition, Serif… Or do you think they ported their Affinity suite to the iPad just because? They're using modern, platform- and architecture-agnostic C code on their graphics engine for a reason ;) ).

By the way, the fact that ARM chip manufacturing would have to be spread out across the iPhone, the iPad, the Apple TV, the Apple Watch, the HomePod *and* the Mac lines might become a bit of an issue. Are there any other chip manufacturers around besides TSMC and Samsung that could rise to the task? AMD? Or even, Jobs forbid, Intel itself? :D Either way, we really should pay attention to supply chain rumours, especially those about backstage deals, as they may be telling of things to come. I mean, all those processors have to come from somewhere.
 
Are you sure? I mean, I understand that they painted themselves into a thermal envelope corner with the Mac Pro, and the entire MacBook and MacBook Pro lines are now so thin that they indeed have to wait for faster speeds at the same TDP, but… the Mac Mini? The iMac? The former could and should've been upgraded 3 or 4 times already, and the latter could very well receive speed bumps between the minor internal redesigns it underwent.

It's ridiculous, and Intel is not the only one to blame here. Apple is making a conscious choice when selling severely outdated processors on machines priced as if they were just released. It's insulting, and I'm guessing they have huge sales spikes whenever new machines are released. It's a bit stupid, because if they released upgrades more frequently, maybe the demand would be more easily manageable… People wouldn't just rush to buy new Macs, and they would lose a lot on margins, but they would probably make up for it in goodwill and in numbers.
 
Seems to me Apple wants to start fabricating chips now that chips have pretty much hit the wall in terms of performance.

I just don't see a real performance boost with the new chips. And now that I purchase machines in a much, much longer cycle I STILL don't see the performance boosts like you used to.
The A11 is much cheaper and less power-hungry than Intel's chips. Apple could put 4 or 8 ARM CPUs in a laptop.
 
Sure, they could. But they would still be dependent on Intel or other companies for their chipset…

That's why I believe they waited until they had all their ducks in a row (meaning, all the main custom chips) to take that leap. The only reason they haven't done it earlier, I believe, is that they want to thoroughly test the newer components before they fully commit their whole production chain to the new architecture… And all those issues with the T2 firmware are a clear indication that the people in the small niche that is their professional market are, rather unfortunately, being used a bit like public beta testers for the main event, the consumer machines.

In a sense, the transition has started already, with the ancillary components and technologies, from the top down, and will start officially and visibly, with the processor itself, from the bottom up. If you really stop and think about it, it makes huge sense and explains a lot of Apple's recent actions (those used to become obvious only in hindsight, but Apple has done so many transitions already that, by now, they are becoming a bit predictable; remember when they started pushing heavily for devs to switch to Xcode and Cocoa? Yes, they pulled the rug out from under Adobe when they deprecated Carbon 64 at the last minute, and that wasn't very cool and all, but those lazy frenemy bastards should've known better, since the x86-transition writing was indeed on the wall; and do you know who's ready for an ARM transition this time? Their competition, Serif… Or do you think they ported their Affinity suite to the iPad just because? They're using modern, platform- and architecture-agnostic C code on their graphics engine for a reason ;) ).

By the way, the fact that ARM chip manufacturing would have to be spread out across the iPhone, the iPad, the Apple TV, the Apple Watch, the HomePod *and* the Mac lines might become a bit of an issue. Are there any other chip manufacturers around besides TSMC and Samsung that could rise to the task? AMD? Or even, Jobs forbid, Intel itself? :D Either way, we really should pay attention to supply chain rumours, especially those about backstage deals, as they may be telling of things to come. I mean, all those processors have to come from somewhere.
Indeed, the transition makes a lot of sense, though possibly more so to Apple than to end users. I think you're spot on w.r.t. the chipset; what they're doing there is actually quite interesting. With the T2 and its built-in SSD controller and disk encryption, they only have to add the NAND chips. One subtle benefit is that they can spread out the SSD chips across the logic board, instead of having them all in the same place, and they of course already do this. This doesn't necessarily benefit end users, who would prefer a replaceable M.2 device instead, but I suspect it benefits Apple in terms of the design and the manufacturing costs. Another benefit is that they can offer disk encryption with no slowdown. One can easily imagine other functionality being moved into custom chips over time. One thing I don't think they have yet is a TB3 controller, but that's easy enough to either buy from Intel or integrate themselves eventually.

As far as chip manufacturing, they're already selling well over 50M ARM devices per quarter, and something like 3.5M Macs. Only a fraction of those Macs would be moving to ARM in the first wave, so I don't think the manufacturing would be a major issue initially. And y'know... I wouldn't be very surprised to eventually see Apple end up with their own fabs...

For the foreseeable future, though, I suspect that if and when ARM Macs do appear, it's still going to be a mixed lineup, with Intel chips at the higher end of performance for some time. That's not how it happened last time, but then the performance of the Intel chips was quite far ahead of the PPC chips, whereas the ARM chips are still at the lower end of the spectrum for macOS devices. (And they're not just suddenly going to pop out Xeon-18-core-level ARM chips out of nowhere.)
 
Please explain how your takeaway from what I said is that macOS is worth $1500?

Suggesting that people who want to dual boot could easily buy one of many affordable laptops is not rude. Certainly no more rude than the minority of people who dual boot demanding Apple bend over backwards, at the expense of the majority of its users, to give them exactly what THEY want. That’s selfish.
I don't think so, especially since the graphics card fiasco in Mojave. It's the end user's choice to use Windows on 100% of the hardware. It's about choices, not about limitations. "Think outside the box."
 