Intel is not going to cry over losing Apple; Apple is a very small customer for them... As a former Intel engineer, I know the hardship of building x64-architecture chips at anything smaller than 12 nm. We are talking about playing with almost single atoms...

The ARM architecture is simpler than x64; Intel could have fit that kind of architecture at 5 nm a long time ago... Building a car is not the same as building a big commercial plane.

This post makes no sense. The physics problems that arise at smaller feature sizes (leakage, DFM issues, electromigration, IR drop, etc.) affect all instruction set architectures equally. The electrons and holes don't care if you are RISC or CISC. Maxwell's equations still apply.

Having designed PowerPC, SPARC, MIPS, and AMD64 CPUs, I've not once found that one architecture works at a process node but another doesn't. The only differences are in yield. Some architectures require more transistors than others.
 
This post makes no sense. The physics problems that arise at smaller feature sizes (leakage, DFM issues, electromigration, IR drop, etc.) affect all instruction set architectures equally. The electrons and holes don't care if you are RISC or CISC. Maxwell's equations still apply.

Having designed PowerPC, SPARC, MIPS, and AMD64 CPUs, I've not once found that one architecture works at a process node but another doesn't. The only differences are in yield. Some architectures require more transistors than others.


If you help design CPUs, then you know that what makes the logic work is how the connections between transistors, registers, ALUs, and CUs are built; the more complex the connections, the harder they are to shrink, and the more crosstalk problems you have, even between metal layers, not only in the silicon substrate. I already explained this in a previous post.

Another point, which I've already mentioned, is the old baggage the x86 architecture carries, to the detriment of efficiency, something that ARM doesn't carry. x86 CPUs need to be backwards compatible with software and OSes from the '80s or even earlier. And of course there's VT-x (hardware virtualization support) included in x86-64 CPUs, which ARM doesn't have.

Let me add, to finish, that I'm not an Intel zealot. I know Intel chips are not the most efficient here; they are the big truck, but their customers ask them to be backwards compatible and a bit of a jack of all trades. I would love to see a Mac running an in-house Apple CPU; it would help most apps run better. More complex software is a different issue, and it would need to be rebuilt almost from the ground up to make full use of ARM's advantages.
 
How do you overcome the extremely cold temperatures needed for effective quantum computing?

A liquid nitrogen or liquid helium setup at home just seems... dangerous and inefficient.

I left Intel some time ago, and even if I knew, I wouldn't have the right to say it, since there's an NDA in place. But the way Intel works is to have several development lines, with isolated teams working in different countries. So carbon nanotubes are a good option for the meantime, but the big future is quantum CPUs doing the parallel work together with classic CPUs.

To be clear, I like the ARM architecture too.
 
How do you overcome the extremely cold temperatures needed for effective quantum computing?

A liquid nitrogen or liquid helium setup at home just seems... dangerous and inefficient.

Quantum computing is still in its infancy; right now it has no use at home or even in business. It's the long-term future, but in the meantime we need to explore other technologies like carbon nanotubes, and we are already using the parallel-processing/multicore approach.
 
If you help design CPUs, then you know that what makes the logic work is how the connections between transistors, registers, ALUs, and CUs are built; the more complex the connections, the harder they are to shrink, and the more crosstalk problems you have, even between metal layers, not only in the silicon substrate. I already explained this in a previous post.

I designed many CPUs. Exponential x704, AMD K6+, Opteron, Athlon 64, UltraSparc V, etc. I don't know what you are trying to say. Are you talking about interconnect coupling? If so, what does that have to do with anything? There is just as much crosstalk in a PowerPC as in an x86. The interconnect graph is equally complicated, and Cadence routes them using the same algorithms. In fact, the reason the first x704 tapeout didn't operate at full speed was due to a coupling issue. And coupling is taken care of by adjusting interconnect pitch, wire swizzling, and adding ground/power planes. The same techniques are used regardless of instruction set architecture.

If you are saying it's somehow more difficult to shrink an x86 than an ARM chip, that's nonsense. Internally, they are pretty much the same. The main difference is that an x86 has a much more complicated instruction decoder. But the execution units, caches, floating point units, scheduler, register renaming, TLBs, etc. all look almost identical. I helped shrink many x86 chips. I never once thought "gee, this would be so much easier to shrink if this were a RISC chip." It's the same.

Another point, which I've already mentioned, is the old baggage the x86 architecture carries, to the detriment of efficiency, something that ARM doesn't carry. x86 CPUs need to be backwards compatible with software and OSes from the '80s or even earlier. And of course there's VT-x (hardware virtualization support) included in x86-64 CPUs, which ARM doesn't have.

Ok. Not relevant to anything that was said, but ok. Note that x86-64 gets rid of a lot of that cruft, so that compatibility is achieved by microcode, instead, and there isn't a lot of random hardware in the design to support the old instruction sets. As I said, the instruction decoder (including the microcode ROMs) is the main difference. It adds about 20% to the area of a single core die. Of course the penalty is less when you factor in cache, I/Os, etc.
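To put that 20% figure in perspective, here is a quick back-of-the-envelope calculation. Only the "about 20% of a single core" number comes from the post above; the core/cache/I-O split is a purely illustrative assumption, not a real die measurement.

[CODE=python]
# Back-of-the-envelope: how a ~20% decoder overhead per core gets diluted
# at the whole-die level. The 20% figure is from the post above; the
# core/uncore split below is an assumed illustration only.

decoder_share_of_core = 0.20   # x86 decoder + microcode ROMs, per the post
cores_share_of_die    = 0.40   # ASSUMPTION: cores are 40% of total die area
# the remaining 60% is assumed to be cache, memory controllers, I/O, etc.

decoder_share_of_die = decoder_share_of_core * cores_share_of_die
print(f"Decoder overhead at the die level: {decoder_share_of_die:.0%}")  # ~8%
[/CODE]

So under those assumed proportions, the die-level penalty drops to the high single digits, which is why the author says it matters less once cache and I/O are counted.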
 
And do you really think macOS on ARM will have any significant bearing on the server market?

Directly, no. But I think as more developers find they're using ARM CPUs on the desktop, they'll increasingly include ARM build targets for *nix.

Already on my Pi, I almost always find what I want built for ARM.
 
Who cares about the etch size? When will we finally see all the Skylake features we were promised, like AVX-512, or modern video standards?
 
I designed many CPUs. Exponential x704, AMD K6+, Opteron, Athlon 64, UltraSparc V, etc. ...

I'm not a designer, so you have the upper hand here; I just used to be part of the manufacturing process (the diffusion area and quality control of the whole process before sort). But I used to work very closely with the final design teams. At Intel, one person doesn't design a CPU; it takes several teams with hundreds of people across the globe and a good few years of work. My job was to be part of the team that builds the physical CPU design and solves all the problems in the process, so a deep knowledge of the CPU's structure is a must.

By the way, there's knowledge that is impossible to learn outside the fab, at university or anywhere else, because the information is considered commercially protected and is under NDAs. Intel used to have three "shock months" for new engineers coming in to work. We used to say, "forget everything you learned, this is a new world."
 
I'm not a designer, so you have the upper hand here; I just used to be part of the manufacturing process (the diffusion area and quality control of the whole process before sort). But I used to work very closely with the final design teams. At Intel, one person doesn't design a CPU; it takes several teams with hundreds of people across the globe and a good few years of work. My job was to be part of the team that builds the physical CPU design and solves all the problems in the process, so a deep knowledge of the CPU's structure is a must.

By the way, there's knowledge that is impossible to learn outside the fab, at university or anywhere else, because the information is considered commercially protected and is under NDAs. Intel used to have three "shock months" for new engineers coming in to work. We used to say, "forget everything you learned, this is a new world."

We never hired folks from Intel because they worked on teams of hundreds and therefore had no understanding of the overall design. We'd ask them questions at interviews and all they knew about was how to design the adder circuit they'd been responsible for for 10 years. At AMD, for example, there were about 15-20 main people who designed x86-64 and the first chip that used it, plus some folks who did things like design verification. The physical designers did their own logic design, floor planning, place and route, etc. And, for example, rather than owning, say, an adder, I would own logic and physical design for all the integer and floating point execution units, or the integer units and the scheduler, etc. Same thing at Sun. At Exponential, when I arrived, I took over responsibility for half the chip (to be fair, it was a shrink-type situation by that point). We worked closely with the fab people, but we drove what they were doing. And the drivers for process technology didn't change one iota depending on whether we were designing an x86 or a RISC.
 
You could install Windows 10 on a PC from 2006.

You could install the latest Linux on a PC from 1989.
Both of these are factual statements! As the proud owner of a PC from 2007 (the GPU is actually from 2012, because of driver support) that's running Windows 10, and a PC from 1995 (not quite '89, unfortunately; that, uh... would be almost as old as me) that's running a fairly recent low-power distro of Linux, I can confirm both statements.

I can also confirm that neither of those user experiences is very good. In fact, I'd go so far as to say they're both an unusable pile of crap with those OSes on them.

If your hardware is more than 7 or 8 years older than your OS, particularly if you haven't made upgrades to stuff like your storage speed or RAM (I can't even tell you how much I hate Win10 without an SSD), I feel comfortable saying you are going to have a problem, regardless of whether the installation actually completes. So, actually, I feel like Apple has the right idea in this regard.
 
We never hired folks from Intel because they worked on teams of hundreds and therefore had no understanding of the overall design. We'd ask them questions at interviews and all they knew about was how to design the adder circuit they'd been responsible for for 10 years. At AMD, for example, there were about 15-20 main people who designed x86-64 and the first chip that used it, plus some folks who did things like design verification. The physical designers did their own logic design, floor planning, place and route, etc. And, for example, rather than owning, say, an adder, I would own logic and physical design for all the integer and floating point execution units, or the integer units and the scheduler, etc. Same thing at Sun. At Exponential, when I arrived, I took over responsibility for half the chip (to be fair, it was a shrink-type situation by that point). We worked closely with the fab people, but we drove what they were doing. And the drivers for process technology didn't change one iota depending on whether we were designing an x86 or a RISC.

Just a small note... You know that a lot of ex-Intel people ended up working at AMD in the '90s during the Pentium crisis?
 
x86 CPUs need to be backwards compatible with software and OSes from the '80s or even earlier. And of course there's VT-x (hardware virtualization support) included in x86-64 CPUs, which ARM doesn't have.
I think this is one of the more important points that should be remembered/considered. A good portion of an Intel CPU die is devoted to backwards compatibility and to converting the whole variety of possible instructions into micro-ops. Consider what performance we'd be seeing from Intel if they had been able to jettison the cruft.

Question: Does Intel ship any mass produced processor that ONLY supports 64-bit instructions? If I remember correctly, Apple was able to free up a chunk of processor real estate by removing 32-bit compatibility.
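As a rough illustration of why the x86 decoder is the expensive part: with fixed-width instructions (as on ARM), every instruction boundary is known up front, while with variable-length instructions the decoder has to work out each instruction's length before it even knows where the next one starts. The sketch below uses made-up encodings and lengths, not real x86 or ARM ones.

[CODE=python]
# Toy illustration (NOT real x86/ARM encodings): with fixed-width instructions
# every boundary is known immediately; with variable-width ones you must decode
# each instruction's length before you know where the next instruction begins.

def arm_like_boundaries(code: bytes):
    # Fixed 4-byte instructions: boundaries are simply 0, 4, 8, ...
    return list(range(0, len(code), 4))

# Hypothetical length table for a made-up variable-width ISA:
# opcode byte 0x90 -> 1 byte, 0xB8 -> 5 bytes, everything else -> 3 bytes.
FAKE_LENGTHS = {0x90: 1, 0xB8: 5}

def x86_like_boundaries(code: bytes):
    # Must walk the stream serially, decoding one length at a time.
    boundaries, pc = [], 0
    while pc < len(code):
        boundaries.append(pc)
        pc += FAKE_LENGTHS.get(code[pc], 3)
    return boundaries

stream = bytes([0x90, 0xB8, 1, 2, 3, 4, 0x01, 0x02, 0x03])
print(arm_like_boundaries(stream))   # [0, 4, 8]  -- computable in parallel
print(x86_like_boundaries(stream))   # [0, 1, 6]  -- inherently sequential
[/CODE]

A real x86 decoder does this (plus prefix handling and micro-op cracking) for several instructions per cycle, which is where a good chunk of that extra area and complexity goes.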
 
Just a small note... You know that a lot of ex-Intel people ended up working at AMD in the '90s during the Pentium crisis?

I worked at AMD (microprocessor design) from 1997 onward. Are you referring to the Austin team? If so, I'll note that that team failed utterly, and AMD bought NexGen to actually have a decent K6. (I went to work for the NexGen group before it was integrated into AMD.) The NexGen group (with some DEC Alpha folks) also did the x86-64 work. I'm not aware of any Intel people in the California microprocessor design division in the decade I was there.

I think this is one of the more important points that should be remembered/considered. A good portion of an Intel CPU die is devoted to backwards compatibility and to converting the whole variety of possible instructions into micro-ops. Consider what performance we'd be seeing from Intel if they had been able to jettison the cruft.

Question: Does Intel ship any mass produced processor that ONLY supports 64-bit instructions? If I remember correctly, Apple was able to free up a chunk of processor real estate by removing 32-bit compatibility.

It takes 20% of the core die area to support the complex instruction decoder. On a multi-core die, taking into account cache area, I/Os, etc., the penalty is much smaller.

Still, it would be better not to need it at all.
 
I think this is one of the more important points that should be remembered/considered. A good portion of an Intel CPU die is devoted to backwards compatibility and to converting the whole variety of possible instructions into micro-ops. Consider what performance we'd be seeing from Intel if they had been able to jettison the cruft.

Question: Does Intel ship any mass produced processor that ONLY supports 64-bit instructions? If I remember correctly, Apple was able to free up a chunk of processor real estate by removing 32-bit compatibility.

Not commercially, but in my day we had some projects working on a completely different architecture, much closer to ARM... I left a long time ago, so I really don't know, or at least I can't say without compromising people...
 
Intel is not going to cry over losing Apple; Apple is a very small customer for them... As a former Intel engineer, I know the hardship of building x64-architecture chips at anything smaller than 12 nm. We are talking about playing with almost single atoms...

The ARM architecture is simpler than x64; Intel could have fit that kind of architecture at 5 nm a long time ago... Building a car is not the same as building a big commercial plane.
Apple's total chip orders may be "small," as you say, but Apple has a sterling reputation and leads the industry in terms of design and desirability. In other words, it is a prestigious brand, and losing Apple will not be good for Intel. It will be a blow to Intel's brand to lose Apple's non-iPad computer products if the ARM chips Apple uses get favorable reviews. That's if the rumors are true about Apple planning to ditch Intel for its own proprietary designs, which I think will happen, but a few years down the road. In the meantime... Apple can let Intel worry about what their plans may be.

I'm not an engineer, so I can't comment on the veracity of your statement about the x64 architecture and sub-12 nm chips. I have read that Intel and AMD's chips are much more densely populated with transistors than ARM chips (by like 50%). Is that true?
But I have also read that a good amount of what is in x64 is ancient and should be scrapped... leading to unnecessarily complex instruction sets and designs. Again, I don't know if this is accurate.

I only know that as a consumer I would like to see more competition in this space!

**EDITED TO ADD: Also, I think another way Intel would suffer from Apple using its own processors would be the PC industry following suit (if/when they see market acceptance of Apple's products). After all, ARM chips cost a lot less than Intel's. There's a huge market of low-to-mid-level PCs which could ostensibly use ARM chips and offer consumers lower prices and/or better profits for the PC makers.
 
I have read that Intel and AMD's chips are much more densely populated with transistors than ARM chips (by like 50%). Is that true?

Not all ARM chips. There have been A-series chips with comparatively low transistor density, but that's presumably because they are using the EVSX design style with 1-of-n logic. (This is a different way of designing circuits that has certain speed advantages in certain situations.) It's not clear if they are still doing that. But most ARM chips have similar active-area density to most x86 chips.
 
Just remember that quantum computers still need a classical computer to run the algorithm for reading the quantum CPU's output, and that two main problems still exist: the sensitivity of the quantum bits to external influences (a simple fart can kill the system) and the decoherence problem when you add qubits to the CPU.

Yes, I'm into quantum physics too...
What?!: "the sensitivity of the quantum bits to external influences (a simple fart can kill the system)"
LOL...please tell me you substituted 'fart' for something more technical?
 
What?!: "the sensitivity of the quantum bits to external influences (a simple fart can kill the system)"
LOL...please tell me you substituted 'fart' for something more technical?

Ha ha, sorry for that! Here's something more technical for you: even cosmic radiation or microwaves (the same kind used to heat your food) can break the entanglement between particles, and that's a humongous problem right now. A quantum CPU is built using particle pairs in an entangled configuration...
 
Ha ha, sorry for that! Here's something more technical for you: even cosmic radiation or microwaves (the same kind used to heat your food) can break the entanglement between particles, and that's a humongous problem right now. A quantum CPU is built using particle pairs in an entangled configuration...
All I can say is 'whew!' Yes, I understand (vaguely) that quantum physics is completely unlike the classical kind we seemingly experience at our level of life (the non-subatomic kind). And, not entirely joking here: so what about thoughts... will observation affect quantum CPU functionality? I.e., how far away are we from practical application in consumer-facing devices?

If you help design CPUs, then you know that what makes the logic work is how the connections between transistors, registers, ALUs, and CUs are built; the more complex the connections, the harder they are to shrink, and the more crosstalk problems you have, even between metal layers, not only in the silicon substrate. I already explained this in a previous post.

Another point, which I've already mentioned, is the old baggage the x86 architecture carries, to the detriment of efficiency, something that ARM doesn't carry. x86 CPUs need to be backwards compatible with software and OSes from the '80s or even earlier. And of course there's VT-x (hardware virtualization support) included in x86-64 CPUs, which ARM doesn't have.

Let me add, to finish, that I'm not an Intel zealot. I know Intel chips are not the most efficient here; they are the big truck, but their customers ask them to be backwards compatible and a bit of a jack of all trades. I would love to see a Mac running an in-house Apple CPU; it would help most apps run better. More complex software is a different issue, and it would need to be rebuilt almost from the ground up to make full use of ARM's advantages.
Another question for you, about virtualization. Since ARM chips don't have this (right?) — would this be a disadvantage for most consumers using laptops and desktops with general non-scientific software? And why can't ARM and/or Apple design virtualization methods without infringing on Intel's patents? Isn't this done all the time in other areas (including phone and computer manufacturing)? There is usually more than one way to achieve the same end.
 
All I can say is 'whew!' Yes, I understand (vaguely) that quantum physics is completely unlike the classical kind we seemingly experience at our level of life (the non-subatomic kind). And, not entirely joking here: so what about thoughts... will observation affect quantum CPU functionality? I.e., how far away are we from practical application in consumer-facing devices?

Another question for you, about virtualization. Since ARM chips don't have this (right?) — would this be a disadvantage for most consumers using laptops and desktops with general non-scientific software? And why can't ARM and/or Apple design virtualization methods without infringing on Intel's patents? Isn't this done all the time in other areas (including phone and computer manufacturing)? There is usually more than one way to achieve the same end.

ARM supports virtualization.
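Since VT-x / AMD-V keeps coming up: for what it's worth, on a Linux x86 machine you can see whether the CPU advertises hardware virtualization by looking at the flags in /proc/cpuinfo ("vmx" for Intel VT-x, "svm" for AMD-V). A minimal sketch, Linux- and x86-only; ARM exposes its virtualization support differently (the EL2 hypervisor exception level), so this particular check doesn't apply there.

[CODE=python]
# Minimal sketch: report whether an x86 Linux CPU advertises hardware
# virtualization. "vmx" = Intel VT-x, "svm" = AMD-V. Linux/x86 only.
def hw_virt_flags(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                return {"vmx": "vmx" in flags, "svm": "svm" in flags}
    return {"vmx": False, "svm": False}

if __name__ == "__main__":
    # e.g. {'vmx': True, 'svm': False} on a VT-x capable Intel CPU
    print(hw_virt_flags())
[/CODE]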
 
All I can say is 'whew!' Yes, I understand (vaguely) that quantum physics is completely unlike the classical kind we seemingly experience at our level of life (the non-subatomic kind). And, not entirely joking here: so what about thoughts... will observation affect quantum CPU functionality? I.e., how far away are we from practical application in consumer-facing devices?

First, sorry if I implied I work on quantum computers; I don't. I just follow the latest developments. I'm more into the theoretical side of quantum physics.

Here's a small, simplified picture of how quantum computers work:

Let's say you are looking for a friend in a nearby city, but you don't know the address, so you have to go asking house by house whether your friend lives there. The classical way computers work is to ask whether John Doe lives at the first house you visit, and if the answer is no, move to the next house and ask again. On a classical PC you can run several searches in parallel, but you are constrained by the PC's capability, whether that's 100 parallel searches or 1,000. In a quantum computer you ask all the houses in the city at the same time and get the answer in just one operation. But for that you need a mathematical algorithm to read out the quantum computer's answer, because in quantum physics measuring something changes the result, i.e. the quantum state. For that you need a classical computer.

So, in brief: so far not very useful for home users. And for now you will always need a classical computer working together with the quantum computer.
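For anyone who wants rough numbers on the house-hunting analogy: the standard formalization of quantum search is Grover's algorithm, which needs about (pi/4)*sqrt(N) "questions" instead of the classical average of N/2 — not literally a single operation, but still a huge win for large N. The toy script below only simulates the amplitude arithmetic on an ordinary CPU (numpy assumed); it is not a real quantum computation.

[CODE=python]
import numpy as np

# Toy classical simulation of Grover's search over N "houses".
# Classical search: on average N/2 door-knocks. Grover: about (pi/4)*sqrt(N)
# oracle queries. This just simulates the amplitude math on a normal CPU.

N = 1024        # number of houses
target = 273    # the house where the friend lives (unknown to the searcher)

amps = np.full(N, 1 / np.sqrt(N))   # uniform superposition over all houses
iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))

for _ in range(iterations):
    amps[target] *= -1              # oracle: mark the right house
    amps = 2 * amps.mean() - amps   # diffusion: inversion about the mean

print(f"Grover queries: {iterations}, classical average: {N // 2}")
print(f"Probability of measuring the right house: {amps[target]**2:.3f}")
[/CODE]

With N = 1024 this takes 25 iterations and ends with the correct house measured with probability close to 1, versus roughly 512 door-knocks on average for the classical search.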

As for virtualization, cmaier already answered you, and he surely knows more than I do in that area.
 