I'm sure Intel will recover. The US government simply won't let their foundry side go under, as it's otherwise an epic national security problem if the USA is reliant on third parties / other countries for its semiconductor manufacturing.
Good thing jet fighters aren’t power constrained…

Personally I wish they’d solve this problem with more basic research funding rather than by propping up a dinosaur. Better incentives.
 
So Intel is going from a chip maker to a chip orderer. LMAO
Based on this, it’s not clear there is capacity to order against. They’ve gone from world leading fabricator to seeking goodwill and spare capacity.

If they had allowed themselves a little humility in years gone by, maybe they would have worked with TSMC earlier to ensure enough capacity came online to meet Intel’s needs.
 
Why are you blaming Intel but not, say, AMD, Nvidia or Qualcomm? Intel at least is still manufacturing advanced chips in the US. The others outsourced everything years ago because it increased their profits.
Perhaps a reason that we no longer have bleeding-edge chip manufacturing in the US is because of Intel. AMD sold off GlobalFoundries, which eventually didn’t have enough capital to compete; maybe they wouldn’t have had to sell if Intel hadn’t pulled anticompetitive crap on them. Intel also got into a big lawsuit with DEC that ended up with Intel taking control of DEC’s fabs.

Of course, if Intel had shared their technology with others like TSMC did, instead of using it as a competitive advantage, there would be more chips fabbed in the USA at Intel’s foundries. But they thought they would never lose their lead and kept it to themselves. Now they want the US gov to fund them because they are the only US leading-edge chipmaker. It’s almost funny.
 
I suspect it had more to do with Apple wanting more control, maximizing profits and perhaps in the long term merging their Mac line with their iDevices. Other laptop makers had no trouble bringing attractive products to the market in this period. Apple could even have switched to AMD with much less effort than is required for the ARM transition. They would probably have switched to their own CPUs no matter what Intel did.
Actually ex-Intel employee Francois Piednoel stated "The quality assurance of Skylake was more than a problem. It was abnormally bad. We were getting way too much citing for little things inside Skylake. Basically our buddies at Apple became the number one filer of problems in the architecture. And that went really, really bad. When your customer starts finding almost as many bugs as you found yourself, you're not leading into the right place."

As for switching to AMD, their chips had the same problems the x86 architecture as a whole does - x86 runs very hot and the electrical-power-to-computing-power ratio blows goats out of a catapult. Apple also wasn't certain that AMD could meet Apple's demand on top of what AMD already had in the pipeline (i.e. Apple was going to be lucky if they were bottom man on the totem pole). Never mind Apple had already had ARM in iPhones and iPads for about a decade at that point, so why not unify the entire CPU line around ARM - it streamlines the cross coding.

Then you had the fact everybody and his brother is slowly moving to ARM due to its mammoth energy savings (a big part of server cost is AC/cooling systems to deal with the heat the energy gobbling x86 CPUs produce.)

I don't see x86 disappearing for many years, but in this increasingly energy-consumption-aware society (especially if you have metered electricity) I think it is likely heading for a decline.
 
As for switching to AMD, their chips had the same problems the x86 architecture as a whole does - x86 runs very hot and the electrical-power-to-computing-power ratio blows goats out of a catapult. Apple also wasn't certain that AMD could meet Apple's demand on top of what AMD already had in the pipeline (i.e. Apple was going to be lucky if they were bottom man on the totem pole). Never mind Apple had already had ARM in iPhones and iPads for about a decade at that point, so why not unify the entire CPU line around ARM - it streamlines the cross coding.

At the end of the day the phone, tablet and laptop markets were always going to converge and Intel / AMD simply didn't have anything competitive with ARM (especially Apple's extended variant) in that space.

Apple are never going back to x86, whether it is intel or AMD. And jumping from intel to AMD would be just exchanging one third party vendor they can't control for another.

They were already building their own SOCs for iPhone, iPad, watch, etc. so re-using that IP by just linking more cores of the same type together and gaining a common software platform was a no brainer.

Even if Intel didn't have this massive stumble, the switch would have happened.

But... you can tell a lot of the last 5 years of Mac design was likely decided in advance based on much more efficient processors that Intel promised to have available on intel 10nm (which was delayed by - you guessed it - about 4-5 years, and when delivered was... crap: late and underperforming).

From the MacBook Air, to the MacBook Pro, through the trashcan Mac Pro - those products were likely designed with perhaps half of their actual power consumption in mind, based on intel roadmap promises from, say, 2010.
 
I’ve read all these posts regarding chip density & makers, but IMHO there is one aspect that is somewhat missing: the instruction set. Apple codes for its own RISC designs, everything in-house, using TSMC as a partner. Intel/Microsoft are stuck with x86 CISC chips because of many decades of total success in nearly every computing use case of the past. Intel & Microsoft will likely need to figure out a way to dump CISC & x86 and move on. Until then, even if they shrink their chips, they are going to run hotter, be more power hungry, and likely slower in the long run. It’s not just about size, it’s how you use it 😉
 
Intel vs. AMD.. Intel vs. ARM vs. TSMC... Apple vs. Samsung.. WTF cares? The fanboyism is just tiresome.

I'd be much more worried about China re-establishing dominance over Taiwan and further disrupting the global chip market. US better start getting ramped up with Chip fab capability ASAP.
 
Intel vs. AMD.. Intel vs. ARM vs. TSMC... Apple vs. Samsung.. WTF cares? The fanboyism is just tiresome.

I'd be much more worried about China re-establishing dominance over Taiwan and further disrupting the global chip market. US better start getting ramped up with Chip fab capability ASAP.
Nationalism is just as tiresome as fanboyism. If you’re really tired of fanboyism, WTH are you doing on MacRumors?
 
I’ve read all these posts regarding chip density & makers, but IMHO there is one aspect that is somewhat missing: the instruction set. Apple codes for its own RISC designs, everything in-house, using TSMC as a partner. Intel/Microsoft are stuck with x86 CISC chips because of many decades of total success in nearly every computing use case of the past. Intel & Microsoft will likely need to figure out a way to dump CISC & x86 and move on. Until then, even if they shrink their chips, they are going to run hotter, be more power hungry, and likely slower in the long run. It’s not just about size, it’s how you use it 😉
Microsoft has been trying to move to ARM for a while now with their Windows on ARM and just couldn't get the translation of x86 to ARM to work at a decent speed. Then Apple showed that with the right coding an x86-to-ARM translator could perform reasonably. It also showed Microsoft had a long way to go to get their ARM code up to snuff, as Apple's M1 was nearly 2x as fast at running Windows on ARM, and that was under virtualization. Problem is, Microsoft has a license agreement with Qualcomm which limits what they can officially allow Windows on ARM to run on.
 
Intel vs. AMD.. Intel vs. ARM vs. TSMC... Apple vs. Samsung.. WTF cares? The fanboyism is just tiresome.

I'd be much more worried about China re-establishing dominance over Taiwan and further disrupting the global chip market. US better start getting ramped up with Chip fab capability ASAP.
China has been saber rattling about re-establishing dominance over Taiwan for decades now. TSMC is mainly owned by foreign investors, so trying to do anything against Taiwan would be political and economic suicide. TSMC already has a few fabs outside of Taiwan, with another being built in the US (they have one in Washington state).

COVID showed the major issue with having so much being done overseas. Also, for socio-political reasons, India is looking more promising than China as a trade partner. China's one-child policy left them with the same issue the majority of first-world nations are headed for - far fewer new workers to replace those that retire.
 
Intel vs. AMD.. Intel vs. ARM vs. TSMC... Apple vs. Samsung.. WTF cares? The fanboyism is just tiresome.

I'd be much more worried about China re-establishing dominance over Taiwan and further disrupting the global chip market. US better start getting ramped up with Chip fab capability ASAP.

It's not fanboyism. It's demonstrated product leadership due to the laws of physics.
 
Then Apple showed that with the right coding an x86 to ARM translator could perform reasonably.
To be fair, the M1 has a level of hardware assistance with x86 emulation built into the chip that other ARM products do not have.

No, it doesn't just execute x86 natively (that would be insane and defeat the purpose of ditching x86), but it does have some features that help.
 
At the end of the day the phone, tablet and laptop markets were always going to converge and Intel / AMD simply didn't have anything competitive with ARM (especially Apple's extended variant) in that space.
This is why I say that x86 is nearing its end. It has a very bloated instruction set and just can't hold a candle to the energy use of ARM.

Looking around I found this when Apple was going to PowerPC (2003-07-09): "x86 CPUs generate a great deal of heat because they are pushed to give maximum performance but because of their inefficient instruction set this takes a lot of energy. In order to compete with one another AMD and Intel will need to keep upping their clock rates and running their chips at the limit, their chips are going to get hotter and hotter."

x86 has been having issues with Amdahl’s Law for a while now, but Apple needed an instruction set the masses used; thanks to the dumpster fire the company had become in the mid 1990s, and with PowerPC effectively dead as a meaningful CPU, that instruction set was x86.
 
PowerPC had effectively died as a meaningful CPU

That's a little unfair really.

Power is still going strong in certain circles, but back when Apple was involved, they wanted laptop CPUs (like it or not, laptops are the bulk of Apple's computer sales) whereas the other partners were chasing mainframe/datacenter workloads.

Trying to adapt something like the G5 to both was just not feasible at the time.

That, and the fact that intel did a REALLY good job with the original Core series processors in terms of power efficiency, and most of the market being on x86, made it just not feasible to continue with PowerPC.
 
To be fair, the M1 has a level of hardware assistance with x86 emulation built into the chip that other ARM products do not have.

No, it doesn't just execute x86 natively (that would be insane and defeat the purpose of ditching x86), but it does have some features that help.
First I've heard of this "hardware based x86 emulation" in the M1. From everything I have read (from people who actually knew what in the Sam Hill they were talking about), the M1 needs Rosetta 2's software translation (which it has to download) to deal with x86 code.
 
First I've heard of this "hardware based x86 emulation" in the M1. From everything I have read (from people who actually knew what in the Sam Hill they were talking about), the M1 needs Rosetta 2's software translation (which it has to download) to deal with x86 code.

It's not hardware emulation or running the code directly, like I said.



Try reading through that thread (and others).

It's a minor tweak, but it's something generic ARM does not have, and it helps massively.


It's not just "Apple code rosetta good, Microsoft can't".

They also have hardware decisions made to help it run well.
 
First I've heard of this "hardware based x86 emulation" in the M1. From everything I have read (from people who actually knew what in the Sam Hill they were talking about), the M1 needs Rosetta 2's software translation (which it has to download) to deal with x86 code.
ARM has a so-called "weak memory model", while x86 has a "strong memory model". The M1 has an option to switch between these memory models (for x86 emulation). This means Apple's x86 translator doesn't need to take differences between the memory models into account.

[attached image: weak vs. strong memory model comparison table]
 
  • Like
Reactions: throAU
ARM has a so-called "weak memory model", while x86 has a "strong memory model". The M1 has an option to switch between these memory models (for x86 emulation). This means Apple's x86 translator doesn't need to take differences between the memory models into account.

[attached image: weak vs. strong memory model comparison table]
Here is what I read somewhere: "No, Apple doesn't run the translated code with a weak memory model; Rosetta 2 toggles the total store ordering (which the M1 supports) on when running its code." I've said it before and I will say it again: emulation is not translation. SheepShaver is an emulator (it emulates non-Intel Macs on Intel hardware) while WINE, Rosetta, and Rosetta 2 are all translators. It annoys me no end how people keep confusing the two.
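A toy sketch of that emulation-vs-translation distinction, using a hypothetical one-instruction "ISA" (nothing like the real internals of SheepShaver or Rosetta 2): an emulator re-decodes every guest instruction on every run, while a translator decodes once, up front, and produces host code.

```cpp
#include <cstdint>
#include <functional>
#include <vector>

// Hypothetical guest instruction: opcode 0 = add immediate to an accumulator.
struct Insn { uint8_t op; int imm; };

// An EMULATOR interprets guest instructions one at a time, every time.
int emulate(const std::vector<Insn>& prog) {
    int acc = 0;
    for (const Insn& i : prog)
        if (i.op == 0) acc += i.imm;   // decode + dispatch on every run
    return acc;
}

// A TRANSLATOR converts the guest program into host code once, then runs
// the result at native speed (modeled here as a C++ closure).
std::function<int()> translate(const std::vector<Insn>& prog) {
    int total = 0;
    for (const Insn& i : prog)
        if (i.op == 0) total += i.imm; // decoding happens only once
    return [total] { return total; };  // "host code": no decoding left to do
}
```

Both produce the same answer; the difference is when the decoding work happens, which is why translation can approach native speed.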

 
This is why I say that x86 is nearing its end. It has a very bloated instruction set and just can't hold a candle to the energy use of ARM.

Looking around I found this when Apple was going to PowerPC (2003-07-09): "x86 CPUs generate a great deal of heat because they are pushed to give maximum performance but because of their inefficient instruction set this takes a lot of energy. In order to compete with one another AMD and Intel will need to keep upping their clock rates and running their chips at the limit, their chips are going to get hotter and hotter."

X86 has been having issues with Amdahl’s Law for a while now but Apple need an instruction set that the masses used thanks to the dumpster fire the company had become in the mid 1990s and PowerPC had effectively died as a meaningful CPU and that instruction set was x86.

As I discussed in detail at another site, the variable-length instructions in x86, and some other quirks, exact a performance-per-watt penalty throughout the CPU.
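A toy model of that penalty (the length function below is made up; a real x86 decoder must parse prefixes, opcode, ModRM/SIB, displacement and immediate bytes): with fixed-width instructions every instruction boundary is known up front, so a wide decoder can work on many instructions in parallel, while with variable-width instructions each boundary depends on decoding the previous instruction first.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// With fixed 4-byte instructions (AArch64-style), the start of instruction
// N is simply N*4 -- independent of the instruction contents.
std::vector<size_t> fixedWidthStarts(size_t bytes) {
    std::vector<size_t> starts;
    for (size_t off = 0; off + 4 <= bytes; off += 4)
        starts.push_back(off);
    return starts;
}

// Hypothetical length function standing in for a real x86 decoder.
size_t toyLength(uint8_t firstByte) { return 1 + (firstByte % 5); }

// With variable-length instructions (x86-style), finding instruction N+1
// requires decoding instruction N first -- a serial dependency that makes
// wide parallel decode much harder.
std::vector<size_t> variableWidthStarts(const std::vector<uint8_t>& code) {
    std::vector<size_t> starts;
    size_t off = 0;
    while (off < code.size()) {
        starts.push_back(off);
        off += toyLength(code[off]);
    }
    return starts;
}
```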

First I heard of this "hardware based x86 emulation" in the M1. From everything I have read (who actually knew what the Sam Hill they were talking about) the M1 needs Rosetta 2 software translation (which it has to download) to deal with x86 code.
It’s a minor addition to the CPU to simplify memory ordering (for when multiple cores are accessing memory - how do you keep the reads and writes in the correct sequence - or, more accurately, in the sequence that x86 apps expect).
 
Wine is not a translator. It’s an implementation of Win32.
WINE is not "an implementation of Win32."

"Instead of simulating internal Windows logic like a virtual machine or emulator, Wine translates Windows API calls into POSIX calls on-the-fly, eliminating the performance and memory penalties of other methods and allowing you to cleanly integrate Windows applications into your desktop." WINE HQ webpage.

Rosetta/Rosetta 2 worked/works the same way, just at a different level (i.e. it catches a call and turns it into something the CPU can use).

It’s a minor addition to the CPU to simplify memory ordering (for when multiple cores are accessing memory - how do you keep the reads and writes in the correct sequence - or, more accurately, in the sequence that x86 apps expect).
Unless I am missing something, that sounds like endianness, which SheepShaver and Rosetta handled in software without some extra piece of hardware.

In fact, an interesting issue could happen under certain conditions. Back then Apple was still using a resource fork, and there would be this hiccup where the two labels that identified a file would get flipped.

So APPL; WILD would get turned into WILD; APPL or, if things really messed up, DLIW; LPPA.

IIRC the workaround was to keep the file in .hqx format and expand it, as dragging and dropping between the two did not work... at all.
 
From the WINE HQ webpage: "Instead of simulating internal Windows logic like a virtual machine or emulator, Wine translates Windows API calls into POSIX calls on-the-fly, eliminating the performance and memory penalties of other methods and allowing you to cleanly integrate Windows applications into your desktop."

WINE is not "an implementation of Win32"; it is a translator. The above expressly explains that.

Unless I am missing something, that sounds like endianness, which SheepShaver and Rosetta handled in software without some extra piece of hardware. In fact, an interesting bug could happen under certain conditions. Back then Apple was still using a resource fork, and when one of these translators had a hiccup the two labels would get flipped. So APPL; WILD would get turned into WILD; APPL or, if the translator really messed up, DLIW; LPPA.

No, endianness is the order of the bytes within a word. And ARM supports either big- or little-endianness, so Apple didn't have to do anything there.

The issue here is what happens when you are writing and reading memory from multiple cores. If core A writes to address X and then core B writes to address X and core C is going to read from address X, the issue of which core writes first and which writes second becomes important.
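A minimal C++ sketch of why that ordering matters, using std::atomic orderings as a stand-in (relaxed roughly models weak ARM ordering; the release/acquire pair models the ordering x86 gives program-order stores for free, which is what the M1's TSO mode provides in hardware):

```cpp
#include <atomic>
#include <thread>

// "Message passing" pattern. On x86, stores become visible to other cores
// in program order, so once `flag` is seen, `data` is visible too. On
// weakly ordered ARM the two stores may become visible out of order, so a
// faithful x86 translator must either emit barriers (modeled here by the
// release/acquire pair) or run with the hardware switched into a TSO mode.

std::atomic<int>  data{0};
std::atomic<bool> flag{false};

void producer() {
    data.store(42, std::memory_order_relaxed);
    // On weak hardware an ordering guarantee is needed here; release
    // models what x86 program-order stores provide implicitly.
    flag.store(true, std::memory_order_release);
}

int consumer() {
    // Spin until the flag is visible...
    while (!flag.load(std::memory_order_acquire)) { }
    // ...at which point the acquire/release pairing guarantees data == 42.
    return data.load(std::memory_order_relaxed);
}
```

If both orderings were relaxed, a weakly ordered machine would be allowed to return 0 here, which is exactly the class of bug a naive x86-to-ARM translation would introduce.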
 
From the WINE HQ webpage: "Instead of simulating internal Windows logic like a virtual machine or emulator, Wine translates Windows API calls into POSIX calls on-the-fly, eliminating the performance and memory penalties of other methods and allowing you to cleanly integrate Windows applications into your desktop."

That’s poor use of language on their webpage. Wine includes libraries that are POSIX rewrites of the Windows API SDKs. So when you call a Windows function, it runs the pre-written POSIX version instead. It isn’t translating those functions; it is simply calling functions that are pre-written.
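A hugely simplified, hypothetical sketch of that idea (ToyCreateFileA is made up for illustration; real Wine's CreateFileA handles share modes, security attributes, path conversion, overlapped I/O and much more): Wine ships its own implementation of each Win32 entry point, written against POSIX, and the Windows program simply calls it.

```cpp
#include <fcntl.h>
#include <unistd.h>

using HANDLE = long;                        // stand-in for the Win32 type
const HANDLE INVALID_HANDLE_VALUE = -1;
const unsigned GENERIC_READ  = 0x80000000;  // values from the Win32 headers
const unsigned GENERIC_WRITE = 0x40000000;

// Nothing is "translated" at runtime here: this is just a pre-written
// function whose body does its work through POSIX calls.
HANDLE ToyCreateFileA(const char* path, unsigned desiredAccess,
                      unsigned creation) {
    int flags = 0;
    if ((desiredAccess & GENERIC_READ) && (desiredAccess & GENERIC_WRITE))
        flags = O_RDWR;
    else if (desiredAccess & GENERIC_WRITE)
        flags = O_WRONLY;
    else
        flags = O_RDONLY;
    if (creation == 2 /* CREATE_ALWAYS */)
        flags |= O_CREAT | O_TRUNC;

    int fd = open(path, flags, 0644);       // the actual work is a POSIX call
    return fd < 0 ? INVALID_HANDLE_VALUE : fd;
}
```

The Windows binary links against this library and calls it like any other function, which is why there is no per-call translation overhead.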
 
That’s poor use of language on their webpage. Wine includes libraries that are posix re-writes of Windows API sdks. So when you call a windows function, it runs the posix pre-written version instead. It isn’t translating those functions, it is simply calling functions that are pre-written.
Ah. I see. Well, I figured the people managing WINE's HQ webpage would get the terminology right. Guess that didn't work out too well. :p
 
Ah. I see. Well, I figured the people managing WINE's HQ webpage would get the terminology right. Guess that didn't work out too well. :p

Well, they were translated, just ahead of time. The calls are switched “on the fly,” so in that sense, I guess, it’s “translated on the fly,” but that’s not in the same sense we were talking about.
 