
leman

macrumors Core
Oct 14, 2008
19,302
19,284
Do Intel and AMD provide a neural engine in their chips? I thought I saw something saying they had neural instructions. A neural engine sounds more like a big set of CISC instructions.

Intel has their own matrix extensions, comparable to the Apple AMX extensions from what I can gather, but I don't have enough information to compare the two implementations. The Apple NPU is something else entirely though and should be much faster (and more energy-efficient) for what it's designed to do.
 
  • Like
Reactions: pshufd

pshufd

macrumors G3
Oct 24, 2013
9,967
14,446
New Hampshire
Intel has their own matrix extensions, comparable to the Apple AMX extensions from what I can gather, but I don't have enough information to compare the two implementations. The Apple NPU is something else entirely though and should be much faster (and more energy-efficient) for what it's designed to do.

Looks like AMX tile instructions are going into Xeon CPUs this year. Did Intel copy Apple's extensions? I've wanted matrix instructions on x86 for a while (see the interest from my username), though I may just move my architecture curiosity over to the M1. One problem with using these instructions is that they aren't in consumer CPUs yet, and I didn't see an indication as to when they would get there. Intel has taken a year or two to move vector instructions down to consumer CPUs in the past.
 

leman

macrumors Core
Oct 14, 2008
19,302
19,284
Looks like AMX tile instructions are going into Xeon CPUs this year. Did Intel copy Apple's extensions?

The basic design does look similar, and Intel is using the same coprocessor architecture as Apple. But did they copy it? These are matrix instructions; the design space is only so large :)
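For anyone curious what Intel's side of this looks like in practice, the tile interface is at least publicly documented through compiler intrinsics. Below is a rough sketch of an int8 tile multiply (my own illustration, not benchmarked), assuming a Sapphire Rapids-class CPU, a compiler with -mamx-tile and -mamx-int8, and the Linux arch_prctl permission request that I've left out:

Code:
// Rough sketch, not production code: Intel's documented AMX "tile" interface
// via the immintrin.h intrinsics. The OS must allow AMX state (on Linux the
// process first requests it with arch_prctl; omitted here for brevity).
#include <immintrin.h>
#include <cstdint>

// 64-byte tile configuration block, laid out per Intel's documentation.
struct TileConfig {
    uint8_t  palette_id;
    uint8_t  start_row;
    uint8_t  reserved[14];
    uint16_t colsb[16];   // bytes per row for each tile register
    uint8_t  rows[16];    // rows for each tile register
};

// C (16x16 int32) += A (16x64 int8) * B (64x16 int8, pre-packed in VNNI layout)
void amx_int8_matmul_16x16(const int8_t* A, const int8_t* B, int32_t* C) {
    TileConfig cfg{};
    cfg.palette_id = 1;
    cfg.rows[0] = 16; cfg.colsb[0] = 64;  // tmm0: accumulator, 16 x 16 int32
    cfg.rows[1] = 16; cfg.colsb[1] = 64;  // tmm1: A tile, 16 x 64 int8
    cfg.rows[2] = 16; cfg.colsb[2] = 64;  // tmm2: B tile, packed 16 x 64 int8
    _tile_loadconfig(&cfg);

    _tile_zero(0);              // clear the accumulator tile
    _tile_loadd(1, A, 64);      // load A with a 64-byte row stride
    _tile_loadd(2, B, 64);      // load the pre-packed B tile
    _tile_dpbssd(0, 1, 2);      // int8 dot-products accumulated into int32
    _tile_stored(0, C, 64);     // write the 16x16 int32 result back
    _tile_release();            // hand the AMX state back to the OS
}

Apple's AMX presumably has a similar load / multiply-accumulate / store flow; there just aren't public intrinsics to spell it out.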

I've wanted matrix instructions on x86 for a while (see the interest from my username), though I may just move my architecture curiosity over to the M1.

The caveat is that Apple's AMX is still hidden from the user. So if you want to play around with some assembly, you'll have to hack your way through, and your code won't be portable between Apple devices.
 

thingstoponder

macrumors 6502a
Oct 23, 2014
914
1,100
Looks like TSMC will produce 3nm chips for Intel laptops and server CPUs. And the rumours are that they have a bigger contract than Apple, so Apple will only use their 3nm chips for the iPads.

Going from 14nm --> 3nm will yield huge performance and energy-efficiency gains for Intel, to the point where you might have to wonder if going for their iOS-based chips was a wise choice.

Intel could be back sooner than most people expect.
They’re not “going from 14nm”. They’ve been on 10nm for a while with laptops, which is what these will be. Even the last Intel Air and 13” Pro were 10nm. And Intel's 10nm is about equivalent to TSMC 7nm. Not nearly the leap you’re making it out to be.

And no. Just because they get to an equivalent node doesn’t mean Apple switching to their own silicon is pointless. Who cares if Intel is on the same node? It's only one factor in Apple going to their own chips.
 
  • Like
Reactions: 09872738

pshufd

macrumors G3
Oct 24, 2013
9,967
14,446
New Hampshire
The basic design does look similar, and Intel is using the same coprocessor architecture as Apple. But did they copy it? These are matrix instructions; the design space is only so large :)


The caveat is that Apple's AMX is still hidden from the user. So if you want to play around with some assembly, you'll have to hack your way through, and your code won't be portable between Apple devices.

That's a shame. I guess Apple is providing APIs that will work on multiple releases of the architecture. Pretty convenient for developers but not so much for playing with to see what's under the hood. I've spent a lot of time with the Intel Architecture manuals in the past. It's possible that there is no Apple Silicon equivalent publicly available.
 

leman

macrumors Core
Oct 14, 2008
19,302
19,284
That's a shame. I guess Apple is providing APIs that will work on multiple releases of the architecture. Pretty convenient for developers but not so much for playing with to see what's under the hood. I've spent a lot of time with the Intel Architecture manuals in the past. It's possible that there is no Apple Silicon equivalent publicly available.

Apple AMX has been partially reverse-engineered, so you can definitely play around with it if you want. It's just not really something for production. I think it makes sense that they keep these implementation details internal; it allows them to freely experiment and introduce breaking changes without affecting the software.
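For completeness, the sanctioned route is Accelerate, which is exactly that abstraction: you call a BLAS routine and Apple decides which silicon runs it (the matrix blocks, by all appearances, for sizable GEMMs, though Apple doesn't promise that anywhere). A minimal sketch, with the sizes purely illustrative:

Code:
// Build on macOS with: clang++ gemm.cpp -framework Accelerate
#include <Accelerate/Accelerate.h>
#include <vector>
#include <cstdio>

int main() {
    const int n = 512;
    std::vector<float> A(n * n, 1.0f), B(n * n, 2.0f), C(n * n, 0.0f);

    // C = 1.0 * A * B + 0.0 * C, row-major, no transposes.
    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                n, n, n,
                1.0f, A.data(), n,
                      B.data(), n,
                0.0f, C.data(), n);

    std::printf("C[0] = %f\n", C[0]);  // expect 1024 = 512 * (1 * 2)
    return 0;
}

Not as much fun as poking the coprocessor directly, but it keeps working across hardware revisions, which is the whole point.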
 

pshufd

macrumors G3
Oct 24, 2013
9,967
14,446
New Hampshire
Apple AMX has been partially reverse-engineered, so you can definitely play around with it if you want. It's just not really something for production. I think it makes sense that they keep these implementation details internal; it allows them to freely experiment and introduce breaking changes without affecting the software.

The idea with playing around with assembler or machine code is to find a use that the API designers didn't think of.
 

09872738

Cancelled
Feb 12, 2005
1,270
2,124
They’re not “going from 14nm”. They’ve been on 10nm for a while with laptops, which is what these will be. Even the last Intel Air and 13” Pro were 10nm. And Intel's 10nm is about equivalent to TSMC 7nm. Not nearly the leap you’re making it out to be.

And no. Just because they get to an equivalent node doesn’t mean Apple switching to their own silicon is pointless. Who cares if Intel is on the same node? It's only one factor in Apple going to their own chips.
Plus: the different node certainly helps, but Intel is still not going to keep up with Apple in terms of power efficiency. The node advantage contributes just a fraction to Apple's advantage there.
 
Last edited:
  • Like
Reactions: JMacHack

pshufd

macrumors G3
Oct 24, 2013
9,967
14,446
New Hampshire
Plus: the different node may help, but Intel is not going to keep up with Apple in terms of power efficiency. The node advantage contributes just a fraction to Apple's advantage there.

Apple didn't push the pain of dropping 32-bit support for nothing. That block diagram of CPU functionality for the M1 is impressive and may have Intel rethinking CPU architecture.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,311
3,902
Looks like TSMC will produce 3nm chips for Intel laptops and server CPUs. And the rumours are that they have a bigger contract than Apple, so Apple will only use their 3nm chips for the iPads.

A bigger contract for wafers in 2H22 isn't necessarily bigger over the longer term into 2023-24.

It's also not a "slam dunk" that these are high-end laptop chips (or high-end, i.e., large, Xeon chips).

First, Intel posted a job listing about putting QAT into Atom and Xeon designs for both Intel and TSMC.



If Intel took Gracemont (or Tremont) and did a port to TSMC 3nm, they would likely have something that would blunt the path Qualcomm was on before they bought Nuvia. Intel also had a major 2019-2020 FUBAR with the planned solution for 5G base station chips [a follow-on to the Atom C3000 and a cousin of Xeon D].

Like some other non-SuperFin 10nm products, that didn't roll out well.

Intel also had TSMC graphics already in the pipeline for 2021 (TSMC 6nm). Xe-HPG (DG2) shrunk onto 3nm plus some Gracemont cores would probably work fine coupled to a cellular modem at the lower end of the laptop spectrum. Not "ultimate gaming" or "pro" laptops, but there are lots of folks who don't need those two solutions (Chromebooks, mobile office-worker laptops, etc.).


Likewise, 10-20 3nm Gracemont cores (with no GPU or consumer modem) would probably make for a better base station solution than what Intel currently has to compete with the ARM offerings.


It would be a faster track to move Gracemont (or a Gracemont+ with some updated tweaks) to a "foreign" process node than to try to move the bigger Golden Cove core. 8-20 Gracemont cores plus a modest GPU could probably fall into the 90-150mm2 die-size range, which would also make for a decent "pipe cleaner" for 3nm (even more so if Intel uses packaging to put the PCH and I/O largely on another die).

In short, I suspect Intel isn't trying to do an "M1 killer" or an "AMD Zen 4 max mobile killer" here (or attacking AMD EPYC on the high end). 1-2 years ago the "plan" was probably that Intel would use their own 7nm to attack that stuff. Pretty good chance this narrow 3nm target mix was already in flight 10-18 months ago.



Going from 14nm --> 3nm will yield huge performance and energy-efficiency gains for Intel, to the point where you might have to wonder if going for their iOS-based chips was a wise choice.

It is probably not 14nm -> 3nm, but 10nm (& 6nm) -> 3nm, that is the gap. If Intel wants to dominate the laptop dGPU space as they did with iGPUs, they have work to do. Their consumer dGPU product is rolling out on TSMC 6nm. These Atom + (smallish GPU) dies could be very good pipe cleaners for later moving their bigger dGPU dies from 6nm -> 3nm.



Intel could be back sooner than most people expect.

Intel is fighting on a ton of different battlefronts. They are highly unlikely to come back on all fronts soon. Some areas are going to take longer than others.

Where Intel's "everything for everybody" product line up overlaps with the M1 and "M1 bigger die " ( MBP 16" and larger iMac SoCs. ).will likely be a longer problem issue that won't get sorted with this initial 3nm TSMC products.
The major issue that Intel has to sort out is that making everything for everybody means that they are competing with half a dozen other major competitors. It is not just Apple and AMD. They need multiple different fab processes tuned for different markets to create better matches for subset of their extremely broad product line up.
 
  • Like
Reactions: JMacHack

Kpjoslee

macrumors 6502
Sep 11, 2007
416
266
A bigger contract for wafers in 2H22 isn't necessarily bigger over the longer term into 2023-24.

It's also not a "slam dunk" that these are high-end laptop chips (or high-end, i.e., large, Xeon chips).

First, Intel posted a job listing about putting QAT into Atom and Xeon designs for both Intel and TSMC.

If Intel took Gracemont (or Tremont) and did a port to TSMC 3nm, they would likely have something that would blunt the path Qualcomm was on before they bought Nuvia. Intel also had a major 2019-2020 FUBAR with the planned solution for 5G base station chips [a follow-on to the Atom C3000 and a cousin of Xeon D].

Like some other non-SuperFin 10nm products, that didn't roll out well.

Intel also had TSMC graphics already in the pipeline for 2021 (TSMC 6nm). Xe-HPG (DG2) shrunk onto 3nm plus some Gracemont cores would probably work fine coupled to a cellular modem at the lower end of the laptop spectrum. Not "ultimate gaming" or "pro" laptops, but there are lots of folks who don't need those two solutions (Chromebooks, mobile office-worker laptops, etc.).

Likewise, 10-20 3nm Gracemont cores (with no GPU or consumer modem) would probably make for a better base station solution than what Intel currently has to compete with the ARM offerings.

It would be a faster track to move Gracemont (or a Gracemont+ with some updated tweaks) to a "foreign" process node than to try to move the bigger Golden Cove core. 8-20 Gracemont cores plus a modest GPU could probably fall into the 90-150mm2 die-size range, which would also make for a decent "pipe cleaner" for 3nm (even more so if Intel uses packaging to put the PCH and I/O largely on another die).

In short, I suspect Intel isn't trying to do an "M1 killer" or an "AMD Zen 4 max mobile killer" here (or attacking AMD EPYC on the high end). 1-2 years ago the "plan" was probably that Intel would use their own 7nm to attack that stuff. Pretty good chance this narrow 3nm target mix was already in flight 10-18 months ago.

It is probably not 14nm -> 3nm, but 10nm (& 6nm) -> 3nm, that is the gap. If Intel wants to dominate the laptop dGPU space as they did with iGPUs, they have work to do. Their consumer dGPU product is rolling out on TSMC 6nm. These Atom + (smallish GPU) dies could be very good pipe cleaners for later moving their bigger dGPU dies from 6nm -> 3nm.

Intel is fighting on a ton of different battlefronts. They are highly unlikely to come back on all fronts soon. Some areas are going to take longer than others.

Where Intel's "everything for everybody" product lineup overlaps with the M1 and the "bigger die M1" (MBP 16" and larger iMac SoCs) will likely be a longer-term problem that won't get sorted with these initial 3nm TSMC products. The major issue Intel has to sort out is that making everything for everybody means they are competing with half a dozen other major competitors, not just Apple and AMD. They need multiple fab processes tuned for different markets to create better matches for subsets of their extremely broad product lineup.

According to the rumors, Intel's 3nm order is supposedly larger than Apple's 3nm SoC orders for the iPad. I think they might be planning to use TSMC 3nm on Meteor Lake-based compute tiles as well as their own 7nm.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,311
3,902
I doubt that Intel can outbid Apple for foundry access, so it’s probably not that simple.


"...
Total Cash (mrq)22.4B
Total Cash Per Share (mrq)5.55
Total Debt (mrq)35.88B
..
Levered Free Cash Flow (ttm)10.43B
"


Intel isn't broke and cash-poor. Apple is often Scrooge McDuck when it comes to paying contractors. More likely, Apple is simply not buying wafer starts that they don't know they are going to use. Depending upon the status of Intel's next-gen Xe-HPG, they could flip some 3nm wafer starts from CPUs to dGPUs if they come up short on demand for these CPU SoCs.

Intel doesn't need super high volume; they have their own fabs for that. Intel has money and also the ability to size their wafer order to "not too big and not too small". Intel is likely buying enough wafer starts to offset the EUV fabrication tools they don't have (and aren't going to get in the next couple of years).


They don't have to outbid Apple to exclude everyone else. They only need to buy up what is available.
Timing-wise, the iPhone can't be a wafer hog here. Huawei is on the sidelines, and AMD and Nvidia have 5nm digestion and evolution issues (they need to finish the 5nm transition before they can move on).

Also, we still have no idea how Apple's chip lineup will look going forward. There is a good chance that the M1 is a one-off product used to kickstart the entire process and that future models will look very different from their mobile counterparts.

That is quite doubtful. Pretty good chance the M2 is about the same dimensions as the M1 with "better stuff" inside. The baseline core opcodes and functions will likely line up with the mobile counterparts. The "uncore" will be different, but radical core drift? Probably not. The Watch S cores were aligned with the "efficiency" cores. Apple uses common components across multiple products as a standard design tactic: buy more of the same stuff, get bigger discounts and economies of scale.


The M1 SoC package doesn't physically look like the A14 package, but that gap probably isn't likely to grow much more in the future. There will be bigger Mac-only SoCs that won't fit inside an iPad Pro, but a substantive amount of the internals are going to be the same; the differences will be more a matter of scale (and die size).




I can for example see Apple leveraging 3nm in phones and tablets for power efficiency, while keeping the desktop chips on 4nm for a while.

Probably more so because the desktop chips are iterating more slowly. Apple iterated the A10X and A12X on slower cycles than the iPhone chips; the larger the die (or the bigger the package of dies), the more likely it is to iterate slowly.

The iPhones are stuck with the "have to show up in September with something new" problem. Desktops don't necessarily have to do that.
 
  • Like
Reactions: JWSpaceMan

deconstruct60

macrumors G5
Mar 10, 2009
12,311
3,902
According to the rumors, Intel's 3nm order is supposedly larger than Apple's 3nm SoC orders for the iPad. I think they might be planning to use TSMC 3nm on Meteor Lake-based compute tiles as well as their own 7nm.

If that is actually the iPad Pro and not the iPad (Air, mid-range), then it wouldn't be all that hard to outnumber them in the first available volume-production quarter. The rumor isn't about 3nm's entire lifecycle of wafer starts, just the initial volume blocks. The MBA update could be staged later (1-2 quarters downstream).

The mainstream iPad has almost never gotten anything other than a "hand-me-down" SoC, and it is pretty unlikely Apple is changing that up for 3nm. If 4nm has higher volume availability for a Fall product, then Apple would probably want to assign 3nm to a lower-volume product. Since that volume is relatively low, it is going to be far easier for someone else's dies to outnumber it, even more so when there are two SoC lines running versus Apple's one relatively lower-volume product.

If it is some kind of SoC package tile, then a graphics one is more likely than an x86 one. Low-power laptops are less likely to use tiles, though, because tiling consumes incrementally more power; Intel is more likely to go with tiles on higher-power products. (Lakefield is already entering retirement (hopefully Microsoft got Windows 11 optimized for Intel big-little while getting that product off the ground, so Alder Lake won't suck), and even big tile/chiplet fans AMD don't use chiplets on their mobile APUs.)

Pretty good chance Intel is going to 3nm here to get the 20-30% power savings rather than higher clocks (and higher performance). That wouldn't be a good match for Meteor Lake compute tiles.
 

leman

macrumors Core
Oct 14, 2008
19,302
19,284
In the intervening time, Apple may end up buying TSMC. Then where would that leave Intel?

Why would Apple want to do that? It's another — humongous — business to manage, with massive risks. Apple already works very closely with TSMC to develop new nodes, it's a win-win for both parties. Buying TSMC would serve no purpose.
 

thekev

macrumors 604
Aug 5, 2010
7,005
3,343
In the intervening time, Apple may end up buying TSMC. Then where would that leave Intel?

That runs very contrary to their typical actions, which are to offload anything risky. It also seems unlikely that it would be approved.
 

pshufd

macrumors G3
Oct 24, 2013
9,967
14,446
New Hampshire
Intel also had TSMC graphics already in the pipeline for 2021 (TSMC 6nm). Xe-HPG (DG2) shrunk onto 3nm plus some Gracemont cores would probably work fine coupled to a cellular modem at the lower end of the laptop spectrum. Not "ultimate gaming" or "pro" laptops, but there are lots of folks who don't need those two solutions (Chromebooks, mobile office-worker laptops, etc.).

Likewise, 10-20 3nm Gracemont cores (with no GPU or consumer modem) would probably make for a better base station solution than what Intel currently has to compete with the ARM offerings.

It would be a faster track to move Gracemont (or a Gracemont+ with some updated tweaks) to a "foreign" process node than to try to move the bigger Golden Cove core. 8-20 Gracemont cores plus a modest GPU could probably fall into the 90-150mm2 die-size range, which would also make for a decent "pipe cleaner" for 3nm (even more so if Intel uses packaging to put the PCH and I/O largely on another die).

In short, I suspect Intel isn't trying to do an "M1 killer" or an "AMD Zen 4 max mobile killer" here (or attacking AMD EPYC on the high end). 1-2 years ago the "plan" was probably that Intel would use their own 7nm to attack that stuff. Pretty good chance this narrow 3nm target mix was already in flight 10-18 months ago.

I saw some of the Intel dGPU reviews and it looks like they are targeting the low end of the market, kind of like a discrete version of iGPUs. That's really fine, as that is where the volume is. Right now, GT 1030s are in big demand, not because customers want them but because customers can get them at only 2x MSRP. If additional Intel supply gets the market back into balance, I'm all for it. If the Intel dGPUs are far more efficient than the old GT 1030s, 1050s, 1060s, etc., then I'm all for it as well. All of the newer-process chips are high-end chips, so you can't get the more efficient silicon at the low end.

It looks like I will have to remain on x86 for a while longer; there is a lot of software that doesn't run on the M1 and may never. Just because something runs through Rosetta 2 today doesn't mean that Apple won't yank it in two, three, or four years.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,311
3,902
I saw some of the Intel dGPU reviews and it looks like they are targeting the low end of the market, kind of like a discrete version of iGPUs.

I suspect that is the DG1 stuff that is already shipping. What Intel has coming in the Fall is substantively different with DG2.



At the desktop midrange GPU card thermal budget, Intel turns in solid midrange performance. Where they need lots of help is when they try to go to mobile thermals and/or integrated graphics. Moving their GPU tech to 3nm would help significantly there for an iteration on the iGPU implementation: keep the base clocks about the same and take the power and area savings to add more EUs. Crank up the iGPU's memory subsystem aggregate bandwidth and performance would likely get up into the "good enough to compete" range.

Longer term (into 2023), yeah, the rest of what would probably be "DG3" (next-gen Xe-HPG) would roll out on 3nm. Intel's 7nm and follow-on delays likely mean that the graphics wing is probably a TSMC customer for several years to come.

Samsung and AMD are working on an ARM+RDNA2 combo that Intel would have to worry about in the hyper-mobile Windows market. AMD gets some indirect revenue out of that, so they don't need an x86 product to cover that space for a long time. 2-3 years ago that was known to be a potential competitor; Qualcomm was also a known competitor back then. Samsung's fab is behind TSMC, but they are likely to get to volume shipments at higher density sooner than Intel is (they certainly have far, far more EUV steppers than Intel has).

If Intel keeps the x86 core "small" and the iGPU "small", then they would have an easier time jumping onto 3nm early, before they lose the competitive opportunity window for low-to-midrange products (more available, more affordable, and fast enough).

To get their low-power iGPU and dGPU products off the ground and competitive, they should be outsourcing at this point. If they "fail" to get significant enough market penetration, then it is easier to just drop the outsourced capacity. If Intel gets its own fab house in order (iterating quicker and doing a much better job of being an outsourced-capacity provider), then, if those businesses get too "big" for TSMC capacity, Intel can move them in-house over time. Short term, though, they need to be nimble and move quicker to survive to middle age.
 

Yebubbleman

macrumors 603
May 20, 2010
5,832
2,421
Los Angeles, CA
Looks like TSMC will produce 3nm chips for Intel laptops and server CPUs. And the rumours are that they have a bigger contract than Apple, so Apple will only use their 3nm chips for the iPads.

Going from 14nm --> 3nm will yield huge performance and energy-efficiency gains for Intel, to the point where you might have to wonder if going for their iOS-based chips was a wise choice.

Intel could be back sooner than most people expect.
Intel is one of those "too large to fail" kinds of companies. Many people on here commenting on their demise are only looking at it from a Mac user's perspective (which is to say, from the standpoint of their chips being inside a Mac notebook or desktop as opposed to a server or any other kind of PC). Intel is still dominant in the server market. AMD and ARM-based offerings are encroaching, but they still have a LONG way to go to fully challenge Intel's Xeon for the crown. Incidentally, even though AMD's Ryzen is enjoying a much larger market share than Intel would've liked, and even though losing the contract with Apple is huge for Intel, Intel is going to keep selling stuff and people are going to keep buying it.

Apple moving to Apple Silicon for the Mac was probably the best choice considering their M.O. of control over the Mac platform and the Apple ecosystem at large. With Intel, things were even more open than they were in the PowerPC days (hence the existence of Hackintoshes as viable computers), with largely standard parts and firmware. Now, Apple has end-to-end control. Plus, Intel having a larger contract with TSMC doesn't mean much as far as Apple is concerned: Apple only makes so many products; Intel, for sure, makes more.
 

pshufd

macrumors G3
Oct 24, 2013
9,967
14,446
New Hampshire
Intel is one of those "too large to fail" kinds of companies. Many people on here commenting on their demise are only looking at it from a Mac user's perspective (which is to say, from the standpoint of their chips being inside a Mac notebook or desktop as opposed to a server or any other kind of PC). Intel is still dominant in the server market. AMD and ARM-based offerings are encroaching, but they still have a LONG way to go to fully challenge Intel's Xeon for the crown. Incidentally, even though AMD's Ryzen is enjoying a much larger market share than Intel would've liked, and even though losing the contract with Apple is huge for Intel, Intel is going to keep selling stuff and people are going to keep buying it.

Apple moving to Apple Silicon for the Mac was probably the best choice considering their M.O. of control over the Mac platform and the Apple ecosystem at large. With Intel, things were even more open than they were in the PowerPC days (hence the existence of Hackintoshes as viable computers), with largely standard parts and firmware. Now, Apple has end-to-end control. Plus, Intel having a larger contract with TSMC doesn't mean much as far as Apple is concerned: Apple only makes so many products; Intel, for sure, makes more.

A lot of us have x86 programs, so we don't have a practical way to use the M1. Now, if the M2 included a strong x86 core, it could make moving to Apple Silicon practical.
 

Sydde

macrumors 68030
Aug 17, 2009
2,552
7,050
IOKWARDI
A lot of us have x86 programs, so we don't have a practical way to use the M1. Now, if the M2 included a strong x86 core, it could make moving to Apple Silicon practical.
No.

There is no advantage that x86 has that would call for trying to fit an x86 core into any ARM SoC. Nothing can be done better on x86 compared to another architecture, and Rosetta has shown that translation is fairly trivial. M2 does not need a step backward.
 
  • Like
Reactions: dgdosen

pshufd

macrumors G3
Oct 24, 2013
9,967
14,446
New Hampshire
No.

There is no advantage that x86 has that would call for trying to fit an x86 core into any ARM SoC. Nothing can be done better on x86 compared to another architecture, and Rosetta has shown that translation is fairly trivial. M2 does not need a step backward.

I've already read that Apple has hardware in the M1 that speeds up emulation or translation of x86 code, which may be why Rosetta 2 runs so well. I will continue to need x86 for a while, and the M1 hasn't been promising for that.
 

leman

macrumors Core
Oct 14, 2008
19,302
19,284
I've already read that Apple has hardware in the M1 that speeds up emulation or translation of x86 code, which may be why Rosetta 2 runs so well. I will continue to need x86 for a while, and the M1 hasn't been promising for that.

The only thing Apple did was to introduce hardware-level support for x86 memory-ordering semantics (total store ordering), since ARM's default memory model is weaker. Emulating x86 behavior correctly otherwise would require memory barriers everywhere, which would kill performance. But that is all there is to it.
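To make that concrete, here is the textbook "message passing" pattern that separates the two memory models. It's my own sketch; the relaxed atomics stand in for the plain loads and stores a binary translator would emit for ordinary x86 accesses:

Code:
#include <atomic>
#include <thread>
#include <cassert>

std::atomic<int> data{0};
std::atomic<int> flag{0};

void producer() {
    data.store(42, std::memory_order_relaxed);
    flag.store(1, std::memory_order_relaxed);
}

void consumer() {
    while (flag.load(std::memory_order_relaxed) == 0) { /* spin */ }
    // x86 (TSO) hardware never reorders the two stores above or the two loads
    // here, so naively translated plain accesses stay correct. Ordinary ARM
    // allows both reorderings, so this assert can fire unless the translator
    // inserts barriers (or release/acquire) around nearly every access; that
    // is the "memory barriers everywhere" cost. The M1's TSO mode for
    // translated processes sidesteps it. (Compilers may also reorder relaxed
    // atomics, so treat this purely as a model of the hardware question.)
    assert(data.load(std::memory_order_relaxed) == 42);
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
    return 0;
}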

There is obviously some loss in efficiency when translating x86 to ARM, since the optimizer does not have access to the high-level information, there are a few patterns that don't translate well (for example movemask and friends), and it's really bad if you are using an x86 JIT or an interpreter. But adding native x86 support to Apple Silicon is unlikely to improve anything, while it would most certainly mess a lot of things up.
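On the movemask point, here is a concrete example of the mismatch (again just a sketch, with illustrative names): the x86 version is a single instruction, while the AArch64 version uses a common narrowing-shift workaround and still returns a differently shaped mask that the surrounding code has to account for.

Code:
#include <cstdint>

#if defined(__x86_64__) || defined(_M_X64)
#include <emmintrin.h>
// Bit i of the result is set if p[i] == needle (16 bytes at a time).
uint32_t match_mask16(const uint8_t* p, uint8_t needle) {
    __m128i v   = _mm_loadu_si128(reinterpret_cast<const __m128i*>(p));
    __m128i cmp = _mm_cmpeq_epi8(v, _mm_set1_epi8(static_cast<char>(needle)));
    return static_cast<uint32_t>(_mm_movemask_epi8(cmp));  // one instruction: PMOVMSKB
}
#elif defined(__aarch64__)
#include <arm_neon.h>
// NEON has no direct movemask. A common workaround: a narrowing shift packs
// the 16 compare bytes into a 64-bit value with 4 bits per input byte, which
// is enough to test "any match" and to locate the first one (ctz(mask) / 4).
uint64_t match_mask16(const uint8_t* p, uint8_t needle) {
    uint8x16_t v   = vld1q_u8(p);
    uint8x16_t cmp = vceqq_u8(v, vdupq_n_u8(needle));
    uint8x8_t  nib = vshrn_n_u16(vreinterpretq_u16_u8(cmp), 4);
    return vget_lane_u64(vreinterpret_u64_u8(nib), 0);
}
#endif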
 

bobcomer

macrumors 601
May 18, 2015
4,949
3,693
There is no advantage that x86
Just market share and applications. :)

I wouldn't mind an x86 core or two, but I don't think it will happen. What I'm hoping for is some kind of peripheral that would run x86/64 Windows.
 