Xerox was going to buy HP, so I think they have money

No, it just meant Xerox was blowing gobs of smoke. They don't have money.

They have about $2B in cash.

HP's market cap is $25-30B

That is an order of magnitude smaller. Xerox was going to borrow a bunch of other people's money to buy HP.

That would have been dumb (even at today's "cheap" borrowing rates), because it would be hard to actually pay that money back without damaging the long-term investment that HP's businesses need. They could buy HP and perhaps kill it in the attempt, but buying a company in order to kill it is kind of goofy.

A possible partial motivation here was to muddy the financial books so much with the merger that nobody would notice Xerox's own problems. Covering up not-so-hot financials with mergers is a good way to dig an even bigger hole in the ground. Perhaps it could work with really good management that could also fix HP at the same time, but Xerox really doesn't present that way.

If you step around the hype of Nvidia stock at the moment, this really isn't a good time for Nvidia to chase after Arm after just acquiring Mellanox. Digesting that first and then moving on to something large would be far more prudent. Nvidia won't necessarily collapse if they pick up Arm, but they are more likely to screw it up by juggling too much unfamiliar stuff at the same time. It is going to be easier for Nvidia to 'hand wave' some justification as to why borrowing that much money might possibly get a return on investment. (Nvidia doesn't have enough cash to buy Arm either: roughly $16B in cash, versus the $32B SoftBank paid for Arm. They will have some tap-dance story about how all the stars line up in perfect orientation.)

If the bids come in too low for Arm, SoftBank may not let go. If bidders can't raise a large enough amount of cash, they might as well borrow "cheap money" too.
 
Interestingly, these same licensing concerns were raised when SoftBank acquired ARM:


Fast-forwarding to today, Apple held talks with SoftBank about acquiring ARM, but they didn't go anywhere:


From what I've read, the ARM ISA licenses are perpetual. If so, Apple has nothing to worry about with its current license. As far as v9 goes, if Apple is concerned NVIDIA might play games, one tack would be for Apple to acquire a perpetual v9 license from SoftBank now, prior to the sale, assuming such licenses are available prior to v9's finalization and release.

Alternately, I wonder if there could be anti-trust provisions built into the acquisition that require NVIDIA to make ARM licenses available to all customers on equivalent terms, to prevent NVIDIA from engaging in anti-competitive behavior.

There is a public interest in the ARM license being readily purchasable, since development of ARM-ISA-based chips promotes competition with x86 and thus improves the health of the computer industry. The US government has shown an interest in promoting the development of ARM-based supercomputers, by commissioning at least one for themselves, in part to facilitate such competition, and in part to see if such supercomputers provide any advantage over existing designs.
 
Considering the apparent animosity Apple has toward NVIDIA, I wonder how comfortable Apple is going to be licensing from them, especially now with desktop processors. I wonder if future Apple Silicon is going to be ARM-less... when you control the whole stack, do you even need the ARM instruction-set anymore?

If the Qualcomm settlement said anything, they can mend any toxic relationship
 
The “highest growth” link above is an article from August 14, 2012. What’s the source of your graph above? Is it only focusing on smartphone sales in the USA? I’m sure Nokia alone sold more than 50K smartphones globally back in 2012.
I think you may have missed my point. I was responding to the assertion that if Intel wanted to, independent of process technology, they could have made x86 competitive in the mobile space. My question was, if that’s true, why did they not want to participate in the biggest growth industry we’ve pretty much ever seen?

It seems they left a lot of money on the table, and allowed two competitors (Arm, TSMC) to gain dominant industry positions, for what was suggested to be a simple adaptation. The rise of TSMC made AMD more competitive in Intel's core market.

The source of the graph was Ars Technica; I don’t see a citation from them to the original data source. The vertical axis is thousands of units. The precise numbers aren’t terribly relevant unless you have a reference that shows the smartphone market growing more slowly than the PC market...
 
I think you may have missed my point. I was responding to the assertion that if Intel wanted to, independent of process technology, they could have made x86 competitive in the mobile space. My question was, if that’s true, why did they not want to participate in the biggest growth industry we’ve pretty much ever seen?...

Actually, Intel's Atom chip was originally competitive in the mobile space, and they did want to participate. What happened is less about technology (and Intel's technological capabilities, or what can be done with x86), and more about economics, existing licensing arrangements ("Qualcomm had been ruthlessly enforcing licensing and purchasing terms that made it effectively impossible for manufacturers to offer Intel-based mobile devices"), and Intel's subsequent business decisions ("There were clearly executives at Intel who understood how critical mobile would be to the company’s long-term future and pushed for aggressive positioning and product ramps. Unfortunately, those efforts were stymied by others who were concerned about the impact Atom and the low-cost devices it was supposed to enable would have on Intel’s primary business."):

 
Actually, Intel's Atom chip was originally competitive in the mobile space, and they did want to participate. ...


And yet, here we are today still without a viable Intel mobile processor. The importance of the mobile market is clear, the negative impact on their core markets of leaving the mobile market unchallenged is clear, Qualcomm's under regulatory constraints, Apple and Samsung are shipping their own mobile processors despite big bad Qualcomm, and Intel is still not making a showing.

Nor is AMD.

It's relatively easy for a tech company to sit back and say "Our technology is better than everyone else's. We'd prove it to you but Qualcomm is mean and we don't feel like playing today. The Great Intel has spoken." If there's billions of dollars on the table and you're faced with an existential threat, but don't do anything about it, then I suspect the challenges are deeper.
 
And yet, here we are today still without a viable Intel mobile processor. ...
I thought the article did address the issues you raised (concerning why Intel hasn't had a significant presence in the mobile market). Nevertheless, it sounds like you still think/suspect the issue hasn't been business/economic, but that it's instead been technological.

Do I understand you correctly and, if so, what do you think the technological issues have been?

For instance, do you think that the x86 ISA is inherently less efficient than the ARM ISA? If it's that, I can say there is no inherent efficiency advantage to ARM over x86.
 
I thought the article did address the issues you raised. Nevertheless, it sounds like you still think/suspect the issue isn't business/economic, but that it's instead technological. Do I understand you correctly and, if so, what do you think the technological issues are?

For instance, do you think that the x86 ISA is inherently less efficient than the ARM ISA? If it's that, I can say there is no inherent efficiency advantage to ARM over x86.

Or is it because you think TSMC's fab process is currently more advanced than Intel's (smaller device sizes -> higher density -> more efficiency)? If it's the latter, I can't speak to that — it's difficult for me to disentangle all the marketing surrounding device sizes. Indeed, a recent IEEE publication tried to address just that:


Summary of the IEEE article in the popular press:

Well, you're kind of coming into the middle of a different conversation and ignoring the context.

What I was responding to was the claim that the ISA is just fine, it's just full of garbage instructions, and that if Intel wanted to they could optimize for power and size. At the risk of repeating myself, you can't say "the ISA is fine, all you have to do is change it", and since Intel apparently hasn't optimized for power and size, that implies they don't want to, which means the problems at Intel are deeper than I thought.

It's pretty widely accepted that TSMC has a process advantage over Intel. Even Intel talks about "regaining leadership" when they reach their 5nm node. Sadly, they expected to return to parity when they start shipping 7nm, which now won't be until 2022 or 2023. They wouldn't use the word "regain" if they didn't feel they'd lost it.

I can say there is no inherent efficiency advantage to ARM over x86.

Can you support that in any way? I keep hearing people make that statement, but I never see it supported.

Are you saying that all ISAs are inherently equally efficient? That seems provably false to me. So once we accept that different ISAs have different efficiencies in terms of energy per operation and silicon for implementation (which are themselves related), then it seems nearly impossible that x86 and Arm are identically efficient.

So, which ISA is likely to be more efficient? Is it likely that an instruction set with its roots in a 1970s 8-bit calculator chip accidentally became the most efficient design?

I'm also taking a bit of a Darwinian view: businesses exist to make money, there's a lot of money to be made in mobile devices while the PC industry has plateaued, Arm dominates the mobile market and Intel hasn't managed to establish themselves there despite years of effort, Intel relies on Arm processors in their power-sensitive Altera SoCs and appears to be using them in the upcoming iteration of Movidius Keem Bay parts. That all strongly suggests that Arm is better adapted to that space.
 
...For instance, do you think that the x86 ISA is inherently less efficient than the ARM ISA? If it's that, I can say there is no inherent efficiency advantage to ARM over x86.
One inherent efficiency advantage of ARM over x86 is the increase in register count, which reduces load/stores and thus power consumption. Another is the much simpler (and thus smaller/less power hungry) instruction decoder.
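A toy example may help make those two points concrete. First, register pressure: the C fragment below is a hypothetical illustration (not from any real codebase) that keeps twenty values live at once. Real compilers reschedule aggressively to shrink live ranges, but once the number of simultaneously live values exceeds the architectural register count (16 integer GPRs on x86-64, 31 on AArch64), some values must be spilled to the stack and reloaded later, and every spill is an extra load/store that costs energy.

Code:
/* Register-pressure sketch (hypothetical example).
   If the twenty loads stay where written, all twenty values are
   live across the final expressions: they fit in a 31-register
   file but not in a 16-register one, so the smaller file spills. */
long combine(const long *p)
{
    long v0  = p[0],  v1  = p[1],  v2  = p[2],  v3  = p[3];
    long v4  = p[4],  v5  = p[5],  v6  = p[6],  v7  = p[7];
    long v8  = p[8],  v9  = p[9],  v10 = p[10], v11 = p[11];
    long v12 = p[12], v13 = p[13], v14 = p[14], v15 = p[15];
    long v16 = p[16], v17 = p[17], v18 = p[18], v19 = p[19];

    /* every value is reused here, keeping all twenty live */
    long evens = v0 * v2 * v4 * v6 * v8 * v10 * v12 * v14 * v16 * v18;
    long odds  = v1 * v3 * v5 * v7 * v9 * v11 * v13 * v15 * v17 * v19;
    return evens - odds;
}

Second, the decoder: AArch64 instructions are a fixed 4 bytes, so the Nth instruction sits at a known offset, while on x86 each instruction's length depends on its own bytes, so finding boundaries is inherently serial work that a wide decoder must spend extra logic to parallelize. The sketch below handles only four real one-byte opcodes; an actual x86 length decoder must also cope with prefixes, ModRM, SIB, displacements, and immediates.

Code:
#include <stdint.h>
#include <stddef.h>

/* Grossly simplified x86 length decode: four real cases only. */
static size_t x86_len(const uint8_t *p)
{
    switch (p[0]) {
    case 0x90: return 1;   /* nop            */
    case 0xC3: return 1;   /* ret            */
    case 0xB8: return 5;   /* mov eax, imm32 */
    case 0xE8: return 5;   /* call rel32     */
    default:   return 1;   /* real x86 needs hundreds more cases */
    }
}

/* Serial: each instruction's length gates finding the next. */
static const uint8_t *nth_insn_x86(const uint8_t *code, int n)
{
    while (n--) code += x86_len(code);
    return code;
}

/* Fixed width: pure arithmetic, trivially done many-wide per cycle. */
static const uint8_t *nth_insn_arm64(const uint8_t *code, int n)
{
    return code + 4 * (size_t)n;
}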
 
I can see that being annoying now for Mac developers in the Intel/AMD era who are still leveraging legacy OpenCL/OpenGL code in their apps, but once everything has been re-created as an Apple Silicon and Metal native app, will it really matter?

You are skipping over the point that Metal is technically not trying to cover all of the same ground that OpenCL and OpenGL did. Metal does nothing with FP64 (in part because iOS apps basically ignore it). That is a gap. When Apple transitions over and kills off OpenCL, that capability disappears. There is high overlap on most datatypes and computational constructs, which can be mapped over from OpenCL to Metal, but there are also gaps.
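To give that FP64 gap concrete flavor, here is a minimal sketch of the kind of kernel that is legal OpenCL C today (the kernel name and shape are invented for illustration, and it assumes the device advertises the cl_khr_fp64 extension). Metal Shading Language has no double type at all, so porting this means dropping to float or emulating 64-bit precision in software.

Code:
/* Hypothetical OpenCL C kernel using 64-bit floats; valid only
   where the device reports the cl_khr_fp64 extension. */
#pragma OPENCL EXTENSION cl_khr_fp64 : enable

__kernel void axpy64(const double alpha,
                     __global const double *x,
                     __global double *y)
{
    size_t i = get_global_id(0);
    y[i] = alpha * x[i] + y[i];   /* full 64-bit precision on the GPU */
}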

Apple's bias toward skipping gaps that iOS suppresses is likely to just get bigger when Apple moves the SoC over to the Mac. There will be an even bigger pool of devices that just skip those features. Shrinking GPU implementation diversity on the Mac is quite likely going to be a context where Apple's groupthink bias grows.

Apple herding developers into writing more explicitly hand-optimized code for Apple's GPU will help Apple Silicon look better in the short term. Long term, though, that probably isn't going to create a broader, more open, general computational language.

OpenGL is similar. Apple is more likely to push portability-focused apps over to 3rd-party libraries with a major focus on portability than to get more folks using Apple's own APIs. Some apps will go up an abstraction level in order not to get dragged down into details. Yes, the folks who provide the library will translate those calls into Metal. But the apps at the lower cost margins will likely just leave, because they don't "have to" drop OpenGL on other platforms.
 
If the Qualcomm settlement said anything, they can mend any toxic relationship

Mended? Apple spent $1B last year to kick Qualcomm to the curb permanently. This is about as mended as the North/South Korean DMZ. More a pause in the war than an end of it.

Apple pragmatically didn't have a choice if it wanted a world-marketable, leading 5G solution. Instead, what Apple got was another log to throw on the "dump Intel" fire, more so than "mended" relations with Qualcomm. Intel's major execution problems have hit Apple harder than the system builders who just buy x86 products from Intel.

If you're trying to draw an analogy with Nvidia, it would be bringing them back temporarily before thoroughly, in every way possible, removing them. Apple is highly unlikely to bring back Nvidia only to get rid of them again, even if Nvidia attaches Arm to itself. If Nvidia starts to engage in the behavior that pissed off Apple, they'll just be walking the plank the way Qualcomm is now. Most of the mobile phone market doesn't want discrete cell modems now. Part of Apple's problem is that they are about the only major player left who wants those. (And Apple doesn't really want them either, if it could put together a better package.)

AMD is a viable alternative to Nvidia GPUs. Apple doesn't necessarily need Nvidia to be competitive over a very broad part of the market. That will cause some friction with the folks who like or need Nvidia GPUs, but lots of very viable work can be done without one. If Intel weren't busy shooting themselves in the foot with a large-caliber weapon, then Apple would have two other viable "big" GPU suppliers. AMD's GPU execution could be better, but they are making progress over time. It isn't at all like the pragmatic "have to go to Qualcomm" situation.
 
For instance, do you think that the x86 ISA is inherently less efficient than the ARM ISA? If it's that, I can say there is no inherent efficiency advantage to ARM over x86.

Can you support that in any way? I keep hearing people make that statement, but I never see it supported.

One inherent efficiency advantage of ARM over x86 is the increase in register count, which reduces load/stores and thus power consumption. Another is the much simpler (and thus smaller/less power hungry) instruction decoder.

Indeed, I can support it :). This is a long post, so I've bolded the highlights for those who just want to scan through it (the following is slightly updated from an earlier post I made on this subject):

To the extent papers have been presented in professional journals or conference proceedings on this subject, the overall conclusion has been that neither ISA (x86 or ARM) is inherently superior, and what really matters is instead the implementation, i.e., the microarchitecture (and, in particular, how well-optimized the microarchitecture is for the use case).

Let's start with what seems to be the most highly cited paper on this subject (unless there are others I've missed), by Blem et al. in 2013, and then check through all the subsequent papers that cited it:

"We find that ARM and x86 processors are simply engineering design points optimized for different levels of performance, and there is nothing fundamentally more energy efficient in one ISA class or the other. The ISA being RISC or CISC seems irrelevant." [emphasis mine]

FROM: E. Blem, J. Menon and K. Sankaralingam, "Power struggles: Revisiting the RISC vs. CISC debate on contemporary ARM and x86 architectures," 2013 IEEE 19th International Symposium on High Performance Computer Architecture (HPCA), Shenzhen, 2013, pp. 1-12, doi: 10.1109/HPCA.2013.6522302.
[https://ieeexplore.ieee.org/abstract/document/6522302]

I then proceeded to do a quick scan through all 176 papers that had cited Blem et al. (https://scholar.google.com/scholar?cites=14820675711934164696&as_sdt=2005&sciodt=0,5&hl=en), to see if any of the citing articles directly addressed this question (and, in particular, to see if any disagreed). I only found two, both of which broadly supported Blem et al.'s conclusion:

1) "Our simulation results suggest that although ARM ISA outperforms RISC-V and X86 ISAs in performance and energy consumption, the differences between ARM and RISC-V are very subtle, while the performance gaps between ARM and X86 are possibly caused by the relatively low hardware configurations used in this paper and could be narrowed or even reversed by more aggressive hardware approaches. Our study confirms that one ISA is not fundamentally more efficient." [emphasis mine]

FROM: M. Ling, X. Xu, Y. Gu and Z. Pan, "Does the ISA Really Matter? A Simulation Based Investigation," 2019 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM), Victoria, BC, Canada, 2019, pp. 1-6, doi: 10.1109/PACRIM47961.2019.8985059.
[https://ieeexplore.ieee.org/abstract/document/8985059]

2) "The difference in performance and power consumption between the studied processors seems to be determined by the intended application rather than by the choice of ISA. In other words, in modern processors, the way the ISA is implemented, that is, the microarchitecture, plays a more significant role in determining performance and power characteristics than ISA." [emphasis mine]

FROM: Chevtchenko, S. F., and R. F. Vale. "A Comparison of RISC and CISC Architectures." resource 2: 4. [No year given.]
[https://pdfs.semanticscholar.org/8977/18e3387690736f132e812d097dc40379ea2c.pdf]

And finally we have this last citing paper. Unlike the three I referenced above, it is not attempting to tease out the effects of ISA specifically, so it doesn't directly address the ISA question. Nevertheless, it is interesting because it gives an overall comparison of specific ARM vs. x86 implementations (which includes ISA, microarchitecture, and other factors [e.g., I/O] as well) for ML/big data:

3) "In this paper, we presented a survey of existing hardware performance benchmark suites that range from evaluation of heterogeneous systems to distributed ML workloads for clusters of servers. From the survey, we selected BigDataBench in order to compare the performance of server-grade ARM and x86 processors for a diverse set of workloads and applications, using real-world datasets that are scalable. We benchmarked a state-of-the-art dual socket Cavium ThunderX CN8890 ARM processor against a dual socket Intel􏰀 Xeon􏰀 processor E5- 2620 v4 x86-64 processor. Initial results demonstrated that ARM generally had slightly worse performance compared to x86 processors for Spark Offline Analytics workloads, and on par or superior performance for Hive workloads. We determined that the ARM server excels over x86 for write heavy workloads. It is worth noting the apparent disk I/O bottleneck of the ARM server when comparing performance results to the x86 server. There are many other BigDataBench workloads that have yet to be tested on ARM, many of which may lead to promising results when provided with larger amounts of disk and network I/O. Moreover, recording the CPU temperatures and power consumptions of these servers may yield even more fruitful results, further promoting the use of ARM in server-grade processing for ML and Big Data applications. [emphasis mine]

FROM:
Kmiec S, Wong J, Jacobsen HA. A Comparison of ARM Against x86 for Distributed Machine Learning Workloads. In: Technology Conference on Performance Evaluation and Benchmarking, 2017 Aug 28 (pp. 164-184). Springer, Cham.
[https://link.springer.com/chapter/10.1007/978-3-319-72401-0_12]
 
Indeed, I can support it :). This is a long post, so I've bolded the highlights for those who just want to scan through it: ...
Remember, though, that implementation is not independent of ISA. I designed many CPUs, both RISC and CISC. Our teams were pretty good (for example, I helped design the world’s fastest PowerPC, the world’s fastest SPARC, and arguably the world’s fastest x86), and we used similar methodologies across the different chips I worked on. Without a doubt, our x86-64 designs were less efficient than, for example, our PowerPC designs or SPARC designs.
 
Remember, though, that implementation is not independent of ISA. ...
I'll try going through those papers again in detail to see if I can find anything that specifically addressed the points you made.

But, broadly speaking, what it sounds like you're saying is that the ISA comparisons those authors (except those of the last paper) were attempting are inherently invalid, since one can't separate the ISA from the microarchitecture. Do I understand you correctly? If so, suppose you were still active in the field, met one or more of those authors during a mixer at an IEEE conference, and mentioned the above to them. What do you think they might say in response? What would be their counterargument?

I'm just guessing here, but might this be the response they'd give: "Yes, you're right that you can't fully separate the microarchitecture from the ISA, but our comparison accounted for that, and even with those unavoidable microarchitecture differences, we still found no inherent efficiency advantage of one over the other. We certainly believe your description of what your team experienced, but that's anecdotal, and may have been specific to your team; we looked at many different attempts by many different teams (etc.)."

My other broad question would be this: They're experts and you are an expert. Often, when experts disagree, this is an indicator that this is an unsettled question in the field (one camp thinks there is no efficiency advantage of one ISA over the other, while another camp thinks there is). Is that the case here and, if so, in your estimation, how large is the former camp vs. the latter?
 
Interestingly, these same licensing concerns were raised when SoftBank acquired ARM:


Fast-forwarding to today, Apple held talks with SoftBank about acquiring ARM, but they didn't go anywhere:


Last time it didn't mean much of anything, because SoftBank is not a direct Arm customer. SoftBank was a customer (partner) of Apple (phones), not on the other side of the supply chain. SoftBank taking Arm licenses 'private' when they don't make any silicon at all makes about zero sense to be 'afraid' of.

As for Apple being in "preliminary talks"... how many companies out there can write a $32B check for cash on relatively short notice? Also, SoftBank would likely get sued (at least by some USA-based investors) if they don't talk to enough folks to get the highest possible price for Arm. Folks almost have to show up at Apple asking for a chunk out of Apple's Scrooge McDuck money pit, because otherwise somebody will hand-wave that they could have struck gold there if they'd just asked nicely.

Apple gets valuable information even if they don't have any serious intention of buying: info on the finances of a company about to be sold, and probably some insights into who might be dropping onto the job market (mergers often mean layoffs). [E.g., if Nvidia buys, are they going to chuck the Arm GPU design team into the street? Apple may want to hire some of those folks, etc.]

SoftBank also hired Goldman Sachs to help round up folks to possibly sell Arm off to. Again, Apple can do a favor swap by listening to GS and SoftBank for a couple of hours on their sales pitch.

There is likely no shortage of companies flashing their crown jewels at Apple every week trying to entice a buyout: venture capital funds trying to unload one of their investments on Apple at some high multiple, Company X that is looking for an exit strategy, etc., etc. Probably more than Apple has time for.


From what I've read, the ARM ISA licenses are perpetual. If so, Apple has nothing to worry about with its current license. As far as v9 goes, if Apple is concerned NVIDIA might play games, one tack would be for Apple to acquire a perpetual v9 license from SoftBank now, prior to the sale, assuming such licenses are available prior to v9's finalization and release.

Apple buying v9 before it is actually defined would open a back door for Nvidia, not close it. If it is basically finished (and just waiting on nice-looking documentation and tutorials, perhaps), fine, but if it is not substantively done, there is not much to buy. Folks keep pointing at "v9", but it's not clear whether that is really a complete new instruction set or a set of new "optional add-ons" (SVE2, TME).


Alternately, I wonder if there could be anti-trust provisions built into the acquisition that require NVIDIA to make ARM licenses available to all customers on equivalent terms, to prevent NVIDIA from engaging in anti-competitive behavior.

If Nvidia buys Arm, then the profits from whatever 'equivalent' licensing charges Nvidia is made to pay will just flow back into Nvidia's coffers anyway. If Nvidia is going to play favorites, that 'favorite' will most likely be itself, not some other company.

Anti-competitive behavior by Nvidia would more likely occur on the GPU 'half' of the Arm IP. For example, Nvidia 'nukes' the Arm GPU and hard-bundles an Nvidia GPU with the CPU for the mobile (and up) processors. There are a limited number of GPU implementors out there in the market now (Arm, Imagination Tech, Qualcomm, maybe Samsung/AMD). Apple doesn't really count, as they won't sell to anyone else.

But making them hand out exactly the same set of licenses they have now, forever, is problematic. If Nvidia doesn't want to be in the SSD-controller core market anymore, antitrust really can't "make them" stay there.


There is a public interest in the ARM license being readily purchasable, since development of ARM-ISA-based chips promotes competition with x86 and thus improves the health of the computer industry.

This really isn't about Arm vs x86 personal computers. There are lots of markets where x86 isn't even present; see MIPS, Power, etc. in embedded systems and microcontrollers. The personal computer market has about zero antitrust traction here with respect to Arm (Arm is currently not the dominant player there).

Trying to classify the Arm license as some public-usage utility has no traction at all under USA law. It probably doesn't have any traction with Japanese or UK law either. Maybe under the EU's "slap big fines on the tech company of the week" law.

Nvidia buying Arm and taking it private for internal usage only wouldn't block folks from signing up with somebody else. There would be a big scramble over who that somebody else was, but Nvidia wouldn't be stopping anyone from moving.
 
I'll try going through those papers again in detail to see if I can find anything that specifically addressed the points you made.

But, broadly speaking, what it sounds like you're saying is that the ISA comparisons those authors (except those of the last paper) were attempting to do are inherently invalid, since one can't separate the ISA from the microarchitecture. Do I understand you correctly?

I think so. I mean, how would you even make such an apples-to-apples comparison? How do you determine whether differences in performance are due to differences in ISA vs. differences in skill level of the people implementing? A lot of academic people assume that CPU designers are fully fungible, or that everyone is using logic synthesis. It ain’t the case. What you are usually looking at is an ARM chip designed by second-rate designers running Synopsys vs. an x86 chip handcrafted at the standard-cell level with large macro blocks designed at the transistor level. There are no CISC chips around that are designed using standard ASIC methodology, so where do these folks get their data points?

If so, suppose you were still active in the field, met one or more of those authors during a mixer at an IEEE conference, and mentioned the above to them. What do you think they might say in response —what would be their counterargument?

I don’t think they’d disagree. Physical design is a 20% factor, ISA is a 20% factor, etc. You can compensate for a bad ISA with good physical design. But if your competitor also has good physical design *and* a good ISA, then what?


I'm just guessing here, but might this be the response they'd give: "Yes, you're right that you can't fully separate the microarchitecture from the ISA, but our comparison accounted for that, and even with those unavoidable microarchitecture differences, we still found no inherent efficiency advantage of one over the other. We certainly believe your description of what your team experienced, but that's anecdotal, and may have been specific to your team; we looked at many different attempts by many different teams (etc.)."

Maybe, but their analysis can’t be very scientific either - the universe of CISC chips to compare with is very small, and all of them are designed very similarly by teams with more or less equivalent capabilities (AMD is better than Intel in terms of physical designers. Intel is maybe better architecturally). Meanwhile most ARM chips are designed using a completely different methodology. Same with most other RISC chips. So how do you separate out what causes what effect? Just by running “idealized” benchmarks? Assume zero gate delays and zero wire delays and say “see! CISC and RISC are equivalent!” Well, that’s nice. Except in the real world gates have delays, wires have delays, there is crosstalk between wires, cache lookups take time, etc. So in the real world there are real implications to your choice of ISA.

My other broad question would be this: They're experts and you are an expert. Often, when experts disagree, this is an indicator that this is an unsettled question in the field (one camp thinks there is no efficiency advantage of one ISA over the other, while another camp thinks there is). Is that the case here and, if so, in your estimation, how large is the former camp vs. the latter?

I think most people believe ISA makes a difference. That’s why there are so many people researching ISAs, writing books and papers about them, etc. And there is real data to prove it. Apple is probably the first company to design ARM the way that AMD and Intel design x86. And look at how the speed of the DTK compares to Intel, at a much lower power usage, much smaller die size, etc.
 
One, I smell anti-trust suits here. Two, there are way too many paragraphs in this thread.
 
I think most people believe ISA makes a difference. That’s why there are so many people researching ISAs, writing books and papers about them, etc. And there is real data to prove it. Apple is probably the first company to design ARM the way that AMD and Intel design x86. And look at how the speed of the DTK compares to Intel, at a much lower power usage, much smaller die size, etc.

Indeed, Intel is using quite a few full-custom blocks in their designs, while standard ARM cores are fully synthesizable. This can skew the results considerably if not taken into account when comparing contemporary architectures.
 
Indeed, Intel is using quite a few full-custom blocks in their designs, while standard ARM cores are fully synthesizable. This can skew the results considerably if not taken into account when comparing contemporary architectures.
And at AMD we synthesized nothing (when I was there; now I think they synthesize some stuff).
 
I think most people believe ISA makes a difference. That’s why there are so many people researching ISAs, writing books and papers about them, etc. And there is real data to prove it. Apple is probably the first company to design ARM the way that AMD and Intel design x86. And look at how the speed of the DTK compares to Intel, at a much lower power usage, much smaller die size, etc.
Thanks for your thoughtful reply. It certainly makes sense that some ISAs would be naturally better suited to certain tasks than others. And that could certainly explain the active interest in exploring ISAs. But as to the question at hand—namely whether one ISA is inherently superior to another (x86 vs ARM)—my take-away at this point (based on the difference between what you wrote, and what's written in those papers) is that this is not yet a settled question in the field.
 
Thanks for your thoughtful reply. ... But as to the question at hand—namely whether one ISA is inherently superior to another (x86 vs ARM)—my take-away at this point is that this is not yet a settled question in the field.
Ok. You’re entitled to think that.

I think a simple thought experiment would disprove it, though. Imagine two identical ISAs, but one of them allows 32 registers and one of them allows only 16 registers (and designates the extra operand bits for “future use”).

Clearly there would be a very quantifiable difference in performance/power/[insert whatever metric you’d like] between these two ISAs.

So, too, do other ISA choices create real-world quantifiable effects (see the toy model sketched after this post).

The only reason these academics can’t find an effect is that the crappier ISAs, for historical reasons, have the best physical designs. All of that has changed with Apple using x86-style physical design for ARM.
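As an aside, the 16-vs-32 thought experiment can be given rough numbers with a toy model (everything below is invented for illustration and models no real ISA): a synthetic stream of value uses pushed through a register file with LRU eviction, once with 16 slots and once with 32. With 48 live values reused uniformly at random, the resident fraction is nregs/48, so halving the register count roughly doubles the stack reloads. Real code and real allocators are far more structured, so treat the output as illustrative only.

Code:
#include <stdio.h>
#include <stdlib.h>

#define USES   100000   /* value uses in the synthetic "program" */
#define VALUES 48       /* distinct live values being juggled    */

/* Count reloads when USES random touches of VALUES live values
   run through a register file of nregs slots with LRU eviction. */
static long count_reloads(int nregs)
{
    int  slot_of[VALUES];   /* value -> register slot, -1 if spilled */
    int  val_in[32];        /* slot  -> value held, -1 if empty      */
    long last_use[32];      /* slot  -> time of last use             */
    long reloads = 0;

    for (int v = 0; v < VALUES; v++) slot_of[v] = -1;
    for (int s = 0; s < nregs; s++) { val_in[s] = -1; last_use[s] = -1; }

    srand(42);              /* same "program" for both runs */
    for (long t = 0; t < USES; t++) {
        int v = rand() % VALUES;
        if (slot_of[v] < 0) {                  /* not in a register */
            reloads++;                         /* load from stack   */
            int lru = 0;                       /* evict oldest slot */
            for (int s = 1; s < nregs; s++)
                if (last_use[s] < last_use[lru]) lru = s;
            if (val_in[lru] >= 0) slot_of[val_in[lru]] = -1;
            val_in[lru] = v;
            slot_of[v]  = lru;
        }
        last_use[slot_of[v]] = t;
    }
    return reloads;
}

int main(void)
{
    printf("16 registers: %ld reloads\n", count_reloads(16));
    printf("32 registers: %ld reloads\n", count_reloads(32));
    return 0;
}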
 
I think a simple thought experiment ...

Damn, you're going to make me think? Curses!!

Imagine two identical ISAs, but one of them allows 32 registers and one of them allows only 16 registers (and designates the extra operand bits for “future use”).

Clearly there would be a very quantifiable difference in performance/power/[insert whatever metric you’d like] between these two ISAs.
Of course, but this thought experiment simply shows you can create an inferior ISA. It doesn't address the question at hand, which is whether, considering two specific highly-developed, sophisticated ISAs (ARM and x86), one is inherently superior to the other. Anytime you have two different designs in broad use, it's always best to be skeptical when anyone claims one is superior to the other, period. [Until you get actual data, which you may have seen, but the public has not.] More typically, each is better in specific applications.

As you've recognized, the designers at Intel are very sophisticated. When they wanted to enter the mobile market, they had no need to maintain backward compatibility with x86 for those chips, right? So, if ARM was so obviously superior, why did they choose an x86 ISA (Atom) instead of an ARM ISA? [Yes, x86 allowed them to leverage their IP, while ARM would not have. But Intel also judged they would be able to compete with ARM on efficiency with x86, and they were correct -- their Atom chips were competitive at the time.] And on another thread you mentioned that, if Intel had (early on) offered Apple the x86 license for free, Apple would have seriously considered designing x86-based chips instead of ARM-based chips. Why would they have done this if ARM is clearly inherently superior?

Not saying you're wrong. You could be right. But I haven't yet seen the data to demonstrate that. When AS is released into the wild, there should be more of that data.

The only reason these academics can’t find an effect is that the crappier ISAs, for historical reasons, have the best physical designs. All of that has changed with Apple using x86-style physical design for ARM.

Yes, that's possible. Or could it be that, if AS is faster, watt-for-watt, than x86, it's not b/c of the ISA, it's b/c Apple is doing a better job with the microarchitecture than Intel, because all of Apple's mobile chip design experience has given them a sophistication when it comes to power efficiency that Intel, having left the mobile market a while ago, lacks?
 
As you've recognized, the designers at Intel are very sophisticated. ... So, if ARM was so obviously superior, why did they choose an x86 ISA (Atom) instead of an ARM ISA? ... Or could it be that, if AS is faster, watt-for-watt, than x86, it's not b/c of the ISA, it's b/c Apple is doing a better job with the microarchitecture than Intel?

You answered your own question - Intel owned StrongARM, but they don't get to have lock-in and leverage their x86 monopoly if they go with Arm. I mean, what is it about Intel that makes you think they care about performance? They do nothing unless forced by competition, which has been rare over the years. The fact that x86 was such a miserable failure in mobile tells you that it's a terrible ISA (at least for that purpose).
 