Apple still relies on retribution as part of preserving Steve Jobs's legacy. The members of the group that's part of the App Fairness cartel? Apple already subpoenaed them as part of the Epic lawsuit. And Apple may kick them off the developer program and ban them from owning any Apple products for life once the lawsuit is over, as payback.
I'd hardly call subpoenaing "retribution" - and if someone did try to use it as such, the presiding judge would hardly look favorably on such activities.

One insight from their subpoenas is that Epic bankrolled the App Fairness initiative in order to steer public perception away from being an Epic v. Apple dispute and toward being an industry v. Apple dispute.
 
You're thinking about it on a technical level. You're not wrong that they absolutely could technologically segment the businesses, but from a public and regulatory perception standpoint it would be considered an anti-competitive conflict of interest, and it would be extremely easy to argue that Apple was keeping the best for themselves and holding back their competitors, even if they didn't do exactly that. It would be a huge liability for them.

And that said, for Apple strategically it would be bad business to invest in something they would need to maintain for the benefit of their direct competition and NOT keep the best for themselves.

Ergo, the only smart choice is to not buy ARM, but to position against it while also being its biggest licensee. That gives Apple the flexibility to exert some level of control without owning it, and to prepare for a future where that is no longer tenable, or where it's more profitable for them to go somewhere else.

This is a smart long game position, and it's basically what they did to Intel.

I suspect that if they move off ARM it won't even be a transition that requires a recompile/migration like this one did. It'll be seamless to the consumer. For developers, a few things will be deprecated a while beforehand, and then support for them will end with the new chipset, but everything will have been in place for years in advance. This is like an 8-to-10-year strategy.

After that what'll be left to control? Maybe Apple will start fabricating their own chips... haha

When you don’t pay anything for an Arm license, and when your Arm license allows you to do anything you want with Arm including adding instructions, changing functionality, etc., there is no need to get leverage against the owner of Arm.
 
Well, it ranges from little things to bigger ones. RISC-V is only little-endian, whereas Arm is bi-endian. x86 is little-endian and PowerPC is big-endian, so it's nice to support bi-endian operation so you can remain compatible with future CPU changes. Not a huge deal.
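For anyone who hasn't run into the distinction, here is a minimal C sketch of what byte order means in memory; the expected outputs in the comment are what a little- vs. big-endian host would print:

```c
/* Byte-order illustration: the same 32-bit value laid out in memory.
   A bi-endian CPU can be configured to read it either way. */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t v = 0x01020304;
    const uint8_t *b = (const uint8_t *)&v;
    /* little-endian (x86, RISC-V):     prints 04 03 02 01
       big-endian (classic PowerPC):    prints 01 02 03 04 */
    printf("%02x %02x %02x %02x\n", b[0], b[1], b[2], b[3]);
    return 0;
}
```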

It uses a single instruction for jump, call and return. That's a mess, and it complicates the branch predictor, likely causing either an IPC hit or slowing cycle time. It's also just "clever for the sake of being clever," which describes a lot of it - it takes four or five instructions to index into an array (vs. essentially 1 - or 2, depending on how you count - for x86 or Arm). The poor code density will have some effect on effective IPC, either in the instruction fetch path or in the issue path. It's terrible at coherency, which is necessary for multiple cores. It's really best suited for single-core, out-of-order-issue, in-order-retire machines - sort of like what we had in the mid 2000s. Again, they could change stuff (a lot), but right now it doesn't seem to offer any advantages over Arm for "real" computers.
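To make the indexing point concrete, here is a rough sketch; the instruction sequences in the comments are approximate and depend on compiler, flags, and addressing modes:

```c
/* Code-density sketch: loading a[i] from an int array. */
int load_element(const int *a, long i) {
    return a[i];
}
/* Typical codegen (illustrative, not exact):
   x86-64, scaled addressing (~1 instruction):
       mov  eax, dword ptr [rdi + rsi*4]
   AArch64, shifted register offset (~1 instruction):
       ldr  w0, [x0, x1, lsl #2]
   RV64 base ISA, no scaled addressing (~3 instructions,
   more if the index needs sign extension):
       slli t0, a1, 2
       add  t0, a0, t0
       lw   a0, 0(t0)
*/
```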
Thank you, exactly what I was hoping for. It sounds like more of a teaching processor design than a production processor design. (Then again, Pascal was supposed to be a teaching language, and one of my first jobs was working on a large inventory management system - written in Pascal.)

I wonder if there's much interest in modifying it (substantially) to be a useful production system, or if too much is set in stone at this point and the instruction set will get used in production as-is.
 
I’d say that is a fair description. (The first compiled language I learned was Pascal, btw. Turbo Pascal FTW. You *could* write real software with it, but it might not have been a good idea. Just like RISC-V).

RISC-V is definitely good for a lot of uses in its current form. I just don’t think anyone saying “we need to make improvements over Arm. Let’s start with a clean sheet of paper.” would end up with RISC-V.
 
By not owning ARM, but using it as a foundation to build their own thing, and then going all-in on an open standard they can eventually steer their own way, while keeping a lot proprietary, they don't have that problem.
What precisely do you think is keeping Apple from doing whatever they want with their processor designs, since they have a perpetual license to the underlying original ARM designs and a well-staffed, well-functioning processor design department? It feels like you're twisting things around to make a case for RISC-V that's not there.

About the only collision with ARM's new owners I could see would be if Nvidia added some new extension to ARM, which Apple didn't have license to, and then some substantial software needed those extensions - say Microsoft released Windows for that architecture, and Apple wanted to be able to go back to Boot Camp and/or VMs for running Windows on the Mac. I see the likelihood of all those conditions obtaining to be extremely low. If Nvidia adds some new feature, and Apple doesn't need that specific implementation of it, then why should Apple care? If they need that functionality, they can roll their own, and it will fit their needs better.

Yes, all things considered, freely available is generally preferable to encumbered. But Apple already has all the pieces they need to make processors that are perfect for their devices and for their software, basically forever. And they've got a huge head start on everyone else for the "high performance at low power/heat" market.
 
When you don’t pay anything for an Arm license, and when your Arm license allows you to do anything you want with Arm including adding instructions, changing functionality, etc., there is no need to get leverage against the owner of Arm.
It's like you just chose to ignore the entire point I made...

Which, amusingly, CarlJ outlines perfectly while trying to argue against me.

About the only collision with ARM's new owners I could see would be if Nvidia added some new extension to ARM, which Apple didn't have license to, and then some substantial software needed those extensions - say Microsoft released Windows for that architecture, and Apple wanted to be able to go back to Boot Camp and/or VMs for running Windows on the Mac. I see the likelihood of all those conditions obtaining to be extremely low. If Nvidia adds some new feature, and Apple doesn't need that specific implementation of it, then why should Apple care? If they need that functionality, they can roll their own, and it will fit their needs better.

This is EXACTLY the sort of thing I'm talking about.

Nvidia is best known for making graphics cards. What makes you think they wouldn't do something like this and create proprietary extensions for gaming or other processes that would provide a leg up for licensees?

And yeah, Apple could roll their own. But smart business is also to look at every vulnerability and every attack vector that could target them.

Just because you both think the examples I'm giving are dumb or unlikely doesn't mean they aren't still possible, and I promise you, Apple's/Cook's business strategy is very much about mitigating risk through control and leverage. This is no different.

The potential for an adversarial business to own and control a core technology that Apple relies on represents a significant business risk to Apple, so they are preparing for that risk no matter how unlikely or far out that risk may be.

I'm not saying this is impending doom. I'm not saying this is war. I'm simply saying this is smart business. Apple is playing offense to avoid playing defense down the road.
 

But that can’t happen. Apple has a license to everything, in perpetuity. That’s my point. Whatever extensions Arm invents, Apple can use. nVidia can’t say “nope, Apple, even though you created Arm, and have a license that says you get everything forever, that somehow doesn’t apply now because we bought Arm.”
 
Whatever license deal Apple has, there's little chance Nvidia can do anything to change it at this point. As one of the three companies that formed the original Arm joint venture, you can bet that Apple's rights are sewn up and not subject to whatever Nvidia may decide to do in the future.

Apple also is not doing anything with PowerPC at this point. If, for some reason, their own design team falls down and they can no longer compete with…um… IBM, I guess?… then there are lots of other places Apple will be able to get Arm designs. And if, instead, it's the stellar advancements made by Global Foundries that make this hypothetical future PowerPC such a great deal, Apple can just switch from TSMC to Global Foundries.

By designing the CPUs themselves, and in a world where every cutting-edge fab is a contract fab (including, it looks like, Intel going forward), there's no reason for Apple to worry about PowerPC (and there is zero inherent advantage of PowerPC over Arm - I've designed PowerPC CPUs; they are a pain in the arse).
Impossible to say for sure about PPC but Apple did very well out of the 'not putting all their eggs in one basket' approach in the past. Given their essentially infinite R&D budget, it would be naive to presume they wouldn't maintain a small team running all sorts of weird and wonderful alternative tech, just in case it comes good down the line.
 
Source? That's not usually how licensing deals work.

They literally founded Arm. They supplied all the financing for Arm. Imagine you started a joint venture. You spin it off. Are you going to fail to get yourself a very good licensing deal before you do so? Or are you going to risk getting sued by your own child in the future?
 
"RISC-V still does not have core components like SIMD or virtualization approved"

So for starters, there are reasons for SIMD not being a "core" component in RISC-V. That was an intentional design choice, not a failure to keep up with the Joneses in whatever SSE/MMX/Neon feature-parity pissing match Intel/AMD/ARM are having - the sort of thing that really shouldn't be embraced by a core CPU ISA. Indeed, even for Intel, those are *extensions*, not a "core" component. Further reading on such subjects here:

Similarly, the RISC-V specifications do have a virtualization mode; perhaps you are referring to the proposed H extension as not yet being approved? As it stands, RISC-V already has more provisions than x86's "real" or "protected" modes, and again, the more recent Intel/AMD64 virtualization instruction sets (Intel's VMX, AMD's SVM) are *extensions* and not "core" components of the original CPU ISA.

"RISC-V specifications describe the current virtualization mode using the symbol V . If V=1 then the system is currently in guest context, otherwise it is in host context (either supervisor or user mode). The RISC-V H extension introduces a full duplicate of CPU state: one copy for the guest and one copy for the host."

These are not bad things! From my vantage, that shows that the designers of RISC-V have been paying attention and are intentionally jettisoning old cruft, and taking a measured approach with what they want to integrate into their specification. Similarly, not having floppy disk drives is not a real detractor against a contemporary consumer computer in 2021, even if it might have been nearly unthinkable in the 1980s.
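To make the duplicated-state idea from the quoted passage above concrete, here is a toy C sketch; the field names are made up for illustration and are not the actual RISC-V CSR definitions:

```c
/* Toy model of the H-extension idea quoted above: a full duplicate of
   supervisor-level CPU state, one copy for the host and one for the
   guest, selected by the virtualization mode V. Illustrative only. */
typedef struct {
    unsigned long status, tvec, epc, cause, atp; /* supervisor-style CSRs */
} SupervisorState;

typedef struct {
    SupervisorState host;  /* active when V = 0 (host context)  */
    SupervisorState guest; /* active when V = 1 (guest context) */
    int V;                 /* current virtualization mode       */
} HartState;
```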

RISC-V also does not specify an FPU, but again, if you track research in academia advocating for posits/unums as alternatives to IEEE 754 (a standard from 1985), I think we have maybe *just maybe* </sarcasm> come a LONG long way since the mid 1980s and do not need such legacies baked into a 21st century CPU ISA which began in 2010.

It may make a lot more sense to facilitate better alternatives which have already been proposed (and some implemented, in free/libre open source GitHub repositories for posits/unums, with one firm even offering silicon-based implementations) than to drag the 20th century's legacies behind us.

Making spurious claims about RISC-V's omission of things which 1970s- and 1980s-vintage CPU ISAs have kept dragging along with them seems disingenuous to me - or perhaps misinformed, or a really superficial read, from the sort of perspective that can only parse a list of features and check whether there is a check mark next to each, rather than evaluate whether the box should even be checked in the first place, given where we are in space and time relative to other advancements.

I think it is OK to leave the past in the past, rather than make erroneous claims that a CPU ISA from 2010 is somehow at a disadvantage to its 1970s and 1980s predecessors for omitting things which may no longer be particularly useful or relevant, or which have better alternatives more worth implementing - let alone to suggest that such "core" component omissions are somehow necessarily relevant, or that they imply some sort of inferiority.

RISC-V also does not handle fax documents nor RTTY natively. I am OK with that! I would be hella skeptical of any new CPU ISA which considered dated constructs part of its core ISA.

You misunderstand my point. The extensions you have mentioned are not yet ratified. They are merely proposals at various stages of readiness and the details are likely to change before the ratifications. They are not ready yet. It is not possible to build a stable software ecosystem on a hardware platform that is still being worked out.
 
Welcome back :)
 
They literally founded Arm. They supplied all the financing for Arm. Imagine you started a joint venture. You spin it off. Are you going to fail to get yourself a very good licensing deal before you do so? Or are you going to risk getting sued by your own child in the future?

That's not how companies work. You can get a license to existing technology in perpetuity, but you can't force an entity you don't own to license anything new they make to you in the future.
 

Of course you can. Anyone buying the company takes it subject to any existing licenses. Those existing licenses can say "any intellectual property Arm makes available for license to anyone is hereby licensed to Apple." They can even say "any intellectual property Arm's future owner makes available for license to anyone is hereby licensed to Apple."

The former is ubiquitous. The latter is rarer. I'm not suggesting Apple gets a license to Nvidia stuff, by the way - but anything Arm licenses in the future, whether or not Arm is owned by Nvidia, is certainly licensed to Apple.
 
I'm not suggesting Apple gets a license to Nvidia stuff, by the way - but anything Arm licenses in the future, whether or not Arm is owned by Nvidia, is certainly licensed to Apple.

There is no guarantee that this is true. Unless you've seen the agreement you cannot say this with any certainty, and I don't believe it's public. Apple divested from ARM in the '90s. A lot has changed in the world, and in how legal agreements are written and handled, since then.
 

No, not a lot has changed about how legal agreements are written. I've read and written a lot of them. And Apple divested in 2003, not the '90s.
 
You misunderstand my point. The extensions you have mentioned are not yet ratified. They are merely proposals at various stages of readiness and the details are likely to change before the ratifications. They are not ready yet. It is not possible to build a stable software ecosystem on a hardware platform that is still being worked out.
I don't misunderstand it, I disagree with it entirely.

Those "extensions" do not even belong in 21st century silicon any more than floppy drives are still relevant, and whether we are talking about RISC-V's FPU omissions or its SIMD omissions, both were intentional and deliberate.

In other words, your professed point is a red herring - a false detraction against RISC-V.

Also see: https://www.sigarch.org/simd-instructions-considered-harmful/
 
I don't misunderstand it, I disagree with it entirely.

You are welcome to disagree with anything you wish (it's a free world), but I still don't understand what exactly you are disagreeing with. My statement boils down to this: essential RISK-V ISA functionality is not yet ratified, which makes it unsuitable for implementing stable software ecosystems at this time. Just to clarify, is this what you are disagreeing with?

Those "extensions" do not even belong in 21st century silicon any more than floppy drives are still relevant, and whether we are talking about RISC-V's FPU omissions or its SIMD omissions, both were intentional and deliberate.

Virtualization and parallel data processing don't belong in 21st century silicon? Ok then...

In other words, your professed point is a red herring - a false detraction against RISC-V.

Also see: https://www.sigarch.org/simd-instructions-considered-harmful/

I entirely agree with the authors that Intel-style SIMD extensions have been a spectacular failure. For throughput, ALU-width-agnostic parallel processing is the only way forward. This is why I am looking forward to playing with ARM SVE/SVE2 (which IMO are better designed and cover more use cases than RVV, which seems to favor conceptual simplicity over flexibility and utility).
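For illustration, here is the strip-mining pattern that width-agnostic ISAs perform in hardware, sketched in plain C; HW_MAX_LANES and set_vl are stand-ins for what RVV's vsetvli or SVE's predication do, not real intrinsics:

```c
/* Conceptual sketch of ALU-width-agnostic ("vector length agnostic")
   processing: ask for the active vector length on each pass instead of
   hard-coding a SIMD width into the binary. */
#include <stddef.h>

enum { HW_MAX_LANES = 8 };          /* stand-in for the hardware width */

static size_t set_vl(size_t remaining) {
    return remaining < HW_MAX_LANES ? remaining : HW_MAX_LANES;
}

void saxpy(size_t n, float a, const float *x, float *y) {
    for (size_t i = 0; i < n; ) {
        size_t vl = set_vl(n - i);          /* lanes active this pass   */
        for (size_t l = 0; l < vl; ++l)     /* "one vector operation"   */
            y[i + l] += a * x[i + l];
        i += vl;                            /* no scalar tail loop      */
    }
}
```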

But I disagree that packed SIMD has no place in modern programming. "Vector" instructions are throughput-oriented, not latency-oriented. The scope of RVV is basically limited to streaming computations and has relatively high setup overhead. But there are a lot of compact data representations that can benefit from low-latency packed SIMD processing. Examples include geometric primitives, various data structures (e.g. hash tables to probe multiple elements at once) etc. Vector-style SIMD is not suitable here because you are only dealing with a small number of elements. For most flexibility and performance, you need both vector-style SIMD (which ARM provides today via SVE/SVE2) and packed-style 128-bit SIMD (which ARM provides today with Neon). Strictly speaking, RISC-V provides neither, as RVV is not yet ratified and packed SIMD exists as a vector-specific extension proposal that doesn't seem to generate significant interest in the committee.
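To make the low-latency packed-SIMD case concrete, here is a sketch in the style of Swiss-table group probing using SSE2 intrinsics; the function name and the 16-byte group layout are illustrative:

```c
/* Latency-oriented packed SIMD: probe 16 hash-table control bytes at
   once. A handful of instructions over a small fixed-size group, not a
   streaming loop - the case where vector-style SIMD doesn't fit. */
#include <immintrin.h>
#include <stdint.h>

/* Returns a bitmask with bit i set where group[i] == tag. */
static inline uint32_t probe_group(const uint8_t group[16], uint8_t tag) {
    __m128i ctrl  = _mm_loadu_si128((const __m128i *)group); /* 16 bytes */
    __m128i match = _mm_cmpeq_epi8(ctrl, _mm_set1_epi8((char)tag));
    return (uint32_t)_mm_movemask_epi8(match); /* one bit per byte */
}
```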
 
You are welcome to disagree with anything you wish (it's a free world), but I still don't understand what exactly you are disagreeing with. My statement boils down to this: essential RISK-V ISA functionality is not yet ratified, which makes it unsuitable for implementing stable software ecosystems at this time. Just to clarify, is this what you are disagreeing with?

No. That is not it. Though for starters, allow me to disagree with your incorrect spelling, "RISK-V" rather than "RISC-V". RISC is, after all, an acronym (Reduced Instruction Set Computing), and such a glaring typo is, to me, an affront to all sensibilities. If you can't get the basics correct, I find few if any merits in discourse with you, given the adversarial nature you have already demonstrated: how do you expect me to clarify the more nuanced elements of my thought process when I am predisposed to believing that you are going to misinterpret them, based upon what you have replied thus far?

Virtualization and parallel data processing don't belong in 21st century silicon? Ok then...

OK, this is a loaded statement, and it appears to be putting words into my mouth which I never wrote. In other words, you are reframing your argument in terms which you understand, while simultaneously demonstrating a lack of understanding that virtualization instructions are part of the base RISC-V specification already. There are additional proposed virtualization extensions, which are not yet approved, but that is a far cry from suggesting that no virtualization instructions and parameters already exist.

Similarly, I have precisely no idea how to respond to the "parallel data processing" statement, given that existing RISC-V implementations on the market are already multi-core, and some massively parallel RISC-V designs have not only been demonstrated for years (e.g. GRVI Phalanx, 2016), some implementations are even open source (e.g. Olof Kindgren's SERV has been demonstrated on some FPGA kits as running > 5000 cores https://github.com/olofk/serv ).

So, from my vantage, you are continuing to present red herring falsehoods about RISC-V based upon what I can only presume is a profound lack of understanding of extant research. It reads as if you are filled with preconceived biases and incorrect notions rather than keeping even slightly aware of previous public research.

Not *all* proposed RISC-V extensions are implemented today, many are left as exercises to the reader, or for the future. That isn't a detraction against RISC-V as it already exists, today.

For example, RISC-V does specify 128-bit memory addressing, but currently no silicon implements more than RV64. Albeit, I am unaware of any systems on Earth which have exhausted the 16 exabyte addressing range currently possible with 64-bit addressing. (Though I know people in the NSA and CIA, I am not privy to their data collection methodologies, and I would posit that they are utilizing data federation in smaller quantities rather than absolute memory addressing if they are archiving in excess of such amounts of data presently. Most data repositories at scale in commercial environments to which I have been privy in recent years seem to be in the petabyte range, and there are relatively few organizations which merit that much storage; again, they tend not to address it at a CPU level so much as they have data federation methodologies in place.)
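For reference, the arithmetic behind that 16 exabyte figure (strictly, 16 exbibytes) is just:

$$2^{64}\ \text{bytes} = 2^{4} \times 2^{60}\ \text{bytes} = 16\ \text{EiB} \approx 1.8 \times 10^{19}\ \text{bytes}$$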

I entirely agree with the authors that Intel-style SIMD extensions have been a spectacular failure. For throughput, ALU-width-agnostic parallel processing is the only way forward. This is why I am looking forward to playing with ARM SVE/SVE2 (which IMO are better designed and cover more use cases than RVV, which seems to favor conceptual simplicity over flexibility and utility).

Sure, but that is disingenuous, don't you think? Neither Intel-style SIMD nor ARM SVE/SVE2 is part of those core CPU ISAs as originally envisioned. They evolved over time, which is why they are friggin' "Scalable Vector EXTENSIONS." It seems as if you want to have your cake and eat it too, and I am calling BS on that. A variety of extensions have been proposed for RISC-V; that does not mean they should be part of the base/core ISA. Leave that up to vendor implementations, as necessary.

But I disagree that packed SIMD has no place in modern programming. "Vector" instructions are throughput-oriented, not latency-oriented. The scope of RVV is basically limited to streaming computations and has relatively high setup overhead. But there are a lot of compact data representations that can benefit from low-latency packed SIMD processing. Examples include geometric primitives, various data structures (e.g. hash tables to probe multiple elements at once) etc. Vector-style SIMD is not suitable here because you are only dealing with a small number of elements. For most flexibility and performance, you need both vector-style SIMD (which ARM provides today via SVE/SVE2) and packed-style 128-bit SIMD (which ARM provides today with Neon). Strictly speaking, RISC-V provides neither, as RVV is not yet ratified and packed SIMD exists as a vector-specific extension proposal that doesn't seem to generate significant interest in the committee.

I don't know where to begin with this either. "Modern programming"? Do you spend your time, as a "modern" programmer, writing assembly? Are you one of the privileged few who writes assemblers and compilers and gets paid to do so? Most "modern" programmers long ago shifted to writing in a higher-level abstraction such as C (since at least the early-to-mid 1970s), and a *substantially larger* number of contemporary programmers do not even write C; they write in languages implemented in C. Again, I think your framing of this is disingenuous at best, but hey, good on you for correctly writing "RISC-V" in this paragraph; "strictly speaking," that is the only correct reference to the CPU ISA.

Mention of "various data structures" screams "I looked at undergraduate computer science course offerings but am content to use late 20th century filler text without any real semantic meaning, because *gasp* 'hash tables' are so excruciatingly common in programming that I don't even think they need mention as an example." I almost get the sense that I am writing to a Markov bot. Stranger things have been known to happen online.

Regardless, you do seem consistent in your framing: since RISC-V does not provide what Intel and ARM have (or at least eventually did provide, even if not as base provisions but as extensions to their proprietary ISAs), you do not consider it of merit. I do not share that perspective, just as I think diesel engines and petrol engines can and do coexist happily without needing to share all design elements. Indeed, though there are rare examples of engines which can run on both (and even more fuel sources), those tend to be completely unnecessary. There is no need for a "one size fits all" solution, particularly not in a core CPU ISA. A lot of additional functionality, IMHO, can and should be implemented higher up in the software stack, and absolutely never in silicon or an FPGA's soft CPU. RISC-V already provides "vector style"; that is, quite literally, one of the things the V refers to in "RISC-V". It was not an afterthought or an extension, as it is with ARM's SVE/SVE2, which you seem so intent on referencing repeatedly as if it were somehow an ARM advantage, when it is not.

So again, get out of here with your "RISC-V provides neither." You are being slovenly with your evaluation - lumping errors together in your framing, failing to accurately discern what RISC-V does and does not provide, and instead creating red herrings and straw men. It's exasperating.

I do not feel like wasting additional words on debate with you. I never come online to argue or debate, particularly when it seems evident to me that you are not even current with extant research. It's thankless to pretend to educate when I do not have a PhD nor teaching credential, and you are not my student.
 
No. That is not it. Though for starters, allow me to disagree with your incorrect spelling, "RISK-V" rather than "RISC-V". RISC is, after all, an acronym (Reduced Instruction Set Computing), and such a glaring typo is, to me, an affront to all sensibilities. If you can't get the basics correct, I find few if any merits in discourse with you, given the adversarial nature you have already demonstrated: how do you expect me to clarify the more nuanced elements of my thought process when I am predisposed to believing that you are going to misinterpret them, based upon what you have replied thus far?

I made a typo, sorry. Still, I can't shake the feeling that you are being unnecessarily hostile. It is also strange that you are writing prolonged tirades about my typo but ignore the discussion at hand (about essential RISC-V functionality existing only as a specification draft).

There are additional proposed virtualization extensions, which are not yet approved, but that is a far cry from suggesting that no virtualization instructions and parameters already exist.

So you are claiming that efficient hypervisors can be realized using only ratified RISC-V (without the H extensions)? Why even bother with the H extension then?

Similarly, I have precisely no idea how to respond to the "parallel data processing" statement, given that existing RISC-V implementations on the market are already multi-core, and some massively parallel RISC-V designs have not only been demonstrated for years (e.g. GRVI Phalanx, 2016), some implementations are even open source (e.g. Olof Kindgren's SERV has been demonstrated on some FPGA kits as running > 5000 cores https://github.com/olofk/serv ).

You can start by responding to the fact that the vector extension is not yet ratified, and that the draft has changed over the last few years. That is precisely what I am talking about: unstable functionality is fine for research and (limited) production use if you have tight control over the software, but not for general-purpose use, where you have to guarantee binary compatibility over a long period of time.

So, from my vantage, you are continuing to present red herring falsehoods about RISC-V based upon what I can only presume is a profound lack of understanding of extant research. It reads as if you are filled with preconceived biases and incorrect notions rather than keeping even slightly aware of previous public research.

The only thing I am pointing out is that the functionality you mention is not ratified, hence not formally part of the spec and hence not ready for general usage. I am not sure what about this statement caused such a negative reaction in you.

Sure, but that is disingenuous, don't you think? Neither Intel-style SIMD nor ARM SVE/SVE2 is part of those core CPU ISAs as originally envisioned. They evolved over time, which is why they are friggin' "Scalable Vector EXTENSIONS." It seems as if you want to have your cake and eat it too, and I am calling BS on that. A variety of extensions have been proposed for RISC-V; that does not mean they should be part of the base/core ISA. Leave that up to vendor implementations, as necessary.

Who is arguing that they should be part of the core? The point is that no data-parallel RISC-V extension has been ratified as of today. You have experimental hardware implementations, sure. But there is no guarantee of ISA compatibility since the spec has not been finalized yet.

I don't know where to begin with this either. "Modern programming"? Do you spend your time, as a "modern" programmer, writing assembly?

No, but I am using low-level intrinsics to provide efficient implementation of performance-critical code. That's beside the point though. Data-parallel instructions are used for much more nowadays than maximizing throughput over large data sets. Packed SIMD instructions are useful to deal with smaller objects where you care about latency rather than throughput.

Mention of "various data structures" screams "I looked at undergraduate computer science course offerings but am content to use late 20th century filler text without any real semantic meaning, because *gasp* 'hash tables' are so excruciatingly common in programming that I don't even think they need mention as an example." I almost get the sense that I am writing to a Markov bot. Stranger things have been known to happen online.

If I didn't know better I would almost think that you're trying to be insulting for no reason. Maybe you could keep your attention on the topic (concretely: the fact that a substantial part of RISC-V today exists as an unratified draft) rather than wasting everyone's time on insults?


RISC-V already provides "vector style"; that is, quite literally, one of the things the V refers to in "RISC-V". It was not an afterthought or an extension, as it is with ARM's SVE/SVE2, which you seem so intent on referencing repeatedly as if it were somehow an ARM advantage, when it is not.

Ugh, you are getting a bit tangled here. First, "V" in RISC-V stands for "5" according to the official history at least. Second, vector extension to RISC-V was first proposed in 2015 and has not yet been ratified (although I hear it is in final stages of approval). SVE is an advantage for ARM because it is a ratified, standard extension (part of v9 core) with mature compiler support. I am not competent enough to provide a feature-by-feature comparison of RVV vs. SVE. From first glance it seems to me that RVV is targeting streaming computations, while SVE/SVE2 are more general in scope. Not quite sure what you mean by being an "afterthought".

Regardless, you do seem consistent in your framing: since RISC-V does not provide what Intel and ARM have, you do not consider it of merit.

My position on RISC-V is that while it is a decent, minimalist ISA, it offers no inherent advantage over ARM's AArch64, and much of what I consider essential functionality (efficient virtualisation, data-parallel instructions, bit processing) is still in the draft phase. I also find some of RISC-V's design decisions questionable. Still, I am looking forward to the time when the essential functionality is ratified and general-purpose high-performance RISC-V processors are available. It would indeed be great to have a good standard ISA that is not locked behind corporate IP. That said, I think that there are much more interesting proposals, such as ForwardCom by Agner Fog.

Let's also pay attention to the context. We were discussing the possibility of Apple moving Apple Silicon to RISC-C in the near future. Based on the fact that the RISC-V spec is still a work in progress and that the RISC-V ISA has no advantage over AArch64, I don't think such a hardware transition makes any sense for Apple at this point.

when I do not have a PhD nor teaching credential

Fortunately I do (not in the topic of CPU design though, here I am merely a hobbyist ;))
 
I made a typo, sorry. Still, I can't shake the feeling that you are being unnecessarily hostile. It is also strange that you are writing prolonged tirades about my typo but ignore the discussion at hand (about essential RISC-V functionality existing only as a specification draft).
Yes, and you repeated another typo, "RISC-C" at the end of your response. So, if I am doing some due diligence to proofread and edit my posts, why can't you be bothered to keep up?

It seems rude to me, at a minimum.
So you are claiming that efficient hypervisors can be realized using only ratified RISC-V (without the H extensions)? Why even bother with the H extension then?
That is not the claim. You are the one who, by my reading, erroneously implied that RISC-V has no virtualization provisions; I pointed out that it already does. Neither Intel's x86 nor ARM has virtualization provisions as part of its base ISA.

The "not yet ratified" proposed H extension, which presumably offers more robust virtualization support for RISC-V, may improve future virtualization offerings, but that is not a detraction from the current virtualization provisions in the base CPU ISA.

I really do not see what is so difficult for you to understand about that.

To make it clear: to me, it seems as if you are argumentatively framing RISC-V as being at a disadvantage for not implementing as much in the way of virtualization extensions as the Intel and ARM extensions have provided. That isn't an apples-to-apples comparison, and it does not bear up to scrutiny. It reads like BS to me, and you should drop it entirely from any detractions you may have against RISC-V for the time being, as contrasted with other CPU ISAs which do not have *any* virtualization provisions in their base specifications, such as x86/AMD64 and ARM.
You can start by responding to the fact that the vector extension is not yet ratified, and that the draft has changed over the last few years. That is precisely what I am talking about: unstable functionality is fine for research and (limited) production use if you have tight control over the software, but not for general-purpose use, where you have to guarantee binary compatibility over a long period of time.

I guess I have administered many things in production which never underwent ratification, so I do not see any problem with that. Indeed, that tends to be the entire purview of commercial computing, and it is where "competitive advantage" tends to play out.

Guaranteeing binary compatibility is not the sort of thing that I have read anyone even advocating for since perhaps the era of decrepit HP3000 systems, and having administered those as well: guess what? THEY WERE NOT BINARY COMPATIBLE EITHER. Specific revisions were binary compatible. To profess that in 2021 we do not have JIT compilation, DLLs, and other paradigms beyond static binaries seems, again, to be screaming ignorance in a field which is full of object code that is not binary compatible.

Sure, binary compatibility may be nice in theory, but in practice? Could you run Atari ST code on a Commodore Amiga? They both had MC68000 CPUs; surely they should have been "binary compatible," right? I think that level of disingenuous, question-begging inanity is precisely what I infer from your writing, over and over again, oblivious to the reality of the larger landscape of how CPU ISAs are realized in various implementations, many of which may be entirely unique to a vendor or code branch.

That sort of thing doubtlessly might have upset me in my early years as an assembly programmer, but I am not 10 anymore. I would expect others have grown up as well.

The only thing I am pointing out is that the functionality you mention is not ratified, hence not formally part of the spec and hence not ready for general usage. I am not sure what about this statement caused such a negative reaction in you.

Well, you aren't my psychologist, so without getting into my clinically diagnosed persistent depressive disorder, let's just leave the negative reactions in me out of it. I am also not here, nor anywhere, for a doctoral defense.

Who is arguing that they should be part of the core? The point is that no data-parallel RISC-V extension has been ratified as of today. You have experimental hardware implementations, sure. But there is no guarantee of ISA compatibility since the spec has not been finalized yet.

*sigh* Whatever your professed point is, it is lost on me. I already provided examples of prior art which demonstrate extant iterations of massively parallel RISC-V implementations. Why on Earth, or any other planet, you would think that the base ISA needs to somehow guarantee ISA compatibility for how others may choose to implement it is not something I will ever be able to fathom at this stage in my existence.

You might as well ponder: "there is no guarantee that FreeBSD's Linux compatibility layer will function as intended; the code is always evolving and has not been finalized yet."

That does not preclude the compatibility layer from actually functioning in practice, not merely in theory. I see zero reason for a CPU ISA to even attempt to offer such "guarantees"; if anything, given that RISC-V is by design libre/free open source, its licensing, I believe, explicitly disclaims any implied guarantees or warranties.

If you want to get really pedantic, you can refer to: https://github.com/riscv/riscv-isa-manual/blob/draft-20200229-27b40fb/LICENSE and note that warranties are explicitly disclaimed.
No, but I am using low-level intrinsics to provide efficient implementation of performance-critical code. That's beside the point though. Data-parallel instructions are used for much more nowadays than maximizing throughput over large data sets. Packed SIMD instructions are useful to deal with smaller objects where you care about latency rather than throughput.
*sigh* Beating that dead horse again; I am not going to respond in detail with counterexamples, again. Your claim that RISC-V implementations are not "data parallel" is erroneous. Plain and simple.

If I didn't know better I would almost think that you're trying to be insulting for no reason. Maybe you could keep your attention on the topic (concretely: the fact that a substantial part of RISC-V today exists as an unratified draft) rather than wasting everyone's time on insults?

Ah, so you profess to know better? Well, I suppose if I wanted to be insulting, I would have insinuated something along the lines of "your mother was a hamster and your father smells of elderberries". As previously stated, I am EXASPERATED with your replies. If you read my responses as a personal attack, that inference rests solely on your shoulders. If you want to lower the discourse to that level, I will just block or ignore you entirely rather than waste more of my words attempting anything approaching rational discourse.

The claim that "a substantial part of RISC-V today exists as an unratified draft" seems to ignore previously shipping silicon such as the SiFive HiFive1 (and Rev B), the SiFive Unleashed, and the SiFive Unmatched, as just a few examples. Will RISC-V continue to evolve? Doubtlessly. I guess since I have spent much of my career actively deploying things which never underwent IEEE or IETF or other ratification processes, I do not consider ratification by committee to be a particularly substantial detractor. I prefer to toil on prototypes and advance the state of the art, not get trapped by rehashes and reimplementations years or decades after the fact.

Ugh, you are getting a bit tangled here. First, "V" in RISC-V stands for "5" according to the official history at least. Second, vector extension to RISC-V was first proposed in 2015 and has not yet been ratified (although I hear it is in final stages of approval). SVE is an advantage for ARM because it is a ratified, standard extension (part of v9 core) with mature compiler support. I am not competent enough to provide a feature-by-feature comparison of RVV vs. SVE. From first glance it seems to me that RVV is targeting streaming computations, while SVE/SVE2 are more general in scope. Not quite sure what you mean by being an "afterthought". I don't see how anyone can argue that SVE qualifies as "afterthought" while simultaneously arguing that RVV with its stateful configuration and execution model does not. Now, AVX512, that's an afterthought, and a particularly awful one.
*sigh* OK, "a bit tangled": while I do not dispute that the V in RISC-V is pronounced "five," apparently you failed to read one of the articles which I previously linked.

To make it stupidly obvious to you and anyone else who may be unfortunate enough to read this discourse, here is an excerpt:

"This was in many ways an alternative to RISC, where one took a vector processing approach instead. In fact the RISC-V guys may be more influenced by having worked last on vector processing than having invented the original RISC. Patterson and others really began believing in the power and elegance of vector processing from their IRAM project. Hence the V in RISC-V actually stands for both 5 and for Vector. RISC-V was from the beginning conceived as an architecture for vector processing."

To harp instead on your "SVE" being an advantage for ARM because it is "ratified": I guess you failed to acknowledge that RVV, despite not being "ratified", already has an example/reference implementation from last year:

By "afterthought" I meant that vector processing is a *CORE PRINCIPLE* of the RISC-V CPU ISA, RVV extensions notwithstanding, whereas in ARM (going back to 1980s-vintage ARM) SVE is absolutely not part of the core ISA (hence the E in SVE is "Extension"), and it has come about much more recently in its design, specifically for AArch64.

Yes, there are additional afterthoughts in ARM and Intel implementations as well. IMHO, both are relatively awful, though perhaps for completely different reasons than you may think.
My position on RISC-V is that while it is a decent, minimalist ISA, it offers no inherent advantage over ARM's AArch64, and much of what I consider essential functionality (efficient virtualisation, data-parallel instructions, bit processing) is still in the draft phase. I also find some of RISC-V's design decisions questionable. Still, I am looking forward to the time when the essential functionality is ratified and general-purpose high-performance RISC-V processors are available. It would indeed be great to have a good standard ISA that is not locked behind corporate IP. That said, I think that there are much more interesting proposals, such as ForwardCom by Agner Fog.

I consider libre/free open source to be an inherent advantage over proprietary implementations of any technology. That is not unique to RISC-V, but it is most certainly not an advantage which ARM's AArch64, Intel's x86, or AMD64 can claim, at all.

As for your professed "essential functionality": I am tired of repeatedly providing counterexamples showing that virtualization is part of the core of RISC-V and that "data parallel" implementations already exist. Why you seem to fail to grasp that is beyond me, draft phase or not. I can, and have, kept ahead of draft specifications and RFCs for decades. Treading over known ground is tiresome to me, and feels as if it is a waste of resources better spent doing more or less anything else.

If you think that ForwardCom is a more interesting proposal - given that in 2021 there is no shipping silicon, an FPGA soft-CPU core at best from what I can glean, and it is only a proposal from 2017, with substantially less vendor buy-in and pedagogical coursework - I do not know what to tell you.

Maybe in TIME ForwardCom will come closer to offering more than RISC-V does, but presently it does not merit such an evaluation. I am not going to waste more words detracting from it, because it probably needs no detractions. Much earlier in this thread, I already explicitly mentioned something to the effect that in 20-30 years, when consumers are benefiting from the RISC-V research of today, we will probably consider ARM comparatively arcane and be contemplating whatever should replace RISC-V. Maybe that will be ForwardCom; maybe it will be something we haven't even begun to contemplate yet. The future is unwritten for now.

Let's also pay attention to the context. We were discussing the possibility of Apple moving Apple Silicon to RISC-C in the near future. Based on the fact that the RISC-V spec is still a work in progress and that the RISC-V ISA has no advantage over AArch64, I don't think such a hardware transition makes any sense for Apple at this point.
"RISC-C" argh, again, no, "RISC-V" please, it's six characters, get them correct!

The possibility of Apple moving Apple Silicon to RISC-V does not seem to be implied by the job requisition. Albeit, I applied to that requisition, so maybe I read it a bit more carefully than you did? What is clear, is that Apple apparently believes publicly that it is worth investing time and resources and a salary for an employee who is focused on RISC-V related research. Given that I have already invested a lot of my personal time and energy and money in RISC-V research, I thought it might be neat to get paid for it, but Apple isn't exactly a top choice of potential employers for me, particularly given the AppleToo "context" in recent events.

I could just as easily posit that the AArch64 spec is "still a work in progress" and that we are likely to see Apple M1X or Apple M2 iterations in the future. That is a red herring and, fundamentally, not a detraction of RISC-V for also sharing such properties.

To paraphrase Josh Paetzel, "all software is beta".

Fortunately I do (not in the topic of CPU design though, here I am memory a hobbyist ;))

Ooph. Well, reading that feels as if you are pissing in my cheerios.

Albeit, I was born in Menlo Park and grew up around Xerox Altos and Stars, and saw network computing before most people had even heard of a modem. While my first formal instruction in programming was circa 1981 at a junior college, I spent too much of my youth fixing and repairing other people's code and hardware, while simultaneously being hazed by PhDs and CIA operatives who forced me to breadboard hardware prototypes and use punch cards (in the 1980s? SERIOUSLY!? Total dick behavior) before they would "grace me" with the "privilege" of running my own code on their systems. My parents were luddites who, despite having already paid for their son to take programming classes, ended up drinking a near beer while watching the 1984 Super Bowl and buying a Macintosh 128, which was an affront to my sensibilities, not only because it was ridiculously overpriced, but also: black and white, and without so much as a BASIC interpreter. A C64 would have been a superior investment and programming environment in that era.

Meanwhile, friends who were born into better, or at least richer, families had computers, plural, and much nicer ones, such as Commodore Amigas and, more rarely, Silicon Graphics workstations. A couple of those friends got jobs at the Naval Postgraduate School, where I would routinely get invited to help them port code to Sun Microsystems SunOS workstations, and more rarely SGI Indys, Indigo2s, and Reality Engine Onyx2 systems. I was told that some of my code was even running on a $30 million Cray at Fleet Numerical, but no one ever allowed me to even touch the thing myself.

Note: I was not paid for that.

I was also: not anyone's student.

In more recent years, people have developed a term for that, namely: "stealth IT".

Which is a really nice way of saying: "exploiting minors for their labor".

That was under the purview of PhDs.

So, I do not, as a general rule, take too kindly to higher degrees. As it was, it took me five years and no fewer than a private college, a junior college, and a public university to get so much as a B.A.

Meanwhile, though I have had additional coursework from vocational colleges in computer security (which was, candidly, years if not decades behind what I had been doing privately), from schools in esoterica, and from other vendor-specific certification courses paid for by employers over the decades, my experience with PhDs remains profoundly negative. I have befriended a handful at best, and though I have pursued discourse in doctoral programs - making efforts over years, in subjects of interest, with researchers who had published findings I considered to have some merit - precisely ONE has ever been so polite as to email me back a rejection letter. All of the others, one of whom is an ACM member who shared beers with me and went so far as to tell me to reach out to him for additional studies, never replied after I made attempts.

So, while I am content to continue with lifelong learning and advancing the state of the art, I find that the ivory tower of academia seems markedly reserved for those who, by most appearances, were born into privilege that I will never attain no matter how many systems I have held root or wheel or sudoers or Enterprise Admins, or outfence level abstractions on.

Nonetheless, I do not, as a general rule, go out of my way to detract from others.

Your writings, as related to RISC-V, from my vantage and perspective, seem categorically aligned against it - profoundly negative, to the point of being an example of the idiom "nipping it in the bud" - even going so far as to explicitly profess, in your own words, a preference for the much less fleshed-out and younger proposal of ForwardCom.

I was friends with Doug Engelbart and other similar luminaries, and I personally feel the level of despondency I could see in his eyes at a planet full of know-it-alls misusing and abusing technology. I would personally rather be annihilated and never again reincarnated than continue to persist in the samsaric levels of hell, coexisting with the insufferable trolls, bullies, and abusers which have been the reality of most of my corporeal existence as a human in this incarnation; but, empirically having already transcended the quantum suicide immortality paradox long ago, I find myself without much in the way of recourse, let alone solution, to such conundrums - instead continually encountering argumentative trolls, or brazenly disingenuous individuals, with or without PhDs, in the limited realms of interest I do still find online, as rare as they may be.

So, while you may be a PhD:

I am not your doctoral student, and this is not a thesis defense. So please, rather than attempt to reframe things again, or wonder what you might possibly have done to offend me, go back and reread all of my references, and ask yourself why you are playing devil's advocate.

One last quip: "here I am memory a hobbyist"

Did you mean: "here I am merely a hobbyist"?

Because, if so, please find better hobbies than replying to me. Sincerely: go solve homelessness or world hunger, and stay t.f. away from me.
 
Would be interesting to compare that CPU to an S6 in the Apple Watch. The benchmark you have linked has a lot of problems, btw… first of all, trivial microbenchmarks are of more academic interest (I'm not surprised that a simple core does relatively well in naive ALU tests, but how will it perform on real-world code with indirect branches and cache misses?). Then the author takes 15W TDP as Ryzen's power consumption, when it is probably closer to 30 watts for the duration of that test.

I would be much more interested in seeing some lighter SPEC or browser benchmarks. I wonder why the authors didn’t provide those.
The author also doesn't seem to know about the command-line powermetrics tool for macOS.
 
Yes, and you repeated another typo, "RISC-C" at the end of your response. So, if I am doing some due diligence to proofread and edit my posts, why can't you be bothered to keep up?

I make a lot of typos. Sorry if it bothers you. While I do proof-read my posts, I can't catch them all (or even most of them). I am afraid you will have to live with it.

That is not the claim. You are the one who, by my reading, erroneously implied that RISC-V has no virtualization provisions; I pointed out that it already does. Neither Intel's x86 nor ARM has virtualization provisions as part of its base ISA.

You are nitpicking. My point is that the official, ratified RISC-V spec does not have provisions for efficient virtualisation.

Guaranteeing binary compatibility is not something I have read anyone even advocating for since perhaps the era of decrepit HP 3000 systems, and having administered those as well: guess what? THEY WERE NOT BINARY COMPATIBLE EITHER. Only specific revisions were binary compatible. To profess that in 2021 we do not have JIT compilation, DLLs, and other alternatives to static binaries seems, once again, like screaming ignorance in a field full of object code that is not binary compatible.

You really seem intent on taking this discussion far away from the original context. Listen, I am not against experimental functionality. I also don't claim that all hardware has to be binary compatible. But the original topic of this thread was the thought of Apple Silicon moving from ARM to RISC-V. Binary compatibility with future hardware is a hard prerequisite there. You can use a draft ISA in an experimental product. You cannot use a draft, subject-to-change ISA in a product that you intend to ship to millions of users and that you want to support for years to come across multiple software and hardware releases.

Why on Earth, or any other planet, you would think that the base ISA needs to somehow guarantee compatibility across however others may choose to implement it is not something I will ever be able to fathom at this stage in my existence.

Because I think it's really silly to maintain different compiler branches just because different companies decide to implement the same functionality in different ways. Again, I am talking about general-purpose personal computing here, not specialized processors or experimental research hardware. Once you have a product that ships millions of units and has to be supported for years to come, you don't have the flexibility to "play around with stuff" anymore. That's why standards exist: so that I can write code and not worry that the next hardware or software release of my platform will break everything.

To harp instead on SVE being an advantage for ARM because it is "ratified": I guess you failed to acknowledge that RVV, despite not being "ratified," already has an example/reference implementation from last year:

This is great. So does it mean that if I write a program using vector intrinsics that runs on that implementation, it will also run, without changes, on any future RISC-V-compatible CPU that implements the vector extension? If not, I am not interested (at least not in the context of this thread, which is about Apple and RISC-V-based Apple Silicon). Again, that's why you need standards: so that your implementation does not break when a vendor randomly decides to "shake things up".
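Concretely, this is the kind of code I mean: a rough saxpy sketch using the RISC-V vector intrinsics as currently proposed. Treat it as illustrative, not canonical; the __riscv_-prefixed names below follow the newer intrinsics proposal, while earlier drafts spelled them without the prefix, which is exactly the churn I am worried about.

/* Rough sketch of saxpy (y = a*x + y) with the RISC-V vector
   intrinsics proposal; names and types have shifted between
   draft revisions, so verify against your toolchain. */
#include <stddef.h>
#include <riscv_vector.h>

void saxpy(size_t n, float a, const float *x, float *y) {
    while (n > 0) {
        size_t vl = __riscv_vsetvl_e32m8(n);            /* strip length for this pass */
        vfloat32m8_t vx = __riscv_vle32_v_f32m8(x, vl); /* load a strip of x */
        vfloat32m8_t vy = __riscv_vle32_v_f32m8(y, vl); /* load a strip of y */
        vy = __riscv_vfmacc_vf_f32m8(vy, a, vx, vl);    /* vy += a * vx */
        __riscv_vse32_v_f32m8(y, vy, vl);               /* store the result */
        x += vl; y += vl; n -= vl;
    }
}

If that source has to be rewritten whenever a new draft renames the intrinsics, the existence of a reference implementation doesn't help me ship software.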

I consider libre/free open source to be an inherent advantage over proprietary implementations of any technology. That is not unique to RISC-V, but it is most certainly not an advantage that ARM's AArch64, Intel's x86, or AMD64 can claim, at all.

Philosophical points that are hardly relevant to the current discussion. Apple ships hundreds of millions of devices per year. They need a stable, robust hardware interface. Whether it's proprietary or open-source is hardly relevant at this point.


As for your professed "essential functionality": I am tired of repeatedly providing counterexamples, with virtualization being part of the core RISC-V effort and "data parallel" implementations already existing as well. Why you seem to fail to grasp that is beyond me, draft phase or not. I can, and have, kept ahead of draft specifications and RFCs for decades.

I am not really sure how to reply to this. We seem to be talking past each other. I am talking about the current lack of a stable spec; you are "refuting" me with examples of draft implementations. I can quite imagine it being tiresome, since it's a futile exercise.

The possibility of Apple moving Apple Silicon to RISC-V does not seem to be implied by the job requisition.

No, it does not. Which is exactly what I and other people have been arguing. Did you read the entire thread? Because it seems to me that you are taking my comments (made explicitly in relation to the suggestion that Apple might use RISC-V instead of AArch64 in the future) out of context and assuming that I somehow have a personal vendetta against RISC-V (I do not) or am trying to defame it somehow (I am not).

Albeit, I applied to that requisition, so maybe I read it a bit more carefully than you did? What is clear is that Apple apparently believes, publicly, that it is worth investing time, resources, and a salary in an employee focused on RISC-V-related research.

Exactly. And this should be the end of the thread. I mean, I am very happy to discuss, in objective terms, the merits of various ISA designs, as much as my flawed hobbyist understanding will allow me to.

Ooph. Well, reading that feels as if you are pissing in my cheerios.

Albeit, I was born in Menlo Park and grew up around Xerox Altos and Stars, and saw networked computing before most people had even heard of a modem. While my first formal instruction in programming was circa 1981 at a junior college, I spent too much of my youth fixing and repairing other people's code and hardware, while simultaneously being hazed by PhDs and CIA operatives who forced me to breadboard hardware prototypes and use punch cards (in the 1980s? SERIOUSLY!? Total dick behavior) before they would "grace me" with the "privilege" of running my own code on their systems. My parents were luddites who, despite having already paid for their son to take programming classes, ended up drinking a near beer while watching the 1984 Super Bowl and buying a Macintosh 128, which was an affront to my sensibilities: not only was it ridiculously overpriced, it was black and white and had not so much as a BASIC interpreter. A C64 would have been a superior investment and programming environment in that era.

Meanwhile, friends who were born into better, or at least richer, families had computers, plural, and much nicer ones, such as Commodore Amigas and, more rarely, even Silicon Graphics workstations. A couple of those friends got jobs at the Naval Postgraduate School, where I would routinely get invited to help them port code to Sun Microsystems SunOS workstations, and more rarely to SGI Indys, Indigo^2s, and Reality Engine Onyx^2 systems. I was told that some of my code was even running on a $30 million Cray at Fleet Numerical, but no one ever allowed me to so much as touch the thing myself.

Note: I was not paid for that.

I was also: not anyone's student.

In more recent years, people have developed a term for that, namely: "stealth IT".

Which is a really nice way of saying: "exploiting minors for their labor".

That was under the purview of PhDs.

So, I do not, as a general rule, take too kindly to higher degrees. As it was, it took me five years and no fewer than three institutions, a private college, a junior college, and a public university, to get so much as a B.A.

Ok, well, sorry for your experience I guess? I am not from the USA so I didn't have to go through the weird system you guys call education, so our mileages might vary.

Because, if so, please, find better hobbies than replying to me. Sincerely, go solve homelessness, or world hunger and stay t.f. away from me.

Wow. And I thought you were saying you are not angry? :D
 