I made a typo, sorry. Still, I can't shake the feeling that you are being unnecessarily hostile. It is also strange that you are writing prolonged tirades about my typo but ignore the discussion at hand (about essential RISC-V functionality existing only as a specification draft).
Yes, and you repeated another typo, "RISC-C" at the end of your response. So, if I am doing some due diligence to proofread and edit my posts, why can't you be bothered to keep up?
It seems rude to me, at a minimum.
So you are claiming that efficient hypervisors can be realized using only ratified RISC-V (without the H extensions)? Why even bother with the H extension then?
That is not the claim. You are the one who, by my reading, erroneously implied that RISC-V has no virtualization provisions; I pointed out that it already does. Neither Intel's x86 nor ARM has virtualization provisions as part of its base ISA.
The proposed, not-yet-ratified H extension, which presumably offers more robust virtualization support for RISC-V, may improve future virtualization offerings, but that is not a detraction from the current RISC-V virtualization provisions in the base CPU ISA.
I really do not see what is so difficult for you to understand about that.
To make it clear: to me, it seems as if you are argumentatively framing RISC-V as being at a disadvantage for not implementing as much in the way of virtualization extensions as Intel and ARM have provided. That is not an apples-to-apples comparison and does not bear up to scrutiny. It reads like BS to me, and you should drop it entirely from any detractions you may have against RISC-V for the time being, as contrasted to other CPU ISAs which do not have *any* virtualization provisions in their base specifications, such as x86/AMD64 and ARM.
You can start by responding to the fact that the vector extension is not yet ratified, and that the draft has changed over the last few years. That is precisely what I am talking about: unstable functionality is fine for research and (limited) production use if you have tight control over the software, but not for general-purpose use, where you have to guarantee binary compatibility over a long period of time.
I guess I have administered many things in production which never underwent ratification, so I do not see any problem with that. Indeed, that tends to be the entire purview of commercial computing and where "competitive advantage" tends to play out.
Guaranteeing binary compatibility is not the sort of thing I have read anyone even advocating for since perhaps the era of decrepit HP3000 systems, and having administered those as well: guess what? THEY WERE NOT BINARY COMPATIBLE EITHER. Specific revisions were binary compatible. To profess that in 2021 we do not have JIT compilation, dynamic linking, and other paradigms beyond static binaries seems, again, like screaming ignorance in a field which is full of object code that is not binary compatible.
Sure, binary compatibility may be nice in theory, but in practice? Could you run Atari ST code on a Commodore Amiga? They both had MC68000 CPUs, so surely they should have been "binary compatible", right? I think that level of disingenuous, question-begging inanity is precisely what I infer from your writing, over and over again, oblivious to the reality of the larger landscape of how CPU ISAs are realized in various implementations, many of which may be entirely unique to a vendor or code branch.
That sort of thing doubtlessly might have upset me in my early years as an assembly programmer, but I am not 10 anymore. I would expect others have grown up as well.
The only thing I am pointing out is that the functionality you mention is not ratified, hence not formally part of the spec and hence not ready for general usage. I am not sure what about this statement caused such a negative reaction in you.
Well, you aren't my psychologist, so without getting into my clinically diagnosed persistent depressive disorder, let's just leave the negative reactions in me out of it. I am also not here, nor anywhere, for a doctoral defense.
Who is arguing that they should be part of the core? The point is that no data-parallel RISC-V extension has been ratified as of today. You have experimental hardware implementations, sure. But there is no guarantee of ISA compatibility since the spec has not been finalized yet.
*sigh* Whatever your professed point is, it is lost on me. I already provided examples of prior art which demonstrate extant iterations of massively parallel RISC-V implementations. Why on Earth, or any other planet, you would think that the base ISA needs to somehow guarantee ISA compatibility for how others may choose to implement it is not something I will ever be able to fathom at this stage in my existence.
You might as well ponder: "there is no guarantee that FreeBSD's Linux compatibility layer will function as intended, the code is always evolving and has not been finalized yet."
That does not preclude the compatibility layer from actually functioning in practice, not merely in theory. I see zero reason for a CPU ISA to even attempt to offer such "guarantees". If anything, given that RISC-V is by design libre/free open source, its licensing, I believe, explicitly disclaims any implied guarantees or warranties.
If you want to get really pedantic, you can refer to:
https://github.com/riscv/riscv-isa-manual/blob/draft-20200229-27b40fb/LICENSE and note that warranties are explicitly disclaimed.
No, but I am using low-level intrinsics to provide efficient implementations of performance-critical code. That's beside the point though. Data-parallel instructions are used for much more nowadays than maximizing throughput over large data sets. Packed SIMD instructions are useful for dealing with smaller objects where you care about latency rather than throughput.
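For what it's worth, here is a minimal sketch of that latency-oriented use of packed SIMD, using the SSE2 intrinsics that are baseline on x86-64 (the `add4` helper name is my own, purely illustrative): one `_mm_add_epi32` adds all four lanes of a small fixed-size object, such as an RGBA pixel, in a single instruction instead of a four-iteration scalar loop.

```c
#include <emmintrin.h> /* SSE2 intrinsics, baseline on x86-64 */

/* Add two 4-lane int32 "small objects" (e.g. RGBA pixels) with one
   packed instruction instead of a 4-iteration scalar loop. */
static void add4(const int a[4], const int b[4], int out[4]) {
    __m128i va = _mm_loadu_si128((const __m128i *)a); /* unaligned load */
    __m128i vb = _mm_loadu_si128((const __m128i *)b);
    _mm_storeu_si128((__m128i *)out, _mm_add_epi32(va, vb));
}
```

The point is latency on one tiny object, not throughput over an array: the whole thing compiles down to two loads, one packed add, and a store.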
*sigh* Beating that dead horse again, not going to respond in detail with counter examples, again. Your claim that RISC-V implementations are not "data parallel" is erroneous. Plain and simple.
If I didn't know better I would almost think that you are trying to be insulting for no reason. Maybe you could keep your attention on the topic (concretely: the fact that a substantial part of RISC-V today exists as an unratified draft) rather than wasting everyone's time on insults?
Ah, so you profess to know better? Well, I suppose if I wanted to be insulting, I would have insinuated something along the lines of "your mother was a hamster and your father smells of elderberries". As previously stated, I am EXASPERATED with your replies. If you read my responses as a personal attack, that inference would be solely on your shoulders. If you want to lower the discourse to that level, I will just block or ignore you entirely rather than waste more of my words attempting to be anywhere approaching rational.
The "substantial part of RISC-V today exists as unratified draft" seems to ignore previously shipping silicon such as the SiFive HiFive1 (and Rev B), The SiFive Unleashed and the SiFive Unmatched, as just a few examples. Will RISC-V continue to evolve? Doubtlessly. I guess since I have spent much of my career actively deploying things which never underwent IEEE or IETF or other ratification processes, I do not consider ratification by committee to be a particularly substantial detractor. I prefer to toil on prototypes and advance the state of the art, not get trapped by rehashes and reimplementations years or decades after the fact.
Ugh, you are getting a bit tangled here. First, the "V" in RISC-V stands for "5", according to the official history at least. Second, the vector extension to RISC-V was first proposed in 2015 and has not yet been ratified (although I hear it is in the final stages of approval). SVE is an advantage for ARM because it is a ratified, standard extension (part of the v9 core) with mature compiler support. I am not competent enough to provide a feature-by-feature comparison of RVV vs. SVE. At first glance it seems to me that RVV is targeting streaming computations, while SVE/SVE2 are more general in scope. Not quite sure what you mean by "afterthought". I don't see how anyone can argue that SVE qualifies as an afterthought while simultaneously arguing that RVV, with its stateful configuration and execution model, does not. Now, AVX512, that's an afterthought, and a particularly awful one.
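To unpack "stateful configuration and execution model" a little: RVV code is typically strip-mined, with a `vsetvli` instruction at the top of each loop pass asking the hardware how many elements (`vl`) it will handle this time. A plain-C sketch of that pattern, with `VLMAX` standing in for the hardware's maximum vector length (both `VLMAX` and `vec_add` are illustrative names here, not part of any spec):

```c
#include <stddef.h>

/* Stand-in for the hardware's maximum vector length (RVV's VLMAX). */
#define VLMAX 8

/* Strip-mined vector add: each pass processes vl = min(n, VLMAX)
   elements, mirroring what `vsetvli` would return on RVV hardware. */
static void vec_add(const float *a, const float *b, float *out, size_t n) {
    while (n > 0) {
        size_t vl = n < VLMAX ? n : VLMAX;   /* the "vsetvli" step */
        for (size_t i = 0; i < vl; i++)      /* one vector op on real RVV */
            out[i] = a[i] + b[i];
        a += vl; b += vl; out += vl; n -= vl;
    }
}
```

The upside of this model is that the same binary runs unmodified on implementations with different vector register widths; the downside is exactly the per-loop configuration state being argued about above.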
*sigh* OK, "a bit tangled"? While I do not dispute that the V in RISC-V is pronounced "five", apparently you failed to read one of the articles which I previously linked.
To make it stupidly obvious to you and anyone else who may be unfortunate enough to read this discourse, here is an excerpt:
"This was in many ways an alternative to RISC, where one took a vector processing approach instead. In fact the RISC-V guys may be more influenced by having worked last on vector processing than having invented the original RISC. Patterson and others really began believing in the power and elegance of vector processing from their IRAM project. Hence the V in RISC-V actually stands for both 5 and for Vector. RISC-V was from the beginning conceived as an architecture for vector processing."
To harp instead on your "SVE" being an advantage for ARM because it is "ratified" I guess you failed to acknowledge that RVV despite not being "ratified" already has an example/reference implementation from last year:
By "afterthought" I meant that vector processing is a *CORE PRINCIPLE* of the RISC-V CPU ISA, RVV extensions notwithstanding, whereas in ARM (going back to 1980s-vintage ARM) SVE is absolutely not part of the core ISA (hence the E in SVE is "Extension"), and it came about much more recently in its design, specifically for AArch64.
Yes, there are additional afterthoughts in ARM and Intel implementations as well. IMHO, both are relatively awful, though perhaps for completely different reasons than you may think.
My position on RISC-V is that while it is a decent, minimalist ISA, it offers no inherent advantage over ARM's AArch64, and much of what I consider essential functionality (efficient virtualisation, data-parallel instructions, bit manipulation) is still in the draft phase. I also find some of RISC-V's design decisions questionable. Still, I am looking forward to the time when the essential functionality is ratified and general-purpose high-performance RISC-V processors are available. It would indeed be great to have a good standard ISA that is not locked behind corporate IP. That said, I think that there are much more interesting proposals, such as ForwardCom by Agner Fog.
I consider libre/free open source to be an inherent advantage over proprietary implementations of any technology. That is not unique to RISC-V, but it is most certainly not an advantage which ARM's AArch64, Intel's x86, or AMD64 can claim, at all.
As for your professed "essential functionality": I am tired of repeatedly providing counterexamples showing virtualization as part of the core of RISC-V, and "data parallel" implementations as already existing. Why you seem to fail to grasp that is beyond me, draft phase or not. I can, and have, kept ahead of draft specifications and RFCs for decades. Treading over known ground is tiresome to me, and feels as if it is a waste of resources better spent doing more or less anything else.
If you think that ForwardCom is a more interesting proposal, given that in 2021 there is no shipping silicon, an FPGA soft-CPU core at best from what I can glean, and it is only a proposal from 2017, with substantially less vendor buy-in and pedagogical coursework, I do not know what to tell you.
Maybe in TIME ForwardCom will come closer to offering more than RISC-V does, but presently, it does not merit such an evaluation. I am not going to waste more words detracting from it, because it probably needs no detractions. Much earlier in this thread, I already explicitly mentioned something to the effect that in 20-30 years, when consumers are benefiting from the RISC-V research of today, we will probably consider ARM comparatively arcane and be contemplating whatever should be replacing RISC-V. Maybe that will be ForwardCom; maybe it will be something we haven't even begun to contemplate yet. The future is unwritten for now.
Let's also pay attention to the context. We were discussing the possibility of Apple moving Apple Silicon to RISC-C in the near future. Based on the fact that the RISC-V spec is still a work in progress and that the RISC-V ISA has no advantage over Aarch64, I don't think that such a hardware transition makes any sense for Apple at this point.
"RISC-C" argh, again, no, "RISC-V" please, it's six characters, get them correct!
The possibility of Apple moving Apple Silicon to RISC-V does not seem to be implied by the job requisition. Albeit, I applied to that requisition, so maybe I read it a bit more carefully than you did? What is clear, is that Apple apparently believes publicly that it is worth investing time and resources and a salary for an employee who is focused on RISC-V related research. Given that I have already invested a lot of my personal time and energy and money in RISC-V research, I thought it might be neat to get paid for it, but Apple isn't exactly a top choice of potential employers for me, particularly given the AppleToo "context" in recent events.
I could just as easily posit that the Aarch64 spec is "still a work in progress" and that we are likely to see Apple M1X or Apple M2 iterations in the future; that is a red herring and, fundamentally, not a detraction of RISC-V for sharing such properties.
To paraphrase Josh Paetzel, "all software is beta".
Fortunately I do (not in the topic of CPU design though, here I am memory a hobbyist)
Ooph. Well, reading that feels as if you are pissing in my cheerios.
Albeit, I was born in Menlo Park and grew up around Xerox Altos and Stars and saw network computing before most people had even heard of a modem. While my first formal instruction in programming was circa 1981 at a junior college, I spent too much of my youth fixing and repairing other people's code and hardware, while simultaneously being hazed by PhDs and CIA operatives who forced me to breadboard hardware prototypes and use punch cards (in the 1980s? SERIOUSLY!? Total dick behavior) before they would "grace me" with the "privilege" of running my own code on their systems. My parents were luddites who, despite having already paid for their son to take programming classes, ended up drinking a near beer while watching the 1984 Super Bowl and buying a Macintosh 128, which was an affront to my sensibilities, not only because it was ridiculously overpriced, but also: black and white, and had not so much as a BASIC interpreter. A C64 would have been a superior investment and programming environment in that era.
Meanwhile, friends who were born into better, or at least richer, families had computers, plural, and much nicer ones such as Commodore Amigas, and rarely even Silicon Graphics workstations. A couple of those friends got jobs at the Naval Postgraduate School, where I would routinely get invited to help them port code to Sun Microsystems SunOS workstations, and more rarely SGI Indys and Indigo^2 and RealityEngine Onyx^2 systems. I was told that some of my code was even running on a $30 million Cray at Fleet Numerical, but no one ever allowed me to even touch the thing myself.
Note: I was not paid for that.
I was also: not anyone's student.
In more recent years, people have developed a term for that, namely: "stealth IT".
Which is a really nice way of saying: "exploiting minors for their labor".
That was under the purview of PhDs.
So, I do not, as a general rule, take too kindly to higher degrees. As it was, it took me five years and no fewer than a private college, a junior college and a public University to get so much as a B.A.
Meanwhile, I have had additional coursework from vocational colleges in computer security (which was, candidly, years if not decades behind what I had been doing privately), schools in esoterica, and other vendor-specific certification courses paid for by employers over the decades, yet my experience with PhDs remains: profoundly negative. I have potentially befriended a handful at best, and though I have pursued discourse in doctoral programs, over years of making efforts in subjects of interest with researchers who have published findings I considered to have some merit, precisely ONE has ever been so polite as to email me back a rejection letter. All of the others, one of whom is an ACM member with whom I shared beers and who went so far as to tell me to reach out to him for additional studies, never replied after I made attempts.
So, while I am content to continue with lifelong learning and advancing the state of the art, I find that the ivory tower of academia seems markedly reserved for those who, by most appearances, were born into privilege that I will never attain no matter how many systems I have held root or wheel or sudoers or Enterprise Admins, or outfence level abstractions on.
Nonetheless, I do not, as a general rule, go out of my way to detract from others.
Your writings, as related to RISC-V, from my vantage and perspective, seem as if they are categorically aligned against it, profoundly negative to the point of being an example of the idiom "nipping it in the bud", even going so far as to explicitly profess a preference for the much less fleshed-out and younger ForwardCom proposal in your own words.
I was friends with Doug Engelbart and other similar luminaries, and I personally feel the level of despondency I could see in his eyes with a planet full of know-it-alls misusing and abusing technology. I would personally rather be annihilated and never again reincarnated than continue to persist in the samsaric levels of hell coexisting with insufferable trolls, bullies and abusers, which has been the reality of most of my corporeal existence as a human in this incarnation, but empirically having already transcended the quantum suicide immortality paradox long ago, find myself without much in the way of recourse let alone solution to such conundrums. Instead, continually encountering argumentative trolls, or brazenly disingenuous individuals, with or without PhDs in the limited realms of interest I do still find online, as rare as they may be.
So, while you may be a PhD:
I am not your doctoral student, and this is not a thesis defense, and please, rather than attempt to reframe things again, or wonder what you might have possibly done to potentially offend me, go back and reread all of my references, and ask yourself why you are playing devil's advocate?
One last quip: "here I am memory a hobbyist"
Did you mean: "here I am merely a hobbyist"?
Because, if so, please, find better hobbies than replying to me. Sincerely, go solve homelessness, or world hunger and stay t.f. away from me.