Re: Re: cisc/risc
Originally posted by Rincewind42
In the end, there is no true CISC or RISC anymore. Traditional CISC chips have been taking on traits from RISC designs to get more speed, and RISC chips have been gaining ever larger instruction sets in order to do more advanced things. RISC has increasingly become less about instruction set size, and more about instruction set complexity - it's about the only distinguishing factor between CISC & RISC chips. No matter how large the instruction set gets on a RISC chip, each instruction tends to do relatively little and use very few resources. On a CISC chip the large resource hogging instructions that it accepts may be broken down into simpler less intensive instructions on the fly to be fed into a RISC-like core.
I never bought this whole "there is no CISC or RISC" argument that the Wintel crowd has been popularizing for several years now. It is the stuff of FUD, intended to convince potential customers of RISC processors that staying with the x86 line is A-OK. If there were so little difference, why is Intel so desperately trying to make the transition to Itanium? (Right, Itanium isn't "RISC", it is "EPIC". Insert hyucks here.)
On today's x86 lines, the need to translate x86 instructions to RISC op-codes is an albatross around that design's neck. No one writes software compiled for those op-codes---everyone compiles for x86. x86 software must contend with the ridiculous complexity of x86 instructions, as well as the nightmare mess of the x86 register set.
I would only accept the "RISC-like" definition of today's Wintel chips if software were no longer compiled for x86 and could totally sidestep the impossible mess of x86 registers. The outer wrapper of microcode and the type of software (compiled for RISC or for CISC) cannot be ignored in the RISC/CISC equation. If you were to compile for the "op-codes" and skip the whole x86 microcode baggage (the "translation", that is), you would have software that won't even run across the complete x86 family! AMD and Intel use different op-codes, and even different Pentiums use different op-codes from each other. Only x86-level code is *safe* for software compatibility, and that means CISC software.
I would like to single out one sentence that I take serious issue with:
Originally posted by Rincewind42
RISC has increasingly become less about instruction set size, and more about instruction set complexity - it's about the only distinguishing factor between CISC & RISC chips.
There is a common misconception that RISC means fewer instructions; this has never been the case! In fact, even the 601 had as many or more instructions than its CISC counterparts of the time. RISC chips will often have more instructions precisely because they are simpler instructions, taking fewer cycles to complete per instruction than their CISC counterparts. You often need more instructions to do the work of a few very complex ones. CISC chips have many complex, specialized instructions that can take many cycles to complete, thus reducing both the size of the compiled code and the need for a huge set of primitive operations. The philosophy behind RISC rejects the need for complex instructions and pushes for most instructions to be primitive and to complete in 1 to 2 cycles. What this means is that RISC programs will be bigger when compiled (more primitive instructions, fewer "compact"/complex ones). And it goes without saying that RISC likes lots of general-purpose registers.
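To make the trade-off above concrete, here is a toy sketch in Python. The instruction names and cycle counts are made up for illustration and are not taken from any real ISA: one hypothetical memory-to-memory CISC add versus the equivalent load/add/store sequence of primitive ops.

```python
# Toy illustration (hypothetical instruction names and cycle counts):
# the same memory-to-memory add written as one complex CISC instruction
# versus an equivalent sequence of primitive RISC operations.

cisc_program = [
    ("ADD [a], [b]", 5),   # one complex instruction: load, add, store in one op
]

risc_program = [
    ("LOAD  r1, a", 1),    # each primitive op is assumed to take 1 cycle
    ("LOAD  r2, b", 1),
    ("ADD   r1, r2", 1),
    ("STORE a, r1", 1),
]

def summarize(program):
    """Return (instruction count, total cycles) for a toy program listing."""
    return len(program), sum(cycles for _, cycles in program)

cisc_count, cisc_cycles = summarize(cisc_program)
risc_count, risc_cycles = summarize(risc_program)

print(f"CISC: {cisc_count} instruction(s), {cisc_cycles} cycles")
print(f"RISC: {risc_count} instruction(s), {risc_cycles} cycles")
```

With these made-up numbers the RISC version is four times the code size, yet every one of its instructions completes in a single cycle---which is exactly the point: "reduced" refers to the complexity of each instruction, not the count.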
The very acronym for RISC is the source of this confusion, so I understand why so many people get this wrong (although I suspect some Wintel types get it wrong on purpose...!) It has been suggested that RISC and CISC be renamed to something clearer and truer to their fundamental philosophical differences. For RISC: "Reduce Instruction Set to the Compiler". Alas, I can't remember what was suggested for a new CISC name, but it meant something like "let the chip handle the complexity of the instruction set, not the compiler". The point is this: when you compile for a RISC chip, the compiler does the work of figuring out how everything is scheduled and which instructions are needed to complete each relatively primitive operation, each of which typically takes the chip only a couple of clock cycles. When you compile for CISC, the compiler works primarily with those higher-level instructions, many of which may take many cycles to execute because the processor must further decompose them into smaller and smaller operations. The chip is doing that decomposition---not the compiler! This is very un-RISC-like behavior. Remember that since different members of a CISC family may have different "RISC cores" (and hence different op-codes), the only safe way to guarantee software compatibility with all members of that family is to compile your code for the high-level (read: complex) instruction set.
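The chip-does-the-work versus compiler-does-the-work distinction can be sketched as a toy Python model. Everything here is hypothetical (the op names, the decode table, the expansion chosen), but it shows the shape of the argument: a CISC front end expands a complex instruction into micro-ops at decode time, while a RISC compiler emits the primitive operations itself, so the chip has nothing comparable to decompose.

```python
# Toy sketch (hypothetical op names): a CISC front end decomposing a complex
# instruction into micro-ops at decode time, versus a RISC compiler emitting
# the primitive operations ahead of time.

# Hypothetical decode table. A different implementation of the same CISC
# family could use a different table (different "RISC core" op-codes),
# which is why software must target the outer instruction set to stay
# compatible across the whole family.
DECODE_TABLE = {
    "ADD [a], [b]": [
        "uop_load  r1, a",
        "uop_load  r2, b",
        "uop_add   r1, r2",
        "uop_store a, r1",
    ],
}

def cisc_decode(instruction):
    """The chip does the decomposition: look up the micro-op expansion."""
    return DECODE_TABLE[instruction]

# The compiler for a RISC target emits the primitives directly; no further
# decomposition is needed inside the chip.
risc_compiled = ["load r1, a", "load r2, b", "add r1, r2", "store a, r1"]

micro_ops = cisc_decode("ADD [a], [b]")
print(len(micro_ops), "micro-ops produced by the chip at decode time")
print(len(risc_compiled), "instructions produced by the compiler ahead of time")
```

Both paths end up with the same primitive work; the difference is *who* performed the decomposition, and whether the resulting op-codes are a stable, compile-to target or a private implementation detail that changes between chip generations.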
Some RISC chips also decompose instructions, true, but those are usually simple instructions to begin with. A later implementation of a chip may decompose a 2-cycle instruction into two 1-cycle op-codes, based on testing that indicated the decomposition would yield a further performance advantage. The compiler is still doing most of the "work"; decomposition in this context is more of an optimization by the processor than the kind of complex translation needed by CISC chips with their so-called "RISC cores".
It is the software and the role of the compiler that make a chip RISC or CISC, not just the chip design. And as long as you write CISC code for that outer high-level microcode wrapper, no matter how "RISC-like" the chip's "core" may be, it is still a *CISC* chip.