Re: Re: cisc/risc

Originally posted by Rincewind42
In the end, there is no true CISC or RISC anymore. Traditional CISC chips have been taking on traits from RISC designs to get more speed, and RISC chips have been gaining ever larger instruction sets in order to do more advanced things. RISC has increasingly become less about instruction set size, and more about instruction set complexity - it's about the only distinguishing factor between CISC & RISC chips. No matter how large the instruction set gets on a RISC chip, each instruction tends to do relatively little and use very few resources. On a CISC chip the large resource hogging instructions that it accepts may be broken down into simpler less intensive instructions on the fly to be fed into a RISC-like core.

I never bought this whole "there is no CISC or RISC" argument that has been popularized by the Wintel crowd for several years now. It is the stuff of FUD, intended only to convince potential customers of RISC processors that staying with the x86 line is A-OK. If there were so little difference, why is Intel so desperately trying to make the transition to Itanium? (Right, Itanium isn't "RISC", it is "EPIC". Insert hyucks here.)

On today's x86 lines, the need to translate x86 instructions to RISC op-codes is an albatross around that design's neck. No one is writing software which is compiled for the op-codes---they compile it for x86. x86 software must contend with the ridiculous complexity of x86 instructions, as well as the nightmare mess of the x86 register set.

I would only accept the "RISC-like" definition of today's Wintel chips if software were no longer compiled for x86 and could totally sidestep the impossible mess of x86 registers. The outer wrapper of microcode and the type of software (compiled for RISC or for CISC) cannot be ignored in the RISC/CISC equation. If you were to compile for the "op-codes" and skip the whole x86 microcode baggage (the "translation", that is), you would have software that won't even run across the complete x86 family! AMD and Intel use different op-codes, and even different Pentiums use different op-codes from each other. Only x86-level code is *safe* for software compatibility, and that means CISC software.

I would like to single out one sentence that I take serious issue with:
Originally posted by Rincewind42
RISC has increasingly become less about instruction set size, and more about instruction set complexity - it's about the only distinguishing factor between CISC & RISC chips.

There is a common misconception that RISC means fewer instructions; this has never been the case! In fact, even the 601 had as many or more instructions than its CISC counterparts of the time. RISC chips will often have more instructions precisely because they ARE simpler instructions, taking fewer cycles to complete per instruction than CISC chips. You often need more instructions in order to do the work of a few very complex instructions. CISC chips have many complex, specialized instructions that can take many cycles to complete, thus reducing the size of the compiled code and the need for a huge set of primitive operations. The philosophy behind RISC rejects the need for complex instructions and has pushed for most instructions to be primitive and to complete in 1 to 2 cycles. What this means is that RISC programs will be bigger when compiled (more primitive instructions, fewer "compact"/complex ones). And it goes without saying that RISC likes lots of general-purpose registers.
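To put a number on that code-size tradeoff, here is a hypothetical sketch of the same operation ("add two words in memory, store the result in memory") expressed both ways. The mnemonics and cycle counts are made up for illustration; neither list is a real ISA:

```python
# Hypothetical instruction streams for "mem[c] = mem[a] + mem[b]".
# Mnemonics and cycle counts are illustrative only, not any real ISA.

# CISC style: one complex instruction that touches memory directly
# and may take many cycles to complete.
cisc_program = [
    ("ADDM", "a", "b", "c"),      # memory-to-memory add
]

# RISC style: loads, a register-only ALU op, and a store,
# each completing in 1-2 cycles.
risc_program = [
    ("LOAD",  "r1", "a"),         # r1 = mem[a]
    ("LOAD",  "r2", "b"),         # r2 = mem[b]
    ("ADD",   "r3", "r1", "r2"),  # r3 = r1 + r2 (registers only)
    ("STORE", "r3", "c"),         # mem[c] = r3
]

# The compiled RISC code is larger, instruction for instruction.
print(len(cisc_program), len(risc_program))  # → 1 4
```

Same work, four primitive instructions instead of one complex one: exactly the "bigger when compiled" effect described above.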

The very acronym for RISC is the source of this confusion, so I understand why so many people get this wrong (although I suspect some Wintel types get it wrong on purpose...!) It has been suggested that RISC and CISC be renamed to something clearer and truer to their fundamental philosophical differences. For RISC: "Reduce Instruction Set to the Compiler". Alas, I can't remember what was suggested for a new CISC name, but it meant "let the chip handle the complexity of the instruction set, not the compiler". The point is this: when you compile for a RISC chip, the compiler does the work of figuring out how everything is scheduled and which instructions are needed to complete each relatively primitive operation, each of which typically takes the chip only a couple of clock cycles. But when you compile for CISC, the compiler primarily works with those higher-level instructions, many of which may take many cycles to execute, because the processor must further decompose them down and down into smaller and smaller operations. The chip is doing that decomposition---not the compiler! This is very UN-RISC-like behavior. Remember that since different members of a CISC family may have different "RISC cores" (and hence, different op-codes), the only safe way to guarantee software compatibility with all members of that family is to compile your code for that high-level (read: complex) instruction set.

Some RISC chips also decompose instructions, true, but they are usually simple instructions to begin with. For a 2-cycle instruction, a later implementation of that chip may decompose it into two 1-cycle opcodes based on testing which indicated this decomposition would yield even further performance advantages. The compiler is still doing most of the "work"; decomposition in this context is more of an optimization by the processor than the type of complex translation needed by CISC chips with their so-called "RISC cores".
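As a concrete sketch of this kind of cracking, modeled loosely on PowerPC's load-with-update instruction (lwzu, which both loads a word and updates the base register) — the function and tuple encoding here are my own invention for illustration:

```python
# Sketch of "cracking" a 2-cycle RISC instruction into two 1-cycle ops,
# modeled loosely on PowerPC lwzu: rD = mem[rA + disp]; rA = rA + disp.
# The encoding (op, registers, displacement) is invented for this example.
def crack(insn):
    op, *args = insn
    if op == "LWZU":
        rd, ra, disp = args
        return [
            ("LWZ",  rd, ra, disp),   # the load half
            ("ADDI", ra, ra, disp),   # the address-update half
        ]
    return [insn]                     # simple instructions pass through

print(crack(("LWZU", 3, 4, 8)))
print(crack(("ADD", 1, 2, 3)))        # untouched
```

Note how little "translation" is happening: one already-simple instruction becomes two even simpler ones, rather than a many-cycle complex instruction being expanded into a whole internal program.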

It is the software and the role of the compiler that makes a chip RISC or CISC, not just the chip design. And as long as you write CISC code for that outer high-level microcode wrapper, no matter how "RISC-like" the chip "core" may be, it is still a *CISC* chip.
 
All this talk about a G6... I mean, Apple redesigned the case 3 times with the G4 and went from 350 MHz to 1.42 GHz. I can see the G6 in late 2005. But that's just me...
 
Re: Re: Re: cisc/risc

Originally posted by Jon the Heretic
On today's x86 lines, the need to translate x86 instructions to RISC op-codes is an albatross around that design's neck.

You've simplified it to the point of totally misrepresenting reality. The part of the x86 ISA that is executed in the core is a simple (RISCy) subset of the x86 ISA, with the more CISCy parts being executed in software (microcode). Believe it or not, modern RISC cores go the opposite direction, combining simple opcode combinations into more functionally complex opcodes in the core.

An important thing to realize is that it is easier to optimize a modern CPU core for performance if you have a "simple CISC" ISA than a RISC ISA, which is one of the reasons early "simple CISC" cores (e.g. the original Pentium) did so well performance-wise versus RISC architectures. It allows you to make assumptions that reduce the complexity of making more high-performance architectures by explicitly eliminating instruction dependencies that may be ambiguous in RISC cores.

In fact, if you look at modern PPC cores, they actually translate basic RISC opcodes to something that looks more like the simple CISC opcodes natively executed on most current x86 cores. The problem is that the assumptions under which these old ISAs were designed no longer hold with modern CPU cores, such that neither classic RISC nor classic CISC is particularly optimal. Classic RISC is too simple to optimize throughput in the core, and classic CISC tends to have clunky register structures that can be difficult to use well.

An ideal ISA for modern processor cores bundles more functionality in its opcodes than a classic RISC ISA, but without all the unnecessary and extraneous ISA baggage, like nasty register models, found in older traditional CISC ISAs. An example of an ISA that is more closely optimized for modern processor cores is the AMD64 ISA, which uses simple CISC opcodes but uses a clean RISC-like register model.
 
Re: Re: Re: cisc/risc

Originally posted by Jon the Heretic
I never bought this whole "there is no CISC or RISC" argument that has been popularized by the Wintel crowd for several years now. It is the stuff of FUD, intended only to convince potential customers of RISC processors that staying with the x86 line is A-OK. If there were so little difference, why is Intel so desperately trying to make the transition to Itanium? (Right, Itanium isn't "RISC", it is "EPIC". Insert hyucks here.)


There is a common misconception that RISC means fewer instructions; this has never been the case! In fact, even the 601 had as many or more instructions than its CISC counterparts of the time.

I hope no one mistakes length for correctness. Very little in this write-up is correct. I am only going to attend to the two largest mistakes.

As implemented, there is very little difference in the processor cores of today's CPUs caused by the ISA. Anyone wishing to find out more can easily go to ArsTechnica or any one of a number of other sites talking about the CPU core designs. If you are technically inclined, you can often find the architecture presentations by the designers themselves with a quick Google search.

When talking CISC versus RISC, you are only talking about ISAs, and you have to go back to the early-to-mid '80s to discuss real differences. RISC did not suddenly appear in the early '90s with the PowerPC. The PowerPC came relatively late to the game, and I would even argue that it is neither a CISC nor a RISC ISA. Because it came late, it had the advantage of picking the best parts of both camps.

In order to understand what the original RISC ISA designers were trying to do, you need to understand the popular CISC ISAs of the day. On a 32-bit processor, you had some 8-bit instructions, some 16-bit instructions, some 32-bit instructions and some multiword instructions. Never mind the complexity of operands. You had all sorts of memory addressing instructions ranging from simple "fetch the 32-bit word at address A and place it in this 32-bit register" to "fetch the 32-bit word at address (A plus four times value of register B) and place it in address (C plus four times value of register D)".
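To see how much work one of those addressing modes packs in, here is a hypothetical decomposition of the memory-to-memory move above ("mem[C + 4*D] = mem[A + 4*B]") into the kind of primitive steps a RISC compiler would have to emit itself. The mnemonics and temporary register names are invented for illustration, not real VAX or RISC code:

```python
# One VAX-style complex move, spelled out as primitive ALU and
# load/store steps. Everything here is illustrative.
def decompose_scaled_move(A, B, C, D):
    return [
        ("SHL",   "t1", B, 2),       # t1 = B * 4
        ("ADD",   "t1", A, "t1"),    # t1 = A + 4*B   (source address)
        ("LOAD",  "t2", "t1"),       # t2 = mem[A + 4*B]
        ("SHL",   "t3", D, 2),       # t3 = D * 4
        ("ADD",   "t3", C, "t3"),    # t3 = C + 4*D   (dest address)
        ("STORE", "t2", "t3"),       # mem[C + 4*D] = t2
    ]

# One CISC instruction hides roughly six primitive operations.
print(len(decompose_scaled_move("A", "B", "C", "D")))  # → 6
```

That single instruction is doing two shifts, two adds, a load, and a store: exactly the sort of operand complexity the original RISC designers wanted out of the hardware decoder.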

Anyone who has written assembly on a VAX 11/780 immediately recognizes what a CISC ISA is. It has a very powerful set of operations available to it. On the other hand, the level of ISA complexity seems almost insane. You could almost translate FORTRAN IV into VAX assembler statement for statement. :D

The original idea of RISC was to simplify the ISA so that all instructions and operands were the same size (i.e. 32 bits) and to simplify the memory addressing options. An ideal RISC ISA would have only two memory operations: store register A at the address in register B, and load the word at the address in register C into register D. Both operations would work on 32-bit words. All other operations would be ALU operations such as add, subtract, bitwise-or, rotate right, etc. Each of those instructions would operate only on registers and would not be able to access memory.
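The load/store discipline described above can be sketched as a toy interpreter. This is a minimal sketch of the idea, not a model of any shipping RISC chip; the mnemonics and register count are assumptions:

```python
# A toy load/store machine: only LOAD and STORE touch memory;
# every other op works register-to-register. Illustrative only.
def run(program, mem, nregs=32):
    regs = [0] * nregs              # RISC designs lean on many registers
    for op, *args in program:
        if op == "LOAD":            # regs[d] = mem[address in regs[s]]
            d, s = args
            regs[d] = mem[regs[s]]
        elif op == "STORE":         # mem[address in regs[d]] = regs[s]
            s, d = args
            mem[regs[d]] = regs[s]
        elif op == "ADDI":          # add immediate, registers only
            d, s, imm = args
            regs[d] = regs[s] + imm
        elif op == "ADD":           # register-to-register add
            d, s, t = args
            regs[d] = regs[s] + regs[t]
        else:
            raise ValueError(f"unknown op {op}")
    return regs, mem

# Compute mem[2] = mem[0] + mem[1] using only loads, stores, and ALU ops.
mem = {0: 40, 1: 2, 2: 0}
prog = [
    ("ADDI", 1, 0, 0),   # r1 = address 0
    ("ADDI", 2, 0, 1),   # r2 = address 1
    ("ADDI", 3, 0, 2),   # r3 = address 2
    ("LOAD", 4, 1),      # r4 = mem[0]
    ("LOAD", 5, 2),      # r5 = mem[1]
    ("ADD",  6, 4, 5),   # r6 = r4 + r5
    ("STORE", 6, 3),     # mem[2] = r6
]
regs, mem = run(prog, mem)
print(mem[2])  # → 42
```

Notice that even the addresses have to be built up in registers first: with no fancy addressing modes, the burden of sequencing all of this falls on the compiler, which is the whole point.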

In order to overcome the slow access to memory due to the simple addressing scheme, lots of registers were needed. Some designs had as many as two orders of magnitude more registers than their CISC contemporaries.

However, neither RISC nor CISC were completely right or completely wrong so today the difference is largely meaningless. Anyone who refuses to believe that can simply look at the SPEC scores and chip design overviews on the Web. Chip performance today has nothing to do with CISC versus RISC ISAs.
 
Re: Re: Re: Re: cisc/risc

Originally posted by tortoise
In fact, if you look at modern PPC cores, they actually translate basic RISC opcodes to something that looks more like the simple CISC opcodes natively executed on most current x86 cores. The problem is that the assumptions under which these old ISAs were designed no longer hold with modern CPU cores, such that neither classic RISC nor classic CISC is particularly optimal. Classic RISC is too simple to optimize throughput in the core, and classic CISC tends to have clunky register structures that can be difficult to use well.

Actually, the 970 doesn't decompose 99% of its instruction set. The only instructions that are cracked are the load/store multiple word instructions, the load/store with update instructions, and perhaps a few others. I suspect that there will continue to be very few decomposed instructions---all of the currently decomposed ones were already not recommended for use, because they could execute more slowly on some PPCs (even before the 970).

I suspect we will see few if any more instructions broken down like this.
 
Re: Re: Re: Re: cisc/risc

Originally posted by ktlx
I hope no one mistakes length for correctness. Very little with this write up is correct. I am only going to attend to the two largest mistakes.

This hyperbolic comment is just shy of a personal attack ("very little...is correct" and "two largest mistakes".) Riiighht. Because I am bigger than that, I will ignore such obvious baiting for now. Meanwhile, I do find it curious that you make little attempt to support your over-strong position (which was what, again...?), except through oblique irrelevancies about instruction size (as opposed to the more central issues of complexity and the role of the compiler, which the original author was responding to) and noting (with a resounding duh!) that the PPC 601, now over a decade old, wasn't the first RISC chip. Gee, thanks for that wonderful but completely irrelevant revelation. It should be noted that when the PPC 601 was released, people were not dismissing the RISC-CISC distinction as meaningless. That's the point...

(You are slightly less than oblique in one area, although it still doesn't address the original article that you so readily slam. You claim that SPEC scores indicate there is no overhead to the ISA translation (an overhead which I feel is inevitable). Using SPEC scores to make this comparison across processor families cannot support this wild claim, as it entails comparing apples to oranges which differ along far more variables than just CISC versus RISC. A meaningful comparison would be to compare a program compiled for the x86 ISA against one written directly for the internal op-codes, bypassing the translation entirely. That comparison could be made ON THE SAME CHIP; it would say a lot about the overhead of the ISA, and it wouldn't be apples and oranges...)


It is interesting that others here have alluded to the several-year-old ArsTechnica article that was one of the first public pronouncements that there is no difference between RISC and CISC (<http://arstechnica.com/cpu/4q99/risc-cisc/rvc-1.html>). While I love ArsTechnica, the article and its position are NOT gospel and NOT universally accepted. Wintel lovers do like its conclusions, I have noticed in the years since it was first published, as it suits their positions. I think that is sufficient reason to question its conclusions more critically...

In fact, quite a few people in the forum that originally responded to the ArsTechnica pronouncement of the "post-RISC era" <http://slashdot.org/articles/99/10/21/0848202.shtml> didn't buy it any more than I did, and they made the exact arguments I made above.

Let's look at a few of these counter points of view:

Originally posted by Anonymous Coward
The author of the article rightly notes that a basic design philosophy difference is where the burden of reducing run-time should be placed. The original RISC philosophy was to place the burden on software--this is especially true of (V)LIW processors--whether the programmer, the compiler, or the set of libraries. CISC (or more rightly "old style") design philosophy sought to place the burden on the hardware.

Originally posted by coats
If you take the point of view that a P6 is a RISC core running an x86 interpreter, then still the user-visible architecture is not RISC. It would only be RISC if you let me program the core directly with its native micro-ops. "Hannibal" still doesn't understand this distinction between architecture and implementation.

Originally posted by hattig
RISC does stand for Reduced Instruction Set Chip, but that doesn't mean less instructions, it means less Instruction Formats. Think of how many different instruction formats x86 has, with varying lengths of instructions, non-orthogonal instructions, etc, compared with the simplified instructions provided by RISC processors, which might have as few as 3 or 4 different instruction formats....The article was silly really, the author didn't look beyond the word 'Reduced' in RISC, thought it meant less instructions, then saw that most RISC chips have tonnes more instructions than most CISC chips, and arrived at the wrong conclusion.
 