Originally posted by Raiden
Is compiling when you take the code you write in C++ (or any other language), and turn it from being code into being a program?
Exactly.
Good compilers can often find ways to do the work more efficiently than the way the program was originally written - while still doing exactly the same thing.
For a simple example, if a calculation inside a loop produces the same answer on every pass through the loop, the compiler will move that "loop-invariant" computation to just before the loop and save the result in a temporary variable. This avoids recomputing the value many times - it's computed once and reused.
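Here's a sketch of that transformation written out by hand (the function names are just illustrative) - the "before" version recomputes a square root on every pass, and the "after" version is what the compiler effectively turns it into:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Before: sqrt(scale) has the same value on every iteration,
// but the source writes it inside the loop.
void scale_naive(std::vector<double>& v, double scale) {
    for (std::size_t i = 0; i < v.size(); ++i) {
        v[i] *= std::sqrt(scale);   // loop-invariant: recomputed each pass
    }
}

// After: the invariant computation is hoisted out of the loop,
// computed once, and reused - exactly what the optimizer does.
void scale_hoisted(std::vector<double>& v, double scale) {
    const double s = std::sqrt(scale);  // computed once
    for (std::size_t i = 0; i < v.size(); ++i) {
        v[i] *= s;
    }
}
```

Both functions produce identical results, and with optimization turned on a good compiler will typically generate the same machine code for both.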
Smart compilers are getting very clever at analyzing program flow to find optimizations like this.
Analysis like this happens in the machine-independent part of the compiler - the "front end" (larger compilers often split the optimization passes out into a separate "middle end"). It analyzes and improves the program at the source level, without knowing anything about the target machine.
----
On another level, once the compiler figures out the "improved program", it needs to generate the best sequence of machine instructions in order to actually build the program. This is called "code generation", or the "back end" of the compiler.
The back end is the part that needs to know about the machine architecture: AltiVec, the number of registers, cache sizes, and so on. By understanding the underlying CPU and memory system, it can make the best use of the hardware and get faster execution.
A compiler like "gcc" runs on many platforms - it has a common front end for all of them and a specific back end for each. It has an x86 back end (with tuning options for 386/486/Pentium/P4), a PPC back end (750, 74xx, 970), MIPS, IA64....
Compilers like Intel's or IBM's have an additional advantage in that the front end can do more to help the back end. For example, the front end can flag that certain sections of the code might be suitable for AltiVec or SSE2 parallel operations.
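As a hedged illustration of the kind of code such a front end might flag: a loop whose iterations are completely independent and touch adjacent array elements maps naturally onto AltiVec or SSE2 parallel instructions (the function name here is just illustrative):

```cpp
#include <cstddef>

// Each iteration is independent and reads/writes adjacent elements,
// so a vectorizing compiler can process several floats per
// SSE2/AltiVec instruction instead of one at a time.
void saxpy(float* out, const float* x, const float* y,
           float a, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) {
        out[i] = a * x[i] + y[i];
    }
}
```

A front end that recognizes this pattern can pass that information along, so the back end knows it is safe to emit the parallel instructions.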
A general compiler like "gcc" has to handle many different types of systems, and might not have specific checks in the front end to save information like that for the back end.