I'm starting to take issue with current CISC processors being described as a complex instruction decoder wrapped around a RISC core. If that description holds, then you'd have to describe the 8080 and 6502 as CISC wrappers around a RISC core too.
For decades now, processors have had an orthogonal general-purpose core that's controlled by the instruction decoder. In a 6502, the 8-bit ALU that performs your "add to accumulator" is the same ALU that (when double pumped) calculates the effective address for indexed addressing modes.
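To make that concrete, here's a toy sketch (not cycle-accurate, and the function names are my own, not 6502 terminology) of how a single 8-bit adder serves double duty: once for an ADC into the accumulator, and twice in a row ("double pumped") to form a 16-bit effective address for indexed addressing like LDA base,X.

```python
def add8(a, b, carry_in=0):
    """The one 8-bit adder in the core: returns (result, carry_out)."""
    total = a + b + carry_in
    return total & 0xFF, total >> 8

def adc(accumulator, operand, carry):
    # "Add with carry" to the accumulator uses the adder directly.
    return add8(accumulator, operand, carry)

def effective_address_indexed(base_lo, base_hi, index):
    # Indexed addressing runs the *same* adder twice: once for the
    # low byte, then again to propagate the carry into the high byte
    # (the page-crossing case that costs the 6502 an extra cycle).
    lo, carry = add8(base_lo, index)
    hi, _ = add8(base_hi, 0, carry)
    return (hi << 8) | lo
```

The point is that there is no dedicated "address adder" per instruction; the decoder just steers different operands through the one ALU on different cycles.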
You have to look way back to find processor architectures where each instruction is executed by its own dedicated circuitry. I think some of the 4-bit embedded microcontrollers are like that, and perhaps processors built by students out of TTL jellybeans.
RISC is about eliminating instructions that take multiple steps, each of which hogs resources and scheduler slots.
The difference is that starting around 1992 or so, you started having CISC processors where the instruction decoder is decoupled: it has its own state machine and a microcode ROM, plus a separate instruction pointer distinct from the ISA instruction pointer. CISC instructions are broken into a sequence of RISC-like micro-ops, which are issued to the ALUs/fetch units, possibly out of order based on dependency analysis. The micro-ops are RISC-like in that they (1) have a fixed length; (2) have standardized formats; (3) cannot access both memory and registers in the same instruction; and (4) are "simple" in various other ways.
So, as far as the scheduler, register renamer, ALUs, and load/store unit are concerned, they see only "RISC" instructions.