It's not so much RISC vs CISC. It's just that ARM is a completely different architecture, and executables for one won't normally run on the other. Windows actually exists for ARM already; you can buy an ARM Windows laptop
today. In the past, Windows has also been available (though maybe not widely) on PowerPC, DEC Alpha, MIPS and Itanium. Most of these are RISC architectures (Itanium is its own VLIW/EPIC design rather than RISC), so it's clear that Microsoft has a long, long history of being interested in putting Windows on platforms other than x86, and maybe RISC in particular. In contrast, the M68k architecture from the 80s was very much CISC, but it never ran Windows.
For those interested, here's a longer explanation of architectures.
CPUs execute binary machine code programs. Briefly, these are encoded primitive instructions that can only do super simple things: add and subtract, multiply, divide, jump to another part of the code based on some condition, move data from memory into registers, or move it from registers back to memory. There's more, but it's all very primitive operations, tied very closely to the hardware. They cannot understand HTML, they have no idea what Windows is, nor macOS or Linux, and while some may have hardware support for higher-level functionality, they can't play music or videos, they don't know how to do encryption, they don't know anything about networking, and so on. Now, these primitive operations are encoded as binary code (zeros and ones) and they can be loaded from memory and executed. This is how programmers can start making these processors actually do something meaningful.
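To make "primitive operations encoded as binary" a bit more concrete, here's a toy sketch in C of what a processor conceptually does: fetch encoded instruction bytes from memory, decode them, execute them, repeat. The instruction set below is completely made up for illustration (real x86/ARM encodings are far messier), and a real CPU does this in hardware rather than in a C loop.

[CODE]
#include <stdio.h>
#include <stdint.h>

/* A made-up, minimal instruction set, for illustration only. */
enum {
    OP_LOAD  = 0x01,  /* LOAD  reg, value : put a small constant into a register */
    OP_ADD   = 0x02,  /* ADD   dst, src   : dst = dst + src                      */
    OP_PRINT = 0x03,  /* PRINT reg        : print a register                     */
    OP_HALT  = 0xFF   /* HALT             : stop                                 */
};

int main(void) {
    /* The "program" is nothing but bytes sitting in memory. */
    uint8_t program[] = {
        OP_LOAD, 0, 2,    /* r0 = 2       */
        OP_LOAD, 1, 3,    /* r1 = 3       */
        OP_ADD,  0, 1,    /* r0 = r0 + r1 */
        OP_PRINT, 0,      /* print r0     */
        OP_HALT
    };

    int32_t reg[4] = {0};  /* a few registers */
    size_t  pc     = 0;    /* program counter */

    for (;;) {  /* fetch-decode-execute loop */
        uint8_t op = program[pc++];
        switch (op) {
        case OP_LOAD:  { uint8_t r = program[pc++]; reg[r] = program[pc++]; break; }
        case OP_ADD:   { uint8_t d = program[pc++], s = program[pc++]; reg[d] += reg[s]; break; }
        case OP_PRINT: { uint8_t r = program[pc++]; printf("r%d = %d\n", (int)r, reg[r]); break; }
        case OP_HALT:  return 0;
        default:
            printf("unknown opcode 0x%02X -- garbage to this machine\n", (unsigned)op);
            return 1;
        }
    }
}
[/CODE]

Incidentally, a loop like this is also roughly what a software emulator does when running another architecture's machine code, which already hints at why emulation costs so much performance.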
However, through the history of computing there have been many different ideas on how to best build processors, how to best encode this machine code, and so on. CISC, for example, is the idea of making the primitive operations do more, so they become more useful, powerful and easier to use. Which is great for machine code programmers, but the downside is that the processor hardware gets increasingly complex (= expensive) to build and maintain. RISC goes the other way: it aims to make the primitive operations as primitive as possible in order to make the hardware much easier to build. Early on this meant they could run at higher clock frequencies, for example, but that's less of a point now. And there are variations within RISC and CISC as well. The point is that the encodings are different, so a program that is encoded for one processor will just be garbage on another and won't run.
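As a concrete illustration of "different encodings", here is one possible machine-code encoding of the same trivial C function for x86-64 and for 64-bit ARM. The exact bytes depend on your compiler and its settings (these are hand-picked valid encodings, not the output of any particular compiler), but the point is simply that the two byte sequences have nothing in common:

[CODE]
/* The source: int add(int a, int b) { return a + b; } */

/* One possible x86-64 encoding: mov eax, edi / add eax, esi / ret */
unsigned char add_x86_64[] = { 0x89, 0xF8, 0x01, 0xF0, 0xC3 };

/* One possible AArch64 (64-bit ARM) encoding: add w0, w0, w1 / ret */
unsigned char add_arm64[]  = { 0x00, 0x00, 0x01, 0x0B, 0xC0, 0x03, 0x5F, 0xD6 };

/* Feed the x86 bytes to an ARM core (or vice versa) and they decode into
   nonsense or an illegal instruction -- garbage, as far as that CPU cares. */
[/CODE]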
Now the thing is that programs on your desktop, your laptop, your iPad and iPhone (Android phones are slightly different), servers, most embedded devices and so on all use these binary encodings and are therefore incompatible across processor architectures. You'd think that in 2018, light years away from the beginnings of computing in the 60s and 70s, we wouldn't be doing things this way anymore, but we still do. There are exceptions of course, but a very large part of the programs we use carry this incompatibility.
The thing with current Mac hardware is that it's very similar to PC/Windows hardware. Current Macs are effectively flavors of PCs, with some extras. This is why you can Boot Camp into Windows very easily, something you could never do back in the PowerPC days, as I recall.
So if this is the case, why can't you take Notepad (or any other Windows program) from your PC and run it on your Mac? They are programs for the same hardware architecture then, aren't they?
Yes. However... a program like Notepad also depends on some other things. It needs to be able to open the application window. For this it needs to interact with the operating system. Notepad asks the operating system to produce a window, and it's actually the operating system that does the hard work there. It's the same for macOS, and the same for Linux. Problem is just that they all use different methods of asking for that window to be opened. If a Windows program (like Notepad) asks macOS to open a window, it's not going to understand the request and again we have garbage. It's like when you go to a fast food place and you try to order, but they don't speak your language so they just slap a bluescreen in your face. Well, they might.
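To make "different methods of asking for that window" concrete, here's a rough sketch in C. On Windows a program phrases the request through the Win32 API; on a typical Linux desktop it might phrase it through Xlib to the X server; macOS uses Cocoa (normally from Objective-C or Swift), which is different again and not shown here. This is a minimal sketch, not production code (the Linux branch needs to be linked with -lX11):

[CODE]
/* The same request -- "give me a window on screen" -- phrased in two
   different OS vocabularies. Each branch only compiles on its own platform. */
#ifdef _WIN32
#include <windows.h>

int main(void) {
    /* Win32: ask Windows to show a simple message window. */
    MessageBoxA(NULL, "Hello from the Win32 API", "A Notepad-style request", MB_OK);
    return 0;
}
#else
#include <X11/Xlib.h>
#include <unistd.h>

int main(void) {
    /* Xlib: ask the X server (common on Linux) for a plain window. */
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;
    Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                     0, 0, 320, 200, 0,
                                     BlackPixel(dpy, DefaultScreen(dpy)),
                                     WhitePixel(dpy, DefaultScreen(dpy)));
    XMapWindow(dpy, win);   /* make the window visible */
    XFlush(dpy);            /* push the request to the server */
    sleep(3);               /* keep it on screen for a moment */
    XCloseDisplay(dpy);
    return 0;
}
#endif
[/CODE]

A Windows binary has requests phrased in the first vocabulary baked into it, which is why macOS or Linux can't make sense of them without some kind of translation layer.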
So there are multiple levels of architecture at play: the processor architecture, and also the operating system architecture. This whole structure is actually incredibly complex today, for all three major operating systems. Anyone is welcome to dive into binary file formats and shared library dependencies and find out what amazing problems you can come across. It used to be somewhat simpler back in the day.
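For anyone who does want to dip a toe into binary file formats: each OS has its own executable container (PE on Windows, Mach-O on macOS, ELF on Linux), and you can often tell them apart from the first few bytes alone. Here's a small sketch using the magic numbers as I know them; it only sniffs the header, nothing more:

[CODE]
#include <stdio.h>

/* Guess an executable's format from its first four bytes.
   PE (Windows) starts with the DOS stub "MZ"; ELF (Linux) with 0x7F 'E' 'L' 'F';
   a 64-bit little-endian Mach-O (macOS) with the bytes CF FA ED FE. */
int main(int argc, char **argv) {
    if (argc < 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    unsigned char m[4] = {0};
    if (fread(m, 1, 4, f) != 4) { fclose(f); fprintf(stderr, "file too short\n"); return 1; }
    fclose(f);

    if (m[0] == 'M' && m[1] == 'Z')
        puts("Looks like a Windows PE executable");
    else if (m[0] == 0x7F && m[1] == 'E' && m[2] == 'L' && m[3] == 'F')
        puts("Looks like a Linux ELF executable");
    else if (m[0] == 0xCF && m[1] == 0xFA && m[2] == 0xED && m[3] == 0xFE)
        puts("Looks like a 64-bit macOS Mach-O executable");
    else
        puts("No idea -- some other format entirely");
    return 0;
}
[/CODE]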
Of course, there has always been a lot of interest in bridging these architecture gaps. We want access to Windows apps on the Mac, which is why we Boot Camp into Windows. Some may want to run macOS on non-Apple hardware. Developers want to be able to test-run iPhone apps on their MBP. There are several methods for achieving this, and some work better than others. But in short, if you want things to run reasonably fast, you'd better not insert too many translation layers (like emulation); they very quickly slow things down a great deal. We can successfully emulate old game consoles or Commodore machines like the C64 or Amiga, but that's simply because today's computers are on the order of hundreds of thousands of times faster. It doesn't matter if we lose 2x or 4x or even 10x to emulation. But emulating current architectures is a whole different thing.
Now, I wrote above that CISC was great for machine code programmers, and RISC wasn't. This mattered in the 70s when we wrote programs in machine language, but ever since then we've used compilers to create them for us. So when crossing the architecture boundary from one processor to another, a simple solution is to recompile the program. This is how the Mac moved from PowerPC to Intel, and it's how it would move from Intel to ARM. For the apps that get recompiled it won't actually be much of a problem at all. The main issue is that many apps won't get recompiled, for various reasons. It's the same when bridging from Windows on x86 to Windows on ARM: apps could get recompiled and run much faster, but so far that mostly hasn't happened. And of course, the other architectural gap is harder. A Windows program won't seamlessly recompile for macOS, and vice versa.
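To give a feel for what "just recompile" means in practice: with a portable codebase it can be as small as pointing the compiler at another target. The build commands in the comments assume Apple's clang toolchain and its -arch flag (the mechanism behind universal binaries); exact invocations vary by toolchain and SDK, so treat them as illustrative:

[CODE]
/* hello.c -- ordinary portable C, nothing architecture-specific in it. */
#include <stdio.h>

int main(void) {
    puts("Hello from whichever CPU this was compiled for");
    return 0;
}

/* Illustrative builds with Apple's clang (flags may vary by toolchain/SDK):
 *
 *   clang -arch x86_64 -o hello_x86  hello.c         # Intel-only binary
 *   clang -arch arm64  -o hello_arm  hello.c         # ARM-only binary
 *   clang -arch x86_64 -arch arm64 -o hello hello.c  # "universal" binary with both
 *
 * The source didn't change; only the machine-code encoding in the output did.
 * The hard part is everything that isn't this simple: hand-written assembly,
 * plugins, third-party libraries, and apps nobody bothers to recompile at all.
 */
[/CODE]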
So if Apple wanted to put macOS on ARM, would it have to create a version of macOS for ARM? Yes. But they already have one. Why can I be so sure? macOS and iOS share the same low-level building blocks. You can think of them as the same operating system with different graphical interfaces. There's a bit more to it than that, but effectively, Apple would already have 95% of an ARM macOS based on what they have for iOS and on recompiling macOS itself. Most of it should work seamlessly. The remaining 5% or less is effectively getting device drivers and kernel-level things to work on the new device. Tricky stuff, but it's been done many times before.
So this is why people are concerned about not being able to run Windows if the MBP switches to ARM. It's not a problem to build the hardware or the operating system. It's mostly not a problem with Mac apps, though only a subset of apps would carry over. It is, however, a problem with Windows apps, because you have not one but two architectural gaps to cross: the processor gap and the operating system gap. As long as x86 is alive and well and is where the performance is, that's also where the apps are going to be.
In closing, I hope this was useful to you or someone else. Or that anyone even read it! If you did, maybe at the very least it provided some distraction while waiting for unreleased fruit-branded products.
For the hackintoshers in despair: just imagine the insane value of a Raspberry Pi Hackintosh and you get to smile again.

It's not really possible for it to be x86, simply because x86 licensing is very locked down and unavailable to new entrants. In theory everything is possible, but it's very unlikely. It's not a technology thing; I'm sure Apple could make an x86 processor just as well as an ARM one.
Emulation is also... not a technology thing, I would argue. We don't have the technology today to do near-native-speed x86 emulation on ARM, but I would expect that it could be developed if enough money, time and effort were put into it. I don't currently see anyone having a strong enough incentive to put serious money into this, which is why I wouldn't expect it to happen in the near future.