Well, not much really. And it goes back further than 2015. Think about it: we hit 3 GHz ten years ago and we're still hanging out in that neighborhood. Apple doesn't make the processors.
As far as GFX goes, yes, I am sure a high-end Nvidia SLI rig, cooled with liquid nitrogen and backed up by a Tesla, would be considerably faster. This is a laptop. If you want a high-end gaming rig, buy one.
As others have pointed out, Radeon offers advantages in Apple's OpenCL applications like Final Cut. Nvidia's CUDA is Nvidia-only, while OpenCL is open to any GFX maker.
As far as higher-end cards like the 480 go, there is still the power and heat budget to consider, as much as the price point. If anyone wants to build a better ecosystem, I am game to switch. However, I really like what I am getting from Apple. Is it the bleeding-edge fastest? No. But it gets out of the way and lets me work most of the time. That is what I want most.
Gigahertz isn't everything. The current drawn by a CPU scales roughly as the clock frequency times the junction capacitance per transistor times the total number of transistors switching, and the power dissipated is that current times the voltage. So while we've stayed at roughly 3 GHz, we've gained a hell of a lot more transistors. This is possible through the reduction of transistor size, which lowers the junction capacitance (and hence drops the amount of charge pumped per cycle).
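To put rough numbers on that, here is a back-of-the-envelope sketch using the textbook CMOS dynamic-power relation P ≈ α·N·C·V²·f (the charge switched per cycle is C·V, so the current scales as N·C·V·f, and multiplying by V gives the power). Every figure below is made up purely for illustration:

```python
# Back-of-the-envelope CMOS dynamic power: P ~ alpha * N * C * V^2 * f.
# All numbers are illustrative placeholders, not real chip specs.

def dynamic_power(n_transistors, cap_per_transistor_f, voltage_v, freq_hz, activity=0.1):
    """Approximate switching power in watts (ignores leakage)."""
    return activity * n_transistors * cap_per_transistor_f * voltage_v**2 * freq_hz

# "Old" chip: fewer, larger (higher-capacitance) transistors at higher voltage.
old = dynamic_power(150e6, cap_per_transistor_f=1.5e-15, voltage_v=1.3, freq_hz=3.0e9)

# "New" chip: 10x the transistors, but each smaller and run at lower voltage, same 3 GHz clock.
new = dynamic_power(1.5e9, cap_per_transistor_f=0.15e-15, voltage_v=1.0, freq_hz=3.0e9)

print(f"old ~{old:.0f} W, new ~{new:.0f} W")  # comparable power despite 10x the transistors
```

The point being: shrinking the capacitance and the voltage buys you room for an order of magnitude more transistors at the same clock and a similar power budget.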
The performance of a CPU goes as the clock speed times the work accomplished per clock (IPC). You see, there are other ways to improve performance than the clock speed of a CPU. Compared to the 3 GHz Pentium 4s, we've seen massively parallel CPUs: quad-core, eight-thread mobile chips. We've seen the addition of instructions that greatly speed up specific tasks, AVX for example. We've seen better branch prediction, better pipelining, instruction merging, and so on. All these features enhance total performance per clock and use additional transistors to accomplish the task. Hence the focus of the last decade or so has been to raise total performance largely through parallelism and execution efficiency (better utilizing the transistors), so the gains have come not so much from clock-speed increases as from IPC and parallelism.
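As a toy illustration of that clock × IPC × cores framing (the IPC figures here are made-up, ballpark values, not measurements of any real chip):

```python
# Toy throughput model: performance ~ IPC * clock * cores.
# IPC and clock values are rough placeholders for the sake of the comparison.

def relative_throughput(ipc, clock_ghz, cores=1):
    return ipc * clock_ghz * cores

p4 = relative_throughput(ipc=1.0, clock_ghz=3.0, cores=1)       # single-core Pentium 4-era chip
modern = relative_throughput(ipc=3.0, clock_ghz=3.0, cores=4)    # quad-core chip at the same clock

print(f"modern / P4 ~ {modern / p4:.0f}x")  # roughly an order of magnitude, with zero clock gain
```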
I think a lot of people would prefer it if the CPU and GPU stayed separate. However, given the huge push into heterogeneous computing, it makes sense that any CPU maker capable of making a GPU would integrate the GPU into the CPU to pave the way for this heterogeneous-compute future, where massively parallel instructions are sent to the GPU part automatically and the CPU takes care of the more serialized and loopy parts of the code. Such a future is coming. And if you're going to integrate massively parallel compute logic into a CPU, you might as well add ROPs, tessellation, etc., to make it a fully functional GPU, so that you are not wasting precious silicon space on parallel compute units that aren't used often. I think we'll see a lot more benefits of this heterogeneous approach very soon, and when Zen comes out, I'm thinking we'll see this Apple-AMD partnership, with OpenCL and so on, bear more fruit.
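For a concrete taste of the split I'm describing, here is a minimal sketch of handing a data-parallel job to whatever OpenCL device is available, written with the pyopencl Python bindings. The kernel and array sizes are just placeholder examples, not anything Apple or AMD ships:

```python
import numpy as np
import pyopencl as cl

# Host (CPU) side: the serial setup work.
a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)

ctx = cl.create_some_context()        # picks an available OpenCL device (GPU if present)
queue = cl.CommandQueue(ctx)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# Device (GPU) side: one work-item per element, the massively parallel part.
prg = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)   # copy the result back to the host
```

The host side is the serialized, loopy part; the kernel runs once per element across the GPU's compute units. The heterogeneous-compute push is about making that handoff automatic instead of hand-written like this.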