I'll elaborate a bit more on this since you seem to understand what the other person glossed over.
AMD is struggling financially, so its long-term future is always foggy, but Intel is not going to take either company out anytime soon. Intel can't compete in the graphics department, though it might give Nvidia more competition in the HPC market with Xeon Phi, and chip away at the bottom segment, which was never really Nvidia's or AMD's to begin with. Hard to say how it fares against Tegra, but time will tell. "Good enough for Apple to drop them entirely" is not the reason for either one's theoretical demise: AMD is simply not very profitable, and Nvidia is moving more into HPC, mobile, and cars. Differentiation is nothing new.
The IGP still won't cut it for intensive tasks. It's fine for casual to intermediate use, but neither Intel nor Apple is writing great drivers. Intel has had this problem for years, and artificial segmentation creates a crappy, inconsistent experience. Apple paired the Mac Pro with AMD, but the professional market is mostly Nvidia's, and I feel they fell short by pinning too much on OpenCL and not enough on actual stability and custom-tailored performance. Tangential points, but for two companies with that much control over the stack, these are mediocre efforts.
Iris Pro is pretty expensive. Given the performance compromise, it really makes no sense unless you have to meet a certain thermal design threshold. The price premium is around what a mid-range dedicated GPU would cost, which, while it introduces another point of failure, will definitely boost overall performance as screen densities rise. OpenCL and such is nice, but for most people a mid-range dedicated chip delivers more value and performance, backed by experienced software teams working on the drivers. It doesn't matter that CUDA is Nvidia's baby; they support it and will back it up.
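To make the OpenCL point concrete: from an application's perspective, an Iris Pro IGP and a discrete GPU are just two generic OpenCL devices, so what actually separates them is the hardware and the vendor's driver, not the API surface. Here's a minimal sketch of standard OpenCL 1.x device enumeration (the build commands are assumptions about a typical setup, not anything specific to the machines discussed above):

    /* Minimal sketch: enumerate OpenCL GPU devices.
     * Build (Linux): cc list_cl.c -lOpenCL
     * Build (macOS): cc list_cl.c -framework OpenCL
     */
    #include <stdio.h>
    #ifdef __APPLE__
    #include <OpenCL/cl.h>
    #else
    #include <CL/cl.h>
    #endif

    int main(void) {
        cl_platform_id platforms[8];
        cl_uint nplat = 0;
        clGetPlatformIDs(8, platforms, &nplat);

        for (cl_uint p = 0; p < nplat; p++) {
            cl_device_id devices[8];
            cl_uint ndev = 0;
            clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 8, devices, &ndev);

            for (cl_uint d = 0; d < ndev; d++) {
                char name[256];
                cl_uint cu = 0;
                clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                                sizeof name, name, NULL);
                clGetDeviceInfo(devices[d], CL_DEVICE_MAX_COMPUTE_UNITS,
                                sizeof cu, &cu, NULL);
                /* Integrated and discrete GPUs look identical here;
                 * the differences show up in raw compute units and in
                 * how well the vendor's driver actually behaves. */
                printf("GPU: %s (%u compute units)\n", name, cu);
            }
        }
        return 0;
    }

On a machine with both an IGP and a dedicated card this prints two entries, and the application gets to pick; that's exactly why driver quality, not API support, is where Intel keeps falling short.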
The GTX 980M performs at around half the level of the desktop 980, from a mobile chip. That takes a lot of power, but the architecture has been tweaked, and Intel isn't going to match it with Skylake or anything else for a while. I'm sure it'll be Iris Pro in the base models, but the high-end configurations will get a dedicated offering. Die shrinks and stacked memory could keep the same general TDP range while scaling up results.
Intel's problem is getting people to actually upgrade. Desktop enthusiasts can get by just fine on a Sandy Bridge chip, and for workstations, most will drop in a dedicated card. The low-hanging fruit has been picked, and a casual user's needs are easily met by tablets and smaller all-in-one devices. That's the market Intel sees, and despite its process and technical advantages, it hasn't been a great ride for them. This is what 14nm is really for: a process small enough to compete with ARM, TSMC, and Samsung on SoC devices. Processor performance isn't increasing the way it used to, but for the majority of users that hasn't been an issue for years, in either clock speed or core count. Does the average person count the seconds it takes to render something? No, but there is a noticeable gap between 6 and 18 hours of battery life.
Apple would be unwise to drop Nvidia or AMD. Doing so would assume that real professional users are chumps with nowhere else to turn, and that's fatal thinking. There was really nothing that needed to be said about graphics, since Intel can coexist with both just fine, but since it was brought up...