Except the AMD and VIA licenses do not apply if the design is not an "AMD" or "VIA" design. However, you forgot the most obvious choice - IBM. They have a license to fab anyone's x86 design. All you have to do is pay them.
Once again...
If AMD sells them with their name on them, well then... they're an AMD design. Just because AMD opens its engineering drawings up for scrutiny, tweaking, and collaboration with the PA Semi/Intrinsity team at Apple, only for the result to be sold as a *NEW* AMD processor, doesn't make it not an AMD part. If it's based on their design and sold as their design, it's still an AMD design. It's not that hard to figure out. I mean, Chrysler sold the Crossfire despite it being engineered by Mercedes on one of their platforms, with their engines, and built at Karmann. It was still a Chrysler design. The same is true of the Chrysler 300, which Chrysler (under Fiat control) still sells, and which is built on a reengineered, two-generation-old Mercedes E-Class platform. Semantics can take you farther than you'd think, and if the core of the new AMD architecture has ties to the Athlon, Phenom, etc. but is tweaked by PA Semi/Intrinsity for sale by AMD, it's still an AMD chip.
Taken another way, if you're going to get that strict about what is and isn't an "AMD" or "VIA" design, then neither ultimately has the right to upgrade its own design to anything new, because any upgrade would deviate from the original core design. If that's how the license is written and interpreted, well folks... it's an end-game before it even starts. You can't compete with Intel, or anyone else with the rights and ability to alter their original design on a whim.
I'm pretty confident that was NEVER the case. After all, if it was... we'd never have had AMD's x86-64 taking the lead over Intel's EM64T, because it wouldn't have been allowed. Having help developing a chip is no different from a company changing its own designs in-house, so the real test is who builds it and sells it. Just because Apple collaborates on the design and direction of future AMD products doesn't mean the greater benefits of the design wouldn't go straight to AMD.
In the same vein, nothing says Apple couldn't scratch AMD's back in exchange for access to some of the processor patents in AMD's portfolio to help with their A-series chips (i.e. A4, A5, A6...), and to some of ATi's patents for the on-die GPUs as well. If AMD is as laggard and behind the times as you say, then this is a perfect opportunity for Apple to step in, have the PA Semi/Intrinsity people help fix what ails them, and gain some IP in the process.
It'd still be an AMD design in the end.
Except it is a nearly 10 year old microarchitecture.
x86 as a whole is older than that... what's your point? For a 10-year-old architecture it's still holding its own remarkably well in multi-core Phenom machines against Intel's latest. They're not THAT far behind. They might not be the standard-bearers anymore (honestly, that's been gone for quite a while [I already highlighted that even though the Opteron, pound for pound, was more powerful... it was also energy-thirsty and expensive; you gained more in overall performance per watt by building multiple Intel Xeon machines than one Opteron box], yet they've still managed to keep bumping things to stay in the game)... but then again, it's not like Intel is mopping the floor with the ARM-based processors out there with Atom either. By that reckoning, should we consider Intel doomed? Sure, AMD hasn't done as well as Intel overall (although, once again... Intel has no real competitor for Tegra or Tegra 2)... but who has on the desktop, really? The reality is... AMD has done better than everyone else but Intel. VIA isn't remotely in the game there. IBM? Yeah... umm... not many new G5's or POWER5's on the consumer/business desktop, are there? Hrm... not many Cell desktops either?
AMD actually already has a deal in place to use IBM's fabs if necessary. And, as I point out, the easiest way to get an x86 license is to do what Montalvo, Transmeta, Rise, etc. do/did - go to IBM.
IBM's fab capacity has never been anywhere near what AMD itself had. True, they've had some bleeding-edge tech, but they've also had problems with PowerPC yields over the years, just as Freescale/Motorola has. How is that any major advantage? Sure, IBM's fabs have gotten down to 32nm, but so have AMD's; AMD just never had enough of them to compete with Intel. Intel could have yield problems at over half of their fabs and still produce several times more than IBM and AMD combined could on a good day with high yields. Intel also moves to smaller process nodes more quickly than AMD, IBM, and Freescale have. While I don't doubt AMD will use IBM's fabs to meet demand if necessary, I also believe they'll continue to use their old fabs.
Bottom line... AMD will still be a huge customer of their old fabs. They might tap into some of IBM's unused production, but I still wager they'll stick primarily with their own (former) fabs first and foremost. The reason for the spin-off is simple... look at Intel and how many fabs they've closed as they continue to shrink dies to smaller and smaller sizes. As the chips shrink, the need for capacity has shrunk. That's a big part of why Intel is looking into the SSD market: from a financial standpoint, it saves them money versus closing down more fabs and laying off skilled labor. Fab costs are a huge expense, and by freeing themselves of that, AMD places the burden of keeping fabs competitive in someone else's hands. Since they've been playing cat and mouse with Intel at that level for an eternity, pushing the fabs out into a separate company that they license manufacturing through helps them out, and it allows the fabs to expand their business beyond AMD's work. Not that this wasn't the case before (I believe I read at one time that IBM and NVidia both hired some fab work through AMD), but I'm sure it was a more sensitive issue for companies in the same fields concerned about confidentiality of their designs.
As far as an x86 license goes... yeah, Apple could go to IBM, but what's the gain? If Apple wants to clean-sheet their own x86 design and look at integrating their own GPU on silicon (from where? the tech they bought is primarily geared toward compact embedded graphics designs, not desktops), great... but let's face reality here. Apple isn't going to go that far on the desktop. They don't have enough overall volume to justify it. They're better off collaborating with an existing partner to drive and improve the tech than clean-sheeting it and building their own desktop chips. It's not that they couldn't do it; it's that there are fewer positives in doing so than you'd think, and the strict revenue stream needed for continued R&D is much the same problem it was for both IBM and Motorola/Freescale. By working with AMD or VIA, a co-designed chip that's sold as an AMD or VIA chip could be sold in enough volume to justify the custom work. The reasons for AMD over VIA are many: AMD is more competitive overall, and AMD has their own graphics processors that are competitive, if not standard-bearing, in price, performance, and efficiency. That's the primary reason Apple helping design AMD's next-gen chips makes some element of sense. AMD reaps the overall rewards in sales, sells parts to Apple at cost, and Apple likely gains some IP access on patents to use for future Apple processors and graphics processors for the iPhone/iPod/iPad. Call it... synergistic, without a need for Apple to buy AMD, without a need for Apple to invest heavily in AMD or ATi, and without the requirement of an exclusive contract. In doing so, they also push Intel toward being more favorable to their needs... whether Intel buys NVidia or at least re-opens contracts with NVidia for Ion 2, either way it's beneficial to Apple's direction with OpenCL.
Now, working in collaboration with AMD, Apple can "assist" them in improving that 10-year-old design, speed-bumping it or improving its efficiency to better compete with Intel. In doing so, with the ATI video accelerators AMD has in their portfolio, Apple could gain a valuable alternative to Intel's chips, one with better spec performance across the board than Intel manages with their IGP plus any discrete secondary processor. This is especially true considering how far NVidia has lost their way on the desktop with Fermi, which, while quite powerful, is also a huge energy hog, and the performance isn't that much better than ATI's tech *TODAY*. ATI still has better stuff coming down the pipe later in the year to trump Fermi.
The point being... it's taken Intel 8-10 years to get where AMD was in 2002. In some ways, yeah, they've actually surpassed them: they're more energy efficient, and have been since long before the Core i3/i5/i7. In other ways, though, from a financial "per part" standpoint, AMD still holds some significant advantages over Intel. That's before you get to the reality that, as anemic as Atom is... the Athlon in the Zino HD has better desktop performance and is cheaper than the Pentium Dual-Core, Celeron, Core 2 Duo, etc. For some applications, AMD still has its advantages. Not too bad for a company that, according to many, has sat on its hands for 10 years, eh?
I personally in no way believe that AMD hasn't been working on advancing what they have, but considering Intel's schemes of kickbacks to the OEMs, their marketing muscle, the strength of their number of fabs, their ability to continuously and feverishly upgrade those fabs, etc., Intel has been able to keep pace or prove superior even when their architecture wasn't necessarily the best all-around. That said, AMD obviously hasn't made anything landmark either, but once again... what Intel has done has largely been to play catch-up on chip designs while racing AMD to smaller dies until they could get their chips on par or better. When AMD was at 65nm, Intel was at 45nm. That was how they kept a performance and efficiency advantage even with chips whose designs were clearly not superior to AMD's on paper, but were superior in how they were scaled down. AMD's energies over that time were spent trying to catch Intel at the fab level, which was never a war they could win, but they had to at least try to keep pace. If they hadn't, they would've been out of the game long ago. That took a lot of money and energy, and ultimately there was little they could've done to beat Intel at that level.
Oh and for the person questioning my comment on the NVidia sale to Intel...
I don't want to see it either, but they are vulnerable at this stage and it's a fit that suits both. Intel's GPU segment has been crap for forever and a day. NVidia has sorta diversified themselves to the point where they're in a quandary. Tegra and Tegra 2 are great but don't offer much over some of their competition at this stage (i.e. the A4, Snapdragon, Cortex-A9, etc.). Fermi came way too late to market, and sure, it's the overall performance champ, but it barely beat a year-old ATi design while requiring way too much power to do so and costing more. Don't get me wrong... I'm an NVidia fanboy and honestly prefer them over ATi (ATi's drivers/software are garbage), but... I'm just being realistic here. After Intel locked them out of the more recent chips for Ion, to the point that NVidia has had to take extra time on workarounds and make discrete options their primary focus even as AMD and ATi's combined solutions almost lock them out of that market as well, it's pretty safe to say that NVidia has significant needs, as does Intel, when it comes to GPU-on-chip performance.
Similar to how Apple won't develop and sell an x86 chip of their own, bet your bottom dollar that NVidia won't either, even though NVidia would ultimately have more to gain from doing so (they know how to market to PC builders, they have a desktop GPU line that would integrate well with their Ion tech, they have experience in motherboard and chipset manufacturing, and they would gain from the development and volume). The reality is... you're not going to remotely compete with Intel on sales, and you're better off working with Intel or AMD than going it alone. It's not like Transmeta, Montalvo, and Rise are selling anywhere near the volume AMD and Intel are, much less VIA for that matter. If Intel won't work with NVidia, there are two options... assimilate or die. I'd love to see a third option for NVidia, but I just don't see it with the way things are going. Sure, they can stick around as a niche player in the high-end graphics card market for gaming and workstation-level work, but they've even sorta slipped with Quadro there too. They're becoming a jack of all trades, master of none, too quickly. I'd hate to see them go the way of 3dfx, but it's looking that way.