If Apple is "apparently" so amazing at chip design, leaving everyone in their wake, one can only wonder why they have to wait for Intel, AMD, or (on the graphics side) Nvidia to supply them with the goods to build computers.
I mean, hey, why not just design their own chips from scratch?
Hmmmm?
Most desktop operating systems are designed with x86 chips in mind.
Using the A-series chips would require recompiling every single application from top to bottom, and most libraries would no longer be compatible with macOS. It would also automatically remove the ability to run Windows on any Mac, or any Windows application through Wine or similar.
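And it's worse than just rerunning the compiler. As a hypothetical sketch (in C; the function name sum4 is made up for illustration), here's the kind of code that lurks inside performance-sensitive libraries: it compiles only on x86 because it calls SSE intrinsics directly, so an ARM port isn't a recompile, it's a rewrite.

```c
#if defined(__x86_64__) || defined(__i386__)
#include <xmmintrin.h>  /* SSE intrinsics: exist only on x86 compilers */

/* Horizontally sum 4 floats using an x86 SSE vector register. */
static float sum4(const float *v) {
    __m128 x = _mm_loadu_ps(v);                  /* load v[0..3]           */
    x = _mm_add_ps(x, _mm_movehl_ps(x, x));      /* [v0+v2, v1+v3, ...]    */
    x = _mm_add_ss(x, _mm_shuffle_ps(x, x, 1));  /* lane 0 = full total    */
    return _mm_cvtss_f32(x);                     /* extract lane 0         */
}
#else
#error "x86-only code path: an ARM build needs a NEON rewrite"
#endif
```

Multiply that by every codec, image library, and crypto routine on the platform and you get a sense of the porting effort.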
And those are just the architecture-change problems; now you have to think about performance: on its best day, no A-series chip can even remotely compete with the Intel chips present in Macs, and they can't compete with the AMD GPUs used in Macs, or even the integrated ones.
It also makes no sense to compete against these companies; they have decades of expertise in their fields. It's like trying to play against NBA players: they're great at what they do, and you'd need years to reach their level (assuming you ever could).
Apple's work on its A-series chips is fantastic, considering how long they've been at it and the speed improvements they've achieved, but those chips are used in systems with very specific characteristics for which they're specially designed. They're great at their required TDP, but they're outmatched by the giants from Intel at the 35–45 W TDP level.
You could of course ask why they couldn't just stick multiple A9 chips together to create a competing offering (setting aside the architecture issues), but they'd have to either build amazing hardware to keep everything running at full speed (no cores sitting idle waiting for work) or rewrite the OS to take advantage of this solution, and writing good concurrent/parallel systems is considerably hard.
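As a hypothetical illustration of why that's hard (plain C with POSIX threads, nothing Apple actually ships): two threads bumping a shared counter without synchronization will almost never produce the expected total, because the increment is a non-atomic read-modify-write.

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;  /* shared, deliberately unprotected */

static void *bump(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;        /* data race: read, add, write, no lock */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* Expected 2000000; in practice it usually prints less. */
    printf("counter = %ld\n", counter);
    return 0;
}
```

Compile with `cc -pthread` and run it a few times. Scaling an OS scheduler and every framework above it across glued-together chips is that same class of problem, multiplied a few thousand times over.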
So in a nutshell: break compatibility with the world's most used OS and countless libraries/applications, require a complete recompile of everything (hoping no bugs show up because someone wrote code under a specific assumption about the CPU architecture), compete against multi-billion-dollar companies in fields where they have decades of expertise, and be forced to hack together some solution since the current chips simply can't compete on pure performance.
They have lots of money, but this would be a spectacularly hard task that would cost enormous amounts of money and time.
Verdict: at this point in time, this is a completely stupid idea; in the future, if they can provide similar performance levels and solve the compatibility issues, it could be a great move.