Well, that is a poor excuse, since many of their software problems turn out to be ridiculously obvious issues within hours of a public release, not strange fringe cases found weeks or months later. Apple also runs a beta program, set up months in advance, that puts builds into "public" users' hands, and yet it fails to catch many glaring release issues.
Apple also controls its own hardware and has only a limited range of product variations to target its software on. When Android or Windows can ship on a far wider set of hardware and still deliver better initial quality and performance than many recent Apple OS releases, that speaks volumes about the actual quality of Apple's development processes.
Lastly, a trillion-dollar company should not have a "small" development team incapable of delivering better initial release quality, not for something as prominent as the software required by the hardware that made it a trillion-dollar company. I don't think this comes down to incompetent developers; rather, there is clearly a culture of poor executive leadership and denial at Apple, where they believe they are still producing the kind of quality they were known for back when Steve Jobs would chew the heads off his development team because the color of an icon didn't come out right.
Excusing Apple for the plethora of iOS and macOS release bugs is nonsense. The company charges more for its products, so my expectations of initial quality are, and should be, far higher than for the average software company. If Apple wants to sell $300 phones full of bugs, then my expectations will match the value of the phone. But sell me a $1300 phone that is quickly broken or crippled by the next iOS patch or major release, and that is inexcusable.
How many test plans have you written?
How many automated test suites have you written?
Do you have any knowledge of combinatorics?
Writing good tests is HARD, and it is impossible, for a large code base, to guarantee that all bugs have been eliminated, or even that all serious bugs have been eliminated. You can improve your process by requiring reviewers for every change, by using code coverage tools and requiring monotonically increasing test coverage for merges to mainline, by using static analysis tools (though these are fraught with false positives), by using test generators, by using "fuzzing" tools, and so on. But given the combinatorial challenges in a large code base and the time required to write and run tests, if you demand 100% statement and condition/decision coverage, you will never release.
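To make the combinatorics concrete, here is a back-of-the-envelope sketch in plain Python. The dimensions and counts are entirely hypothetical (not Apple's actual test matrix); the point is only how quickly the space of configurations outgrows any realistic test budget:

    from math import prod

    # Hypothetical configuration dimensions for a mobile OS release.
    # Each entry is (dimension name, number of variants to cover).
    dimensions = [
        ("device model", 20),
        ("OS upgrade path (prior version installed)", 6),
        ("language/region", 40),
        ("storage state (nearly full, fragmented, fresh...)", 4),
        ("network condition (Wi-Fi, LTE, 5G, offline...)", 5),
        ("third-party call/content-blocker extensions", 8),
    ]

    total_configs = prod(n for _, n in dimensions)
    print(f"Distinct configurations: {total_configs:,}")   # 768,000

    # Even at one automated end-to-end run per minute, around the clock:
    minutes_per_run = 1
    days = total_configs * minutes_per_run / (60 * 24)
    print(f"Wall-clock time to test each once: ~{days:,.0f} days")  # ~533 days

And that counts only static configurations, before any sequencing of user interactions is considered.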
Even then, you cannot cover all combinations of user environments (especially now, with plugins that allow for changing the behaviors of built-in applications with third-party code; e.g., Mr. Number and NoMoRobo for phone calls, picture editing plugins, ad blockers, etc.), and then there is the impossibility of recreating every possible sequence of user interactions in order to catch all corner cases.
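As a rough illustration of why fuzzing random interaction sequences helps but can never enumerate them all, here is a minimal, hypothetical sketch in plain Python. The `PhoneApp` class is a toy stand-in invented for this example, not any real Apple API:

    import random

    class PhoneApp:
        """Toy stand-in for an app under test; not a real API."""
        def __init__(self):
            self.screen_stack = ["home"]

        def open_settings(self):
            self.screen_stack.append("settings")

        def go_back(self):
            if len(self.screen_stack) > 1:
                self.screen_stack.pop()

        def rotate(self):
            pass  # imagine layout recalculation happening here

    ACTIONS = ["open_settings", "go_back", "rotate"]

    def fuzz(iterations=10_000, seq_len=25, seed=42):
        rng = random.Random(seed)
        for _ in range(iterations):
            app = PhoneApp()
            seq = [rng.choice(ACTIONS) for _ in range(seq_len)]
            for action in seq:
                getattr(app, action)()
            # Invariant: the home screen must never be popped off the stack.
            assert app.screen_stack[0] == "home", f"corner case found: {seq}"

    if __name__ == "__main__":
        fuzz()
        # There are 3**25 (roughly 8.5e11) possible sequences of this length
        # alone; 10,000 random ones barely scratch the surface.

Ten thousand random runs can surface real corner cases opportunistically, but they sample a vanishingly small fraction of the possible interaction sequences.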
Finally, there is the notion, to which I was first introduced at a talk by a guest researcher at Lawrence Livermore back in the early '90s, that at some point, you reach a minimum field-fault density. At that point, every attempt to fix additional bugs ends up introducing new ones, and the only way to get better from that point is to redesign and rewrite (at least portions of) the code base. He called this the "Reimann-Belady Curve," but I've not been able to find it in a search, so I may be misremembering it.
Throwing money at the problem DOES NOT fix it. See, e.g., Fred Brooks, The Mythical Man-Month.