IMO, the problem has nothing to do with any particular manager, and everything to do with over-reliance on dogfooding as a testing methodology. Dogfooding works very well when your engineers have the same hardware as your customers and use the software in the same way as your customers. Unfortunately, it doesn't work nearly as well when either of those assumptions is not met.
With new hardware, you'd expect a period of time right after the hardware ships when there aren't enough people dogfooding the software on the new hardware to find problems. And sure enough, that's when 8.0.1 broke, and it's the new hardware that had problems.
Apple's attempts to keep hardware-specific changes out of developer builds only compound this risk by delaying merges until late in the game. And if they ship separate builds for new hardware (as they have often done in the past), those merges could happen after the first release of the OS, in which case the first bug-fix release is exactly when any merge mistakes are most likely to surface.
To solve this, what Apple needs to do is stop hiring people to be QA testers as a means of deciding whether they're good enough to be engineers. Instead, hire actual, competent programmers to write unit tests, to design test harnesses, etc. Don't look for "detail-oriented" QA testers to do manual QA testing. That's no better than dogfooding, and it's a waste of time and energy. Only automated testing provides any real win, IMO.
And once you get a test environment built, all of your engineers should contribute tests for new code, and your QA engineers should keep combing old code for new test opportunities for a while before moving into normal engineering roles, continuing to maintain the test framework as part of their responsibilities.
For example, to catch call failures, Apple should have a set of devices, one per carrier, and a series of tower simulators configured for each carrier. When a new build drops, those devices should immediately make a series of test calls on their simulated home network and on roaming networks under various signal strength conditions ranging from crap to solid, with various availability of 3G, EDGE, GPRS, 1xRTT, LTE, etc. If the carriers support VoLTE, they should have that turned on in the tower half the time and off the other half. And they should simulate handoffs under various conditions to ensure that they work as often as possible. And so on.
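To make that concrete, here's a minimal sketch of what such a test matrix might look like. The TowerSimulator and TestDevice classes are hypothetical stand-ins for whatever lab gear and device control Apple actually has; the point is the exhaustive parameter sweep, not the hardware glue.

```python
# Sketch of the per-carrier call-test matrix. TowerSimulator and
# TestDevice are assumed stand-ins for real lab hardware.
import itertools

RADIO_TECHS = ["gprs", "edge", "1xrtt", "3g", "lte"]
SIGNAL_DBM = [-115, -105, -95, -85, -75]  # crap ... solid

class TowerSimulator:                     # stand-in for real simulator gear
    def configure(self, tech, signal_dbm, volte, roaming):
        pass                              # would reprogram the tower simulator

class TestDevice:                         # stand-in for a rack-mounted phone
    def place_test_call(self, duration_s=30):
        return True                       # would dial out and report success

def run_call_matrix(sim, device):
    """One test call per (tech, signal, VoLTE, roaming) combination."""
    failures = []
    for tech, dbm, volte, roaming in itertools.product(
            RADIO_TECHS, SIGNAL_DBM, (True, False), (False, True)):
        sim.configure(tech=tech, signal_dbm=dbm, volte=volte, roaming=roaming)
        if not device.place_test_call():
            failures.append((tech, dbm, volte, roaming))
    return failures  # run once per carrier's device/simulator pair
```

Run that against every new build, once per carrier, and a regression like a broken dialer shows up in minutes instead of after release.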
Additionally, when testing a new build, the PRLs (preferred roaming lists) should be checked against the carriers' current lists, and if more than a tiny percentage of towers appears or disappears, that should be a red flag that triggers manual confirmation by a human being.
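The check itself is just a diff with a threshold. A sketch, assuming the tower lists have already been extracted as sets of IDs (parsing real PRLs, a carrier-defined binary format, is out of scope here), with an illustrative 2% threshold:

```python
# Sketch of the PRL churn check; tower IDs are assumed pre-extracted.
def prl_churn(old_towers: set, new_towers: set) -> float:
    """Fraction of tower entries that appeared or disappeared."""
    if not old_towers:
        return 1.0
    return len(old_towers ^ new_towers) / len(old_towers)

def check_prl(old_towers, new_towers, threshold=0.02):
    ratio = prl_churn(set(old_towers), set(new_towers))
    if ratio > threshold:
        # Don't auto-fail the build; escalate to a human, as argued above.
        raise RuntimeError(f"PRL churn {ratio:.1%} > {threshold:.0%}; "
                           "hold the build for manual confirmation")
```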
And any changes to the carrier bundle should be checked for validity against a set of known-valid keys and values, and unexpected values should result in continuous email warnings until either the test suite or the bundle gets fixed. And the moment an updated carrier bundle gets submitted into a build, the test harness should send a summary email indicating which values changed, which values were added, and which values were removed, to ensure that those changes were expected.
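Since iOS carrier bundles store their settings in plists, plistlib is the natural parser for a check like this. A sketch: KNOWN_KEYS is an illustrative subset, not the real schema, and send_warning_email is a hypothetical hook into whatever alerting the build system uses.

```python
# Sketch of the carrier-bundle diff-and-validate step.
import plistlib

KNOWN_KEYS = {"CarrierName", "AllowVoLTE", "APNs"}  # assumption, not the real schema

def diff_bundle(old_path, new_path):
    with open(old_path, "rb") as f:
        old = plistlib.load(f)
    with open(new_path, "rb") as f:
        new = plistlib.load(f)
    added   = {k: new[k] for k in new.keys() - old.keys()}
    removed = {k: old[k] for k in old.keys() - new.keys()}
    changed = {k: (old[k], new[k])
               for k in old.keys() & new.keys() if old[k] != new[k]}
    unexpected = new.keys() - KNOWN_KEYS
    return added, removed, changed, unexpected

def summarize(added, removed, changed, unexpected, send_warning_email):
    send_warning_email(subject="carrier bundle diff",
                       body=f"added: {added}\nremoved: {removed}\nchanged: {changed}")
    if unexpected:
        # Keep nagging until the bundle or the test suite is fixed.
        send_warning_email(subject="UNEXPECTED carrier bundle keys",
                           body=str(sorted(unexpected)))
```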
That's the level of testing I expect from a company the size of Apple, not just sending their engineers out into the world with phones running a new version of the OS. Yes, you should do that, too, simply because you may not always be able to replicate real-world environments in your test lab, but haphazard dogfooding is not a substitute for a thorough matrix of controlled tests.
If I'm wrong, and they ran a battery of phone tests that still didn't catch this problem, then, and only then, should you blame the testers and their management for not getting the tests right. Until that level of testing happens with regularity, though, you should instead blame the people at the top for not making testing a high enough priority.