What would be good is for modern consumer operating systems to have less primitive scheduling algorithms. The literature has been discussing methods of prioritisation since the '60s. Your goals are simple:
1. Don't let background apps harm the UI experience of the foreground apps _at all_, by giving them little to zero quantum when a UI response/update is required;
That research literature doesn't mention anything about starvation?
What is the utility of giving background tasks zero quanta for an extended period of time? How is the OS scheduler supposed to know which ones are OK to starve to death? And if it is OK to starve a background process to death, why is it running in the first place? Do you alert the user that you are about to starve it?
Second, once you allow unbounded background processes they will compete against each other in addition to competing with the foreground app. It is not just a one-foreground, one-background problem.
iPhone OS can simply borrow the Unix priority/nice mechanism that Mac OS X uses (if it doesn't already). There is little evidence that foreground apps not getting enough cycles is a problem for iPhone OS (the CPU is severely underpowered as it is). Everything Apple has talked about in the design rationale for turning off apps has been about battery life, not that their selected ARM core is underpowered.
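For reference, a minimal sketch of what borrowing that mechanism looks like at the API level, using the standard setpriority() call; the pid and nice value here are arbitrary example choices, not anything iPhone OS is known to do:

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <unistd.h>

/* Lower the scheduling priority of a process that has moved to the
 * background. Positive nice values mean "be nicer", i.e. yield to
 * higher-priority (foreground) work. */
static int deprioritize(pid_t background_pid)
{
    /* 10 is an arbitrary example value; 0 is the default, 20 the nicest. */
    if (setpriority(PRIO_PROCESS, background_pid, 10) == -1) {
        fprintf(stderr, "setpriority failed: %s\n", strerror(errno));
        return -1;
    }
    return 0;
}

int main(void)
{
    /* Renice the current process as if it had just gone to the background. */
    return deprioritize(getpid());
}
```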
Finally, you say "foreground apps". Well, if you start enough of those you eventually run out of resources. Or do you want the priority to follow the UI focus? (I'll dive deeper into adding lots of inserts into the scheduler, and into multithreaded apps, later.) And it isn't their consuming UI events that is really going to kill performance and/or resource consumption: unless you add multiple users, you are limited by how many events one user can generate, and no single human is going to overwhelm a modern CPU running a manageable workload.
2. Use flow control to improve network performance of interactive applications;
Is it really going to improve performance? It is one thing to run traffic shaping on an external router; all the overhead in that case gets sucked up by that external CPU. Likewise, if you have "too many" cores inside your box and you are plugged into the wall for power, you can throw the overhead at those under-leveraged resources.
Does that even remotely sound like the normal context of an iPhone?
Furthermore, traffic shaping is usually done on protocol/pattern. You are talking about having to track not just what is tracked now, but also ports and the process ID using each port. Then your shaper has to factor yet another dimension into its shaping decisions. Sure, it could be done, but how much overhead have you added?
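To make that extra dimension concrete, here is a rough sketch of the per-process bookkeeping such a shaper would have to carry on top of its existing per-flow state. The struct, field names, and numbers are invented for illustration, not taken from any real shaper:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>
#include <time.h>

/* Hypothetical per-process token bucket: one of these per (pid, port)
 * pair, on top of whatever per-flow state the shaper already keeps. */
struct proc_bucket {
    pid_t    pid;              /* owning process - the new dimension to track */
    uint16_t port;             /* local port mapped back to that process      */
    double   tokens;           /* bytes currently allowed to send             */
    double   rate_bps;         /* refill rate, e.g. lower for background pids */
    struct timespec last_refill;
};

/* Refill the bucket, then decide whether a packet may pass. Every packet
 * now costs a lookup keyed by pid as well as by flow. */
static bool shape_allow(struct proc_bucket *b, size_t pkt_bytes)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);

    double elapsed = (double)(now.tv_sec - b->last_refill.tv_sec)
                   + (double)(now.tv_nsec - b->last_refill.tv_nsec) / 1e9;
    b->tokens += elapsed * b->rate_bps / 8.0;   /* rate is in bits/sec */
    double burst = b->rate_bps / 8.0;           /* cap at ~1 second of burst */
    if (b->tokens > burst)
        b->tokens = burst;
    b->last_refill = now;

    if (b->tokens >= (double)pkt_bytes) {
        b->tokens -= (double)pkt_bytes;
        return true;                            /* pass the packet */
    }
    return false;                               /* queue or drop it */
}

int main(void)
{
    struct proc_bucket b = { .pid = 1234, .port = 443,
                             .tokens = 0.0, .rate_bps = 1e6 };
    clock_gettime(CLOCK_MONOTONIC, &b.last_refill);
    return shape_allow(&b, 1500) ? 0 : 1;       /* one 1500-byte packet */
}
```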
Again, where is the evidence that this is primarily a bandwidth problem?
Lots of home network routers use ARM cores very similar to the one in the iPhone. They can do traffic shaping and whatnot too. However, they are also plugged in.
3. Don't let background apps suck up too much total CPU power over time. Publish recommended limits and the throttling algorithm used on misbehaving processes. I still don't understand why modern consumer operating systems don't react more gracefully to that runaway 99% process: why isn't it customary to inform your OS if you're about to use a huge amount of resources, so if you're doing it otherwise it can be assumed you're broken and need aborting?
A few factors to consider:
And what if a process uses few CPU cycles but turns the radio on full blast for an hour?
Or uses few CPU cycles but burns lots of energy rewriting flash cells?
It doesn't have to be a runaway fork bomb that is burning up the power; it is not solely a "misbehaving program" problem. If there are 5 programs all chirping away at the internet at a moderate rate over a 3G network, that is going to be a significant power drain, and 6, 7, or 8 of them will be even worse. It isn't the individual programs that need managing but the aggregate power. That is yet another data structure for your scheduler to count and manage: you'd have to do accurate blame assignment for all the downstream energy consumption, and we don't really even do that for CPU consumption, where the scheduler hands out the quanta in the first place. Furthermore, you are going to have to insert lots more data into your kernel scheduling data structures. That requires taking locks, and while your structures are locked up the scheduler can't do its job of doling out new quanta because it is busy taking updates instead.
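As a rough illustration of the bookkeeping being argued against, per-process energy accounting would mean something like the following living near the scheduler's hot path. Every name and number here is invented for the example:

```c
#include <pthread.h>
#include <stdint.h>
#include <sys/types.h>

/* Hypothetical per-process energy ledger the scheduler (or something
 * next to it) would have to maintain alongside its run-queue state. */
struct energy_ledger {
    pid_t    pid;
    uint64_t cpu_microjoules;    /* attributable CPU energy            */
    uint64_t radio_microjoules;  /* 3G/WiFi energy blamed on this pid  */
    uint64_t flash_microjoules;  /* flash rewrite energy               */
    pthread_mutex_t lock;        /* every update contends on this      */
};

/* Called from driver/network paths to blame downstream energy use on a
 * process. Each call takes the lock - exactly the kind of contention a
 * scheduler does not want near its hot path. */
static void charge_radio(struct energy_ledger *l, uint64_t microjoules)
{
    pthread_mutex_lock(&l->lock);
    l->radio_microjoules += microjoules;
    pthread_mutex_unlock(&l->lock);
}

int main(void)
{
    struct energy_ledger led = { .pid = 1234 };
    pthread_mutex_init(&led.lock, NULL);
    charge_radio(&led, 250000);   /* pretend the radio burned 0.25 J */
    pthread_mutex_destroy(&led.lock);
    return 0;
}
```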
Fourth, what about zombies? There could be mostly comatose processes that sip resources over a long time.
Why isn't it customary? One, if you inform the OS about every loop construct you are about to enter, how many more kernel traps is your program going to take? Is that kernel trap going to prematurely end your quantum? (What if there are 5 other programs all insisting they are more important than you?) Aren't those kernel traps costing you overhead? In normal circumstances, how much more energy/time are you burning up?
Pre-declaration makes sense for batch jobs: regularly run, fairly predictable jobs. For anything that depends on user input and/or shared resources, how is the code going to make an accurate estimate?
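For the sake of argument, here is roughly what "tell the OS before a big loop" would look like in code. declare_cpu_budget() is a made-up call (stubbed out below) standing in for whatever kernel trap such a scheme would need; the per-item cycle guess shows where the estimate falls apart for input-driven work:

```c
#include <stddef.h>
#include <stdint.h>

/* Stub standing in for a hypothetical syscall; no such call exists.
 * A real version would be a user->kernel transition on every use,
 * which is exactly the trap overhead being questioned above. */
static int declare_cpu_budget(uint64_t estimated_cycles)
{
    (void)estimated_cycles;
    return 0;
}

static void process_items(const int *items, size_t n)
{
    /* The estimate itself is the weak point: for work driven by user
     * input or shared resources, n and the per-item cost may not be
     * known until we are already deep in the loop. */
    declare_cpu_budget((uint64_t)n * 1000);   /* guess: ~1000 cycles/item */

    for (size_t i = 0; i < n; i++) {
        (void)items[i];   /* ... real work on items[i] would go here ... */
    }
}

int main(void)
{
    int items[4] = { 1, 2, 3, 4 };
    process_items(items, 4);
    return 0;
}
```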
Never mind that you now need a GUI application to show folks what is running on their phone. You want the average Joe Blow phone user to manage daemons? Seriously?
Also, just how huge is your scheduler going to be? The more complex you make it, the more time it takes to get in and out of it.
The consumer scheduling method prevalent in the late '80s through to mid-'90s was cooperative multitasking, of course. Now I'm certainly not suggesting that across the OS, but imagine as a first approximation the foreground app never relinquishing control to other apps (ignoring necessary background daemons) except when it's idling. When there's a UI event, the foreground app immediately gets control back.
First, the kernel is further toward the front of the line than the foreground app: it has to receive the UI event, which is already prioritized ahead of all of the apps. The notion that the event is not caught and handled promptly isn't true now. If the foreground app is parked on a "wait next event" kernel trap, it will be put on the "ready to run" queue and get the next quantum its priority allows.
Second, you're ignoring multithreaded applications. One thread in an app could be sitting in a "wait next event" call, but other threads could be off doing other work. An app doesn't have to idle just because it is waiting on UI events; for it to be idle, all the threads would have to join and declare they had nothing to do. (For example, in a game, even if the user doesn't do anything they could be competing with objects in the game, and those objects aren't going to stop just because the user isn't doing anything.)
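A toy sketch of that split, with the event handling and the simulation step invented for illustration: the event thread is blocked in the kernel, yet the app as a whole is not idle:

```c
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t evt_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  evt_cond = PTHREAD_COND_INITIALIZER;
static bool have_event = false;

/* "Wait next event" thread: blocked in the kernel, consuming no quantum
 * until an event arrives. */
static void *ui_thread(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&evt_lock);
    while (!have_event)
        pthread_cond_wait(&evt_cond, &evt_lock);
    pthread_mutex_unlock(&evt_lock);
    printf("UI thread woke up for an event\n");
    return NULL;
}

/* Simulation thread: keeps the app busy even with no user input, like
 * game objects that move whether or not the player does. */
static void *sim_thread(void *arg)
{
    (void)arg;
    for (int tick = 0; tick < 5; tick++) {
        printf("simulating tick %d with no user input\n", tick);
        usleep(100000);   /* stand-in for real per-frame work */
    }
    /* Post a fake event so the example terminates. */
    pthread_mutex_lock(&evt_lock);
    have_event = true;
    pthread_cond_signal(&evt_cond);
    pthread_mutex_unlock(&evt_lock);
    return NULL;
}

int main(void)
{
    pthread_t ui, sim;
    pthread_create(&ui, NULL, ui_thread, NULL);
    pthread_create(&sim, NULL, sim_thread, NULL);
    pthread_join(sim, NULL);
    pthread_join(ui, NULL);
    return 0;
}
```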
The current iPhone application approach is already somewhat a form of cooperative multitasking. When you are in the foreground, you run. When you go to the background, you shut yourself down. The emphasis is on startup/shutdown and storing/recovering state.
P.S. Another reason is security. Part of the point of not letting folks drop background worker drones onto the iPhone is that then there are no installable background worker drones: your virus or trojan would only run when the user ran it explicitly.