To me, push notifications are not the best solution at all.
For IM they may be a good idea, but what about a radio stream?
Secondly, I hope there will be an option to configure it differently per app:
in some apps only a badge, in others a popup window.

But I still don't think Apple is right about battery usage. I use Backgrounder with one or two background apps (usually for a limited time) and battery life is not as bad as they say, and being able to keep one or two apps in the background is always the better solution.

With a new model with better RAM, a better battery, ...
The Palm Pre can run apps in the background, so why can't the iPhone?
And what about the capacity of Apple's push servers? If millions of users rely on this service for every app, I would prefer a shorter battery life to a situation where the push service stops running because of data overload.

I agree. Push is better for some things, but it's totally useless in others. We need background apps for this to be a worthwhile upgrade. I can't justify another purchase otherwise. Not for now anyway.
 
What is the utility of giving background tasks zero quantums for an extended period of time?
Unless your foreground process wants 100% CPU for an extended period of time, why would this be contemplated? And, if this is required - say, for an advanced game - then sometimes my "first approximation" won't cut it, and we need to allocate some proportion of resource usage to background processes. For a second non-profiled approximation, let's allow the foreground process to be guaranteed up to 80% CPU time, but no more, on a system with busy background processes. I say "sometimes" because there are tasks that are happy waiting until the system is idle.
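To make that concrete, here's a toy sketch of the cap in quantum terms - the cycle length, cap and function names are mine, not any real scheduler API:

[code]
/* Toy sketch: out of every 10 scheduler quantums, at most 8 go to the
 * foreground app; the rest round-robin over runnable background tasks.
 * Purely illustrative - not any real scheduler. */
#include <stdio.h>

#define QUANTUMS_PER_CYCLE 10
#define FOREGROUND_CAP      8   /* the 80% ceiling */

/* Returns -1 for the foreground task, otherwise a background task index. */
static int pick_next(int quantum_in_cycle, int n_background, int *rr)
{
    if (n_background == 0 || quantum_in_cycle < FOREGROUND_CAP)
        return -1;
    return (*rr)++ % n_background;
}

int main(void)
{
    int rr = 0;
    for (int q = 0; q < QUANTUMS_PER_CYCLE; q++) {
        int who = pick_next(q, 3, &rr);   /* three busy background tasks */
        if (who < 0)
            printf("quantum %d -> foreground\n", q);
        else
            printf("quantum %d -> background #%d\n", q, who);
    }
    return 0;
}
[/code]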

How is the OS scheduler supposed to know which one is OK to starve to death?
In any final implementation, the usual option is not to starve a task to death but to reduce the CPU time/bandwidth/etc. allocated to it.

Considering this option, your question becomes, "How can a process guarantee a sufficient amount of resource x per time unit t?" It might ask for it, and be granted it in the competition among background tasks. For example, with my 80% second approximation a request for 5% CPU might mean a guaranteed minimum of 1%, but usually the full 5%.
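Roughly what I have in mind, as a toy model (the pool size, floor and function names are all invented for illustration):

[code]
/* Toy model: background tasks request CPU shares out of the 20% pool the
 * 80% foreground cap leaves over. When requests exceed the pool, grants are
 * scaled down proportionally, but never below a guaranteed floor. */
#include <stdio.h>

#define BG_POOL        0.20   /* what the foreground cap leaves over */
#define GUARANTEED_MIN 0.01   /* the 1% floor from the example above */

static double grant(double requested, double total_requested)
{
    double g = requested;
    if (total_requested > BG_POOL)
        g = requested * (BG_POOL / total_requested);   /* proportional squeeze */
    return g < GUARANTEED_MIN ? GUARANTEED_MIN : g;
}

int main(void)
{
    /* One background task asking for 5% of the CPU gets its 5%... */
    printf("alone:     %.1f%%\n", grant(0.05, 0.05) * 100);
    /* ...but one of twenty such tasks is squeezed down to the 1% floor. */
    printf("one of 20: %.1f%%\n", grant(0.05, 1.00) * 100);
    return 0;
}
[/code]

With enough such tasks the floors alone oversubscribe the pool, which is exactly where the next step - telling tasks to back off or quit - has to kick in.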

And if it is OK to starve a background process to death, why is it running in the first place?
But, you're right, something needs to be done about, say, those 50 processes you've launched which each would like to consume 5%. When a task can no longer be given what it has explicitly asked for, or accounting data shows that it's always being pre-empted because it's using its maximum default allocation, you can notify that task about where it's over-consuming. This gives it the chance to degrade gracefully in a way it knows best (e.g. if you're running a P2P client, seed to fewer people); if it does not adjust its behaviour, or if some resource is so low (e.g. memory) that it is imperative that some processes exit, then after optional further warnings, you demand that it gracefully shut down.
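As a sketch of what that notification protocol could look like - the callback type, warning levels and registration function are hypothetical, not an existing iPhone OS API:

[code]
/* Hypothetical sketch of the "you're over-consuming" notification. */
#include <stdio.h>

typedef enum {
    RP_ADVISORY,    /* over your usual allocation - please back off */
    RP_URGENT,      /* back off now, or you'll be asked to quit */
    RP_TERMINATE    /* save state and exit gracefully */
} rp_level_t;

typedef void (*rp_callback_t)(rp_level_t level);

/* Stub registration - a real system service would fire these callbacks
 * based on the scheduler's accounting data. */
static rp_callback_t g_handler;
static void rp_register_handler(rp_callback_t cb) { g_handler = cb; }

/* Example: a P2P client degrading in the way it knows best. */
static void p2p_on_pressure(rp_level_t level)
{
    switch (level) {
    case RP_ADVISORY:  printf("seeding to fewer peers\n");    break;
    case RP_URGENT:    printf("pausing uploads entirely\n");  break;
    case RP_TERMINATE: printf("saving queue and exiting\n");  break;
    }
}

int main(void)
{
    rp_register_handler(p2p_on_pressure);
    g_handler(RP_ADVISORY);   /* simulate the OS issuing the first warning */
    return 0;
}
[/code]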

A task can describe its purpose to indicate to the system whether it's appropriate for it to be targeted early in the throttling/quitting procedure. For example, a word processor which might have high memory usage has no problem being quit and restarted, so it can announce itself as happy to be the first to go. A P2P client can indicate that it normally has heavy requirements but is happy to gracefully reduce its consumption. A chat client (without offline messaging), which in fact spends most of its time just select()ing in the background anyhow, would be within its rights to ask to be the last to die.
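Something along these lines, purely as an illustration (the enum and function are invented):

[code]
/* Hypothetical: a task declares up front how it should be treated when the
 * system has to throttle or quit background apps. */
#include <stdio.h>

typedef enum {
    BG_QUIT_ME_FIRST,   /* cheap to relaunch with its state, e.g. a word processor */
    BG_THROTTLE_ME,     /* heavy but can degrade gracefully, e.g. a P2P client */
    BG_KEEP_ME_LAST     /* nearly idle and hard to replace, e.g. a chat client */
} bg_class_t;

/* Toy stand-in: a real system would record this in per-process scheduler data. */
static void bg_declare_class(const char *app, bg_class_t cls)
{
    printf("%s registered as class %d\n", app, cls);
}

int main(void)
{
    bg_declare_class("word processor", BG_QUIT_ME_FIRST);
    bg_declare_class("p2p client",     BG_THROTTLE_ME);
    bg_declare_class("chat client",    BG_KEEP_ME_LAST);
    return 0;
}
[/code]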

Do you alert the user you are about to starve it?
If you're killing a process, such as the chat client above, the death of which will actually make a difference to the user, then yes, you need to tell the user. If you can't reasonably do 2 things at once on the hardware, such as real-time voice recording and encoding while playing some OpenGL masterpiece, then yes, you notify the user - as early as possible, so you don't get the usual desktop experience of a pointless combination of stuttering audio and jerky gameplay.

But mostly you'll just be gracefully degrading performance of background apps or quitting apps which will re-open in exactly the same state as before, and the user need not know this.

Everything Apple has talked about in the design rationale for turning off apps has been about battery life.
OTOH, "need to conserve the battery" might be modelled as a resource-hungry process. But what is it about background apps that Apple thinks is going to take up so much battery? They're mostly not touching the UI or the USB, so that leaves the CPU and the radios. The average ARM CPU runs a few hours at 100% load on four AAA cells, and the average background process is likely not CPU-bound (even if CPU were a problem, you could throttle to a nice slow clock)... so my stab in the dark is that, if Apple were telling the truth, they would be worrying about network usage.

Of course, if the background app is one that does a lot of network transfer anyway, then that's the user's business. That leaves the case of, say, a backgrounded chat client, which pings the server every so often (or vice versa), whereas a push scenario would start with a custom notification event Y. So is it that a ping requires either some data connection to be maintained across the cell network, or the connection to be brought up and torn down at each ping, whereas Y can be sent in a different manner across the cellular network (i) quickly and (ii) without requiring high power to the radio between Ys? Do GPRS/3G/etc connections really not have an idle mode which only requires occasional keep-alives and allows the radios to stay in low power?

And it isn't them consuming UI events that is really going to kill off performance and/or resource consumption.
What does that mean? Using any modern desktop under heavy load confirms that interactive response almost always suffers. Using, say, certain builds of BeOS or QNX confirms that it shouldn't have to.

It is one thing to run traffic shaping on an external router. All the overhead in that case gets sucked up on that external CPU.
Nothing you have told me about traffic shaping suggests that allocating bandwidth appropriately among half a dozen processes is going to be resource intensive.

Does that even remotely sound like the normal context of an iPhone?
Absolutely. A typical power-user iPhone might run a voice app, IM app, web app and downloader simultaneously. The voice must not degrade at all, we don't want the IM/web app to lag, and the downloader can wait on everything. Request priorities; shape!
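A toy shaper for exactly that mix might look like this - the classes, split ratios and link speed are all made up for illustration:

[code]
/* Toy shaper: voice is reserved in full, interactive apps split 80% of what's
 * left, bulk transfers split the final 20%. */
#include <stdio.h>

typedef enum { NET_REALTIME, NET_INTERACTIVE, NET_BULK } net_class_t;

struct flow { const char *app; net_class_t cls; double want_kbps; double got_kbps; };

static void shape(struct flow *f, int n, double link_kbps)
{
    double left = link_kbps;
    int n_interactive = 0, n_bulk = 0;

    for (int i = 0; i < n; i++) {
        if (f[i].cls == NET_REALTIME) {          /* voice must not degrade */
            f[i].got_kbps = f[i].want_kbps;
            left -= f[i].want_kbps;
        } else if (f[i].cls == NET_INTERACTIVE) {
            n_interactive++;
        } else {
            n_bulk++;
        }
    }
    for (int i = 0; i < n; i++) {                /* split the remainder */
        if (f[i].cls == NET_INTERACTIVE)
            f[i].got_kbps = left * 0.8 / n_interactive;
        else if (f[i].cls == NET_BULK)
            f[i].got_kbps = left * 0.2 / n_bulk;
    }
}

int main(void)
{
    struct flow flows[] = {
        { "voice",      NET_REALTIME,    24,  0 },
        { "IM client",  NET_INTERACTIVE, 10,  0 },
        { "web app",    NET_INTERACTIVE, 100, 0 },
        { "downloader", NET_BULK,        500, 0 },
    };
    shape(flows, 4, 200);   /* a 200 kbit/s cellular link */
    for (int i = 0; i < 4; i++)
        printf("%-10s gets %5.1f kbit/s\n", flows[i].app, flows[i].got_kbps);
    return 0;
}
[/code]

(A real shaper would also cap each grant at what the flow actually wants and redistribute the slack, but the point is that the bookkeeping is a handful of comparisons, not something resource intensive.)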

And what if an app uses few CPU cycles but turns the radio on full blast for an hour? Or uses few CPU cycles but burns lots of energy rewriting flash cells?
These are just more resources to be rationed.

chirping away at a moderate rate to the internet over a 3G network
Why are you "chirping away at moderate rate" by being continuously connected at full power to a high speed, power-hungry network?

going to have to insert lots more data into your kernel scheduler data structures. That requires getting locks.
Lots more? Locks? Why can't each thread simply have a number incremented by the scheduler when its turn is over? Similar accounting figures reflect network, etc access. A separate kernel accounting process is responsible at regular intervals, and when the scheduler feels overworked, for collecting all this data, sorting by process, and changing priorities or instructing processes to change their behaviour. Maybe for some data it's more efficient to lock and pool by process/system from the start - a fine-grained lock that's typically only needed for half a dozen instructions isn't hugely expensive!
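Here's roughly the kind of accounting I mean, as a user-space toy rather than real kernel code (all names invented):

[code]
/* The scheduler bumps plain per-thread counters; a separate accounting pass
 * reads them at a relaxed interval. */
#include <stdint.h>
#include <stdio.h>

struct thread_acct {
    const char *name;
    uint64_t quanta_used;   /* bumped when the thread's slice ends */
    uint64_t net_bytes;     /* network traffic attributed to this thread */
};

/* "Scheduler" side: a single unlocked increment per context switch. */
static inline void account_quantum(struct thread_acct *t) { t->quanta_used++; }

/* "Accounting task" side: aggregate, compare against allocations, and flag
 * offenders for throttling or a pressure notification. */
static void accounting_pass(struct thread_acct *t, int n, uint64_t quanta_limit)
{
    for (int i = 0; i < n; i++)
        if (t[i].quanta_used > quanta_limit)
            printf("%s is over its CPU allocation (%llu quanta)\n",
                   t[i].name, (unsigned long long)t[i].quanta_used);
}

int main(void)
{
    struct thread_acct threads[3] = { {"chat", 0, 0}, {"p2p", 0, 0}, {"game", 0, 0} };

    /* Simulate a second's worth of scheduling decisions: game and p2p busy. */
    for (int i = 0; i < 1000; i++)
        account_quantum(&threads[(i % 2) ? 1 : 2]);

    accounting_pass(threads, 3, 300);
    return 0;
}
[/code]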

Why isn't it customary? One, if you inform the OS about every loop construct you are about to enter, how many more kernel traps is your program going to take now?
Why the hyperbole? If Photoshop is about to convert a 100MB image, or an unzipper about to unzip a 500MB file, they know they're about to do a lot of CPU/disk/memory work. Profiling allows typical resource requirements as a function of input size to be examined during testing, and this information can be embedded in the release. This sort of information can be collected dynamically, though I'm sure you'd prefer to switch that off for your end users.
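For illustration, the embedded profile data could be as dumb as a cost-per-megabyte table (the coefficients and names here are invented):

[code]
/* Ship profiled cost-per-megabyte coefficients with the release, so the app
 * can announce roughly what a job will need before it starts. */
#include <stdio.h>

struct cost_per_mb {
    double cpu_seconds;
    double flash_mb_written;
};

/* Gathered by profiling test runs of the unzipper over various archive sizes. */
static const struct cost_per_mb unzip_cost = { 0.02, 1.0 };

int main(void)
{
    double input_mb = 500;   /* the 500MB file from the example above */
    printf("unzip estimate: ~%.0f s CPU, ~%.0f MB written to flash\n",
           unzip_cost.cpu_seconds * input_mb,
           unzip_cost.flash_mb_written * input_mb);
    return 0;
}
[/code]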

In normal circumstances, how much more energy/time are you burning up?
In answer to this question in general - profile! Of course you're going to get some overhead, just as you get overhead for choosing ObjC/Cocoa over straight C. You're asking questions as if none of this sort of thing has been implemented before, on operating systems over two decades old running on hardware less powerful than what you find in your pocket (hello, pile of VAXstations in the corner!).

Never mind that you now need a GUI application to show folks what is running on their phone. You want the average Joe Blow phone user to manage daemons. Seriously?
No you don't - see above. One of the things Apple got right in all of this was to mock the WinCE task manager. But instead of observing that they'd just discovered a problem with operating system resource allocation, they decided that the OS was fine and that it was time to bolt on some new nonsense.

First, the kernel is further in front of the line than the foreground app. It has to get the UI event that is already prioritized ahead of all of the apps. The notion that the event is not caught and handled immediately isn't true now.
But handling a UI event usually requires some processing to provide an immediate UI response. It's that sort of thing that gets pre-empted when the average system is under high load. This is why I can shake a desktop window around the screen hopelessly but it won't redraw for 10 seconds.

For example, in a game, even if the user doesn't do anything, they could be competing with objects in the game. Those objects aren't going to stop just because the user isn't doing anything.
Idleness happens on a centisecond scale though. If it takes me 10ms each to update the screen 20 times per second, while worker threads update object state 30 times a second, each update taking 10ms, that's 200ms + 300ms = 500ms of work per second; even allowing 5% scheduler overhead on top, I still have a ~47% idle machine. If I'm writing my game with the aim of maximising performance by using 100% CPU, well, the scheduler can just pretend to the app that the hardware is 20% less powerful - if this is going to destroy performance, the programmer is making an unusual level of assumptions about precise hardware specs.

P.S. another reason is security. One reason folks don't want to drop background worker drones onto the iPhone is that you can't have installable background worker drones. Your virus or trojan would only run when the user ran it explicitly.
Sorry, this is a complete red herring. If you're writing malware and managing to sneak it onto the App Store then Apple telling you not to write background apps is not going to ensure that you create an app that only behaves maliciously in the foreground.

Thanks for the interesting response. I need to go AFK, so I may not read any reply.
 