There are public and unpatched 0-days that achieve remote code execution in Windows 7.
http://www.eeye.com/Resources/Security-Center/Research/Zero-Day-Tracker/2011/20110402
There's a lot more than what that website points out. However, like I said earlier, the severity of an issue has to be weighed against how frequently it's likely to happen out in the real world, and I've heard of no such issue surfacing outside of specific environments created to make it surface. If I continue to use my Win7 machine, the likeliness of those issues putting me at critical risk is lessened by everyday (nothing special) controls that are already in place: firewalls mitigate some of the risks, so does whatever antithreat software is installed, and on a managed system the permissions/policies in place, including firewall restrictions, take a lot of those risks away.
It's like going outside while it's raining: you could possibly die from a stray lightning bolt, or at the very least be severely injured. But what's the likeliness of that actually happening, and does it mean nobody should go outside while it's raining? Proper risk assessments don't just include severity; they also incorporate the frequency (probability) of such an issue happening. Control methods (patches, updates, etc.) are then deployed in accordance with the resulting risk level.
A proper risk management system looks at the residual risk(s) left after assessing severity and probability (likeliness) together. If the residual risk were that severe, I'm almost certain a solution would've been made available.
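The severity-times-probability idea above can be sketched as a toy risk matrix. Note this is just an illustration in my own terms: the 1-5 scales, score thresholds, and rating names are assumptions, not taken from any particular standard.

```python
# Toy qualitative risk matrix (illustrative only; scales and
# thresholds are assumed, not from any official methodology).

def risk_level(severity: int, probability: int) -> str:
    """Combine a 1-5 severity and a 1-5 probability into a coarse rating."""
    score = severity * probability  # ranges from 1 to 25
    if score >= 15:
        return "critical"  # deploy controls/patches immediately
    if score >= 8:
        return "high"
    if score >= 4:
        return "medium"
    return "low"           # accept or simply monitor

# A severe but very unlikely issue rates lower than severity alone suggests:
print(risk_level(severity=5, probability=1))  # prints "medium"
print(risk_level(severity=5, probability=5))  # prints "critical"
```

This is the point about the unpatched 0-days: maximum severity with low real-world probability lands at a lower overall rating than severity alone would imply.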
Safari was not sandboxed at the time so this is irrelevant now.
Even sandboxing isn't perfect; like I mentioned earlier, it's only a matter of time, and that's not limited to Safari. Chrome operates in a sandboxed environment and it was circumvented; just because Safari is a product of Apple does not magically make it immune to future hack attempts. What this illustrates is that while many people like to think Apple takes the lead in many things, the browser wars clearly show it was trailing behind and only recently came up to speed.
Its relevancy is actually determined by how many users are on the current Safari versus the older, still-vulnerable version. Whether users are unable, unwilling, or unaware of the option to upgrade to the latest version is a real factor; the issue remains that the risks aren't eliminated just because a new version is out. Same for when I see posts about "Windows": while Windows 7 is a vast improvement over XP in almost every way, I can't totally discount XP-related issues, because XP is still officially supported and demographically there are a lot of XP users today.
Being first to fall is irrelevant because the browsers are not pitted head to head but are hacked one at a time on a predetermined schedule.
Safari is always scheduled to be first, most likely because Apple neither sponsors the contest nor has a bug bounty program like all of the other browser vendors with products in the contest.
Actually, being the first to fall is quite relevant, especially when it only took 5 to 10 seconds to crack the browser, and contestants were required to get into the system remotely (through the WAN), which is a lot harder than doing it from within the LAN. If you recall, Apple made bold remarks about how secure and advanced Safari was compared with all the other browsers, and all it took was 5-10 seconds from "the outside" to bust through it. Other well-known browsers took much longer to circumvent; Chrome either prevailed or took the longest to get through.
While browsers aren't "pitted" against each other directly, in an indirect sense they are. Each company keeps improving its browser to meet the latest demands and security risks. Whatever advantage one company's browser has is usually only a temporary lead until the others catch up and/or surpass it. Each product collectively pushes the others to improve.
The battery issue, I feel, is a very small risk. The article is one-dimensional in that it only addresses the severity of the risk, not the likeliness or probability of it actually happening in the real world. It would be like a desktop user claiming I'm putting my machine at huge risk just because I use a laptop and carry it around from place to place, so if I drop it I could destroy it. While there's no question about the severity of what could happen "if" I dropped my machine, in reality I have never dropped a laptop, because I have "controls" in place that mitigate that issue down to nearly inconsequential.
Security professionals would like everyone to live a very paranoid life; to them, just being connected to the internet is a huge security risk in general. But that doesn't mean you should unplug your computer and use it 1980s-style, limited to local software and resources. Technically, some keyboards and mice have firmware as well, but you don't see huge security posts about how it can brick your machine.
So yes, it CAN happen, but I believe the likeliness of any of the circumstances posted above is much lower than what the article suggests.