If you can afford RAID, you can afford a UPS, or at least a dedicated backup.
As mentioned, RAID is not a backup. It is a measure to continue operation in the event of hardware failure, and to gain speed over and above what a single drive can provide. However, controller failure, theft, a double drive failure, etc. still means your data is gone - as does software/data corruption through power failure, etc.
Put it this way: it doesn't matter how fast your drives are if they don't reliably store your data. If your data is important, buy a UPS and a dedicated backup location, and THEN worry about making the machine faster or able to work through component failure with RAID.
RAID10 is really no worse in a power failure than no RAID. Or rather, a single drive is "just as bad" as RAID is in a power failure. RAID 10 is a RAID0 stripe across 2 (or more) RAID1 disk groups.
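To make that layout concrete, here's a rough sketch of how a RAID10 places a logical block - the disk names, group count and loop are made-up illustration, not any particular controller's behaviour:

```python
# Illustrative RAID10 layout: a RAID0 stripe across RAID1 mirror groups.
# Disk names and group size are hypothetical examples.
mirror_groups = [
    ["disk0", "disk1"],   # RAID1 group 0 (both disks hold identical data)
    ["disk2", "disk3"],   # RAID1 group 1
]

def raid10_targets(logical_block: int) -> list[str]:
    """Return every disk that must receive this logical block."""
    group = mirror_groups[logical_block % len(mirror_groups)]  # RAID0: stripe across groups
    return group                                               # RAID1: copy to each disk in the group

# Block 0 -> disk0+disk1, block 1 -> disk2+disk3, block 2 -> disk0+disk1, ...
for block in range(4):
    print(block, raid10_targets(block))
```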
Irrespective of whether the controller finishes writing to the array, in a power failure all your apps crash hard without closing files and flushing in-flight data to disk properly. I.e. the controller can only do what the apps tell it to do - if the apps suddenly hard crash, the controller may not have updated the application data fully, as it was never told to do so.
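That "telling the controller" step is just the application flushing and syncing before it dies. A minimal sketch of the difference, using standard Python calls (the filename and log line are made up):

```python
import os

# A buffered write sits in the application/OS cache; a hard crash or power
# failure can lose it even with a battery-backed RAID controller underneath.
with open("orders.log", "a") as f:       # hypothetical application data file
    f.write("order 1234 confirmed\n")
    f.flush()                            # push Python's buffer down to the OS
    os.fsync(f.fileno())                 # ask the OS to hand it to the storage layer

# Without the flush() + fsync(), the data may only exist in memory when the
# app dies, and no RAID level can bring back data it was never given.
```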
edit:
Big business will typically have a "SAN" - a big box of disks with a dedicated controller that other machines connect to over a dedicated storage network (fibre channel, iSCSI, etc). The only way to back these up is either site-to-site replication of snapshots (to another SAN at a different location), or tape.
A baby SAN may have, say, 16 disks in a RAID10 or RAID50 (a RAID0 stripe across multiple RAID5 groups) array. Larger SANs may have hundreds or thousands of disks.
We're about to upgrade from our baby 16 disk SAN (14 disks in a RAID50 with 2 hot spares) to something with 48 disks. Even the baby iSCSI SAN we have has >72 hrs of battery backup on its storage controllers (it emails a warning if it has less, lol). The new one we're about to procure has a mix of SAS, SATA and SSD drives in several RAID arrays, keeping hot data on fast disks or SSD cache and less frequently used data on SATA. This box is going to be RAID5 with quite a bit of SSD caching and battery backup.
However, ideally, for maximum speed (if you can afford the cost of more drives), RAID10 wins even in the enterprise. You can create massive RAID10s with a RAID0 stripe across tens (or hundreds) of RAID1 groups....
As to how long a RAID10 can keep going with a disk failure? Well, if it is a RAID10 of 4 disks (2x 2 disk RAID1s), you will work in a degraded state until the other disk in the same RAID1 group fails. Then you're screwed and need to go back to backups. So, you NEED to know when a disk fails.
If you have, say, a 6 disk RAID10 with 3 disks in each RAID1 group, you can handle up to 2 drive failures in each RAID1 group before the array fails. 4 disk RAID1 groups = 3 failures per group, etc. In practice, the most common setup is 2 disk RAID1s (or, if you have REALLY mission critical data, like say - a bank, 3 disk RAID1s) and some hot spares in the array.
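The arithmetic is just "mirror width minus one, per group". A quick sketch with the example numbers above (plain Python, purely illustrative):

```python
def raid10_worst_case_tolerance(mirror_width: int) -> int:
    """Failures any single RAID1 group can absorb before the whole RAID10 dies."""
    return mirror_width - 1

def raid10_best_case_tolerance(groups: int, mirror_width: int) -> int:
    """Failures the array survives if they happen to be spread evenly across groups."""
    return groups * (mirror_width - 1)

# 6 disks as 2 groups of 3-way mirrors: each group survives 2 failures,
# and up to 4 failures total if no single group loses all 3 copies.
print(raid10_worst_case_tolerance(3))    # 2
print(raid10_best_case_tolerance(2, 3))  # 4
```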
Bear in mind, however, that if one of your disks fails and the other disks are the same make/model, it's quite possible you'll see another disk failure quite soon - especially if the first failed due to over-temp, a power spike, etc. It is best to replace it ASAP. In fact, enterprise arrays include "hot spares" to automate this process - a drive fails, is identified by the RAID controller, the hot spare takes over and starts rebuilding automatically, and the SAN emails the admin (or storage vendor) to order a replacement disk...
Hot spares are especially important with RAID5 or RAID50 - when rebuilding or running in a degraded state, the remaining disks must work much harder to reconstruct the data, and rebuilding takes a lot longer than with RAID1 or RAID10. Pushing disks that hard makes it more likely that a slightly flaky drive will fail too. 2 failures in a RAID5 group = go back to tape... so you want to rebuild ASAP...
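The reason one failure is survivable but two aren't: RAID5 keeps one XOR parity block per stripe, so any single missing block can be recomputed from the rest, but a second loss in the same stripe leaves nothing to XOR against. A toy sketch (Python, with a made-up 3-data + 1-parity stripe):

```python
from functools import reduce

# Toy RAID5 stripe: three data blocks plus one XOR parity block (values are made up).
data = [0b1010, 0b0110, 0b1100]
parity = reduce(lambda a, b: a ^ b, data)      # parity = d0 ^ d1 ^ d2

# One disk dies: rebuild its block by XORing everything that survived.
surviving = [data[0], data[2], parity]         # lost data[1]
rebuilt = reduce(lambda a, b: a ^ b, surviving)
assert rebuilt == data[1]                      # single failure is recoverable

# Lose a second block from the same stripe and there's nothing left to solve
# against - that's the "2 failures in a RAID5 group = go back to tape".
```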