Yeah - this is where things get a little confusing. How exactly do you rate the controller cards?
Definitely, as some setups are better gauged by throughput (MB/s), such as DAS (Direct Attached Storage = storage available only to the system it's attached to), while others are better gauged by IOPS (i.e. SAN systems running large scale databases, for example).
In your case, it's a DAS situation from what you've posted so far (most MR members use this type of storage).
As you said, the max for 4 SATA devices is going to be 4 x 270MB/s = 1080MB/s. Suppose the new SSDs can each push that much bandwidth - we know the PCIe slot is not going to be the bottleneck.
Actually, the PCIe lanes can be an issue as well under certain circumstances. For example: each drive capable of pushing a 3.0Gb/s port to its limit, enough disks that each port essentially has a single lane to itself (i.e. 8 disks on an 8x lane card), and either the slot or the card running the PCIe 1.0 specification (PCIe 1.0 = 250MB/s per lane).
BTW, 6.0Gb/s RAID cards are designed to the PCIe Gen 2.0 spec, but would hit the same issue with fast disks (250+ MB/s) if run in a PCIe Gen 1.0 slot. Another way to get throttled is if the slot has fewer active lanes than the card (i.e. running an 8x card in a slot it physically fits - 8x or 16x lane connector - but that's only wired for 4 lanes electrically).
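If it helps, here's the lane math as a rough back-of-the-envelope sketch in Python. The 250MB/s and 500MB/s per-lane figures are the nominal PCIe 1.0/2.0 rates (real-world numbers are lower after overhead), and the function is just mine for illustration, not anything official:

```python
# Rough estimate of where the bottleneck sits: the drives themselves,
# or the PCIe link (limited by whichever is lower, card lanes or slot lanes).

PCIE_MB_PER_LANE = {1: 250, 2: 500}   # nominal per-lane rates for Gen 1.0 / 2.0

def bottleneck(drives, mb_per_drive, card_lanes, slot_lanes, pcie_gen):
    """Return (throughput in MB/s, what limits it)."""
    drive_total = drives * mb_per_drive
    lanes = min(card_lanes, slot_lanes)   # an 8x card in a 4x-wired slot only gets 4 lanes
    link_total = lanes * PCIE_MB_PER_LANE[pcie_gen]
    if drive_total <= link_total:
        return drive_total, "drives"
    return link_total, "PCIe link ({} lanes @ Gen {}.0)".format(lanes, pcie_gen)

# The case above: 4 SSDs at ~270MB/s on an 8-lane Gen 1.0 card - the drives are the limit.
print(bottleneck(4, 270, card_lanes=8, slot_lanes=8, pcie_gen=1))   # (1080, 'drives')

# 8 fast disks on that same Gen 1.0 x8 link - now the link caps it at 2000MB/s.
print(bottleneck(8, 270, card_lanes=8, slot_lanes=8, pcie_gen=1))

# An 8x card dropped into a slot only wired for 4 lanes - throttled to 1000MB/s.
print(bottleneck(4, 270, card_lanes=8, slot_lanes=4, pcie_gen=1))
```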
It all depends on the specifics, so as the old saying goes, "The Devil's in the Details".
Individually, the per-drive SATA bandwidth is not limiting, but the controller seems to be a bit of a question mark. You said the IOP runs at 800MHz, which isn't a problem - I think it's just difficult to understand exactly which cards would be bottlenecks, or how to compare them.
First off, a proper RAID card has either a SAS or SATA controller, a dedicated processor (which takes the RAID processing off the system's CPU), and a cache. It's also designed to handle its own recovery (essentially, it's a dedicated computer in its own right, aimed at a specific use: handling the disks in specific configurations for increased redundancy and/or throughput, depending on the level implemented). When dealing with parity based arrays, you have something called the write hole, and proper cards include a hardware solution (NVRAM). That solution still needs power, which is where batteries and/or a UPS come in (ideally you run both, but some card makers don't actually offer batteries, as a UPS - particularly an Online type - is expected to be used, and even the card's battery can't help you if the data in flight is larger than the cache).
What you're looking at are Fake RAID controllers, which are nothing more than a SATA controller chip. The OS uses drivers to handle the RAID functions, which means system resources are consumed to do it, reducing the clock cycles available for everything else.
It's like comparing a sports car to a bicycle. They're that different, especially as you move to other RAID levels (i.e. some Fake RAID controllers include the ability to run RAID 5, but aren't suited to it, as they don't possess an NVRAM solution to the write hole). There's no cache to hold data in the event of a power failure, and they don't have the recovery capabilities that true hardware cards do either.
Simply put, if you run a parity based array, put the money into a true hardware card and a UPS at a bare minimum, as you will get burnt if you don't - it's a matter of when, not if.
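Since the write hole keeps coming up, here's a toy sketch of single-parity (RAID 5 style) striping using XOR - a simplification, not how any particular card actually implements it - just to show why an interrupted data+parity update silently corrupts a later rebuild:

```python
# Toy single-parity stripe: the parity byte is the XOR of the data bytes,
# so any one missing member can be rebuilt from the rest.

def parity(blocks):
    p = 0
    for b in blocks:
        p ^= b
    return p

def rebuild(survivors, p):
    # Reconstruct the missing member by XORing the parity with the survivors.
    return parity(survivors) ^ p

data = [0x11, 0x22, 0x33]            # three data members of one stripe
p = parity(data)                     # matching parity member
assert rebuild([data[1], data[2]], p) == 0x11   # consistent parity = correct rebuild

# The write hole: power dies after the new data block hits the disk,
# but before the matching parity update does (and there's no NVRAM to replay it).
data[0] = 0x44                       # data updated...
# ...parity update lost.

# Later a *different* member fails and gets rebuilt from the stale parity:
print(hex(rebuild([data[0], data[2]], p)))   # 0x77 - silently wrong, it was 0x22
```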
Bit of a side note really, as you're not indicating you want to do this (RAID 5/6/50/60), but it could help you understand the differences between true RAID cards and software based implementations.
As it happens, RAID 0 isn't that stressful, so it won't eat up that much of the system's compute cycles. But as you're wanting to run SSDs in a stripe set, you hit the problem of the ICH throttling, as it only gets ~660MB/s of bandwidth, and you're planning a set that can push ~1GB/s.
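Same quick math for the onboard ports, taking the ~660MB/s ICH figure and the ~270MB/s per-SSD number from above (swap in your own drive specs):

```python
# Stripe set hanging off the ICH ports: the ICH's shared bandwidth caps it.

ICH_LIMIT = 660                    # ~MB/s available through the ICH (figure from above)
drives, per_drive = 4, 270         # the SSDs discussed earlier

requested = drives * per_drive     # ~1080 MB/s the stripe could theoretically push
delivered = min(requested, ICH_LIMIT)

print("array could push ~{} MB/s, ICH delivers ~{} MB/s ({:.0%} of potential)"
      .format(requested, delivered, delivered / requested))
# -> array could push ~1080 MB/s, ICH delivers ~660 MB/s (61% of potential)
```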