People are pretty funny about SSDs - treat benchmark numbers as 'the best you'll ever see, under just the right conditions.'
There are lots of other cases that, for the most part, aren't really benchmarked or investigated often - thankfully the difference between lots of small-file reads/writes and large sequential files usually is. But what about when you have a large number of programs open at the same time, each reading and potentially writing data at (nearly) the same time?
Regardless, almost any current-gen SSD is 'enough' - you're generally not going to see the benchmarked numbers for any of them except when running the exact benchmarks originally used, and usually only after a clean boot with no other apps running.
Now, IO is buffered, both in system RAM and on the drives themselves, and each program will have a different profile - some doing a lot of reads, some a lot of writes, of varying sizes, etc. Putting the apps themselves on an SSD gets you two things: faster loading of apps, and possibly faster operation for apps that dynamically load and unload plug-ins or other data while running - I can't think of too many apps that load plug-ins on demand rather than all at once, but it's possible.
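To make the buffering point concrete, here's a minimal Python sketch (the filename is made up for the demo). A write() normally lands in the program's own buffer, then the OS page cache in system RAM - it isn't necessarily on the drive at all until you force it:

```python
import os
import tempfile

# Hypothetical demo file - not from any real app.
path = os.path.join(tempfile.gettempdir(), "buffer_demo.bin")

with open(path, "wb") as f:
    f.write(b"x" * 1024 * 1024)  # buffered: may still live only in RAM
    f.flush()                    # push Python's internal buffer to the OS
    os.fsync(f.fileno())         # ask the OS to push its cache to the drive

size = os.path.getsize(path)     # 1 MiB, regardless of where the bytes sat
os.remove(path)
```

This layering is why two programs with identical "write 1 MB" profiles can stress the drive very differently depending on how often they sync.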
Next is your data set - the data to be operated on. Once your program itself has loaded (quickly, from the SSD), most programs act on some sort of data: an office-type program opening a document or spreadsheet for editing and saving, a game reading in level and graphics data, a video editing program reading in video, likely in buffer-sized 'chunks' at a time. Some have mostly read operations, some mostly writes, and many a fairly balanced mixture of both.

If you use tools like iostat, top, etc. - any time you're in an IO wait state, you're waiting on a device to finish a read or write, and there may be benefit in speeding up your storage. For any program manipulating large data sets (or files), it's worth noting whether the program does a one-time read and then keeps the data in RAM - in which case the initial load may not be all that fast, but once loaded, processing that data (until saving it) is mostly CPU-limited - versus constant IO activity, which can benefit from having its working data (video/movie, whatever) stored on faster storage.
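The two access patterns above can be sketched in a few lines of Python (file name and chunk size are arbitrary picks for the demo, not recommendations):

```python
import os
import tempfile

# Stand-in "data set" for the demo - 8 MiB of random bytes.
path = os.path.join(tempfile.gettempdir(), "dataset_demo.bin")
with open(path, "wb") as f:
    f.write(os.urandom(8 * 1024 * 1024))

# Pattern 1: one-time read. You pay the IO cost once up front; after
# that, the work is CPU-bound on data already in RAM.
with open(path, "rb") as f:
    whole = f.read()

# Pattern 2: streaming in buffer-sized chunks. IO happens for the whole
# run, so faster storage helps throughout, not just at load time.
total = 0
with open(path, "rb") as f:
    while chunk := f.read(256 * 1024):  # 256 KiB chunks, chosen arbitrarily
        total += len(chunk)

os.remove(path)
```

A program that looks like Pattern 1 (say, a spreadsheet) mostly wants fast app/document load; one that looks like Pattern 2 (video editing, large media) is where a fast scratch drive really pays off.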
This is mostly simplified, and I may be wrong that all SSDs maintain an on-drive cache, but it's close enough for explanation purposes.
The short answer? The best scenario is to maximize throughput by having as many drives as possible, so that in operation you get full bus saturation and few or no IO wait states. On a laptop, that generally means apps on one drive and the active data set on another equally fast (or ideally faster) drive, on a different channel. There's also not much wrong with launching apps from a slower drive if you've got large data sets, and putting the data on a scratch drive (might be an external but direct-connected RAID, or might be an internal SSD). Once the app itself is loaded, it's in system RAM, and disk IO becomes mostly limited to whatever data set is being operated on... ideally on fast storage.
A nice setup would be a small internal SSD for apps + OS (or just a faster HD for that), then an external RAID (like the new TB arrays) attached as your scratch or data drive. Failing that ($$$), a single SSD has pretty comparable performance as a data/scratch disk. The next 'best' is a single big SSD for apps and data, or a normal HD for apps but an SSD for data.
I don't know what your particular data size is, but I'd say doing what you can to not only 'launch apps fast' but let them do their *work* fast is what you should be aiming for... whether that means going to a larger SSD, or keeping apps on a normal HD with your data on the SSD, we can't really say - you may be working with 1-10MB data files, or GBs of data.