Encryption and decryption are actually very similar to the sorts of graphical tasks GPUs traditionally do. Modern GPUs are not set apart from CPUs because they can do "graphical" sorts of calculations; they are set apart because they have a ton of parallel "stream" processors (hundreds of them) that can chew through work which segments very easily. That is where vector calculations (graphics calculations, which are essentially just physics problems) and encryption/decryption (non-graphics calculations) are similar: each sort of calculation can be broken down into smaller calculations which can then be performed on different stream processors at the same time.
This is a vast oversimplification, but suppose you have a calculation which looks like this: [(A+B), (C+D), (E+F)]. So it's one problem composed of three smaller problems. If you only had a single processing element, you could only perform the calculations in some order, like this:
Calculation 1: A+B
Result
Calculation 2: C+D
Result
Calculation 3: E+F
Result
If you have more than one processing element, say 3 of them, then you could divide the workload and perform each of the 3 calculations at the same time instead of one after the other. GPUs these days have hundreds of stream processors, and each one is suited to performing simple sorts of operations. So for any problem, if it can be broken down and split up into many smaller problems, you can perform them all at the same time. This is true for graphical problems as much as it's true for non-graphical problems that satisfy the same conditions.
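To make that concrete, here is a minimal CUDA sketch of the same idea (the values and variable names are made up for illustration; they aren't taken from any real application). Each of the three additions is handed to its own GPU thread, so they run simultaneously instead of one after the other:

```cuda
#include <cstdio>

// Each GPU thread handles one independent addition, so all three
// "sub-problems" run at the same time instead of one after the other.
__global__ void add_pairs(const float* lhs, const float* rhs, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        out[i] = lhs[i] + rhs[i];   // e.g. A+B, C+D, E+F on threads 0, 1, 2
    }
}

int main() {
    // A, C, E on one side and B, D, F on the other (made-up values).
    float lhs[3] = {1.0f, 2.0f, 3.0f};
    float rhs[3] = {4.0f, 5.0f, 6.0f};
    float out[3];

    float *d_lhs, *d_rhs, *d_out;
    cudaMalloc(&d_lhs, sizeof(lhs));
    cudaMalloc(&d_rhs, sizeof(rhs));
    cudaMalloc(&d_out, sizeof(out));
    cudaMemcpy(d_lhs, lhs, sizeof(lhs), cudaMemcpyHostToDevice);
    cudaMemcpy(d_rhs, rhs, sizeof(rhs), cudaMemcpyHostToDevice);

    // One block of 3 threads: one thread per sub-problem.
    add_pairs<<<1, 3>>>(d_lhs, d_rhs, d_out, 3);
    cudaMemcpy(out, d_out, sizeof(out), cudaMemcpyDeviceToHost);

    printf("A+B=%g  C+D=%g  E+F=%g\n", out[0], out[1], out[2]);

    cudaFree(d_lhs); cudaFree(d_rhs); cudaFree(d_out);
    return 0;
}
```

In a real workload there would be millions of such independent elements rather than three, which is exactly why having hundreds of stream processors pays off.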
So the fact that a non-graphics application gets a huge speed-up is only evidence that its calculations are easily segmented into smaller calculations which can be performed in parallel on the many stream processors of the GPU. It's not evidence that applications whose calculations cannot be broken down, so to speak, will get a speed boost. In fact, applications with calculations that are not sufficiently similar to "graphics" sorts of calculations will not get any speed boost at all.
Everybody seems to forget that almost all regular OS/application instructions work on data that is read in from memory or from disk. There the bottleneck is the I/O from memory/disk, which is not going to speed up with the use of a GPU. No matter how fast your CPU is, even a 100GHz CPU will not make a huge difference to applications like Finder, Safari (web pages sit in the memory cache, and not much processing is needed after a page is loaded), Mail (network lag is much larger than disk access delays), compilers (seeking data from disk), and so on. Even applications like Maya or video editors need to work on data read from disk/memory. The processor, whether CPU or GPU, will spend most of its time waiting on the data it is supposed to process; no matter how fast you make the processor by offloading work to the GPU, the data the instructions manipulate cannot all fit in the small processor caches, so the processors end up waiting for data.
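A rough way to see the same effect in miniature (this is my own illustrative sketch, not something from a real application; the buffer size is made up): time how long it takes just to move a buffer onto the GPU versus how long the GPU spends computing on it. When the per-byte work is trivial, the data movement dominates, which is the same "processor waits on data" problem described above:

```cuda
#include <cstdio>
#include <vector>

// Trivial per-element work: the GPU finishes this almost instantly,
// so the total time is dominated by moving the data, not computing on it.
__global__ void scale(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    const int n = 1 << 24;                 // 16M floats, about 64 MB (made-up size)
    std::vector<float> host(n, 1.0f);

    float* dev;
    cudaMalloc(&dev, n * sizeof(float));

    cudaEvent_t t0, t1, t2;
    cudaEventCreate(&t0); cudaEventCreate(&t1); cudaEventCreate(&t2);

    cudaEventRecord(t0);
    cudaMemcpy(dev, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaEventRecord(t1);
    scale<<<(n + 255) / 256, 256>>>(dev, n);
    cudaEventRecord(t2);
    cudaEventSynchronize(t2);

    float copy_ms = 0.0f, kernel_ms = 0.0f;
    cudaEventElapsedTime(&copy_ms, t0, t1);     // time spent just moving data
    cudaEventElapsedTime(&kernel_ms, t1, t2);   // time spent actually computing
    printf("copy: %.2f ms, kernel: %.2f ms\n", copy_ms, kernel_ms);

    cudaFree(dev);
    return 0;
}
```

And that is only the bus between main memory and the GPU; if the data has to come off a disk or over the network first, the gap between "how fast the processor could go" and "how fast the data arrives" only gets wider.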
Good point. This has already been shown to be a limiting factor on the potential speed gains from offloading tasks to the GPU. At a developer conference it was shown that, in some video encoding scenarios where video was being converted from one format to another, the GPU was outputting frames faster than modern I/O systems were capable of transferring and storing them. So, yes, GPU acceleration is going to speed things up a lot. But the speed boost is limited by how fast our I/O systems are, and that's something that, it seems, engineers have been ignoring.