I'm running some high-end simulations of Saturn's rings. The code is written in C++ and highly optimized, but each run is single-threaded due to the nature of the simulation. I have an 8-core Mac and I want to run 6 simulations at once, leaving 2 cores free for other stuff. I want each simulation to use all the processor power it can get, i.e. 100% of one CPU. However, watching them in top shows they're averaging around 90-98%, and only rarely hitting 99% or 100%.

They have a very small RAM footprint (52 MB VSIZE), so I don't think it's a memory issue. They output a few lines of text about once every 20 seconds, and a 5.3 MB binary file every ~2-3 minutes. So I expect them to slow a tiny bit once every 20 seconds, and a lot once every 2-3 minutes, since disk I/O is the slowest thing you can do. But is there a way to make them use more processor power while they're actually doing the simulation computations?

When I was running them last week, they seemed to be finishing in about 36 hours. Now they're taking 40, and as far as I can tell it's the same load. That may seem like a small increase, but I have about 300 of these runs to do, and they're only going to get more complicated.