That really depends on the program, specifically on how "parallelizable" the application is.
The simplest way to think of it is like this: let's say you have a program that first has to calculate A. Then, when that's done, it uses the result of A to calculate B. Then, when that's done, it uses the result of B to calculate C, then C to D, and so on. That's a *serial* problem: the calculation of B can't begin until A is done, so it doesn't matter how many processors you have running, all the computation is held up at one spot.
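Here's a rough sketch of that serial chain in Python (the function names and the fake one-second workloads are just placeholders for illustration):

```python
import time

def calc_a():
    time.sleep(1)        # stand-in for one second of real work
    return 2

def calc_b(a):
    time.sleep(1)
    return a * 3

def calc_c(b):
    time.sleep(1)
    return b + 5

def calc_d(c):
    time.sleep(1)
    return c * c

# Each step needs the previous step's answer, so extra processors
# can't help: the chain always takes about four seconds.
a = calc_a()
b = calc_b(a)
c = calc_c(b)
d = calc_d(c)
print(d)
```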
On the other hand, let's say you have an application that needs to calculate A, B, C and D, but those four values don't depend on each other at all. In that case, you can put four processors to work at the same time, one on each value.
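And a matching sketch of the independent case, again with made-up one-second workloads, using Python's standard process pool so the four calculations can run on separate cores:

```python
from concurrent.futures import ProcessPoolExecutor
import time

def heavy_calc(x):
    time.sleep(1)        # stand-in for one second of real work
    return x * x

if __name__ == "__main__":
    # A, B, C and D don't depend on each other, so four worker
    # processes can crunch them simultaneously: roughly one second
    # total instead of four seconds one after another.
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(heavy_calc, [1, 2, 3, 4]))
    print(results)
```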
Think of it like baking a cake. You can't start putting on the icing until the cake is done baking, and you can't start baking the cake until the ingredients are all mixed together. But you can have several people getting out and measuring the ingredients at the same time.
So that problem is partially parallelizable, but the majority of its workload is a serial process.
Some software applications, just by their very nature, will never be able to do anything useful with multiple processors.