A load average is basically a weighted score that measures, well, CPU load.
The three numbers are averages taken over different time periods. The first represents the load over the past minute, the second over the past five minutes, and the third over the past fifteen minutes.
Okay, so what's load? It's basically a measure of how many processes are asking for CPU time, and whether there's a "line" of processes waiting for their turn. A load of 1.00 means that over a given period, the CPU had an average of one process to run, and no processes were forced to wait.
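(If you ever want to grab those three numbers from a script rather than eyeball uptime or top, most scripting languages can hand them to you directly. Here's a rough Python sketch using os.getloadavg(); the labels are just mine.)
Code:
import os

# os.getloadavg() returns the 1-, 5-, and 15-minute load averages
# as floats -- the same three figures uptime and top report.
one_min, five_min, fifteen_min = os.getloadavg()

print("Load over the last minute:     %.2f" % one_min)
print("Load over the last 5 minutes:  %.2f" % five_min)
print("Load over the last 15 minutes: %.2f" % fifteen_min)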
Load averages below 1.00 mean the CPU had idle time to spare over the past 1/5/15 minutes. Below 0.50, and the CPU has been spending more time waiting for work than doing it.
And consistent loads over 1.00 mean there were more processes wanting to run than there was CPU time to go around. At what point a CPU becomes "overloaded" is somewhat open to debate; if the load never drops below 3.00, I'd say the computer is probably underpowered for what you're asking it to do, though occasional spikes into the high 2s and low 3s aren't a big deal.
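(Just to make that rule of thumb concrete, here's a quick Python sketch of how I might eyeball it in a monitoring script. The 1.00 and 3.00 cutoffs are the same rough numbers from above, nothing official, and I'm using the 15-minute figure as a stand-in for "consistent" load.)
Code:
import os

one_min, five_min, fifteen_min = os.getloadavg()

# Rough interpretation using the cutoffs discussed above; the
# 15-minute average smooths out short spikes.
if fifteen_min < 1.00:
    print("CPU has had idle time to spare.")
elif fifteen_min <= 3.00:
    print("CPU is busy; processes are having to take turns.")
else:
    print("Load consistently above 3.00 -- this box may be underpowered.")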
In your case, you had a load average of 2.15, 1.85, 1.47. That tells me that within the past minute, one or more processes were consuming all of the CPU's available time, and then some: for every process the CPU was running, an average of 1.15 more were waiting to get worked on. That doesn't necessarily mean those processes weren't being worked on at all, just that they had to "take turns," so to speak.
But the load wasn't nearly as high five minutes prior, and was lower still fifteen minutes prior.
By contrast, here's the uptime on one of the servers I administer:
Code:
$ uptime
12:43pm up 433 days 19:58, 4 users, load average: 1.52, 2.10, 2.23
In this case, about 10 minutes before I got this load average, a major database re-index had taken place. Since then it's been pretty quiet.
I once watched a Sun Fire V240 server get bogged down by a number of infinite-loop processes, courtesy of a careless programmer. The load climbed from under 1.00 to above 10 over a few hours, at which point the machine was running so abysmally slowly that we could no longer troubleshoot it live, and it had to be shut down.