I love how I recently upgraded my RAM to 4 GB, and now I average 1.25 GB active, when before I was averaging .75 GB. Heh, funny how that works.
Regardless, instead of Activity Monitor, try typing top in a Terminal window. It's about as accurate as you're going to get.
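For example, a quick snapshot from the command line on OS X (exact output fields vary a bit by OS version):

    top -l 1 | grep PhysMem    # one logging-mode sample of overall memory usage
    vm_stat                    # raw page counts: free, active, inactive, wired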
Now, as for inactive vs. free, there's a little truth to both theories presented in this thread.
Inactive memory is still owned by some process or another; as such, if the process wakes up and needs the memory, it will move back into active accordingly.
You will note that if you use the purge command, you get a brief pause (or longer, depending on system spec and what's running), because some of that memory is owned by processes that are still active and can't be fully released. You are effectively forcing a swap, slowing your system to a grinding halt while it safely writes that memory out to your HD, only for it to be pushed back into inactive once the system realizes you are nowhere near using up all your physical memory. In other words, you are wasting hard drive writes and CPU time for a different color on a pie chart, and you still can't actually get inactive memory down to 0.
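You can watch this for yourself by comparing the counters before and after (purge ships with the Xcode/developer tools on some OS X versions and may need sudo on others; this is just a sketch):

    vm_stat | grep -E 'free|inactive'   # note the page counts
    purge
    vm_stat | grep -E 'free|inactive'   # inactive drops, but never to 0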
There is good reason for this: there may be chunks of memory that a process doesn't access very often, and these get put into inactive so they can be reused by a process that needs more active memory, should there not be enough free.
At the point that memory sitting in inactive is requested by a new, memory-hungry process, the system decides whether that memory needs to be swapped out (based on the nature of the memory: writeable, read-only, whatever, plus the activity of the owning process). It is here that the memory is either handed to the new process wholeheartedly, or swapped out first and then given over. (This all assumes there isn't enough in the free column to cover the request, and that you aren't running a process that already has memory in inactive which simply needs to be called back up into active, which is likely.)
It's at this point that the claims of minor slowdown come into play, as the decision to swap or not, and the actual act of swapping, can be quite intensive, causing the symptomatic slowdown perceived by the end user. Nine times out of ten the slowdown is the CPU hit combined with swapping owned memory (when whatever you're running exceeds free memory).
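If you want to eyeball this happening, run vm_stat with an interval while launching something big; a climbing pageouts number means the system is writing memory out to disk:

    vm_stat 1    # sample VM activity every second; watch the pageouts column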
Swapping is also even slower on fairly full hard drives, much like how the fuller your hard drive, the slower your system: more data means longer to find the data you're looking for, or the swapped data in this case. This is why most *nix sysadmins generally set up a separate swap partition (some going as far as dedicating a whole drive to it), and generally put it at the top of the partition table so it's not hard to find. That's one of my biggest gripes about OS X's default setup, which IMHO should default to something similar.
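On a Linux box, for example, setting up a dedicated swap partition looks roughly like this (assuming /dev/sdb1 is the partition you set aside for it; adjust for your own disk layout, and run as root):

    mkswap /dev/sdb1     # format the partition as swap space
    swapon /dev/sdb1     # enable it immediately
    # to make it permanent, add a line like this to /etc/fstab:
    # /dev/sdb1  none  swap  sw  0  0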
BSD was started in 1977 by people much smarter than most, and its memory management system has evolved for over 3 decades with the input and tweaking of thousands of people who are arguably smarter than the people who first wrote the thing. In short, BSD knows a hell of a lot more about memory management than the average Mac user; purging on your own should be avoided and could lead to unexpected instability. YMMV
I'll add that Linux seems to handle this better on the surface due to its paging model: it separates the machine-dependent and machine-independent layers at a much higher level in the software, which makes the paging process snappier. The downside is that the code is more, how can I put it, selective, making it less adaptable to underlying hardware changes. That means more code, and it's part of why some Linux users are obsessive about compiling their binaries against their own kernel and hardware (Gentoo and the like).
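For what it's worth, Linux also exposes its paging eagerness as a tunable. A quick sketch (vm.swappiness is the standard sysctl; the "right" value is endlessly debated, and writing it needs root):

    sysctl vm.swappiness              # how aggressively to swap; default is usually 60
    sudo sysctl -w vm.swappiness=10   # prefer keeping pages in RAM over swapping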