AnandTech has a recent article that I was just reading comparing different server configs for serious, hardcore virtualization (super expensive datacenter servers).
Pragmatically, that article doesn't tell you much. The benchmarking is done by running on top of ESX, and ESX isn't supported on Macs (last I checked).
Running on ESX and running on Fusion are substantially different. Fusion runs on top of Mac OS X; ESX runs on the raw hardware.
There will be a difference if the virtualization takes processor/memory affinity into account when loading up the VMs. In a dual-package setup with two VMs, if you run one on processor package 1 and the second on processor package 2, then there are no memory accesses across the processors. If Fusion takes steps to separate and pin the VMs onto specific processors, you can get a bump. With perfect control, you can run two VMs so that they are leveraging different pools of memory (I don't think Fusion does the fancy VM memory-sharing overlap that the raw-iron versions do with some images). Or you can pin them together on a single package if there is some shared data, which would probably allow Mac OS X and other stuff to migrate to the other one.
It is possible to pin threads to a core; I'm just not sure whether Fusion tries to do that, or what Mac OS X will do with other workloads in response. It should try to balance things out, but perhaps not the way ESX does it.
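To make the pinning idea concrete, here is a minimal sketch of what hard affinity looks like on a platform that actually exposes it. This uses Linux's sched_setaffinity call (available through Python's os module); Mac OS X has no public equivalent, which is part of why a hypervisor running on top of it can't pin VM threads the way ESX does on bare metal. The function name pin_to_cpus is my own, purely for illustration.

```python
import os

def pin_to_cpus(pid, cpus):
    """Pin a process to a set of CPU cores, if the platform allows it."""
    if hasattr(os, "sched_setaffinity"):   # Linux exposes this; Mac OS X does not
        os.sched_setaffinity(pid, cpus)
        return os.sched_getaffinity(pid)   # report the affinity mask actually set
    return None                            # no public pinning API on this platform

# Pin the current process (pid 0 means "this process") to core 0.
# On a dual-package box you'd pass the set of cores belonging to one
# package so that the VM's memory traffic stays local to it.
print(pin_to_cpus(0, {0}))
```

If this returned None you'd be on a platform (like Mac OS X) where the scheduler, not you, decides where threads land, and all you can do is hope it balances things sensibly.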
VMs tend to have a large memory footprint (unless the VM is running an embedded-oriented OS). The dual-package setup is going to offer more memory bandwidth than the single-package one.
That would be a small effect, though, until the amount of memory in the dual-package box significantly passes the single-package one.
Here is a link to some data where they used a couple of different Macs (a duo versus a quad versus ...) and got different results.
http://www.mactech.com/articles/special/1002-VirtualizationHeadToHead/index.html
[Ignore the graphics charts after "page 4" because I think that starts to wander into GPU differences rather than core ones.]
The Quad does better in many places on the charts associated with the article, and that is despite the iMac having an additional 1 GB of memory and a slightly lower clock speed. Better I/O helps run VMs faster; it is not purely a CPU number-crunching problem.
The bigger issue, though, is what the 2 and possibly 3 VMs are doing. If you want three VMs that are all active simultaneously (for example, two running server services and the third a client banging on those services), then more real cores is better, because there are many more threads of work to do. If each VM thinks it is running on a dual and they are all trying to run non-trivial load factors, then you need cores to match that workload: 2 * 3 is 6 cores. If those are all busy running Windows/Linux/whatever code, what is left to run Mac OS X and Fusion? At that point you need more cores. (You are trying to run a server cluster inside a single box.)
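The core arithmetic above can be sketched in a couple of lines. The host-core count of 8 below is just a hypothetical box to show where the leftovers go; the function names are mine.

```python
def guest_vcpus(vms, vcpus_per_vm):
    # Total cores' worth of guest work when every VM is running flat out.
    return vms * vcpus_per_vm

def host_cores_left(host_cores, vms, vcpus_per_vm):
    # Whatever the guests don't consume is all that remains
    # for Mac OS X and Fusion itself.
    return host_cores - guest_vcpus(vms, vcpus_per_vm)

# Three dual-vCPU VMs, all busy:
print(guest_vcpus(3, 2))         # 6 cores of guest work
print(host_cores_left(8, 3, 2))  # 2 cores left on a hypothetical 8-core box
```

If host_cores_left comes out at zero or negative, the host OS and the hypervisor are fighting the guests for time, which is exactly the "need more cores" situation.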
On the other hand, if you run one VM and pause the other two, then pause that one and unpause another, you don't have a high concurrent workload. Similarly, if you are running different clients for testing purposes, you are only really running them serially. You may have multiple VMs open, but if the majority of those OSes are running at <2% load factor, that is not worth adding more cores. (You are trying to consolidate multiple lightweight machines into a single box.) Memory is probably a more pressing issue than more cores; the question revolves more around how much max memory you need to keep all those mostly non-executing VMs happy.
The default install that Fusion uses for Windows is 1 virtual processor and 1 GB of RAM (for Vista/Win7). So three of those generic installs is only 3 cores and 3 GB. If you have a 6-core, 16 GB box you are set, unless there is some RAM hog on the Mac OS X side. You'd have 13 GB and 3 cores left even if those 3 VMs are running full blast. That is still a pretty good machine.
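The same budgeting can be sketched as a quick calculation. The defaults below (1 vCPU, 1 GB per VM) are Fusion's generic Windows install from above; the function name is mine.

```python
def leftover(host_cores, host_ram_gb, vms, vcpus_per_vm=1, ram_gb_per_vm=1):
    # Each generic Fusion Windows guest (Vista/Win7 defaults) takes
    # 1 vCPU and 1 GB, so every VM subtracts that from the host budget.
    cores_left = host_cores - vms * vcpus_per_vm
    ram_left = host_ram_gb - vms * ram_gb_per_vm
    return cores_left, ram_left

# Three default guests on a 6-core, 16 GB box:
print(leftover(6, 16, 3))  # (3, 13): 3 cores and 13 GB left for Mac OS X
```

Swap in bigger per-VM numbers if you've bumped the guest settings, and the same subtraction tells you when the Mac side starts getting squeezed.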
You want a big buffer to keep from getting anywhere near virtual memory swapping by Mac OS X. Likewise, you need to leave room for Mac OS X and the Mac apps to get time on cores.
Even with ESX, those two problems, "active cluster in a box" and "VM consolidation", lead to deploying onto different server configurations.