I have both Hyper-V and vSphere (ESX) going... again, it depends on your hardware.
VMware's memory management can't be beat, especially if you have multiple guest VMs running the same OS, since transparent page sharing lets identical memory pages be stored once.
Microsoft Hyper-V only gained dynamic memory with the Hyper-V R2 SP1 release, and it's limited in functionality and nowhere near as capable as VMware's implementation. It's also guest-specific: it only works with the last two generations of Windows, and none of the Linux guests.
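To make the same-OS point concrete, here's a toy model (not VMware's actual algorithm, and the page counts are made up) of why page sharing pays off when guests run identical operating systems: the shared OS pages are stored once, and only the per-guest application pages cost extra RAM.

```python
import hashlib

PAGE_SIZE = 4096  # bytes per x86 memory page

def shared_footprint(guest_pages):
    """Toy model of transparent page sharing: identical pages across
    all guests are stored once. guest_pages is a list (one entry per
    guest) of lists of page contents. Returns (naive, deduped) bytes."""
    naive = sum(len(pages) for pages in guest_pages) * PAGE_SIZE
    unique = {hashlib.sha256(p).digest()
              for pages in guest_pages for p in pages}
    return naive, len(unique) * PAGE_SIZE

# Hypothetical guests: 100 OS pages identical in every guest, plus
# 20 guest-unique application pages each.
os_pages = [bytes([i]) * PAGE_SIZE for i in range(100)]

def app_pages(seed):
    return [bytes([200 + seed, i]) * (PAGE_SIZE // 2) for i in range(20)]

guests = [os_pages + app_pages(s) for s in (1, 2, 3)]
naive, deduped = shared_footprint(guests)
print(naive, deduped)  # 1474560 vs 655360: one copy of the shared OS pages
```

Three guests that would naively need 360 pages fit in 160 once the identical OS pages are deduplicated; the more same-OS guests you stack, the bigger the win.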
Also, bear in mind that CPU resources are usually the very last hardware resource that will get used up. "Quad core with 8GB of RAM" doesn't say much. I have 8 VMs running on a dual-core with 8GB of RAM on Hyper-V. CPU still sits mostly idle because the guests are not CPU intensive.
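You can sanity-check that intuition with a back-of-the-envelope utilization calculation. This is a hypothetical sketch (the per-VM numbers are invented for illustration, not measurements from my hosts), showing which host resource saturates first for light guests on a dual-core, 8GB box:

```python
def first_bottleneck(vm_count, per_vm, host):
    """Toy capacity check: which host resource runs out first?
    per_vm and host are dicts with matching keys (e.g. 'ram_gb',
    'cpu_cores', 'iops'). Returns the most-utilized resource plus
    the utilization fraction for every resource."""
    util = {k: vm_count * per_vm[k] / host[k] for k in per_vm}
    return max(util, key=util.get), util

# Hypothetical light guests on a dual-core, 8GB host:
res, util = first_bottleneck(
    8,  # number of VMs
    {"ram_gb": 0.9, "cpu_cores": 0.1, "iops": 30},   # assumed per-VM demand
    {"ram_gb": 8, "cpu_cores": 2, "iops": 300},      # assumed host capacity
)
print(res, util)  # memory hits ~90% while CPU sits around 40%
```

With non-CPU-intensive guests, memory (and then disk) run out long before the cores do, which is exactly why "quad core with 8GB of RAM" tells you almost nothing by itself.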
I have a second Hyper-V system with a Q6600 and 8GB of RAM, and it runs fewer VMs because it has a weaker disk I/O subsystem: a small 128GB SSD and a RAID-1 pair of 500GB WD Blacks, versus the RAID-10 array (4x 500GB WD Blacks on an Intel ICHR) in the dual-core Hyper-V system.
I've got another 10 VMs running on my ESX system (see signature), and total memory utilization there is currently only at 8.5GB... But that system runs my most intense VMs simply because it also has an 8-drive RAID-10 array. And this is what you should focus on: when it comes to virtualization, your first and second priorities should be making sure your storage subsystem is robust enough and that you have enough memory.
Networking and CPU resources will only be consumed after disk I/O (either space or IOPS) and memory are exhausted.
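If you want to compare arrays like the ones above, the standard back-of-the-envelope IOPS estimate works well enough. The ~80 IOPS per 7200rpm spindle and the 70/30 read/write mix below are assumptions for illustration; your workload will differ:

```python
def effective_iops(disks, iops_per_disk, read_frac, raid_write_penalty):
    """Rough effective IOPS for a RAID array, applying the usual
    write penalties: RAID-0 = 1, RAID-1/10 = 2, RAID-5 = 4, RAID-6 = 6."""
    raw = disks * iops_per_disk
    write_frac = 1.0 - read_frac
    return raw * read_frac + raw * write_frac / raid_write_penalty

# Assuming ~80 IOPS per 7200rpm SATA spindle and a 70/30 read/write mix:
print(round(effective_iops(2, 80, 0.7, 2)))  # RAID-1 pair:     136
print(round(effective_iops(4, 80, 0.7, 2)))  # 4-disk RAID-10:  272
print(round(effective_iops(8, 80, 0.7, 2)))  # 8-disk RAID-10:  544
```

Under those assumptions the 8-drive RAID-10 delivers roughly 4x the IOPS of the RAID-1 pair, which is why it's the box that gets the intense VMs.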
For VMware ESX, check out the Whitebox HCL for a list of community-supported whitebox hardware.
For Microsoft Hyper-V, check out the hardware requirements for Windows Server 2008 R2.