At present, there's no VM that runs at hardware speed. There never will be. There have been improvements, of course. You can send VM processor instructions down to certain AMD and Intel processors and they'll basically run as is. Those processors can understand "this instruction belongs to this VM, this one to that, run them as is and don't let the streams cross". However, there's no way to do that with graphics cards, sound cards, etc. A hypervisor in this case needs to act like a traffic cop, and generally speaking only one VM will have hardware-level access at a time. Not to mention, most OSes will freak out if they see (for example) their video card disappear when another VM "owns" it.
Performance is quickly approaching native speed.
IOMMUs allow safe direct hardware access from within VMs, even for unmodified guests. Nested/extended page tables also help quite a bit.
Certain Intel chipsets have included an IOMMU (under the marketing name VT-d) for the past few years. Mac Pros since the Early 2008 model should have VT-d-capable chipsets.
Your description of I/O within most VMs is inaccurate. Until IOMMU support arrived, very few x86 VMs allowed anything like "hardware level access" for anything except the host OS or a single privileged VM. Xen is one exception; it supports PCI passthrough without an IOMMU for paravirtualized Linux guests, with some limitations. Without PCI passthrough, I/O within a VM still goes through the host kernel.
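For the curious, on current Linux kernels the VFIO interface is what exposes this kind of IOMMU-protected device access to userspace (QEMU's modern PCI passthrough is built on it). Below is a minimal sketch in C of handing a PCI device to a userspace driver or VM; error handling is omitted, and the device address 0000:01:00.0 and IOMMU group number 26 are made-up examples you would replace with values from sysfs.

```c
#include <fcntl.h>
#include <linux/vfio.h>
#include <stdio.h>
#include <sys/ioctl.h>

int main(void)
{
    /* Container: one IOMMU address space that groups/devices attach to. */
    int container = open("/dev/vfio/vfio", O_RDWR);
    if (ioctl(container, VFIO_GET_API_VERSION) != VFIO_API_VERSION)
        return 1;  /* kernel speaks a different VFIO version */

    /* IOMMU group for the device; the group number (26 here) comes from
       /sys/bus/pci/devices/0000:01:00.0/iommu_group on the host. */
    int group = open("/dev/vfio/26", O_RDWR);

    struct vfio_group_status status = { .argsz = sizeof(status) };
    ioctl(group, VFIO_GROUP_GET_STATUS, &status);
    if (!(status.flags & VFIO_GROUP_FLAGS_VIABLE)) {
        puts("group not viable: other devices in it are still bound to host drivers");
        return 1;
    }

    /* Attach the group to the container and pick the IOMMU model. */
    ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
    ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

    /* Get a file descriptor for the device itself. Its config space and
       BARs become regions on this fd, and any DMA the device does is
       translated and contained by the IOMMU rather than hitting host RAM. */
    int device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:01:00.0");

    struct vfio_device_info info = { .argsz = sizeof(info) };
    ioctl(device, VFIO_DEVICE_GET_INFO, &info);
    printf("device fd %d: %u regions, %u irqs\n",
           device, info.num_regions, info.num_irqs);
    return 0;
}
```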
Linux and Windows NT support some form of PCI hotplug already, assuming the VM or hardware supports it. Windows can also change the display driver of an active session (for example, an active session can move from a physical display to a virtual frame buffer - this is how RDP works.)
One thing that surprised me is that the so-called bare metal hypervisors aren't anything more than a custom stripped down OS. In some cases it's Linux. If I recall correctly, VMware uses an old, old version of Red Hat in their "bare metal" hypervisor.
This is false. None of the bare-metal hypervisors are actually a "custom stripped down OS". VMware ESX uses Linux for its management console only; the console runs as a VM on top of the VMkernel (the hypervisor). The VMkernel itself includes driver code from Linux, but it is not Linux. Most I/O is provided by the VMkernel.
Xen is a hypervisor. I/O is usually provided by "dom0", a privileged VM running on the hypervisor. This is usually a standard Linux distribution running a Xen-patched kernel. Other VMs can also access hardware without going through dom0 (with IOMMU support, any OS should be able to; without it, only paravirtualized guests can.)
Hyper-V is another hypervisor. I/O is provided by the parent partition, a privileged VM running on the hypervisor. This is a standard Windows NT 6 kernel.
In all of these cases, the hypervisor is a separate entity. For Xen and Hyper-V, there is a privileged guest OS that is usually responsible for all hardware access.
KVM is actually part of Linux: VMs run as ordinary processes on a Linux machine.
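To make that concrete, here is a rough sketch (not production code, no error handling) of the KVM API that Linux exposes through /dev/kvm: everything is plain file descriptors, ioctls, and mmap'd memory inside an ordinary process. The tiny real-mode guest program is invented for illustration; it adds 2 and 2, writes the digit to port 0x3f8, and halts.

```c
#include <fcntl.h>
#include <linux/kvm.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

int main(void)
{
    /* Guest code: mov dx,0x3f8 / add al,bl / add al,'0' / out dx,al / hlt */
    const uint8_t code[] = {
        0xba, 0xf8, 0x03, 0x00, 0xd8, 0x04, '0', 0xee, 0xf4,
    };

    int kvm  = open("/dev/kvm", O_RDWR | O_CLOEXEC);
    int vmfd = ioctl(kvm, KVM_CREATE_VM, 0UL);

    /* Guest "RAM" is just anonymous memory belonging to this process. */
    uint8_t *mem = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    memcpy(mem, code, sizeof(code));

    struct kvm_userspace_memory_region region = {
        .slot = 0,
        .guest_phys_addr = 0x1000,
        .memory_size = 0x1000,
        .userspace_addr = (uint64_t)mem,
    };
    ioctl(vmfd, KVM_SET_USER_MEMORY_REGION, &region);

    int vcpufd = ioctl(vmfd, KVM_CREATE_VCPU, 0UL);
    int mmap_size = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, NULL);
    struct kvm_run *run = mmap(NULL, mmap_size, PROT_READ | PROT_WRITE,
                               MAP_SHARED, vcpufd, 0);

    /* Minimal real-mode setup: flat segment at 0, start executing at 0x1000. */
    struct kvm_sregs sregs;
    ioctl(vcpufd, KVM_GET_SREGS, &sregs);
    sregs.cs.base = 0;
    sregs.cs.selector = 0;
    ioctl(vcpufd, KVM_SET_SREGS, &sregs);

    struct kvm_regs regs = { .rip = 0x1000, .rax = 2, .rbx = 2, .rflags = 0x2 };
    ioctl(vcpufd, KVM_SET_REGS, &regs);

    /* Run until the guest halts; port I/O exits back to us, the "device model". */
    for (;;) {
        ioctl(vcpufd, KVM_RUN, NULL);
        if (run->exit_reason == KVM_EXIT_HLT)
            break;
        if (run->exit_reason == KVM_EXIT_IO && run->io.port == 0x3f8)
            putchar(*((char *)run + run->io.data_offset));
    }
    putchar('\n');
    return 0;
}
```

Each vCPU is just a thread sitting in a KVM_RUN ioctl, which is why KVM guests show up in top and can be scheduled, niced, and killed like any other process.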
The other thing that surprised me was that Xen is written in Python. There's not any fancy C code or anything... just Python.
Nonsense. The Xen hypervisor is written largely in C (and C code is hardly "fancy"). Python is used for many management tools.
The fellow speaking also stated that Intel is working on a CPU that would have the hypervisor code as part of the chip. These are interesting times, but I'm not so sure it's anything that your normal Mac user really cares about.
Hypervisors are software; there is no hypervisor built into the chip itself. Some systems do include a hypervisor as part of their firmware. Intel has, however, added CPU and chipset features (VT-x, extended page tables, and the VT-d IOMMU mentioned above) that improve virtualization capability and performance.
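Those features are visible to software via CPUID. As a small illustrative sketch (using GCC's cpuid.h helper; feature bits are the ones documented in the Intel and AMD manuals), this checks whether the processor advertises VT-x or AMD-V:

```c
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 1: standard feature flags. ECX bit 5 is VMX (Intel VT-x).
       Note: firmware can still disable VMX via the IA32_FEATURE_CONTROL MSR,
       which a user-mode program like this cannot check. */
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        puts("CPUID leaf 1 not supported");
        return 1;
    }
    printf("VT-x (VMX): %s\n", (ecx & (1u << 5)) ? "yes" : "no");

    /* CPUID leaf 0x80000001: extended flags. ECX bit 2 is SVM (AMD-V). */
    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx))
        printf("AMD-V (SVM): %s\n", (ecx & (1u << 2)) ? "yes" : "no");

    return 0;
}
```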