Exactly, which is my point about the Snow Leopard release. Snow Leopard's focus is on the enterprise-level push, so we'll be seeing more corporate apps coming out with Mac compatibility. I won't be surprised if Citrix and a whole slew of corporate app companies make the keynote at WWDC.

I would be surprised. That would be a heck of a keynote... iPhone 3.0, Snow Leopard... and all these rumors about a netbook/touch pad... I don't think I'd sit through a 4-hour keynote.

Then again... yeah, I would, haha
 
At present, there's no VM that runs at hardware speed. There never will be. There have been improvements, of course. You can send VM processor instructions down to certain AMD and Intel processors and they'll basically run as is. Those processors can understand "this instruction belongs to this VM, this one to that, run them as is and don't let the streams cross". However, there's no way to do that with graphics cards, sound cards, etc. A hypervisor in this case needs to act like a traffic cop, and generally speaking only one VM will have hardware-level access at a time. Not to mention, most OSes will freak out if they see (for example) their video card disappear when another VM "owns" it.

Performance is quickly approaching native speed. IOMMUs allow safe direct hardware access within VMs, including unmodified guests. Nested/extended page tables also help quite a bit.

Certain Intel chipsets have included an IOMMU (under the marketing name VT-d) for the past few years. Mac Pros since the Early 2008 model should have VT-d capable chipsets.
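
If anyone wants to check whether a given box actually exposes VT-d, here's a quick sketch (Linux-only paths, so it only tells you anything if you boot Linux on the hardware; it assumes the firmware publishes the ACPI DMAR table and that the kernel's IOMMU driver is loaded, which usually also requires intel_iommu=on on the kernel command line):

```python
import os, glob

# Firmware advertises VT-d through the ACPI "DMAR" table; Linux exposes it here.
dmar_present = os.path.exists("/sys/firmware/acpi/tables/DMAR")

# If the kernel's IOMMU driver is actually active, entries appear under /sys/class/iommu
# (only on reasonably recent kernels, and usually only with intel_iommu=on).
iommu_active = bool(glob.glob("/sys/class/iommu/*"))

print("VT-d advertised by firmware:", dmar_present)
print("IOMMU driver active:        ", iommu_active)
```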

Your description of I/O within most VMs is inaccurate. Until IOMMU support arrived, very few x86 VMs allowed any such thing as "hardware level access" for anything except the host OS or a single privileged VM. Xen is one exception; it supports PCI passthrough without an IOMMU on paravirtualized Linux guests, with some limitations. Without PCI passthrough, I/O within a VM still goes through the host kernel.
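
For the curious, this is roughly what that passthrough looks like on the Xen side. Classic xm guest config files are literally Python syntax; the device address, kernel path, and disk below are made-up example values:

```python
# /etc/xen/pv-guest.cfg -- classic xm config files are parsed as Python (example values only)
name   = "pv-guest"
memory = 1024
kernel = "/boot/vmlinuz-2.6-xen-guest"        # paravirtualized guest kernel (hypothetical path)
disk   = ['phy:/dev/vg0/pv-guest,xvda,w']     # hypothetical backing device
vif    = ['bridge=xenbr0']

# Hand the guest this physical PCI device (example bus:device.function).
# Without an IOMMU this only works for paravirtualized guests, as described above.
pci    = ['0000:03:00.0']
```

dom0 also has to give the device up first, typically by hiding it from its own drivers with pciback, before the guest can claim it.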

Linux and Windows NT support some form of PCI hotplug already, assuming the VM or hardware supports it. Windows can also change the display driver of an active session (for example, an active session can move from a physical display to a virtual frame buffer - this is how RDP works.)

One thing that surprised me is that the so-called bare metal hypervisors aren't anything more than a custom stripped down OS. In some cases it's linux. If I recall correctly vmware uses an old, old version of Redhat in their "bare metal" hypervisor.

This is false. None of the bare-metal hypervisors are actually a "custom stripped down OS". VMware ESX uses Linux for its management console only; this runs as a VM on top of the VMkernel (hypervisor). The VMkernel itself includes driver code from Linux, but it is not Linux. Most I/O is provided by the VMkernel.

Xen is a hypervisor. I/O is usually provided by "dom0", a privileged VM running on the hypervisor. This is usually a standard Linux distribution running a Xen-patched kernel. Other VMs can also access hardware without going through dom0 (with IOMMU support, any OS should be able to; without it, only paravirtualized guests can.)

Hyper-V is another hypervisor. I/O is provided by the parent partition, a privileged VM running on the hypervisor. This is a standard Windows NT 6 kernel.

In all of these cases, the hypervisor is a separate entity. For Xen and Hyper-V, there is a privileged guest OS that is usually responsible for all hardware access.

KVM is actually part of Linux - VMs run as if they were processes on a Linux machine.
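
A tiny illustration of what that means in practice: talking to KVM is just opening a device node and making ioctls from an ordinary process. This sketch only runs on a Linux box with the KVM module loaded; the ioctl numbers are the ones from linux/kvm.h:

```python
import fcntl, os

# ioctl request numbers from <linux/kvm.h> (KVMIO = 0xAE)
KVM_GET_API_VERSION = 0xAE00
KVM_CREATE_VM       = 0xAE01

kvm = os.open("/dev/kvm", os.O_RDWR)                # the hypervisor interface is a device node
print("KVM API version:", fcntl.ioctl(kvm, KVM_GET_API_VERSION))   # 12 on any modern kernel

vm = fcntl.ioctl(kvm, KVM_CREATE_VM, 0)             # the new VM is just another file descriptor
print("VM file descriptor:", vm)                    # vCPUs and guest memory hang off this fd

os.close(vm)
os.close(kvm)
```

From the host's point of view the whole guest, vCPUs included, is just threads and memory belonging to that process, which is why each VM shows up as an ordinary qemu process in top.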

The other thing that surprised me was that Xen is written in Python. There's not any fancy C code or anything....just Python.

Nonsense. The Xen hypervisor is written largely in C (and C code is hardly "fancy".) Python is used for many management tools.

The fellow speaking also stated that Intel is working on a cpu that would have the hypervisor code as part of the chip. These are interesting times but I'm not so sure it's anything that your normal Mac user really cares about.

Hypervisors are software. Some systems include hypervisors as part of their firmware. Intel has, however, developed certain CPU and chipset features that improve virtualization capabilities and performance.
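
If you want to see whether your CPU advertises those features, the flags show up in /proc/cpuinfo on Linux. A quick sketch (this says nothing about VT-d, which is a chipset feature rather than a CPU flag):

```python
# Quick, Linux-only check for the CPU-side virtualization extensions.
with open("/proc/cpuinfo") as f:
    flags = set()
    for line in f:
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            break

print("Intel VT-x (vmx):", "vmx" in flags)
print("AMD-V    (svm):", "svm" in flags)
```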
 
Hmm.... dunno.....

First off, I read on a Citrix forum today that they suspect the XenClient version released for the Mac will NOT be a type 1 hypervisor. Rather, they have a group working on porting the entire thing to work in a type 2 setting, since that gives them "greater ease in installation and more flexibility".

If that's the case, I suspect the reality is that Apple still isn't allowing them to release it as a "type 1" for their hardware, and they figure the main reason to do a native Mac release is so the product can be managed alongside the rest of their management products.

That said, the Citrix demo video clearly shows things like full-screen video playing smoothly while 3D CAD is worked on in another instance. That would appear to be an attempt to emphasize superiority in accelerated video over traditional VMs.


But a type 1 hypervisor on Mac hardware is something that VMware or Parallels could have done ages ago; the issue is that without Apple supporting OS X Client (not Server) as a guest OS, it is totally worthless to OS X users. Basically, until Apple is willing to let us run OS X Client in a type 1 hypervisor (and they don't allow this at this point, even on Apple hardware), you could only run Linux and some versions of Windows.

The other important point I want to make is that type 1 hypervisors are much better suited to server environments than to workstations. Although they are certainly more efficient at distributing resources across VMs, the current hosted offerings, like Fusion and Parallels, are far better for client workloads. Don't expect type 1 hypervisors to support accelerated graphics emulation and so forth; that hardware isn't designed for low-level virtualization the way the processors are. You are just barking up the wrong tree claiming there would be performance gains, since they would be limited to workloads unlike anything anyone here is likely to be using a Mac for. And before someone starts talking about all the headless Xserves they are running, I will point you at Parallels' hypervisor solution that runs OS X Server (and has done so for some time, I believe).
 
This might be good for high-security environments. For everything else it's BS. The Xen hypervisor actually means _more_ overhead, not less. And integration will be minimal (a security feature).
 
GREAT. But I'd rather have Mac OS X on special PC hardware out there like the OQO Model 2+ or the Sony Vaio P series. THAT IS THE REAL NEED!!!
 
This is great news indeed.

Xen is free and open source. XenSource, the company that developed it and offered support and commercial solutions, was bought by Citrix a while back.

Xen is indeed the way to do virtualization. You can even buy data-center type servers from HP and Dell that have onboard 'xen' hardware chips so the machines 'boot up xen' - in a very crudely put way of describing it.

Amazon EC2 is powered by Xen, as is GoGrid and others.
 
Xen is indeed the way to do virtualization. You can even buy data-center type servers from HP and Dell that have onboard 'xen' hardware chips so the machines 'boot up xen' - in a very crudely put way of describing it.

There is no such thing as a "'xen' hardware chip". Xen is software, and some server vendors offer to include the full XenServer software configured with their systems. It could be on some sort of flash device.
 
Accelerated graphics, sound, direct USB support, etc. - these are the holy grail of any type of virtualisation.

... Some type 2 hypervisors (like VMware Workstation, Fusion, and VirtualBox) do have limited support for DX9 accelerated graphics and USB passthrough, however. If Citrix XenClient really shoots for type 1 virtualisation for clients, and can provide either direct hardware driver access or a thin passthrough layer, then that could be a big step forward. I'm keeping an eye on this one.

I just want to throw out some information and hopefully get the real experts here to help us (me) out...

How I understand it --- and I am talking solely about the "Type 2" situation of having a conventionally installed host operating system running a guest OS in a virtual machine --- is that traditionally all hardware devices except the CPU have to be emulated for the guest OS. This is why a Windows guest VM running on OS X only "sees" a generic VGA card instead of a powerful GPU.

(I'm not sure about the mechanism that VMware Fusion and Parallels Desktop are using in their experimental 3D support... I'm assuming they use the virtual machine monitor to intercept the DirectX/OpenGL calls and route them to the GPU and back via a separate process outside of the VM... anyways)

And apparently, the reason why the virtual machine guest OS can't be directly exposed to the real system hardware has to do with maintaining isolation between the VM and the host operating system. When the guest OS accesses memory, it doesn't actually see the real memory addresses. The virtual machine software acts as a translator between the memory addresses the guest OS sees and the actual physical memory addresses of the system.

This is a problem because hardware components like the GPU use DMA (direct memory access) to read and write system memory directly. Since the guest OS uses its own memory address mapping, if it could communicate directly with the GPU it would hand the GPU the wrong memory addresses to use.

The solution to this problem rests in the concept of "I/O virtualization" using a new chipset feature called an "IOMMU", which stands for "I/O memory management unit" (Intel calls it "Intel VT-d"). This is hardware that takes over the job of keeping track of and translating memory addresses for the virtual machine. With this new technology, operating systems running in virtual machines can communicate directly with system hardware like GPUs, network cards, FireWire controllers, etc. via DMA.
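
To make the DMA problem and the IOMMU's role concrete, here's a toy model (purely illustrative, not how any real hypervisor is written, and the addresses are made up): the guest programs its "GPU" with guest-physical page addresses, and the per-device translation table the hypervisor loads into the IOMMU turns those into the host-physical pages the VM really occupies.

```python
PAGE = 0x1000  # 4 KiB pages

# Per-device translation table the hypervisor would program into the IOMMU
# (made-up addresses: guest-physical page -> host-physical page).
iommu_table = {
    0x00000000: 0x80000000,
    0x00001000: 0x80001000,
}

def device_dma(guest_phys_addr):
    """Address the memory controller actually sees when the device does DMA."""
    page   = guest_phys_addr & ~(PAGE - 1)
    offset = guest_phys_addr & (PAGE - 1)
    host_page = iommu_table.get(page)
    if host_page is None:
        # With no mapping the transaction faults instead of scribbling on host memory.
        raise RuntimeError("DMA fault: device touched memory it was never granted")
    return host_page + offset

print(hex(device_dma(0x00000010)))   # -> 0x80000010
```

Without an IOMMU there is no such table in hardware, so a device handed guest addresses would DMA straight into the wrong host memory, which is exactly why hosted products emulate the GPU instead.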

The end result of this for the average Mac user is that a future version of VMware or Parallels will be able to give your Windows XP virtual machine direct access to the GPU. So all you would need to do is install the proper driver for it in the guest OS and off you go playing games or doing 3D/DCC/CAD work at native speed.


As far as availability, I know the current Intel Nehalem models have it. I'm not sure about AMD. Check out Nvidia's "SLI Multi-OS" which is this technology being used with Nvidia's workstation GPUs.

http://www.nvidia.com/object/sli_multi_os.html

Anyone know more about this? Or whether all the Nehalem chipsets will support it? Or whether VMware and Parallels are ready to support it? I welcome your input...
 
It's kind of ironic that I just came back from a conference where one of the sessions covered how virtualization with a hypervisor worked. Some others here have already described much of it.

One thing that surprised me is that the so-called bare metal hypervisors aren't anything more than a custom stripped down OS. In some cases it's linux. If I recall correctly vmware uses an old, old version of Redhat in their "bare metal" hypervisor.

The other thing that surprised me was that Xen is written in Python. There's not any fancy C code or anything....just Python.

The fellow speaking also stated that Intel is working on a cpu that would have the hypervisor code as part of the chip. These are interesting times but I'm not so sure it's anything that your normal Mac user really cares about.

Xen is actually a mix of C and Python with some machine code tossed in. See last week's FLOSS Weekly podcast that had the Xen guys on.
 
There is no such thing as a "'xen' hardware chip". Xen is software, and some server vendors offer to include the full XenServer software configured with their systems. It could be on some sort of flash device.

That's what the guy meant; it comes on a flash chip as opposed to being installed onto and loaded from the hard disks.

No need to be argumentative. HP and Dell both sell servers with Xen (and ESXi) loaded on embedded flash. I consider flash to be hardware, don't you?
 
This sounds as though it will require a relationship with the hardware manufacturer to work, and I can see Apple doing everything it can to prevent this happening on the Mac, as it would hugely marginalise Apple's own efforts at a corporate presence. As an addendum, doesn't Apple's OS X EULA explicitly forbid its virtualisation?
 