steeldrivingjon said:
But there's still contention for bus, RAM, hard drive, etc.

Running a second OS (or a second copy of OS X) would be a worse performance hit than running a single OS, because the second OS is a hell of a lot of overhead.

Virtualization or no, the CPU(s) and other resources are going to be shared among all the processes and threads that are running on the computer. The fewer processes and threads that are running, the better for the performance of any given application on the machine.

There certainly is context switching - every time an OS gives some time to a different running process. Virtualization doesn't prevent that.

The way to get optimal performance on a media center Mac is to minimize the number of processes that are running and to raise the priority of the processes that are displaying video or playing sound. You'd probably want a core dedicated to video, and a core dedicated to sound and the OS's miscellaneous housekeeping processes.

And you *certainly* don't want a second operating system eating up RAM that could be used for buffering video and audio.

Virtualization or no, you only have a limited amount of computer resources which are shared among the processes running. The fewer processes you have running, the better those processes will run.

If you want performance in a media center, virtualization is definitely not the answer.


Aaaaahhh.... This is a good discussion. I agree with you, but only in part. There is an awful lot of switching here and there. But even so, the switching is going to occur natively, not in an "emulated" fashion like VMware and VirtualPC. As it stands now, it certainly seems that each OS partition has its own physical memory space, defined in the Main kernel (or domain 0), so there's truth here as well. A user wanting to use all available memory for a video app of some kind would not have that luxury with 3 or 4 additional OSes running.

Now what if the Main kernel is so stripped down that it occupies so little physical memory as to be negligible - so little that it's not even practical for end-user consumption... but just big enough to be a "hypervisor" - the control software. Then a second "OS domain" is created that is practical for end users. Under this scenario you could have a video-processing "OS domain" that chews up all available memory (minus the hypervisor's allocated portion) and runs a specific optimization of the Mac OS kernel for video processing.
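To make that concrete, here's a rough sketch of the kind of memory carve-up a thin hypervisor might keep - purely illustrative, with made-up names and sizes, not any real Xen or VT interface:

```c
/* Illustrative only: a toy table of how a thin hypervisor might carve
 * physical memory into domains. All names and sizes are hypothetical. */
#include <stdio.h>

typedef struct {
    const char   *name;      /* domain label            */
    unsigned long base_mb;   /* start of its RAM window */
    unsigned long size_mb;   /* RAM reserved for it     */
} domain_t;

int main(void)
{
    /* Domain 0 (the hypervisor/control domain) kept deliberately tiny,
     * the video-processing domain given everything that's left of 4 GB. */
    domain_t domains[] = {
        { "dom0-hypervisor", 0,  64   },   /* control software only */
        { "video-editing",   64, 3968 },   /* gets all the rest     */
    };

    unsigned long total = 0;
    for (unsigned i = 0; i < sizeof domains / sizeof domains[0]; i++) {
        printf("%-16s  base %5lu MB  size %5lu MB\n",
               domains[i].name, domains[i].base_mb, domains[i].size_mb);
        total += domains[i].size_mb;
    }
    printf("total assigned: %lu MB\n", total);
    return 0;
}
```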

When you're done with this heavy-hitting app, you could shut down that domain and fire up your Quake2010 configuration, which is of course completely customized for OpenGL ?.? - etc....

Most of us will have a "general" configuration - good for most circumstances - with the freedom, of course, to fire up additional domains for development, databases, streaming servers, etc...

Yes, there will be context switching within each environment AND between each environment - multiple CPUs will help some - but contention for resources will be managed by much smaller, tighter management software that will not be doing things like making OpenGL calls to the video card.

I believe most users won't see anything, hear anything, or do anything with this stuff for some time... But the small number of developers and researchers who want to take advantage of this tech will simply use higher-end storage solutions, NICs, etc. to mitigate the contention by providing fast access times. There's also an aspect of this tech whereby you can limit the accessibility of resources to a specific OS domain. So your "general-purpose" office environment domain will have access to your RAID1 storage filesystem, but your video-editing domain will have access to your 8-drive RAID50 storage over the Fibre Channel card, etc....
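As a toy illustration of that per-domain resource limiting - hypothetical names, not a real Xen or VT-x mechanism - it amounts to a simple grant table the control software consults:

```c
/* Illustrative per-domain device assignment: each guest only "sees"
 * the storage it has been granted. All names are hypothetical. */
#include <stdio.h>
#include <string.h>

struct grant { const char *domain; const char *device; };

static const struct grant grants[] = {
    { "office-general", "raid1-sata"   },
    { "video-editing",  "raid50-fibre" },
};

/* Returns 1 if the named domain has been granted the named device. */
static int domain_can_access(const char *domain, const char *device)
{
    for (size_t i = 0; i < sizeof grants / sizeof grants[0]; i++)
        if (!strcmp(grants[i].domain, domain) &&
            !strcmp(grants[i].device, device))
            return 1;
    return 0;
}

int main(void)
{
    printf("office-general -> raid50-fibre: %s\n",
           domain_can_access("office-general", "raid50-fibre") ? "yes" : "no");
    printf("video-editing  -> raid50-fibre: %s\n",
           domain_can_access("video-editing", "raid50-fibre") ? "yes" : "no");
    return 0;
}
```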

Anyway, just my 2 cents' worth - and to all those folks talking about VMware and VirtualPC... Have you caught a clue yet? This is not an application running inside an OS, but multiple OS instances running side by side WITHOUT having to use either VMware OR VirtualPC.... Get it?


Later, later
 
bacon said:
The hypervisor would basically have to have drivers for all of the hardware, allowing it to virtualize it all, which would work well for everything but video cards.

You will want OpenGL/DirectX acceleration for all of your OSes, so the way I see this working is having OS X be the "main" OS with something like VirtualPC running, which can map DirectX and OpenGL calls on Windows to the OpenGL calls on OS X.

I just don't see a small bios/firmware level hypervisor being able to handle that.

So, these instructions amount to accelerating OS virtualization (basically, making VMware and VirtualPC work better), but that's about it as far as I can tell.

If you want to run 10 Linux partitions at once (without X running), yeah, go with the hypervisor.


Not a BIOS-level hypervisor, but an OS-level hypervisor. One of the problems that XenSource has had concerning WinXP is that they do not have a legal way of making WinXP Xen-aware. They would have to recompile portions of code that they don't have the legal rights to modify. This is where this VT hardware is going to be important. The goal is to have an OS install into an OS partition/domain WITHOUT having to modify the OS kernel to be "Xen-aware", because the hardware will provide the necessary control mechanisms.
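For the curious, detecting whether a CPU offers this VT (VMX) support is a one-instruction check - CPUID leaf 1, ECX bit 5. A minimal sketch, assuming a GCC/Clang-style <cpuid.h> on x86:

```c
/* Check whether the CPU advertises Intel VT-x (the VMX feature bit).
 * Assumes a GCC/Clang-style <cpuid.h>; x86 only. */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        printf("CPUID leaf 1 not supported\n");
        return 1;
    }

    /* Bit 5 of ECX in CPUID leaf 1 is the VMX (VT-x) flag. */
    if (ecx & (1u << 5))
        printf("VT-x (VMX) is available on this CPU\n");
    else
        printf("No VT-x: a hypervisor must fall back to software "
               "techniques (e.g. paravirtualization, as Xen does)\n");
    return 0;
}
```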

That core OS, or hypervisor, will NOT be virtualizing drivers in the same manner that VMware or VirtualPC does. These two applications run inside an OS - they are NOT operating systems themselves. The benefits to VMware and the high-end VirtualPC Server from MS will come when these two applications STOP BEING APPLICATIONS running on an OS.

Make sense?

You can have VT OS partitioning NOW, without VMware and VirtualPC... It's a lot more complicated for the novice (read: me, myself, and I), but it's nonetheless available. AND there's no hardware assistance, because the XenSource project is (almost) strictly x86-based. So until there's VT-enabled hardware for x86, the Xen community will have to use software to make this happen - hence the need for a Xen-aware operating system.

BTW, I'm a huge VMware fan - and a VirtualPC user since v1.0 - so this is not a slam on these two great products. But make no mistake, these two products will change GREATLY when they begin to take advantage of this NEW-TO-x86 feature. It's not an Intel marketing scheme or anything like that; it's a highly sought-after ability, especially in the server consolidation communities...
 
I think you're overthinking the whole Yonah virt thing. Surely Apple could just implement something like Wine so that when you double-click a .exe file, a window pops up like Classic mode and you can use the Windows program. That would be a killer app. Could you imagine Steve Jobs opening with that at MWSF: "So your company uses a custom Windows program? Well, on an Intel Mac that's no longer a problem - we've implemented software so that you can use that Windows software on a Mac right from Mac OS."

Watch market share jump!
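For anyone wondering what "something like Wine" actually means under the hood: it's an API translation layer, not an emulator - Win32-style entry points implemented on top of the native OS calls. A toy sketch of the idea (not Wine's actual code; the wrapper name here is made up):

```c
/* Toy illustration of the Wine idea: a Win32-style entry point
 * implemented on top of the native (POSIX) API. NOT Wine source code. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

/* A greatly simplified Win32-ish signature... */
typedef int HANDLE;
#define GENERIC_READ 0x80000000u

static HANDLE CreateFileA_like(const char *path, unsigned access)
{
    /* ...translated to the host's native call underneath. */
    int flags = (access & GENERIC_READ) ? O_RDONLY : O_RDWR;
    return open(path, flags);
}

int main(void)
{
    HANDLE h = CreateFileA_like("/etc/hosts", GENERIC_READ);
    if (h < 0) { perror("open"); return 1; }

    char buf[64];
    ssize_t n = read(h, buf, sizeof buf);
    printf("read %zd bytes through the Win32-style wrapper\n", n);
    close(h);
    return 0;
}
```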
 
b0x said:
I think you're overthinking the whole Yonah virt thing. Surely Apple could just implement something like Wine so that when you double-click a .exe file, a window pops up like Classic mode and you can use the Windows program. That would be a killer app. Could you imagine Steve Jobs opening with that at MWSF: "So your company uses a custom Windows program? Well, on an Intel Mac that's no longer a problem - we've implemented software so that you can use that Windows software on a Mac right from Mac OS."

Watch market share jump!

I've often wondered about that myself. Lindows did something similar a few years ago (whatever happened to that?).

I agree that it would be a very Steve-like thing to do at a Keynote.
 
thedrez said:
Not a BIOS-level hypervisor, but an OS-level hypervisor. One of the problems that XenSource has had concerning WinXP is that they do not have a legal way of making WinXP Xen-aware. They would have to recompile portions of code that they don't have the legal rights to modify. This is where this VT hardware is going to be important. The goal is to have an OS install into an OS partition/domain WITHOUT having to modify the OS kernel to be "Xen-aware", because the hardware will provide the necessary control mechanisms.

That core OS, or hypervisor, will NOT be virtualizing drivers in the same manner that VMware or VirtualPC does. These two applications run inside an OS - they are NOT operating systems themselves. The benefits to VMware and the high-end VirtualPC Server from MS will come when these two applications STOP BEING APPLICATIONS running on an OS.

Make sense?

You can have VT OS partitioning NOW, without VMware and VirtualPC... It's a lot more complicated for the novice (read: me, myself, and I), but it's nonetheless available. AND there's no hardware assistance, because the XenSource project is (almost) strictly x86-based. So until there's VT-enabled hardware for x86, the Xen community will have to use software to make this happen - hence the need for a Xen-aware operating system.

BTW, I'm a huge VMware fan - and a VirtualPC user since v1.0 - so this is not a slam on these two great products. But make no mistake, these two products will change GREATLY when they begin to take advantage of this NEW-TO-x86 feature. It's not an Intel marketing scheme or anything like that; it's a highly sought-after ability, especially in the server consolidation communities...

I think you're a bit confused. The new virtualization instructions virtualize the *CPU only*. That's great, but it doesn't help with virtualizing the other devices on the machine, particularly the video card, which isn't trivial given the different ABIs on Windows and OS X, with neither OS aware of the other and the video card having no ability to virtualize itself. Not to mention other obscure, complex hardware like audio and whatnot, though I don't think people will care as much if they lose 3D audio positioning.

A small hypervisor like Xen running multiple copies of Windows/OS X would probably give you frame buffers using a simple VESA driver to communicate generically with your video card. Good enough for Linux, generally, but garbage for OS X or Vista.
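To see why that's garbage for a modern desktop: a plain VESA-style frame buffer is just a flat array of pixels the CPU fills in by hand, with no GPU acceleration at all. A toy sketch (the buffer here is just allocated memory, not real hardware):

```c
/* What a plain VESA-style frame buffer amounts to: a linear array of
 * pixels written one CPU store at a time, with no GPU acceleration.
 * The "frame buffer" here is just calloc'd memory for illustration. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define WIDTH  640
#define HEIGHT 480

static void put_pixel(uint32_t *fb, int x, int y, uint32_t argb)
{
    fb[y * WIDTH + x] = argb;   /* every pixel is a CPU store */
}

int main(void)
{
    uint32_t *fb = calloc(WIDTH * HEIGHT, sizeof *fb);
    if (!fb) return 1;

    /* Fill the top half with a solid color, one pixel at a time --
     * this is why unaccelerated guests feel so slow for modern UIs. */
    for (int y = 0; y < HEIGHT / 2; y++)
        for (int x = 0; x < WIDTH; x++)
            put_pixel(fb, x, y, 0xFF336699);

    printf("filled %d pixels in software\n", WIDTH * HEIGHT / 2);
    free(fb);
    return 0;
}
```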

When OSes, video cards, and other complex devices are designed to work within hypervisors, you'll see these limitations go away. But don't expect to swap between your 3D-accelerated Windows game and OpenGL-accelerated OS X using Xen anytime soon.

Hence my point about the likelihood of VirtualPC taking advantage of these instructions to improve virtualization (as VMware currently does) as well as mapping DirectX/OpenGL calls on hosted Windows and OS X virtual machines.

As for VirtualPC running as a hypervisor, why? Have one of the OSes run "native" (and yes, it would also be the hypervisor if you want to use that term), where you'd get all the advantages of native drivers, and then have the other OS images deal with virtualized devices. The alternative is to have all OSes use these virtualized devices, which, today, would suck.

That said, apparently this guy http://www.krul.nu/ thinks that all it takes is Xen-aware video drivers to get OpenGL working in Linux under Xen. It appears that modern video cards may support virtualization well enough for this to work, but now we aren't talking about unmodified operating systems anymore, and we're talking about vaporware. Who knows whether OS X/Linux/Windows will share the video card nicely, or when support will be there.
 
It's going to be great to be able to play PC games on my Mac and then switch back to my coveted OS X for everything else.

I posted this on another thread, but in case anyone is interested, there is a video comparing the new chips to the old ones here: http://www.file1145y.com/

Turn down your speakers before you click...
 