It would be great to see Apple do this; it would be a boon for web developers, and it could be a boon for gamers. It could also be of great benefit to any organization that likes the security of the Mac but does not like the software options available on the Mac. Maybe this is where we see the benefits of Apple's investment in LLVM, the Low Level Virtual Machine project. That would be great for investors, in that Apple could open doors to Fortune 500s that are having problems with Windows security (which would be all of them). No, I don't think they will be lining up to buy, but even a small piece of that market is big for Apple. Here's hoping that I will be able to create web pages and sites in OS X and then test them in a virtual environment.
As far as security goes, Windows would be running like any other app. OS X would have final control of memory allocation, and final control of the app itself. There are people on these boards who run VPC, and at least one of them switched to Windows, turned off all security software, and opened the firewall, only to see the Windows environment compromised in less than an hour. OS X continued to run fine, so I don't think there are many issues here.
Just about every technical idea in that post is wrong. You have emulation and virtualization so hosed up that there is no way to unravel the post except sentence by sentence, and that ain't worth the effort when it's this far off the mark.
What makes you so sure Leopard won't include virtualization? I'm not totally disagreeing... just playing devil's advocate. Many other users make good points that sound as though Leopard will in fact include this technology, so I'd like to hear your reasons against it.
What makes you so sure Leopard won't include virtualization? I'm not totally disagreeing... just playing devil's advocate. Many other users make good points that sound as though Leopard will in fact include this technology, so I'd like to hear your reasons against it.
Firstly, I don't believe Apple would be able to do this without running some version of the XP core within Mac OS X. That would surely require some form of licensing from Microsoft, which would cost money, more than likely the whole cost of Windows XP Pro.
Secondly, the Parallels software is already doing something very similar, albeit at a fairly early stage with limited driver support. Would Apple build this functionality into the OS when the Parallels product is already promising more or less the same functionality, in a much simpler way, within the next six months?
Thirdly, if Apple builds Windows into OS X, they will have to take responsibility for any Windows-related issues. At present, you'll notice that Apple takes no responsibility for Boot Camp, but that's because Apple isn't building it into the OS, merely offering a way to install an alternative OS on your machine.
Last of all, Apple is about the Macintosh, and at the end of the day that means Mac OS X, not Windows on a Macintosh machine. Just imagine the potential issues for Apple if OS X had Windows functionality built in. One issue is that some developers might show very little interest in developing Mac-native software; another is that building this functionality into OS X would hurt the OS X product itself. An OS X installation already consumes enough resources without a whole chunk of Windows XP as well. Think about it: how many people who buy a Mac actually need Windows functionality of the kind you're describing? Would it really be worth selling every Mac with huge chunks of XP preinstalled?
1) Windows gets owned by a virus with a limited HFS driver.
2) virus copies/erases anything on the Mac partition the user controls. That's at least the user's documents, and possibly some applications.
There is no step 3.
Granted, that leaves the OS X system running normally, but it pisses the user off, because they've lost lots of files, music, etc., and they have to reinstall all their apps. Better than Windows viruses, but the user still loses hours of time.
1) Windows gets owned by a virus with a limited HFS driver.
2) virus copies/erases anything on the Mac partition the user controls. That's at least the user's documents, and possibly some applications.
There is no step 3.
It would depend on what the host process does. The virus' HFS driver still gets emulated interrupts that have to pass through the host supervisor, as do any and all start I/O instructions, which are trapped by the supervisor. If the supervisor knows what tracks and sectors belong to which VM, there is no risk. That's the beauty of this approach - once you put the guest OS in the VM, it can't do anything directly to hardware, even though it thinks that it can.
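The trap-and-check scheme described above can be sketched in a few lines. This is a toy model, not any real hypervisor's API; the `Supervisor` class, its method names, and the "sector ownership" bookkeeping are all invented for illustration.

```python
# Toy model of a host supervisor trapping guest "start I/O" instructions and
# checking sector ownership before letting the write touch the real disk.

class Supervisor:
    def __init__(self):
        self.disk = {}     # sector -> bytes, standing in for the real disk
        self.owned = {}    # vm_id -> set of sectors that VM may touch

    def assign(self, vm_id, sectors):
        self.owned[vm_id] = set(sectors)

    def start_io(self, vm_id, sector, data):
        """Trapped privileged instruction: this runs in the host, not the guest."""
        if sector not in self.owned.get(vm_id, set()):
            return "fault"   # the guest (or its virus) reached outside its VM
        self.disk[sector] = data
        return "ok"

sup = Supervisor()
sup.assign("windows_vm", range(0, 100))    # Windows owns sectors 0-99
sup.assign("osx_host", range(100, 200))    # host data lives elsewhere

print(sup.start_io("windows_vm", 5, b"boot"))    # ok - inside its partition
print(sup.start_io("windows_vm", 150, b"evil"))  # fault - HFS sectors are off limits
```

The point of the sketch is only that the check happens in the host, below anything the guest can influence: the guest never gets to issue a real I/O instruction at all.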
Firstly, I don't believe Apple would be able to do this without running some version of the XP core within Mac OS X. That would surely require some form of licensing from Microsoft, which would cost money, more than likely the whole cost of Windows XP Pro.
You are also confusing emulation with virtualization...
You are also confusing emulation with virtualization...
Well in all fairness, if one has, let's say, Windows 2000 running in a VM, and it does a disk write, it is going to wait for an interrupt to tell it when the disk write is done. In a VM, Windows doesn't get a real interrupt - it gets a virtual interrupt fed to it by the host process. It's real hair-splitting to say that isn't emulation, as far as the interrupt is concerned.
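The virtual-interrupt flow described above can be sketched as follows. All names here are invented; this models the idea, not any real VM monitor: the guest never sees the real completion interrupt, because the host takes it and injects a synthesized one into the guest's pending-interrupt state.

```python
# Toy model of virtual interrupt delivery: the host performs the real I/O and
# forwards the completion to the guest as a queued virtual interrupt.

class GuestState:
    def __init__(self):
        self.pending_interrupts = []

class Host:
    def __init__(self):
        self.guests = {}

    def guest_disk_write(self, guest_id, data):
        # The host performs the real I/O on the guest's behalf...
        completed = True   # stand-in for the real hardware transfer
        # ...and when the real hardware interrupt arrives, it is forwarded as
        # a virtual interrupt instead of ever reaching the guest directly.
        if completed:
            self.guests[guest_id].pending_interrupts.append("DISK_IO_DONE")

host = Host()
host.guests["win2000"] = GuestState()
host.guest_disk_write("win2000", b"...")
# When the VM is next dispatched, it drains the queue as if hardware had fired:
print(host.guests["win2000"].pending_interrupts)   # ['DISK_IO_DONE']
```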
True, I don't think Apple would have to run any version of the XP core in order to virtualize Windows. Having the Intel chip is enough to allow this. I do agree, though, that it seems somewhat strange that Leopard would include this technology, but most people I've talked to think it's definitely going to happen. Personally, I hope it doesn't. I'm happy with Parallels; I think they've done a great thing for the Mac world.
Well in all fairness, if one has, let's say, Windows 2000 running in a VM, and it does a disk write, it is going to wait for an interrupt to tell it when the disk write is done. In a VM, Windows doesn't get a real interrupt - it gets a virtual interrupt fed to it by the host process. It's real hair-splitting to say that isn't emulation, as far as the interrupt is concerned.
In the context of this thread, there are two technologies under discussion. One is virtualization of hardware (i.e. running any x86 OS under OS X on Intel) and the other is emulation of a particular OS environment (i.e. what WINE currently does on Linux and Darwine intends to do on OS X).
I don't think we need to further complicate the discussion by arguing the semantics of how virtualization is implemented.
WINE is not emulation. WINE stands for "WINE Is Not an Emulator". WINE is a compatibility layer that implements the Windows APIs on top of the Linux kernel. That's totally different from emulation. It would make it so that Linux or Macs (Darwine, on the BSD/XNU Darwin kernel) could open .exes right on the desktop.
Emulation and virtualization would both involve installing Windows on your computer; a WINE-based solution would not.
WINE implements its own Windows subsystem. Presumably, installation of Windows on OS X would imply using Microsoft's own libraries in the same fashion that WINE works. Apple isn't about to offer Windows binary compatibility on OS X.
Emulation is generally defined as providing the capability for a binary written for a different instruction set to run on a given piece of hardware: PPC binaries on x86, or vice versa, for instance.
What you're talking about is a much higher level of translation, from one binary packaging format to another. Much easier, much closer to native speeds.
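The compatibility-layer idea is easy to make concrete. In this toy sketch, `MessageBoxA` stands in for a Win32 entry point (the name is borrowed purely for illustration, and this is nothing like WINE's actual implementation): a WINE-style layer satisfies the call directly with the host's own facilities instead of simulating a foreign machine.

```python
# Toy "compatibility layer": a Windows-style API call implemented natively on
# the host, with no foreign CPU or OS being simulated anywhere.

def MessageBoxA(hwnd, text, caption, flags):
    # Instead of emulating an x86 Windows machine, just satisfy the call
    # with native host functionality (printing, in this toy version).
    print(f"[{caption}] {text}")
    return 1   # the "OK pressed" result the caller expects

# A "Windows program" calls the API it was written against, and the layer
# answers with native code at native speed:
result = MessageBoxA(None, "Hello from a compatibility layer", "Demo", 0)
```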
Well in all fairness, if one has, let's say, Windows 2000 running in a VM, and it does a disk write, it is going to wait for an interrupt to tell it when the disk write is done. In a VM, Windows doesn't get a real interrupt - it gets a virtual interrupt fed to it by the host process. It's real hair-splitting to say that isn't emulation, as far as the interrupt is concerned.
Not in the least. Emulation has nothing to do with where the interrupt comes from; it has everything to do with translating one instruction set architecture into another in software. An interrupt is just a signal, and in a virtualization scheme it happens to be a signal in the native hardware ISA, the exact opposite of emulation!
WINE implements its own Windows subsystem. Presumably, installation of Windows on OS X would imply using Microsoft's own libraries in the same fashion that WINE works. Apple isn't about to offer Windows binary compatibility on OS X.
How else do you define emulation?
Emulation is a processor instruction set architecture issue, a way to circumvent a hardware disparity. WINE does not emulate hardware or try to fix a hardware disparity. It just makes Windows library calls available to a non-Windows OS on the common x86 hardware platform. Not emulation, just compatibility.
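Under that definition, emulation looks like the following toy interpreter: instructions for a made-up foreign ISA are decoded and re-expressed in the host's own operations, one at a time. The two-opcode ISA here is invented purely for illustration.

```python
# Toy emulator: run a "binary" for a foreign instruction set by translating
# each instruction into native operations, which is exactly the ISA-bridging
# work that virtualization never has to do.

def emulate(program):
    regs = {"r0": 0, "r1": 0}
    for op, *args in program:
        if op == "LOAD":          # LOAD reg, immediate
            regs[args[0]] = args[1]
        elif op == "ADD":         # ADD dst, src  (dst += src)
            regs[args[0]] += regs[args[1]]
        else:
            raise ValueError(f"unknown opcode {op}")
    return regs

# A program for the foreign ISA, run on hardware that has never heard of it:
print(emulate([("LOAD", "r0", 40), ("LOAD", "r1", 2), ("ADD", "r0", "r1")]))
# {'r0': 42, 'r1': 2}
```

Note that every single guest instruction passes through software here; in virtualization, guest instructions run on the CPU unmodified, which is the whole distinction being argued in this thread.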
The chances Apple installs Windows libraries in OS X - zero.
Apple may take advantage of the virtualization hypervisor hardware in the Intel chips to allow users to install alternate OSes, or additional copies of OS X and run them side by side in virtual hardware sandboxes. Running multiple copies of a single OS is already an industrial server technique used to increase throughput and flexibility as well as service survivability in an attack situation. Adding this capability to launch multiple copies of OS X Server would be seen positively in the enterprise space, and allow running Windows or Linux as a side effect. There is your business case to implement full virtualization.
In Parallels, can't you already run multiple copies of OS X, if that's what you choose to use it for? I believe most users use it to run Windows, but they could run three copies of OS X if they wanted to, right?
Not in the least. Emulation has nothing to do with where the interrupt comes from.
Of course it does. If Windows is running in supervisor state on the bare metal, then the interrupt comes from the hardware; it goes to Windows' interrupt vector because Windows was able to set that address in supervisor state.
If Windows THINKS it is running in supervisor state on the metal, but in fact is running in problem state in a VM, then something has to emulate the interrupt - that something being the host OS. The host OS also would have caught the privileged instruction earlier where Windows "thought" it was setting the interrupt vector. That was emulated too.
It can get to be semantics, but my comments were in response to those who were implying that "virtualizing" something does not involve any emulation ... in the case of a process which expects to be in supervisor state, the whole hardware response is emulated.
The accepted term for the creation of that interrupt is that it is virtualized, hence the general term. You're not talking about translating a hardware interrupt from an unknown hardware to a known one (emulation), but putting in a layer of indirection such that the controlling level (the VMM) can intercept, create, or destroy those interrupts as needed. That's virtualization.
I mean, I can go down the hall and ask the guys who do this on blade servers if you like, but I'm pretty sure they're going to confirm this nomenclature.
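The "layer of indirection" point can be sketched as a dispatch table: the hypervisor does not translate the interrupt, it only decides which VM's queue receives it, and may drop it entirely. All names in this sketch are invented.

```python
# Toy interrupt router: hardware interrupts are passed through unchanged;
# the only added work is deciding which VM owns the interrupting device.

class Hypervisor:
    def __init__(self):
        self.routes = {}   # device -> owning vm_id
        self.queues = {}   # vm_id -> list of delivered interrupt vectors

    def attach(self, device, vm_id):
        self.queues.setdefault(vm_id, [])
        self.routes[device] = vm_id

    def hardware_interrupt(self, device, vector):
        vm_id = self.routes.get(device)
        if vm_id is None:
            return                            # destroyed: no VM owns this device
        self.queues[vm_id].append(vector)     # passed through, untranslated

hv = Hypervisor()
hv.attach("disk0", "vm_a")
hv.attach("nic0", "vm_b")
hv.hardware_interrupt("disk0", 0x2E)
hv.hardware_interrupt("nic0", 0x30)
print(hv.queues)   # {'vm_a': [46], 'vm_b': [48]}
```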
It can get to be semantics, but my comments were in response to those who were implying that "virtualizing" something does not involve any emulation ... in the case of a process which expects to be in supervisor state, the whole hardware response is emulated.
Those comments you were responding to were correct. Virtualization is all native ISA; there is no emulation of an alternative hardware set. Nothing is emulated in a virtualization scheme; there are just separate state tables for each running VM. The virtualization hypervisor doesn't even translate anything. It just lets a VM run on the hardware and preserves the ability to switch out active VMs in such a way that a VM's state is preserved while it is inactive.
The pure hardware virtualization schemes coming down the pike in the next-generation CPUs will be even more transparent than that, but everything will still be running native on hardware, with no emulation layers.
In your example of an interrupt, the interrupt is generated in hardware and simply passed to the VM it is associated with: no changes, no translations, just determining which stack to load it into. There is overhead in handling this recordkeeping, but no emulation.
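The "separate state tables" point can be sketched the same way: switching VMs is just saving one register file and loading another, while whichever VM is active runs native instructions. Again, the class and register names are purely illustrative.

```python
# Toy world switch: each inactive VM's state is parked in a table; nothing is
# translated, and the active VM's instructions are the CPU's own.

class VM:
    def __init__(self, name):
        self.name = name
        self.saved_regs = {"pc": 0, "sp": 0}   # state preserved while inactive

class CPU:
    def __init__(self):
        self.regs = {"pc": 0, "sp": 0}
        self.active = None

    def switch_to(self, vm):
        if self.active is not None:
            self.active.saved_regs = dict(self.regs)   # save outgoing VM state
        self.regs = dict(vm.saved_regs)                # load incoming VM state
        self.active = vm

cpu = CPU()
a, b = VM("a"), VM("b")
cpu.switch_to(a)
cpu.regs["pc"] = 100    # VM a runs natively and advances
cpu.switch_to(b)        # a's state is parked, b's is loaded
cpu.switch_to(a)
print(cpu.regs["pc"])   # 100 - a resumes exactly where it left off
```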
OK I get what you guys are saying - that by convention, use of the term emulation requires a different ISA. Point taken.
My point, admittedly pedantic, was that it IS a different ISA: problem mode rather than supervisor mode, and those modes have different instructions available to them. Hence my argument that the hypervisor IS "emulating" supervisor mode, which is technically a different ISA from what the process in the VM calls its "native" ISA.
I'll shut up now.