Virtualisation?

Posted in Future Apple Hardware, edited January 2014
So at the Intel conference there is a lot of talk about virtualisation, which seems to be regarded as 'a good thing'. Is this the ability of the chip to run as two separate machines at the same time? As in running two different (or indeed two instances of the same) OS?



Does this have any nice implications for us in MacIntel land, or is it mainly a server-side thing?

Comments

  • Reply 1 of 28
    hmurchisonhmurchison Posts: 12,425member
    Quote:

    Does this have any nice implications for us in MacIntel land, or is it mainly a server-side thing?





    Huge implications. Think VirtualPC but at 80% of native hardware speed in the real world. Think about running OS X, Windows Vista and Linux all at the same time.



    Intel is moving to a point where they will virtualize down to the I/O. Future PCI Express will support virtualization as well. Devices will be communicating with two or more OSes, yet each OS will not know it's sharing that device or I/O.



    Soon the family computer may be a blade server in the basement, heavily virtualized and laden with multicore processors.
  • Reply 2 of 28
    brussellbrussell Posts: 9,812member
    Quote:

    Originally posted by hmurchison

    Huge implications. Think VirtualPC but at 80% of native hardware speed in the real world. Think about running OS X, Windows Vista and Linux all at the same time.



    Why wouldn't it be 100%, since it is native hardware? All of those OSs will be native on Intel, so what's the point of this virtualization?
  • Reply 3 of 28
    mmmpiemmmpie Posts: 628member
    Quote:

    Originally posted by BRussell

    Why wouldn't it be 100%, since it is native hardware? All of those OSs will be native on Intel, so what's the point of this virtualization?



    An OS expects to have full control of the machine. Setting up the virtualisation so that the OS _thinks_ it has full control results in a performance penalty. This might seem pointless, but it is usually cheaper to get a 20% faster computer than a second one.



    AMD's Pacifica and Intel's VT seek to improve the hardware support for virtualisation, and so reduce the performance cost (and the complexity of deployment). Pacifica is more advanced than VT, but neither offers complete virtualisation, and it will be a few years before they do.



    What we might see, however, is support in 10.5 for running Windows as a virtual machine. This is pretty much exactly what happens with Classic at the moment.



    This could be a good solution for Apple. No one really likes running software in a virtual machine (be it Classic or otherwise), so it won't threaten the Mac software market, but it will let us run any software that we need from Windows. We might even get good support for video hardware (and hence games).
  • Reply 4 of 28
    tubgirltubgirl Posts: 177member
    Quote:

    Originally posted by mmmpie

    ...

    This might seem pointless, but it is usually cheaper to get a 20% faster computer than a second one.

    ...




    actually you just need a 25% faster computer to even it out, but I get your point. (sorry)
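    The arithmetic behind that 25% figure, as a quick sanity check (a pure illustration, not anything from the thread):

    ```python
    # If virtualisation costs a 20% slowdown, the virtualised machine runs
    # at 80% of native speed. To match native throughput you need a machine
    # that is 1 / 0.8 = 1.25x as fast, i.e. 25% faster -- not 20%.
    overhead = 0.20
    effective_speed = 1.0 - overhead          # 0.8 of native
    required_speedup = 1.0 / effective_speed  # 1.25

    print(f"required speedup: {required_speedup:.2f}x "
          f"({(required_speedup - 1) * 100:.0f}% faster)")  # 1.25x, 25% faster
    ```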



    are these kinds of tricks not possible on current hardware, or just inefficient?
  • Reply 5 of 28
    telomartelomar Posts: 1,804member
    Really the benefit of virtualisation is just the ability to add a software layer between the OS and the hardware. Whether you use that to improve security or to better partition resources is entirely up to the vendors, but it allows for both, hence the reason you can expect it to become standard.
  • Reply 6 of 28
    brussellbrussell Posts: 9,812member
    Quote:

    Originally posted by mmmpie

    An OS expects to have full control of the machine. Setting up the virtualisation so that the OS _thinks_ it has full control results in a performance penalty. This might seem pointless, but it is usually cheaper to get a 20% faster computer than a second one.



    So the key line in hmurchison's post is "Think about running OSX, Win Vista and Linux all at the same time." You could always boot into X, Windows, or Linux on your Macintel whenever you want. But this virtualization allows you to run them simultaneously. Is that right?
  • Reply 7 of 28
    wmfwmf Posts: 1,164member
    Basically Intel's virtualization stuff makes VMware run faster. VMware will still look and act the same, but it will be faster.



    Hopefully VMware for OS X will be released sooner rather than later.
  • Reply 8 of 28
    hirohiro Posts: 2,663member
    Quote:

    Originally posted by wmf

    Basically Intel's virtualization stuff makes VMware run faster. VMware will still look and act the same, but it will be faster.



    Hopefully VMware for OS X will be released sooner rather than later.




    The whole point of built-in hardware virtualization is to eliminate the need for VMware-like software, and allow all the OSes to run at 100% native speed.
  • Reply 9 of 28
    jimbo123jimbo123 Posts: 153member
    Is virtualisation not just emulation at the end of the day?



    In my understanding of all this, virtualisation just means that the hardware would have one driver for any OS.



    But running an OS on a different platform would be emulation... i.e. software layers, just like Virtual PC.



  • Reply 10 of 28
    telomartelomar Posts: 1,804member
    Quote:

    Originally posted by jimbo123

    Is virtualisation not just emulation at the end of the day?



    In my understanding of all this, virtualisation just means that the hardware would have one driver for any OS.



    But running an OS on a different platform would be emulation... i.e. software layers, just like Virtual PC.







    Really virtualisation means an OS ceases to be an OS in the old sense. It no longer communicates with the hardware; instead a software layer abstracts the hardware from the OS. This allows multiple OSes to be run like you would run multiple apps.



    Right now not everything is shared cleanly, but Intel certainly has roadmaps to virtualise basically everything, by which I mean abstract the OS's direct communication with the hardware.



    In the past there have been some software packages that allowed you to achieve similar results, but now Intel is essentially putting it in hardware. They had some good diagrams at one stage, and if I can stop feeling lazy I might hunt them down. Otherwise head over to The Inquirer and search for Vanderpool or Pacifica; they have a couple of good three-part articles on it, from memory.
  • Reply 11 of 28
    snoopysnoopy Posts: 1,901member
    Wouldn't virtualisation make it easier to run OS X on non-Apple hardware?
  • Reply 12 of 28
    vinney57vinney57 Posts: 1,162member
    Quote:

    Originally posted by snoopy

    Wouldn't virtualisation make it easier to run OS X on non-Apple hardware?



    Don't be silly now.
  • Reply 13 of 28
    wmfwmf Posts: 1,164member
    Quote:

    Originally posted by snoopy

    Wouldn't virtualisation make it easier to run OS X on non-Apple hardware?



    Yes. People are already running OS X on VMware on non-Apple hardware.
  • Reply 14 of 28
    hirohiro Posts: 2,663member
    Quote:

    Originally posted by jimbo123

    Is virtualisation not just emulation at the end of the day?



    In my understanding of all this, virtualisation just means that the hardware would have one driver for any OS.



    But running an OS on a different platform would be emulation... i.e. software layers, just like Virtual PC.







    No.



    Virtualization is just a way for a CPU to manage independent OS states concurrently. Each OS believes it has the run of the machine but doesn't really. Each OS is also executing directly on the CPU, which is fully native, not emulation.



    Emulation is a guest OS running inside a software pen (an application), and that pen is in turn executed by a native OS on the CPU. Huge execution-speed difference.
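    A toy way to picture the gap Hiro describes (nothing like real VT-x mechanics, just the interpreter-overhead idea): "native/virtualised" code runs directly, while "emulated" code is data that a host program must fetch and decode one step at a time. The toy instruction set and both functions below are invented for illustration.

    ```python
    def native_sum(n):
        # Runs directly -- the way a virtualised guest executes its own code
        # on the real CPU.
        total = 0
        for i in range(n):
            total += i
        return total

    def emulated_sum(n):
        # The same loop expressed as toy "guest instructions" run by a
        # software interpreter (the "pen"). Every guest step costs several
        # host steps -- hence the huge speed difference Hiro mentions.
        program = [
            ("set", "total", 0),
            ("set", "i", 0),
            ("label",),             # index 2: top of loop
            ("add", "total", "i"),  # total += i
            ("inc", "i"),           # i += 1
            ("jlt", "i", n, 2),     # jump back to index 2 while i < n
            ("ret", "total"),
        ]
        regs, pc = {}, 0
        while True:
            op = program[pc]
            if op[0] == "set":
                regs[op[1]] = op[2]; pc += 1
            elif op[0] == "label":
                pc += 1
            elif op[0] == "add":
                regs[op[1]] += regs[op[2]]; pc += 1
            elif op[0] == "inc":
                regs[op[1]] += 1; pc += 1
            elif op[0] == "jlt":
                pc = op[3] if regs[op[1]] < op[2] else pc + 1
            elif op[0] == "ret":
                return regs[op[1]]

    # Same answer either way -- but the emulated path does far more work.
    print(native_sum(10), emulated_sum(10))  # 45 45
    ```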
  • Reply 15 of 28
    snoopysnoopy Posts: 1,901member
    Quote:

    Originally posted by Hiro





    . . . Virtualization is just a way for a CPU to manage independent OS states concurrently. Each OS believes it has the run of the machine but doesn't really. Each OS is also directly executing on the CPU, which is fully native, not emulation. . .







    So the complete state of the CPU is saved, as the CPU moves from one OS to the next as requested? It sounds simple enough. I imagine for most home computers the OS would be switched only when the user wishes to run an application in another OS. Do servers need to switch the OS more frequently? I can't think of a good reason to do so off hand, but I'm not an IT type.



    Actually, each OS does have the run of the machine as long as it is active, right? So the OS runs totally native and in complete control. The overhead to save the complete state of the machine would not be much if the OS did not get switched frequently. I imagine that the virtualisation takes care of RAM allocation too. Each OS would have its own block of RAM, no?
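    The save-and-restore idea in the post above can be sketched in a few lines. This is only a toy model of a "world switch" (the real VT/Pacifica state blocks hold far more than three registers; all names here are invented):

    ```python
    # The "real" CPU registers -- only ever one active set.
    ACTIVE = {"pc": 0, "sp": 0, "flags": 0}

    class Guest:
        """Per-guest saved register state, as snoopy describes."""
        def __init__(self, name):
            self.name = name
            self.saved = {"pc": 0, "sp": 0, "flags": 0}

    def world_switch(old, new):
        # Save the full register state of the outgoing OS...
        old.saved = dict(ACTIVE)
        # ...and restore the incoming OS exactly where it left off.
        ACTIVE.update(new.saved)

    osx, windows = Guest("OS X"), Guest("Windows")
    ACTIVE.update({"pc": 100, "sp": 500, "flags": 1})  # OS X is running
    world_switch(osx, windows)                         # switch to Windows
    world_switch(windows, osx)                         # and back again
    print(ACTIVE)  # OS X resumes with pc=100, sp=500, flags=1
    ```

    As the post guesses, the cost of each switch is small if switches are rare; the point wmf makes next is that they are not rare, which is why VT and Pacifica try to make them cheap.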
  • Reply 16 of 28
    wmfwmf Posts: 1,164member
    Quote:

    Originally posted by snoopy

    So the complete state of the CPU is saved, as the CPU moves from one OS to the next as requested? It sounds simple enough. I imagine for most home computers the OS would be switched only when the user wishes to run an application in another OS.



    All modern OSes do various housekeeping tasks many times per second, so on a virtualized system there are a lot of VM switches. But that's OK, because VT and Pacifica make those switches cheaper.



    Quote:

    Actually, each OS does have the run of the machine as long as it is active, right? So the OS runs totally native and in complete control.



    No, because then the OSes would accidentally stomp on each other. That's why...



    Quote:

    Each OS would have its own block of RAM, no?



    Right, and its own disk space. And each peripheral can only be controlled by one OS, so the other OSes get virtual network, sound, and graphics controllers.
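    The RAM partitioning wmf describes can be sketched as a toy address translation: each guest sees addresses starting at zero, and the layer underneath adds a fixed offset so neither can touch the other's block. (Real hardware uses page tables, not a single offset; everything below is invented for illustration.)

    ```python
    HOST_RAM = bytearray(1024)  # pretend physical memory

    # Base offset and size of each guest's private block.
    partitions = {
        "OS X":    (0,   512),
        "Windows": (512, 512),
    }

    def guest_write(guest, addr, value):
        base, size = partitions[guest]
        if not 0 <= addr < size:
            # Out-of-range access never reaches another guest's block.
            raise MemoryError(f"{guest} faulted at guest address {addr}")
        HOST_RAM[base + addr] = value  # translated to a host address

    # Both guests write to "their" address 0 -- different host bytes,
    # so no stomping.
    guest_write("OS X", 0, 0xAA)
    guest_write("Windows", 0, 0xBB)
    print(HOST_RAM[0], HOST_RAM[512])  # 170 187
    ```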
  • Reply 17 of 28
    mattyjmattyj Posts: 898member
    Say you bought a dual-core machine and an SLI-based graphics setup, so you had two 7800 GTXs, for example. Wouldn't this technology allow one processor and one graphics card to be used by one OS, and the other two by another?



    So you could have OS X allocated its own processor and 7800, and in essence all the OSes would share would be the I/O buses? I doubt the technology would work this way, rather using what it needed when it needed it, but is this possible? It would certainly be a tasty setup and wouldn't cause a performance hit as such.
  • Reply 18 of 28
    snoopysnoopy Posts: 1,901member
    Quote:

    Originally posted by mattyj

    . . . Wouldn't this technology allow one processor and one graphics card to be used by one OS, and the other two by another? . . .









    Maybe this is possible, but it would cut your performance in half. You might run OS X for half an hour and only have one core. Then you run Windows and Office for 15 minutes using the other core. The only way it might be efficient is if one OS is running in the foreground while the other is doing a task in the background. That is, both OS X and Windows actively running applications simultaneously.
  • Reply 19 of 28
    snoopysnoopy Posts: 1,901member
    Regarding each OS being in complete control, you wrote:



    Quote:

    Originally posted by wmf



    . . . No, because then the OSes would accidentally stomp on each other. . .







    Not if they each have their own RAM space, or am I missing something here? Would the I/O get stomped on, or the use of some PCIe card?
  • Reply 20 of 28
    mattyjmattyj Posts: 898member
    Quote:

    Originally posted by snoopy

    Maybe this is possible, but it would cut your performance in half. You might run OS X for a half hour and only have one core. Then you run Windows and Office for 15 minutes using the other core.



    I know it would cut performance in half, but it would still be as fast as a single-core system running just one OS, would it not (in theory)? Although this would be an expensive way of running two OSes, it would be interesting to find out whether the technology is this flexible.



    Also, in regard to your later post, snoopy, you could be right. The I/O buses don't hold data the way RAM, a hard disk, video RAM in a graphics chip, or cache on a CPU does. So as long as both OSes have their own memory allocation, core, and disk space (perhaps graphics chip and video RAM as well), and either don't have background tasks going on all the time (which is a given, as someone stated above) or are alternated via timing or whatever, the OSes wouldn't stomp over each other. Phew.



    But you'd need double of most things and a complicated timing system to achieve such a setup, imo.