Briefly: Multi OS PowerPC, Canadian iPod price reduction


Comments

  • Reply 21 of 26
onlooker Posts: 5,252 member
I didn't read any of the earlier posts, sorry, but I was thinking it was probably devised so that game developers could easily create Xbox 2 and PS3 games simultaneously, or anything else that would currently (at that time) be running on PPC architecture.
  • Reply 22 of 26
You forget the latency that arises within a single CPU under heavy load, when multiple app instances are multitasking and also relying heavily on multithreading.



I agree a digital hub would require either multiple CPUs or, at the very least, a multi-core chip, which would allow for a dedicated real-time set of PIs to remove the latency that audio/video introduces.



    Quote:

    Originally posted by schizzylogic

Nah. You only need multi-tasking to accomplish that; you don't need virtualization/partitioning for that. In fact, partitioning a system to do something like that would be a complete waste of system resources because of the overhead it would take just to run each instance of the operating system.



Also, this is not like VirtualPC, which takes a system with one CPU, one hard drive, and one network connection and creates multiple "virtual" systems with it.



Partitioning as described in the article would be to take a system with 4 CPUs, 8 hard drives, and 2 network connections and create, say, 2 virtual systems from it, each with 2 CPUs, 4 hard drives, and one network connection. This allows you to maximize system resources for each given purpose: maybe one virtual system needs more hard drive space, or one needs more CPU power than the other, etc.
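The static carve-up described in that quote can be sketched in a few lines of Python. This is purely illustrative (the names and the `partition_host` helper are invented for the example, not any real virtualization API): the point is that partitions are fixed slices of physical hardware, so the requests must fit within what the machine actually has.

```python
from dataclasses import dataclass

@dataclass
class Partition:
    name: str
    cpus: int
    disks: int
    nics: int

def partition_host(total_cpus, total_disks, total_nics, requests):
    """Carve a host into fixed partitions; fail if requests exceed the hardware."""
    if (sum(r.cpus for r in requests) > total_cpus or
            sum(r.disks for r in requests) > total_disks or
            sum(r.nics for r in requests) > total_nics):
        raise ValueError("requested partitions exceed physical resources")
    return requests

# The 4-CPU / 8-disk / 2-NIC machine from the post, split into two systems:
virtual_systems = partition_host(
    4, 8, 2,
    [Partition("web", cpus=2, disks=4, nics=1),
     Partition("db",  cpus=2, disks=4, nics=1)],
)
for vs in virtual_systems:
    print(vs.name, vs.cpus, vs.disks, vs.nics)
```

The check in `partition_host` is what distinguishes this from VirtualPC-style emulation: nothing is time-sliced or emulated, so you can never hand out more than the box physically contains.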




  • Reply 23 of 26
    Quote:

    Originally posted by thuh Freak

    [...]

that brief snippet there doesn't mention x86 compatibility or emulation, so I don't think Windows will start running on Apple hardware anytime soon (based on this article).

    [...]



    Really?

    Windows is already running on current Apple hardware.



Furthermore, the NT kernel is designed in such a way that the ISA-specific modules are fairly self-contained and, therefore, easy to swap. NT, don't forget, was designed to run on Intel's dead-before-arrival i860 (codenamed N-Ten), and then (within a matter of months, apparently) ported to x86-32 when Microsoft realised the folly of even having bothered with making an OS for the N-Ten.



That's why Microsoft shipped DEC Alpha compatibility, along with MIPS support, in WinNT 3.1, with, you guessed it, PowerPC support following in NT 3.51.



    No, I'm not mad. It was a reality.



    To get NT-based Windows running on pretty much any architecture is feasible, if only Microsoft wanted to bother.



Now they have a reason to release a PowerPC-compatible version of Windows: their Xbox 2 SDK. Until now, any reason they could have come up with would have landed them in more antitrust hot water, and possibly even obliterated Apple on its own turf (through sheer might).



    Excuse me if this is a tad off-topic, but I felt it needed saying.



With regard to the "multiple operating systems" malarkey, I doubt that this will be the only possible use for the technology...

Imagine, if you will, small, ISA-level applications designed for one (heavy) task, and one task only. In other words, think of a generic CPU that could be used in a similar vein to the way GPUs are used right now: marshal off tasks of particular types and intensities to the processor (in its own sandboxed OS), and collect the results later.



    That way, an image rendering app (for example) could use what would effectively be a whole, free, CPU to carry out intensive tasks, whilst still letting the user carry on as normal. Think "dual CPU" on steroids.
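The "hand off heavy work, carry on, collect later" pattern described above can be sketched with an ordinary worker process standing in for the hypothetical sandboxed-OS partition. Everything here (`render_tile`, the workload) is an invented stand-in, not a real rendering API; the shape of the interaction is the point.

```python
from multiprocessing import Pool

def render_tile(tile_id):
    # Stand-in for an intensive image-rendering task.
    return sum(i * i for i in range(100_000)) + tile_id

if __name__ == "__main__":
    # One dedicated worker plays the role of the "free" CPU/partition.
    with Pool(processes=1) as spare_cpu:
        pending = spare_cpu.apply_async(render_tile, (7,))
        # ... the interactive app keeps responding here ...
        result = pending.get()  # collect the result later
        print(result)
```

The asynchronous submit/collect split is what makes the UI stay responsive: the main process never blocks until it actually needs the answer.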



    Alternatively, you can take the view that you could create a personal grid computing environment, but I'll leave that discussion for another day.



There are lots of possibilities for this technology, but everyone jumps on the "OMFG!!1one! i r teh 1337 OS runnA" bandwagon, because that's the most controversial topic, and diverges the furthest from current usage scenarios.



Oh, and I'm sorry if this post has been long and winding, or doesn't adhere to AI's guidelines or etiquette, or something; it's my first post here.
  • Reply 24 of 26
    Will there be a discount to those who already paid this third party and now want their money back?
  • Reply 25 of 26
aslan^ Posts: 599 member
If anyone is interested in how virtualization is currently being utilized, here is some software that provides a virtualization solution for Linux.



    http://www.sw-soft.com/products/virtuozzo/



My web host provider uses this software basically to give everyone their own private computer with root access and the ability to install software, choose which services and daemons are running, run separate distributions of Linux, and otherwise run amok. Basically, it's like having my own private dedicated server at a fraction of the cost ($15 a month versus, say, $99 a month).



The CPU(s) and RAM are shared by all users, but most of the time not all users want the resources at once (so whoever's using them gets the lion's share); when they do, a load-balancing program allocates resources *fairly*.
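That fair-share behaviour can be sketched as a small allocation policy: idle capacity flows to whoever is active, and contention splits it evenly, with any slice a user can't consume redistributed to those still wanting more. This is an illustration of the general policy, not Virtuozzo's actual scheduler.

```python
def allocate_cpu(total_share, demands):
    """Give each active user an even slice, capped at their demand;
    redistribute leftover capacity to users still wanting more."""
    grants = {user: 0.0 for user in demands}
    remaining = total_share
    active = {u for u, d in demands.items() if d > 0}
    while remaining > 1e-9 and active:
        slice_ = remaining / len(active)
        for user in list(active):
            take = min(slice_, demands[user] - grants[user])
            grants[user] += take
            remaining -= take
            if grants[user] >= demands[user]:
                active.discard(user)
    return grants

# A lone busy user gets the lion's share; under contention it's an even split.
print(allocate_cpu(100, {"alice": 90, "bob": 0}))
print(allocate_cpu(100, {"alice": 80, "bob": 80}))
```

Real container hosts implement roughly this idea with weighted shares (e.g. proportional-share CPU scheduling) rather than an explicit loop, but the outcome, full speed when alone, a fair fraction under load, is the same one described in the post.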



This kind of software for OS X would allow PowerPC servers to host multiple operating systems (Linux, OS X) and give each user their own dedicated server according to their tastes.
  • Reply 26 of 26
Neat link, Aslan. I am now beginning to fathom the possibilities of OS-independent microprocessors. It occurred to me, however, that we may be looking in the wrong direction if we're assuming this technology will be utilized in future Power Macs. I think, instead, we will likely see it first implemented on the Xserve. As we've already established, such functionality would be less useful for desktop users. It would, of course, be nice to have one Power Mac act as a server to multiple Mac terminals, but I doubt that's where Apple is going. It seems to me that one would need at least a dual-core, dual-processor Power Mac or an Xserve cluster to have sufficient processing power for virtual systems. Our current systems are still too easily pegged by even simple operations (window resizing comes to mind), so expecting this technology to provide substantial returns on only slightly faster hardware seems overly optimistic.