*CONFIRMED* Mac OS X on x86 after this year!


Comments

  • Reply 281 of 339
    [quote]Originally posted by Programmer:

    "Sure some people will figure out how to hack it to run on some PCs, but these people will be running the OS illegally and it won't have solid and/or fast drivers to support it."[/quote]



    Ok, but how many things could possibly be different? Given Apple's more recent attraction to commodity components, what would prevent someone like Dell from reverse engineering the unique bits and legally shipping OS X-compatible hardware? Sure, they wouldn't be able to ship hardware that deviates from Apple's, but they could certainly undercut Apple's margins, given that they do little R&D.



    Letting the hardware genie out of the bottle is what Apple should be fearing most.



    [quote]"I could also see one or two MacOSX licensees whom Apple brings on board with specific and limited contracts simply to be secondary suppliers for x86 and PowerPC hardware on which to run Apple's OS. This is important in a bunch of business and government markets that Apple no doubt wants to be in. These tend to be high-margin environments which emphasize solutions and support, so by choosing appropriate partners (e.g. IBM) Apple doesn't have to worry about them suddenly jumping into Apple's traditional markets and doing what PowerComputing et al did."[/quote]



    True. It's a double dip for IBM. They can sell their clients PPC systems or x86 systems (can OS X run on a POWER4 chip, assuming proper drivers, etc.?). IBM probably benefits more from selling the PPC boxes, and wouldn't be interested in cutting into Apple's markets. It gives IBM options as well as Apple. Apple would probably want some assurances from IBM that they'll push OS X in some capacity over Windows, though.



    Apple gains marketshare if OS X client is positioned to hang off of traditional IBM servers and services better than Windows does. That might be a decent strategy and help get IBM out of the MS spiderweb, but OS X and Linux/AIX don't seem to parallel each other particularly well. OS X is either too far ahead or too far behind... The real advantage is that it brings big cash to Apple through IBM.



    On the flip side, if IBM is looking to bring new CPUs into the mix, they can meet Apple in the middle: IBM providing throughput systems for the enterprise at lower prices than the POWER4 systems, Apple providing CPU systems for content work with higher performance than their current offerings. Apple will sign up in a second to ensure that PPC gets its legs back.



    But OS X on x86 will come back around to the age-old question: will Office run on it? I assume that any Carbon or Cocoa app is not much more than a recompile (stupid endian issues notwithstanding), but I would think that, antitrust or no, Office on x86 without Windows will happen over Gates's dead body.



    Gotta wonder: will we see a giant Palmisano head at MWSF?
  • Reply 282 of 339
    kecksy Posts: 1,002
    This has gone on long enough. I'm only going to say this once.



    OS X WILL NEVER RUN ON A PC AND APPLE WILL NEVER PUT A PENTIUM OR ATHLON IN A POWERMAC!!!
  • Reply 283 of 339
    stoostoo Posts: 1,490member
    [quote]"APPLE WILL NEVER PUT A PENTIUM OR ATHLON IN A POWERMAC!!!"[/quote]



    DOS compatible 6100? (Or was that a 486?)



    I agree that a shift to x86 (and its successors) is unlikely: x86 is nearing the end of its life, Apple spent the late 90s cussin' the Pentium line, a significant number of Apple's applications are heavily AltiVec-optimised, users won't like having to buy "new" applications, and software houses won't like having to recode.



    [ 08-06-2002: Message edited by: Stoo ]
  • Reply 284 of 339
    pb Posts: 4,255
    Is there any truth to that?



    http://www.crazyapplerumors.com/2002_08_04_archive.htm#85316500



    If there is, then a possible switch to Intel chips could be more than probable. Personally I doubt it, unless Apple degenerates into a software company.
  • Reply 285 of 339
    jpf Posts: 167
    Jeez, I should never have started this thread. How much more of this can I take? Can't we all just get along?



    The post about the yard sale is a joke.
  • Reply 286 of 339
    pb Posts: 4,255
    [quote]Originally posted by JPF:

    "The post about the yard sale is a joke."[/quote]



    Of course :lol: :lol: , but seriously, I doubt Apple would make the move before it shrinks to a software company (or before there are clear indications, to Apple and perhaps to us, that the move is inevitable).
  • Reply 287 of 339
    [quote]Originally posted by moki:

    "I'm not an analyst, I'm an engineer -- I am not basing any of my speculation on what the various pundits are writing, but rather on... other information."[/quote]

    Other information? Are you referring to AMD's rumored Intel killer?
  • Reply 288 of 339
    rickag Posts: 1,626
    [quote]Originally posted by Jeff Leigh:

    "The 1.6GHz is probably bogus, but could be an overclocked chip. The numbers seem to scale correctly.

    The 1GHz score that was listed was for a single processor. Not a single-processor machine, but the RC5 client running on only a single processor.

    Here are the scores for a dual 1GHz machine:

    http://n0cgi.distributed.net/speed/query.cgi?cputype=99&arch=all&contest=all&multi=1&recordid=0

    20,847,444... not too shabby."[/quote]



    Thank you for the clarification. 1.6GHz bogus or overclocked?? Dual 1.33GHz bogus or overclocked???



    Looking at the standard deviations for the dual 800 (RC5) and dual 1000 (OGR): to be 99.7% confident (i.e. 3 sigma) that additional test results would fall into a given range, the respective ranges would be about:



    dual 800 range: 1,000,000 - 25,000,000

    dual 1000 range: 2,700,000 - 32,500,000



    Those are some pretty hefty deviations.



    ID 8455 seems to be the result that's lowering the averages and inflating the standard deviations.



    interesting
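
    Here's a quick sketch (in Python) going the other way: given the 3-sigma ranges above, recover the implied mean and standard deviation. The ranges are the ones quoted above; everything else is just arithmetic, so treat it as illustrative rather than as distributed.net's actual methodology.

    [code]
    # Back-derive mean and sigma from a quoted 3-sigma range:
    # range = [mean - 3*sigma, mean + 3*sigma], so the width is 6 sigma.
    ranges = {
        "dual 800 (rc5)":  (1_000_000, 25_000_000),
        "dual 1000 (ogr)": (2_700_000, 32_500_000),
    }

    for name, (lo, hi) in ranges.items():
        mean = (lo + hi) / 2
        sigma = (hi - lo) / 6
        print(f"{name}: mean ~{mean:,.0f}, sigma ~{sigma:,.0f}")
    [/code]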
  • Reply 289 of 339
    x86. I'm not so sure after hearing about the 'Cell' project's specs...



    ...looking at the 'Cell' project... IBM look to be playing hardball.



    'Cell' has 100 times the FPU performance of the 2.5 gig Pentium!!!



    :eek:



    ER. On...a crummy console?



    IBM will be using 'Cell' in their servers also.



    If it's PPC.



    Conclusion. Apple will be able to use it?



    Cell is only a year-ish ahead of Apple using a 'G5'? Who needs a G5 if you have a Cell?



    Lemon Bon Bon



    [ 08-06-2002: Message edited by: Lemon Bon Bon ]
  • Reply 290 of 339
    programmer Posts: 3,503
    [quote]Originally posted by johnsonwax:

    "Ok, but how many things could possibly be different? Given Apple's more recent attraction to commodity components, what would prevent someone like Dell from reverse engineering the unique bits and legally shipping OS X-compatible hardware? Sure, they wouldn't be able to ship hardware that deviates from Apple's, but they could certainly undercut Apple's margins, given that they do little R&D.

    Letting the hardware genie out of the bottle is what Apple should be fearing most."[/quote]



    Moki answered this above: It doesn't matter if the hardware is MacOSX compatible if Apple doesn't sell OSX licenses except bound to their own hardware. If Dell or some other PC vendor tries to sell hardware with it, or if any customer tries to run copies of it that aren't properly licensed, Apple legal can just shut 'em down.



    You're also assuming that Dell and company want to ship OSX machines, or that Microsoft will let them. I don't think Microsoft would care if Apple ships those machines (nor could they do anything about it), but anybody that licenses Windows might think twice about shipping a competing OS as well.
  • Reply 291 of 339
    http://www.macedition.com/soup/hotsoup_20020806.php



    x86 'X'?



    'Not Likely'



    Read on...



    Lemon Bon Bon
  • Reply 292 of 339
    What's a 'clockless' chip, Programmer?



    I don't think Intel are going to like the sound of that...



    Lemon Bon Bon
  • Reply 293 of 339
    programmer Posts: 3,503
    It is a chip where there is not one master clock to which everything synchronizes. Instead, each internal circuit works as fast as it can and synchronizes with the circuits it needs to communicate with when it needs to communicate. If you think about a pipeline, current chip designs are like having somebody singing cadence... and everyone in the pipeline must get all their work done in time with the beat. If somebody is slow then you are limited by the rate at which they work -- speeding the cadence up will just cause them to fall out of step and introduce errors. In an asynchronous design each person in the pipeline negotiates with the previous and next persons in the pipeline as to when they can accept more data, and when they can hand off the data they just finished with. In this scheme the slowest member only affects execution when actually used, and then it only causes a slight delay as the results are waited for.



    Sun is doing work along these lines and it was written up in a recent Scientific American. The idea has been around a long time, but until now there really hasn't been a lot of motivation. Now that clock rates are getting a little silly, however, I think more chip designers are leaning that way. I've always thought that it makes a fair bit of sense, although it's not clear (to me) at what level it's best used.
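
    If you want to see the handshake idea in miniature, here's a toy sketch in Python: each stage runs at its own pace, and a bounded queue stands in for the "can you accept more data?" negotiation with the next stage. The stage names and delays are invented for illustration; a real asynchronous chip does this in silicon, not software.

    [code]
    # Toy asynchronous pipeline: stages work at their own rates and hand
    # data off only when the next stage is ready (the handshake). The
    # stage names and delays are made up for illustration.
    import threading, queue, time

    def stage(name, delay, inbox, outbox):
        while True:
            item = inbox.get()          # wait for the previous stage
            if item is None:            # shutdown marker
                outbox.put(None)
                return
            time.sleep(delay)           # this stage's own work rate
            outbox.put(item + [name])   # blocks until downstream is ready

    q_in, q_mid, q_out = queue.Queue(1), queue.Queue(1), queue.Queue(1)
    threading.Thread(target=stage, args=("decode", 0.01, q_in, q_mid)).start()
    threading.Thread(target=stage, args=("slow-fpu", 0.05, q_mid, q_out)).start()

    for i in range(3):
        q_in.put([i])                   # the slow stage only throttles us
    q_in.put(None)                      # when its results are needed

    while True:
        result = q_out.get()
        if result is None:
            break
        print(result)                   # e.g. [0, 'decode', 'slow-fpu']
    [/code]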
  • Reply 294 of 339
    yevgeny Posts: 1,148
    [quote]Originally posted by Lemon Bon Bon:

    "What's a 'clockless' chip, Programmer?

    I don't think Intel are going to like the sound of that...

    Lemon Bon Bon"[/quote]



    A clockless chip is a CPU that does not have a common MHz speed for all of its parts. Think of a CPU that has its integer unit running at a different speed than its floating point unit, at a different speed than its vector processor, etc.



    Why do this? Because certain parts of a chip can run quicker than other parts, and getting rid of the clock frees each part to run as quickly as possible. The drawback is that because you don't know when an operation is going to be finished, the chip has to do quite a bit more work to make sure that operations are executed in the proper order. This difficulty isn't insurmountable, and some portions of current CPUs are already clockless (parts of the P4 and the UltraSPARC III). It is generally thought that clockless CPUs are the future because it gets harder to make the whole CPU run at the same speed as chips get faster and faster.



    [ 08-06-2002: Message edited by: Yevgeny ]
  • Reply 295 of 339
    amorph Posts: 7,112
    [quote]Originally posted by Lemon Bon Bon:

    "What's a 'clockless' chip, Programmer?"[/quote]



    IHT article on clockless tech: http://www.cs.columbia.edu/async/misc/IHT_As_Chips_Reach_Speed_Limit.html



    [quote]"I don't think Intel are going to like the sound of that..."[/quote]



    Ironically, Intel is using similar technology in the Pentium IV.
  • Reply 296 of 339
    yevgeny Posts: 1,148
    [quote]Originally posted by Programmer:

    "It is a chip where there is not one master clock to which everything synchronizes. Instead, each internal circuit works as fast as it can and synchronizes with the circuits it needs to communicate with when it needs to communicate. If you think about a pipeline, current chip designs are like having somebody singing cadence... and everyone in the pipeline must get all their work done in time with the beat. If somebody is slow then you are limited by the rate at which they work -- speeding the cadence up will just cause them to fall out of step and introduce errors. In an asynchronous design each person in the pipeline negotiates with the previous and next persons in the pipeline as to when they can accept more data, and when they can hand off the data they just finished with. In this scheme the slowest member only affects execution when actually used, and then it only causes a slight delay as the results are waited for.

    Sun is doing work along these lines and it was written up in a recent Scientific American. The idea has been around a long time, but until now there really hasn't been a lot of motivation. Now that clock rates are getting a little silly, however, I think more chip designers are leaning that way. I've always thought that it makes a fair bit of sense, although it's not clear (to me) at what level it's best used."[/quote]



    It is thought that some of the cache coherency issues with the UltraSPARC IIs were caused by the fact that the cache wasn't quite able to run at the speed of the chip. Of course, it is hard to know why the USIIs crashed; I don't think that Sun ever officially acknowledged that they had a problem.



    It seems (IMHO) that certain blocks of a CPU should be able to run at the same speed. For example, all your integer units should be able to run at the same speed (if they share the same silicon). Given that the CPU maintains a Petri net of what data is needed for an operation to execute, and that this involves a bit of overhead, it would be bad to decouple everything on the CPU. But then again, I am a software engineer whose specialty is graph theory, so what do I know.
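
    Since we're talking Petri nets, here's the firing rule in miniature (Python, with made-up ops and dependencies): an operation executes only once everything it consumes has been produced, no matter what order the units producing those inputs happen to finish in.

    [code]
    # Miniature dependency bookkeeping: an op "fires" only when all of
    # its inputs are done, regardless of how fast each unit runs. The
    # op names and dependency edges are invented for illustration.
    ops = {
        "load_a": [],
        "load_b": [],
        "add":    ["load_a", "load_b"],
        "store":  ["add"],
    }

    done = set()
    while len(done) < len(ops):
        for op, deps in ops.items():
            if op not in done and all(d in done for d in deps):
                print("firing", op)
                done.add(op)
                break   # completion order needn't match program order
    [/code]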
  • Reply 297 of 339
    amorph Posts: 7,112
    [quote]Originally posted by moki:

    "I'm not an analyst, I'm an engineer -- I am not basing any of my speculation on what the various pundits are writing, but rather on... other information."[/quote]



    I wasn't addressing you with that comment (and anyway, aren't you a photographer?), but since you asked: the technical feasibility of sticking an x86 into a Mac isn't my primary concern here. Of course it's technically feasible. I just don't see more than a transient advantage in one part of the professional market, best case.



    Now, the stuff you're -ing about with respect to IBM makes sense to me.
  • Reply 298 of 339
    yevgeny Posts: 1,148
    Yes, if clockless chips were used by everyone, then GHz as a measure of speed would be truly meaningless. Of course, it is somewhat meaningless right now, but that doesn't seem to bother anyone. I can just see it now: "My pared-down integer unit runs at 24GHz, and your floating point unit runs at 8GHz. My 24GHz CPU is three times faster than your 8GHz CPU."



    I think that people would have to use the SPEC ratings and "real world" testing to compare CPUs.
  • Reply 299 of 339
    amorph Posts: 7,112
    [quote]Originally posted by Programmer:

    "In an asynchronous design each person in the pipeline negotiates with the previous and next persons in the pipeline as to when they can accept more data, and when they can hand off the data they just finished with. In this scheme the slowest member only affects execution when actually used, and then it only causes a slight delay as the results are waited for."[/quote]



    There is a side effect of this trait that plays very well to the PPC's strengths: The "cadence" in a clocked chip is actually a flood of electricity that fills the whole chip periodically. The circuits that have work to do use it, and the ones that don't just waste it. So clockless designers realized quickly that, with a little work, their designs could use only as much power as they absolutely needed to. In the boundary case, where the CPU is completely idle, it consumes no power. By contrast, a clocked chip always consumes a few watts at least (in the case of the G3) or more (in the case of the P4). It's like a better SpeedStep that doesn't require any additional hardware or software to enable.



    In tests by IBM, a clockless chip consumes something like 10% of the power of a clocked chip under average use (peak use would obviously be in the same ballpark as a clocked chip). Hello iBook.



    [quote]"Sun is doing work along these lines and it was written up in a recent Scientific American. The idea has been around a long time, but until now there really hasn't been a lot of motivation. Now that clock rates are getting a little silly, however, I think more chip designers are leaning that way. I've always thought that it makes a fair bit of sense, although it's not clear (to me) at what level it's best used."[/quote]



    There are pure clockless chips right now in cell phones. I think Nokia is the pioneer here, but I might be misremembering. The fact that they use so little power broke the barrier in the phone market, where power consumption is absolutely crucial.



    I think the variant tech that Yevgeny mentioned is more likely to appear in CPUs (it's already in the P4): instead of a pure clockless design, the CPU is divided into parts which are all clocked differently, and those are unified under a clockless scheme. The parts can keep getting smaller and smaller, approaching the pure design asymptotically, as far as it makes sense to, anyway. This looks like the path of least resistance. Also, some parts (simple, fast ones like integer) could go pure clockless before others (say, AltiVec). The same "clockless fabric" design that accommodates parts running at different clock speeds could accommodate parts that run clockless, it seems to me.
  • Reply 300 of 339
    [quote]Originally posted by Lemon Bon Bon:

    "What's a 'clockless' chip, Programmer?

    I don't think Intel are going to like the sound of that...

    Lemon Bon Bon"[/quote]



    It is normally called an 'asynchronous' chip. No master clock synchronization is needed across the whole chip. One big advantage is lower power consumption. It has been quite a research field in academia, and there are a couple of startups working on it. If I remember correctly, IEEE Spectrum had an article on it once.



    And yeah, Intel is investing a lot in this technology. I'm not sure whether they have used it yet or not.