Rumor: Disaster at MWNY... :(


Comments

  • Reply 261 of 266
    [quote]Originally posted by Eskimo:

    <strong>



    Not based on any concrete information, but I can't see logically how they could afford not to. Unless Moto wants to withdraw from the semiconductor business completely, which they haven't indicated, they will continue to produce PowerPC chips for the embedded space which is their bread and butter. They simply don't have the resources to devote to cutting edge CPU design and production. Something fewer and fewer companies today are able to do.</strong><hr></blockquote>

    Ah, that wasn't exactly what I was getting at. In any case, it will become clearer in the future whether my information is correct.
  • Reply 262 of 266
    [quote]Originally posted by Programmer:

    <strong>

    If the hardware takes care of most of the details (i.e. a uniform 36 bit address space) then what is left for MacOSX is mostly an optimization problem -- the operating system will want to bind particular processes to particular threads when possible. For a multi-threaded app you're pretty much outta luck on a NUMA architecture since memory pages aren't bound to threads, only to processes. With the large L1/L2 caches, however, the situation isn't any worse than it is today as the processors will trade data across the MPX bus and the memory controller(s) will be running at >1 GHz, not to mention you will effectively have a double-width memory interface (or more if >2 processors).</strong><hr></blockquote>



    I think OS X should be in good shape for this type of situation. Considering that most apps are Carbon rather than Cocoa (and this includes a lot of Apple's stuff), you often have a large number of single-threaded apps running. Toss in a large number of lightweight processes from BSD, and again we have a workable situation. Where OS X will need to be thoughtful is with the well-written multithreaded Cocoa apps. And as you point out, as the processor count grows, this arrangement makes more and more sense.
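    The "bind particular processes to particular processors" optimization above can be sketched in a few lines. Mac OS X of this era doesn't expose an affinity call, so this sketch borrows the Linux `sched_setaffinity()` interface purely as an illustration of the idea: pin a single-threaded process to one CPU so the pages it touches stay local to that CPU's memory controller.

    ```c
    /* Sketch of CPU pinning, the optimization a NUMA-aware scheduler
     * would ideally apply automatically to single-threaded processes.
     * Uses the Linux affinity API as a stand-in; Mac OS X's Mach
     * scheduler does not expose an equivalent call. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);                        /* allow only CPU 0 */

        /* pid 0 means "the calling process" */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }

        /* From here on, memory this process allocates and touches is
         * kept near CPU 0 -- the NUMA locality win described above. */
        printf("now bound to CPU %d\n", sched_getcpu());
        return 0;
    }
    ```

    The point isn't the particular API; it's that for a process (as opposed to a thread) the kernel has a well-defined home node to allocate against, which is why the single-threaded-heavy app mix works out well here.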



    [quote]<strong>Your point about this architecture not being appropriate for the Xserve has some validity. In the Xserve the I/O system has up to 2.1 GB/sec memory bandwidth, whereas in the conjectural machine discussed here it would only have 1 GB/sec.</strong><hr></blockquote>



    That's what I was wondering as well. Thinking about it and having talked to a number of people looking at making Xserve purchases lately, the Xserve doesn't seem hugely appropriate for large scale numerical clustering where you'll be pounding through large data sets. Sure, it's got the bandwidth to shove those data sets around, but it'll again bind up on that bus when it comes to computation. These really are conventional servers, and ones pretty well suited for media work where you want the I/O bandwidth.



    What's being considered here with the memory controller on the CPU makes far more sense for a desktop application and for the big scientific stuff. These are almost always more CPU-memory bound. For scientific work, if the code is factored for clustering, it'll work like a charm in this arrangement.



    An Xserve built around this (preferably with a quad arrangement, actually) tips the balance from bandwidth engine to computation engine, which is what you want in *some* situations. Now we're looking at 168 G4s per rack, all of which have maximum bandwidth from Altivec to RAM. Sounds like a BLAST machine to me, and it would probably tear the shit out of anything in its price range when it comes to computational power. We've been looking at how our scientific guys would decide between the Xserve and a plain dual desktop, and unless they were building a large cluster or needed the colocation, in most cases the desktop makes more sense since the CPU-RAM performance looks like a wash, and the towers present no other bottleneck to what they do. A quad Xserve in this arrangement changes the situation rather a lot, however. Apple will need to market this distinction well, but it's a unique distinction in a single vendor.



    What I'm curious about, though, is that one of the original (supposed) benefits of AGP was the ability of the graphics card to access main memory. Clearly, this wouldn't make sense with the arrangement we're describing, but modern cards with 64MB or 128MB of RAM probably aren't going to benefit from it anyway.



    The diagram previously posted by Barto with the RapidIO switch sitting between the CPU and the IC or AGP would almost have to happen before long. If we improve the performance of the G4, and get two of them humming along well, it seems at some point we'll have trouble feeding the AGP, even with most of the memory load off of that bus.



    [ 06-30-2002: Message edited by: johnsonwax ]
  • Reply 263 of 266
    bodhi Posts: 1,424 member
    [quote]Originally posted by Locomotive:

    <strong>Immediately after last summer's MWNY, I understand our former iCEO launched a top-secret project: "D-Skys". Now if you don't recognize the first person to walk across the stage this year...</strong><hr></blockquote>



    huh?? english please...
  • Reply 264 of 266
    moki Posts: 551 member
    [quote]Originally posted by johnsonwax:

    <strong>I think OS X should be in good shape for this type of a situation. Consider that most apps are Carbon and not Cocoa (and this includes a lot of Apple's stuff), you often have a large number of single-threaded apps running. Toss in a large number of lightweight processes from BSD, and again we have a workable situation. Where OS X will need to be thoughtful is with the well-written multithreaded Cocoa apps. And as you point out, as the processor count grows, this arrangement makes more and more sense.</strong><hr></blockquote>



    Actually, even if you don't do anything special, your Carbon app will have more than one thread. Several, in fact.



    ...and it is pretty easy to add additional threads as well.
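    For anyone curious what "adding additional threads" looks like in practice, here is a minimal sketch using pthreads, which OS X supports alongside the Carbon Multiprocessing Services APIs. The worker's task (summing an array) is just a placeholder for real work handed off the main thread.

    ```c
    /* Minimal sketch: spawn one extra worker thread with pthreads and
     * wait for its result. The summing work is a stand-in for any
     * computation you'd push off the main (UI) thread. */
    #include <pthread.h>
    #include <stdio.h>

    static long partial_sum;            /* written by the worker thread */

    static void *worker(void *arg)
    {
        long *data = arg;
        long sum = 0;
        for (int i = 0; i < 1000; i++)
            sum += data[i];
        partial_sum = sum;
        return NULL;
    }

    int main(void)
    {
        long data[1000];
        for (int i = 0; i < 1000; i++)
            data[i] = i;

        pthread_t tid;
        pthread_create(&tid, NULL, worker, data);  /* hand off the work */
        pthread_join(tid, NULL);                   /* wait for completion */

        printf("sum = %ld\n", partial_sum);        /* 0+1+...+999 = 499500 */
        return 0;
    }
    ```

    On the dual-processor machines discussed in this thread, each such worker can be scheduled on its own CPU, which is exactly where the extra threads start paying off.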
  • Reply 265 of 266
    mmicist Posts: 214 member
    [quote]Originally posted by Eskimo:

    <strong>



    In a simulator? I'd be interested to know what research lab was making 30nm transistors 10 years ago except by freak accident.</strong><hr></blockquote>



    No, real transistors, and repeatable. These were GaAs/AlGaAs heterojunction FETs, not MOS, although others in the group were making 100nm MOS devices. This was at Glasgow University's nanoelectronics research group, although there were/are several other groups in the world that could approach, if not meet, those values at the time.

    Minimum repeatable feature size when I left the group 3 years ago was around 10nm, occasionally 3nm. This was done using e-beam lithography.



    Michael
  • Reply 266 of 266
    Question from Bodhi



    [quote]



    Originally posted by Locomotive:

    Immediately after last summer's MWNY, I understand our former iCEO launched a top-secret project: "D-Skys". Now if you don't recognize the first person to walk across the stage this year...



    huh?? english please... <hr></blockquote>



    D-Skys = disguise. If nothing's ready again, I think we may not recognize Steve.



    Locomotive



    [ 07-01-2002: Message edited by: Locomotive ]