Rumor: Disaster at MWNY... :(


Comments

  • Reply 241 of 266
    programmer Posts: 3,467 member
    [quote]Originally posted by trumptman:

    <strong>It would be a neat engineering trick to turn a disadvantage (old on chip memory controller) into an advantage via some engineering magic. It would also allow us Mac users to tell PC users to stop comparing our Macs with true server/workstation engineering to their PCs with an antiquated n/s bridge and all the resources associated with it.</strong><hr></blockquote>



    I think you've made an incorrect assumption about why (some) Macs have only one chip on the motherboard, rather than a setup like the PC north/south bridge architecture. Apple has integrated all the functionality into the single motherboard IC -- the G4s (so far) do not have an on-chip memory controller. All memory operations go through the MPX bus, and this is a shared bus so that all the processors in the system can watch for who currently has what in their private cache. If there were multiple MPX busses in the system there would need to be a device that sat astride all of them and did the job of watching for who has what data at any given time. This would be the job of the motherboard chip which runs at a low clock rate and which Apple would have to design from scratch. Not very likely, and not very efficient.



    If the new G4 has an on-chip memory controller then memory requests from other processors or the I/O chip(s) would come across MPX and be handled by the G4 that happens to have the requested address in its own private pool of memory. This means that anybody accessing memory hooked to a different G4 than they are running on still has to go through the 850 MB/sec bottleneck, but each G4 would be able to talk to its own private pool of memory at whatever speed its on-chip controller and the attached RAM is capable of. Since the on-chip controller is probably hooked into the AltiVec cache streaming engine I'd wager that such a G4 would be capable of getting far more bandwidth out of any given memory type than any PC with the same type of memory. Lastly, since far less traffic will be crossing the MPX bus, the 850 MB/sec limit will seem far less crowded than before.
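
    To put some very rough numbers on the above (a back-of-the-envelope sketch only -- the 850 MB/sec figure is the MPX limit discussed here, but the per-controller bandwidth is an assumed example, roughly what DDR-266 could deliver):

    [code]
    #include <stdio.h>

    /* Back-of-the-envelope: every CPU sharing one MPX bus vs. each CPU
     * owning a private pool behind its on-chip controller.  Only the
     * 850 MB/s figure comes from this thread; the per-controller number
     * is an assumption (roughly DDR-266), not a real spec. */
    int main(void)
    {
        const double mpx_bw   = 850.0;    /* MB/s, shared MPX limit                */
        const double local_bw = 2100.0;   /* MB/s per on-chip controller (assumed) */
        const int    n_cpus   = 2;

        /* Today: every memory access from every CPU crosses the shared bus. */
        double shared_per_cpu = mpx_bw / n_cpus;

        /* Conjectured: local accesses stay in the private pool; only remote
         * accesses and coherency traffic still cross MPX.                   */
        double local_total = local_bw * n_cpus;

        printf("shared MPX, per CPU  : %6.0f MB/s\n", shared_per_cpu);
        printf("private pools, total : %6.0f MB/s local, plus %4.0f MB/s over MPX\n",
               local_total, mpx_bw);
        return 0;
    }
    [/code]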



    Or all of you rumour mongers could be blowing smoke and we'll just get a speed bump.



  • Reply 242 of 266
    bigc Posts: 1,224 member
    Which they are trying by supposedly giving additional $100 rebates to dealers.
  • Reply 243 of 266
    eskimo Posts: 474 member
    [quote]Originally posted by admactanium:

    <strong>a question for any of the other people with moto information: will moto continue to make "PowerPC" chips in the future? if you know what i'm talking about it makes sense.</strong><hr></blockquote>



    Not based on any concrete information, but I can't see logically how they could afford not to. Unless Moto wants to withdraw from the semiconductor business completely, which they haven't indicated, they will continue to produce PowerPC chips for the embedded space, which is their bread and butter. They simply don't have the resources to devote to cutting-edge CPU design and production, something fewer and fewer companies today are able to do.
  • Reply 244 of 266
    [quote]However, the problem is NOT existing Mac users, it's professionals who have no platform loyalty. <hr></blockquote>



    I totally agree with that. I was responding to your concern about the Mac user base defecting. A large proportion of Mac users are not even considering Wintel boxes.



    But I agree that there is a percentage of power users that Apple must keep in the fold to maintain or grow market share. My sense is that they have identified some key markets where they can grow their base, and there will be equipment to back that up. I hope. :eek:
  • Reply 245 of 266
    [quote]Originally posted by Barto:

    <strong>What goes up on MOSR is no more reliable than me or you (with possible exceptions of smart people/insiders like...JYD...).</strong><hr></blockquote>



    Now that's comedy! <img src="graemlins/lol.gif" border="0" alt="[Laughing]" /> <img src="graemlins/lol.gif" border="0" alt="[Laughing]" /> <img src="graemlins/lol.gif" border="0" alt="[Laughing]" />
  • Reply 246 of 266
    Immediately after last summer's MWNY, I understand our former iCEO launched a top-secret project: "D-Skys". Now if you don't recognize the first person to walk across the stage this year...
  • Reply 247 of 266
    [quote]Originally posted by Junkyard Dawg:

    <strong>

    They MUST win over rednecks/morons for this to happen, and these types do care about GHz, very much so. We're talking about the sorts who would just as soon overbore their small-block Chevy V8 on a Saturday as play Quake, or drop a radical cam into their Mustang rather than burn a CD of Poison's greatest hits. Apple's got to win over these idiots and it's not going to happen with 800 MHz cutesy computers.</strong><hr></blockquote>



    Uh... Which ones are the morons again? You are saying that someone working on their car engine is a moron while someone mindlessly blowing things up in a computer game is some kind of creative genius? Now that is one seriously messed up line of thinking. I'm going to print out your quote and have it framed. Classic!
  • Reply 248 of 266
    [quote]Originally posted by Programmer:

    <strong>If the new G4 has an on-chip memory controller then memory requests from other processors or the I/O chip(s) would come across MPX and be handled by the G4 that happens to have the requested address in its own private pool of memory. This means that anybody accessing memory hooked to a different G4 than they are running on still has to go through the 850 MB/sec bottleneck, but each G4 would be able to talk to its own private pool of memory at whatever speed its on-chip controller and the attached RAM is capable of. Since the on-chip controller is probably hooked into the AltiVec cache streaming engine I'd wager that such a G4 would be capable of getting far more bandwidth out of any given memory type than any PC with the same type of memory. Lastly, since far less traffic will be crossing the MPX bus, the 850 MB/sec limit will seem far less crowded than before.</strong><hr></blockquote>



    Yeah, that's my thinking as well. This setup should give us pretty substantial performance in most situations, and it doesn't require a radical overhaul of the system or a huge increase in component costs, outside of whatever increase there is in the cost of the G4 itself.



    Of course, it also makes me think that it's not going to show at MWNY, but rather a bit later - Sept or Oct. It's not such a radical change that it couldn't have been incorporated in the Xserve, and what would 2 months or so have mattered, especially if there was a substantial performance boost for some of the markets that Apple is targeting with Xserve?



    Granted, the Xserve makes it relatively easy to change manufacturing in this way, what with its field-serviceable mobo, but why incur the setup costs with the old board? Doesn't make sense to me.



    Do you think the Xserve would be better served by its current architecture than by the one proposed above, given its target market? Of course, maybe Apple just introduced the good and better Xserves and has left the best for later. Dual or quad 1.2 or 1.4 GHz? A quad is actually worth considering again with dedicated memory controllers: shared-memory performance would be even more anemic, but well-factored software could really haul. I think Apple could squeeze quads into the Xserve (I played with one yesterday), and the dual 1 GHz that I played with ran amazingly cool, with no perceptible heat buildup or output at all.



    I'm curious: if Apple is going to offload Quartz Extreme, what would be the setup for this to work efficiently, again without wildly expensive architectural changes?



    I disagree that 1 GHz chips in the upgrade market suggest a non-trivial speed bump, since Apple's clearly not buying them up for future iMacs. Instead, Apple's got plans for performance improvements that don't depend on CPU speed from legacy chips, so having these in the upgrade market isn't going to substantially harm Apple's sales.



    Seems to make sense. Now, looking forward, where do Moki's DSPs play into this? Perhaps those are for next year.
  • Reply 249 of 266
    spooky Posts: 504 member
    [quote]You're basing that on the assumption that processor speed is all anyone cares about. It would never even occur to me to switch to peecees just because they're running 3 GHz processors. Okay, I'm only responsible for a dozen Macs (not including my own machines), but they will all be replaced by new Macs next year. Most of our current Macs are single G4 450s and they're already faster machines than the people using them need for the work they do. But we're on a three-year upgrade schedule and next year we get new machines.



    Sure, I want a G5 for myself, but the one thing people in my office care about MOST is...the size of their monitor. Nobody (except me) could tell you what processor they have, or how fast it is, or whether it's faster than the 8500 it replaced, but man, they can tell you that they have a 17-inch monitor and it's SO much better than their old one.



    There are a LOT of Mac users who don't know anything about processors or clock speed or DDR RAM. They don't compare the specs of Macs to peecees because they don't know enough about computers to understand what they're comparing. What they know is: Macs are fun and easy to use. Peecees are complicated and crash a lot. EVERYONE in my office who buys a computer to use at home buys a Mac. They always ask my advice about what model to buy, but nobody has ever said to me "Gee, this Dell runs at 2.1GHz, should I consider getting one of those instead of a new Mac?"<hr></blockquote>




    You just don't get it at all, do you? I personally WILL NOT switch to Wintel cos I love the Mac experience (even if it is dog slow). However, more and more outfits are run by bean counters and WorldCom types. I don't have any say any more in what computers we use. That is now down to a dip sh*t IT Director and Procurement Manager. Most of my friends find themselves in the same situation now (very different from just a few years ago, when we had the say over what we used).



    These said dip sh*ts do not care about how easy a Mac is to use or how fun it is. They are used to spending wads of cash on technical support - what do they care if X crashes less? All they see is that they can get a 2 GHz Wintel with better memory, graphics cards, etc. which will run Photoshop, Director, Dreamweaver, After Effects, Quark et al.



    We get PC dealers contacting us directly (we're an edu outfit - a big one) all the time offering us "a complete IT solution". Apple has never even sent someone down to do a demo when we request it.



    My students won't buy one. Why? It's too slow. They can't afford the PowerMac line ("£3000 for a dual 1 GHz machine?"), and they can't keep putting a newer graphics card in an iMac or eMac to play the latest games and so on.



    Mac users love the Mac and will never switch. I didn't switch at work because of any 2 GHz envy. This was forced on me. So what if I still buy Macs at home? Whoopee, Apple sold a Mac. Too bad they lost the 300 we have so far switched at work.



    We are less than 5%. To see the bigger picture about Apple's future, we have to try to think outside the way Mac users think.
  • Reply 250 of 266
    The fact that Mot is making 1 GHz processors available to parties other than Apple could also mean that the relationship between Apple and Mot has reached a low. Consider this: Mot has not much to offer Apple in terms of GHz, and Apple has committed itself to IBM for the future, so Mot can only squeeze some more out of the G4 by selling it to upgrade vendors. I can't see why Apple would be happy with the whole upgrade thing; they would like to see customers buying new Apples instead. But Mot couldn't care less anymore, because they lose Apple anyway.
  • Reply 251 of 266
    bigc Posts: 1,224 member
    I thought the Apple/MOT arrangement ended this year.
  • Reply 252 of 266
    mmicist Posts: 214 member
    [quote]Originally posted by johnsonwax:

    <strong>



    Yeah, that's my thinking as well. This setup should give us pretty substantial performance in most situations, and it doesn't require a radical overhaul of the system or a huge increase in component costs, outside of whatever increase there is in the cost of the G4 itself.



    Of course, it also makes me think that it's not going to show at MWNY, but rather a bit later - Sept or Oct. It's not such a radical change that it couldn't have been incorporated in the Xserve, and what would 2 months or so have mattered, especially if there was a substantial performance boost for some of the markets that Apple is targeting with Xserve?

    </strong><hr></blockquote>



    I'm trying to work out the level of complexity implicit in an on-chip memory controller. The problem is in memory management, since you straight away have a NUMA (non-uniform memory architecture) and I'm not sure OS X can cope with that yet. With DMA access from peripherals having to go through the CPU and its bus, this might actually reduce the performance as far as a server (Xserve) is concerned. Dual processor systems would also change their architecture completely. Most certainly it is not a trivial problem.



    Michael
  • Reply 253 of 266
    programmer Posts: 3,467 member
    [quote]Originally posted by mmicist:

    <strong>I'm trying to work out the level of complexity implicit in an on-chip memory controller. The problem is in memory management, since you straight away have a NUMA (non-uniform memory architecture) and I'm not sure OS X can cope with that yet. With DMA access from peripherals having to go through the CPU and its bus, this might actually reduce the performance as far as a server (Xserve) is concerned. Dual processor systems would also change their architecture completely. Most certainly it is not a trivial problem.</strong><hr></blockquote>



    If the hardware takes care of most of the details (i.e. a uniform 36-bit address space) then what is left for MacOSX is mostly an optimization problem -- the operating system will want to bind particular processes to particular processors when possible. For a multi-threaded app you're pretty much outta luck on a NUMA architecture since memory pages aren't bound to threads, only to processes. With the large L1/L2 caches, however, the situation isn't any worse than it is today as the processors will trade data across the MPX bus and the memory controller(s) will be running at >1 GHz, not to mention you will effectively have a double-width memory interface (or more if >2 processors).
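
    Just to make that "optimization problem" concrete, here's a toy sketch of the local-first page placement the OS would want. None of these names are real Mach/OS X internals and the numbers are made up; it only illustrates the policy:

    [code]
    #include <stdio.h>

    /* Toy sketch only: not real OS X / Mach code.  Shows the "local first"
     * page placement a NUMA-aware allocator would want on a machine where
     * each G4 has its own memory pool. */

    #define NODE_COUNT 2                  /* one memory pool per G4 in this scenario */

    typedef struct {
        int home_node;                    /* processor this process usually runs on  */
    } process_t;

    static unsigned free_pages[NODE_COUNT] = { 4096, 4096 };   /* hypothetical state */

    /* Prefer the process's home node so its pages sit behind the local on-chip
     * controller; fall back to the other node (and pay the MPX trip) when full. */
    static int choose_node(const process_t *p)
    {
        if (free_pages[p->home_node] > 0)
            return p->home_node;
        return (p->home_node + 1) % NODE_COUNT;
    }

    int main(void)
    {
        process_t app = { 1 };            /* a process whose home node is CPU 1 */
        printf("allocate page on node %d\n", choose_node(&app));
        return 0;
    }
    [/code]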



    The on-chip memory controller can be fairly independent of the CPU core that it shares the silicon die with -- indeed they are probably connected by a fast/wide internal MPX bus. External memory requests don't need to be serviced by the processor; it just needs to share the memory controller with the rest of the world like it currently does. The memory controller(s) would also have built-in DMA engines -- at least the 8540 has them.



    This reminds me: with this architecture >2 processors makes a lot of sense.



    Your point about this architecture not being appropriate for the Xserve has some validity. In the Xserve the I/O system has up to 2.1 GB/sec memory bandwidth, whereas in the conjectural machine discussed here it would only have 1 GB/sec.
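
    For what it's worth, here's roughly where those two numbers come from -- a sketch assuming 64-bit (8-byte) data paths and peak rather than sustained rates:

    [code]
    #include <stdio.h>

    /* Peak-rate sketch: Xserve I/O behind the DDR-266 system controller vs.
     * I/O reaching memory over a 133 MHz MPX bus.  64-bit paths assumed. */
    int main(void)
    {
        const double width = 8.0;                 /* bytes per transfer            */
        double xserve_io = 133.0 * 2.0 * width;   /* DDR: two transfers per clock  */
        double mpx_io    = 133.0 * width;         /* single-pumped MPX             */

        printf("Xserve I/O to DDR-266 : %4.0f MB/s (~2.1 GB/s)\n", xserve_io);
        printf("I/O over 133 MHz MPX  : %4.0f MB/s (~1 GB/s)\n", mpx_io);
        return 0;
    }
    [/code]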
  • Reply 254 of 266
    I love reading about the "pros" that are constantly being held up as some kind of empirical benchmark. I love reading about other people telling me what I want or am willing to buy. First off, your so-called pros can be just as loyal to their platform of choice, Mac or Wintel; it matters not. The bean counter arguments are a reality that I have experienced in the past, a real corporate plague, no contest. Do me a favor, though: let me champion my own cause. I think I can be a little more eloquent about my needs, not using words like "poo" or "fart" to get the point across. In the end, though, I get the feeling that you're using the "pros" as a smoke screen to hide your need to win some kind of playground pissing contest. Regardless, I am concerned by the number of people stating that Apple won't upgrade the FSB; not good news in my opinion. The Xserve DDR implementation is a disappointing, yet realistic, possibility. If that's all we get, then I wouldn't buy it; I don't give a damn if it's got 5 GHz chips in it. I would never switch to Wintel based on chip performance; I didn't in '96 and I won't in '02.
  • Reply 255 of 266
    [quote]Originally posted by Programmer:

    <strong>If the new G4 has an on-chip memory controller...each G4 would be able to talk to its own private pool of memory at whatever speed its on-chip controller and the attached RAM is capable of. Since the on-chip controller is probably hooked into the AltiVec cache streaming engine I'd wager that such a G4 would be capable of getting far more bandwidth out of any given memory type than any PC with the same type of memory.</strong><hr></blockquote>



    An on-chip memory controller was substantially what I had in mind when I mentioned a 'Book-E compliant' G4, and for similar reasons.



    [quote]Originally posted by Programmer:

    <strong>I write speculative code.</strong><hr></blockquote>

    Well, that might work!
  • Reply 256 of 266
    trumptman Posts: 16,464 member
    [quote]Originally posted by Programmer:

    <strong>



    I think you've made an incorrect assumption about why (some) Macs have only one chip on the motherboard, rather than a setup like the PC north/south bridge architecture. Apple has integrated all the functionality into the single motherboard IC -- the G4s (so far) do not have an on-chip memory controller. All memory operations go through the MPX bus, and this is a shared bus so that all the processors in the system can watch for who currently has what in their private cache. If there were multiple MPX busses in the system there would need to be a device that sat astride all of them and did the job of watching for who has what data at any given time. This would be the job of the motherboard chip which runs at a low clock rate and which Apple would have to design from scratch. Not very likely, and not very efficient.



    If the new G4 has an on-chip memory controller then memory requests from other processors or the I/O chip(s) would come across MPX and be handled by the G4 that happens to have the requested address in its own private pool of memory. This means that anybody accessing memory hooked to a different G4 than they are running on still has to go through the 850 MB/sec bottleneck, but each G4 would be able to talk to its own private pool of memory at whatever speed its on-chip controller and the attached RAM is capable of. Since the on-chip controller is probably hooked into the AltiVec cache streaming engine I'd wager that such a G4 would be capable of getting far more bandwidth out of any given memory type than any PC with the same type of memory. Lastly, since far less traffic will be crossing the MPX bus, the 850 MB/sec limit will seem far less crowded than before.



    Or all of you rumour mongers could be blowing smoke and we'll just get a speed bump.



    </strong><hr></blockquote>





    Well remember, I said I was just pissing in the wind and even commented that I wondered why my shins were wet.



    At least the rest of my prediction was pretty spot on.
  • Reply 257 of 266
    mmicist Posts: 214 member
    [quote]Originally posted by Programmer:

    <strong>



    If the hardware takes care of most of the details (i.e. a uniform 36 bit address space) then what is left for MacOSX is mostly an optimization problem -- the operating system will want to bind particular processes to particular threads when possible. For a multi-threaded app you're pretty much outta luck on a NUMA architecture since memory pages aren't bound to threads, only to processes. With the large L1/L2 caches, however, the situation isn't any worse than it is today as the processors will trade data across the MPX bus and the memory controller(s) will be running at &gt;1 GHz, not to mention you will effectively have a double-width memory interface (or more if &gt;2 processors).</strong><hr></blockquote>



    Yes, but I was thinking of the hardware complexity; as a programmer I certainly don't want to see that complexity.



    [quote]

    <strong>

    The on-chip memory controller can be fairly independent of the CPU core that it shares the silicon die with -- indeed they are probably connected by a fast/wide internal MPX bus. External memory requests don't need to be serviced by the processor, it just needs to share the memory controller with the rest of the world like it currently does. The memory controller(s) would also have built-in DMA engines -- at least the 8540 has them.</strong><hr></blockquote>



    But for optimum performance you don't want this; the memory controller can be (partially) moved into the processor's pipeline, significantly reducing the latency. It would certainly be an easier design to produce if you didn't do this, and it would still give major benefits, however.
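
    A toy illustration of why that bus hop hurts -- the cycle counts below are assumptions for illustration, not measured figures; the only given is that a 1 GHz core clocks roughly 7-8 times faster than a 133 MHz external bus:

    [code]
    #include <stdio.h>

    /* Toy latency comparison: cache miss serviced through an external
     * controller across a 133 MHz bus vs. a controller integrated into
     * the processor.  All cycle counts are illustrative assumptions. */
    int main(void)
    {
        const double core_mhz = 1000.0;            /* 1 GHz G4                         */
        const double bus_mhz  = 133.0;             /* external bus                     */
        const double core_per_bus = core_mhz / bus_mhz;   /* ~7.5 core cycles/bus cycle */

        const double bus_cycles_each_way = 4.0;    /* assumed: arbitration + address   */
        const double dram_ns             = 45.0;   /* assumed DRAM access time         */

        double external = 2.0 * bus_cycles_each_way * core_per_bus
                        + dram_ns * core_mhz / 1000.0;
        double on_chip  = dram_ns * core_mhz / 1000.0;   /* external bus hop removed */

        printf("external controller : ~%3.0f core cycles per miss\n", external);
        printf("on-chip controller  : ~%3.0f core cycles per miss\n", on_chip);
        return 0;
    }
    [/code]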



    [quote]

    <strong>

    This reminds me: with this architecture >2 processors makes a lot of sense.



    Your point about this architecture not being appropriate for the Xserve has some validity. In the Xserve the I/O system has up to 2.1 GB/sec memory bandwidth, whereas in the conjectural machine discussed here it would only have 1 GB/sec.</strong><hr></blockquote>



    Nice to talk to another (literate) programming engineer.



    Michael
  • Reply 258 of 266
    programmer Posts: 3,467 member
    [quote]Originally posted by mmicist:

    <strong>Nice to talk to another (literate) programming engineer.</strong><hr></blockquote>



    Ditto.



    Yes, I was referring to the software complexity of this model. The hardware side of things would no doubt be complex, but hey, those guys pull off miracles all the time. I mean, do you know how small 0.13 microns is? :eek:





    BTW: as much as I'd like to see an on-chip memory controller show up in a PowerMac sometime really soon, I don't for a second expect it. I do believe that we might not see better than 133 MHz either. The quote I've seen from a Moto rep on the subject of a 166 MHz version of MPX was fairly recent and of the "might" and "in the future" nature.
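
    Quick arithmetic on what 166 MHz would actually buy over 133 MHz, assuming the 64-bit MPX data path and peak figures only (sustained numbers, like the 850 MB/sec quoted earlier, are lower):

    [code]
    #include <stdio.h>

    /* Peak MPX bandwidth at the two bus speeds discussed above; 64-bit path. */
    int main(void)
    {
        const double bytes = 8.0;
        double peak133 = 133.0 * bytes;            /* ~1064 MB/s */
        double peak166 = 166.0 * bytes;            /* ~1328 MB/s */

        printf("133 MHz MPX peak : %4.0f MB/s\n", peak133);
        printf("166 MHz MPX peak : %4.0f MB/s (about %2.0f%% more)\n",
               peak166, 100.0 * (peak166 / peak133 - 1.0));
        return 0;
    }
    [/code]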



    A speed-bumped Xserve-style machine remains the most likely possibility. I'm hoping Apple buddies up to nVidia and gets the first of the NV30s too.
  • Reply 259 of 266
    mmicist Posts: 214 member
    [quote]Originally posted by Programmer:

    <strong>



    Ditto.



    Yes, I was referring to the software complexity of this model. The hardware side of things would no doubt be complex, but hey, those guys pull off miracles all the time. I mean, do you know how small 0.13 microns is? :eek:



    </strong><hr></blockquote>



    Yes, I've worked with much smaller. Made my first 30nm transistor more than 10 years ago.



    I also don't expect much different at MW, but I hope for a lot. It's rather like weather forecasting: saying tomorrow will be rather like today will be right most of the time, but it is occasionally totally wrong.



    I need a revamped FPU or, better still, a double-precision AltiVec unit. Not a lot to ask for, is it?



    Michael
  • Reply 260 of 266
    eskimo Posts: 474 member
    [quote]Originally posted by mmicist:

    <strong>



    Yes, I've worked with much smaller. Made my first 30nm transistor more than 10 years ago.



    </strong><hr></blockquote>



    In a simulator? I'd be interested to know what research lab was making 30nm transistors 10 years ago except by freak accident.