AppleInsider › Forums › Mac Hardware › Future Apple Hardware › Rumor: Disaster at MWNY... :(

Rumor: Disaster at MWNY... :( - Page 7

post #241 of 267
Hey cool, a whole bunch of people who post like they really know something!


Re: 133 MHz MPX remaining unchanged and yet bandwidth problems being fixed. The problem with the fast G4 is that it can't read and write to memory fast enough (i.e. 850 MB/sec doesn't cut it). Motorola's own documentation shows that 850 is the MPX's limit, and they don't want to change it since the embedded crowd likes it the way it is. A 128-bit implementation is too expensive (and would constitute a change).
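To put rough numbers on that 850 MB/sec figure, here's some back-of-envelope bus arithmetic (the 64-bit width and 133 MHz clock are MPX's actual parameters; the function and the DDR comparison are just illustration, not anything from Moto's docs):

```python
def peak_bandwidth_mb(bus_width_bits, clock_mhz, transfers_per_clock=1):
    """Theoretical peak bandwidth of a simple parallel bus, in MB/s."""
    return (bus_width_bits / 8) * clock_mhz * transfers_per_clock

# MPX: 64 bits wide at 133 MHz, single data rate.
mpx_peak = peak_bandwidth_mb(64, 133)   # 1064 MB/s theoretical peak;
                                        # ~850 MB/s is the sustained figure
# A DDR controller with the same width and clock doubles the transfer rate:
ddr_peak = peak_bandwidth_mb(64, 133, transfers_per_clock=2)

print(round(mpx_peak))  # 1064
print(round(ddr_peak))  # 2128
```

So even without touching the 133 MHz clock, moving to a DDR memory interface roughly doubles what the chip can pull from RAM.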

Obviously the only way that memory bandwidth could increase is by not using the MPX bus to connect to memory: bring the memory controller on-chip. Motorola has a DDR memory controller in the 8540, so they could use that, and if the L3 cache interface is removed the pin count might not skyrocket so badly. I would have expected a RapidIO bus, but hey, MPX works and it's there already. This scheme would actually simplify embedded designs using the G4, so that might make Moto's other customers happy and would explain why Moto hasn't brought out a decent memory controller themselves (Apple's kicks their butt).

Another alternative would be to replace the L3 cache interface with an Intel-style FSB to an external memory controller, but this seems a little weird and considerably slower than the on-chip design.

Oh heck, maybe I should just PM you after all and put up with "ha ha, made you look".


Re: The need for fast machines. While it is true that there is a large group of Mac customers (realized and potential) for whom speed does not matter, there are also a whole bunch of markets that Apple clearly wants into that really really really care about performance. People to whom time is money. People to whom real-time is everything. Without at least competitive performance, Apple will have a very hard time competing no matter how good their software is.

If the speculation above is true, however, that ought to close the performance gap enough. A 20% MHz bump + duals + significant bandwidth improvement + I/O improvements + new GPUs on AGP 8x + Jaguar will make for a remarkable leap in performance. My money is on the table. If I were Apple my biggest concern would be how the heck to sell the existing PowerMac inventory!

[ 06-29-2002: Message edited by: Programmer ]
Providing grist for the rumour mill since 2001.
post #242 of 267
[quote]Originally posted by trumptman:
It would be a neat engineering trick to turn a disadvantage (old on chip memory controller) into an advantage via some engineering magic. It would also allow us Mac users to tell PC users to stop comparing our Macs with true server/workstation engineering to their PCs with an antiquated n/s bridge and all the resources associated with it.[/quote]

I think you've made an incorrect assumption about why (some) Macs have only one chip on the motherboard, rather than a setup like the PC north/south bridge architecture. Apple has integrated all the functionality into the single motherboard IC -- the G4s (so far) do not have an on-chip memory controller. All memory operations go through the MPX bus, and this is a shared bus so that all the processors in the system can watch for who currently has what in their private cache. If there were multiple MPX busses in the system there would need to be a device that sat astride all of them and did the job of watching for who has what data at any given time. This would be the job of the motherboard chip which runs at a low clock rate and which Apple would have to design from scratch. Not very likely, and not very efficient.

If the new G4 has an on-chip memory controller then memory requests from other processors or the I/O chip(s) would come across MPX and be handled by the G4 that happens to have the requested address in its own private pool of memory. This means that anybody accessing memory hooked to a different G4 than they are running on still has to go through the 850 MB/sec bottleneck, but each G4 would be able to talk to its own private pool of memory at whatever speed its on-chip controller and the attached RAM is capable of. Since the on-chip controller is probably hooked into the AltiVec cache streaming engine I'd wager that such a G4 would be capable of getting far more bandwidth out of any given memory type than any PC with the same type of memory. Lastly, since far less traffic will be crossing the MPX bus, the 850 MB/sec limit will seem far less crowded than before.
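A crude way to see why the private pools win even though remote traffic still squeezes through MPX (pure illustration; the 2128 MB/s local figure assumes a 64-bit DDR controller at 133 MHz, and the harmonic mix is my simplification, not a real memory model):

```python
def effective_bandwidth(local_mbs, remote_mbs, remote_fraction):
    """Harmonic mix: the fraction of accesses landing in another CPU's
    pool crawls across MPX at remote_mbs; the rest runs at local_mbs."""
    return 1.0 / (remote_fraction / remote_mbs
                  + (1.0 - remote_fraction) / local_mbs)

# All-local traffic gets the full on-chip rate...
print(round(effective_bandwidth(2128, 850, 0.0)))   # 2128
# ...and even with 1 access in 10 crossing to the other G4's pool,
# you're still far ahead of a single shared 850 MB/s bus:
print(round(effective_bandwidth(2128, 850, 0.1)))   # 1850
```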

Or all of you rumour mongers could be blowing smoke and we'll just get a speed bump.

Providing grist for the rumour mill since 2001.
post #243 of 267
Which they are trying by supposedly giving additional $100 rebates to dealers.
I heard that geeks are a dime a dozen, I just want to find out who's been passin' out the dimes
----- Fred Blassie 1964
post #244 of 267
[quote]Originally posted by admactanium:
a question for any of the other people with moto information: will moto continue to make "PowerPC" chips in the future? if you know what i'm talking about it makes sense.[/quote]

Not based on any concrete information, but I can't see logically how they could afford not to. Unless Moto wants to withdraw from the semiconductor business completely, which they haven't indicated, they will continue to produce PowerPC chips for the embedded space which is their bread and butter. They simply don't have the resources to devote to cutting edge CPU design and production. Something fewer and fewer companies today are able to do.
post #245 of 267
[quote]However, the problem is NOT existing mac users, it's professionals who have no platform loyalty.[/quote]

I totally agree with that. I was responding to your concern about the Mac user base defecting. A large proportion of Mac users are not even considering Wintel boxes.

But I agree that there is a percentage of power users that Apple must keep in the fold to maintain or grow market share. My sense is that they have identified some key markets where they can grow their base, and there will be equipment to back that up. I hope. :eek:
post #246 of 267
[quote]Originally posted by Barto:
What goes up on MOSR is no more reliable than me or you (with possible exceptions of smart people/insiders like...JYD...).[/quote]

Now that's comedy! :lol: :lol: :lol:
If yer gonna bother with thinking different, swing for the fences.
post #247 of 267
Immediately after last summer's MWNY, I understand our former iCEO launched a top-secret project: "D-Skys". Now if you don't recognize the first person to walk across the stage this year...
Yes my child, he closed quite a few threads in his day.

Locomotive
post #248 of 267
[quote]Originally posted by Junkyard Dawg:
They MUST win over rednecks/morons for this to happen, and these types do care about GHz, very much so. We're talking about the sorts who would just as soon overbore their small block Chevy V8 on a Saturday as play Quake, or drop a radical cam into their Mustang rather than burn a CD of Poison's greatest hits. Apple's got to win over these idiots and it's not going to happen with 800 MHz cutesy computers.[/quote]

Uh... Which ones are the morons again? You are saying that someone working on their car engine is a moron while someone mindlessly blowing things up in a computer game is some kind of creative genius? Now that is one seriously messed up line of thinking. I'm going to print out your quote and have it framed. Classic!
Stuck in an infinite loop waiting for an Apple PDA...

apple.otaku
post #249 of 267
[quote]Originally posted by Programmer:
If the new G4 has an on-chip memory controller then memory requests from other processors or the I/O chip(s) would come across MPX and be handled by the G4 that happens to have the requested address in its own private pool of memory. This means that anybody accessing memory hooked to a different G4 than they are running on still has to go through the 850 MB/sec bottleneck, but each G4 would be able to talk to its own private pool of memory at whatever speed its on-chip controller and the attached RAM is capable of. Since the on-chip controller is probably hooked into the AltiVec cache streaming engine I'd wager that such a G4 would be capable of getting far more bandwidth out of any given memory type than any PC with the same type of memory. Lastly, since far less traffic will be crossing the MPX bus, the 850 MB/sec limit will seem far less crowded than before.[/quote]

Yeah, that's my thinking as well. This setup should give us pretty substantial performance in most situations, and it doesn't require a radical overhaul of the system or a huge increase in component costs, beyond whatever extra the G4 itself costs.

Of course, it also makes me think that it's not going to show at MWNY, but rather a bit later - Sept or Oct. It's not such a radical change that it couldn't have been incorporated in the Xserve, and what would 2 months or so have mattered, especially if there was a substantial performance boost for some of the markets that Apple is targeting with Xserve?

Granted, the Xserve is relatively easy to change manufacturing in this way what with having a field-serviceable mobo, but why incur the setup costs with the old board? Doesn't make sense to me.

Do you think that the Xserve would be better served with its current architecture than the one posed above, given its target market? Of course, maybe Apple just introduced the good and better Xserves, and has left the best for later. Dual or quad 1.2 or 1.4 GHz? (Quad is actually worth considering again with dedicated memory controllers: shared memory performance would be even more anemic, but well factored software could really haul.) I think Apple could squeeze quads into the Xserve (I played with one yesterday), and the dual 1GHz that I played with ran amazingly cool: no perceptible heat buildup or output at all.

Curious, if Apple is going to offload Quartz Extreme, what would be the setup for this to work efficiently, again without wildly expensive architectural changes?

I disagree that 1GHz chips in the upgrade market suggest a non-trivial speed bump, since Apple's clearly not buying them up for future iMacs. Instead, Apple's got plans for performance improvements that don't depend on CPU speed from legacy chips, so having these in the upgrade market isn't going to substantially harm Apple's sales.

Seems to make sense. Now, looking forward, where do Moki's DSPs play into this? Perhaps those are for next year.
The plural of 'anecdote' is not 'data'.
post #250 of 267
You're basing that on the assumption that processor speed is all anyone cares about. It would never even occur to me to switch to peecees just because they're running 3GHz processors. Okay, I'm only responsible for a dozen Macs (not including my own machines), but they will all be replaced by new Macs next year. Most of our current Macs are single G4 450s and they're already faster machines than the people using them need for the work they do. But we're on a three-year upgrade schedule and next year we get new machines.

Sure, I want a G5 for myself, but the one thing people in my office care about MOST is...the size of their monitor. Nobody (except me) could tell you what processor they have, or how fast it is, or whether it's faster than the 8500 it replaced, but man, they can tell you that they have a 17-inch monitor and it's SO much better than their old one.

There are a LOT of Mac users who don't know anything about processors or clock speed or DDR RAM. They don't compare the specs of Macs to peecees because they don't know enough about computers to understand what they're comparing. What they know is: Macs are fun and easy to use. Peecees are complicated and crash a lot. EVERYONE in my office who buys a computer to use at home buys a Mac. They always ask my advice about what model to buy, but nobody has ever said to me "Gee, this Dell runs at 2.1GHz, should I consider getting one of those instead of a new Mac?"


You just don't get it at all, do you? I personally WILL NOT switch to Wintel cos I love the Mac experience (even if it is dog slow). However, more and more outfits are run by bean counters and WorldCom types. I don't have any say any more on what computers we use. That is now down to a dip sh*t IT Director and Procurement Manager. Most of my friends find themselves in the same situation now (very different from just a few years ago, when we had the say over what we used).

These said dip sh*ts do not care about how easy a mac is to use or how fun it is. They are used to spending wads of cash on technical support - what do they care if X crashes less? All they see is that they can get a 2Ghz wintel with better memory, graphics cards etc which will run Photoshop, Director, Dreamweaver, After Effects, Quark et al.

We get Pc dealers contacting us direct (we're an edu outfit - a big one) all the time offering us "a complete IT solution". Apple has never even sent someone down to do a demo when we request it.

My students won't buy one. Why? It's too slow, they can't afford the PowerMac line ("£3000 for a dual 1Ghz machine?"), and they can't keep putting a newer graphics card in an iMac or eMac to play the latest games et al.

Mac users love the Mac and will never switch. I didn't switch at work because of any 2GHz envy. This was forced on me. So what if I still buy Macs at home? Whoopee, Apple sold a Mac. Too bad they lost the 300 we have so far switched at work.

We are less than 5%. To see the bigger picture about Apple's future, we have to try to think outside the way Mac users think.
Greatly Insane
post #251 of 267
The fact that Mot is making 1 GHz processors available to parties other than Apple could also mean that the relationship between Apple and Mot has reached a low. Consider this: Mot has not much to offer Apple in terms of GHz, and Apple has committed itself to IBM for the future, so Mot can only squeeze some more out of the G4 by selling it to upgrade vendors. I can't see why Apple would be happy with the whole upgrade thing, they would like to see customers buying new Apples instead, but Mot could not care less anymore because they lose Apple anyway.
post #252 of 267
I thought the Apple/MOT arrangement ended this year.
I heard that geeks are a dime a dozen, I just want to find out who's been passin' out the dimes
----- Fred Blassie 1964
post #253 of 267
[quote]Originally posted by johnsonwax:
Yeah, that's my thinking as well. This setup should give us pretty substantial performance in most situations, doesn't require a radical overhaul of the system, or a huge increase in component costs, outside of whatever increase in G4 costs itself.

Of course, it also makes me think that it's not going to show at MWNY, but rather a bit later - Sept or Oct. It's not such a radical change that it couldn't have been incorporated in the Xserve, and what would 2 months or so have mattered, especially if there was a substantial performance boost for some of the markets that Apple is targeting with Xserve?[/quote]

I'm trying to work out the level of complexity implicit in an on-chip memory controller. The problem is in memory management, since you straight away have NUMA (non-uniform memory access) and I'm not sure OS X can cope with that yet. With DMA access from peripherals having to go through the CPU and its bus, this might actually reduce the performance as far as a server (Xserve) is concerned. Dual processor systems would also change their architecture completely. Most certainly it is not a trivial problem.

Michael
Sintoo, agora non podo falar.
post #254 of 267
[quote]Originally posted by mmicist:
I'm trying to work out the level of complexity implicit in an on-chip memory controller. The problem is in memory management, since you straight away have NUMA (non-uniform memory access) and I'm not sure OS X can cope with that yet. With DMA access from peripherals having to go through the CPU and its bus, this might actually reduce the performance as far as a server (Xserve) is concerned. Dual processor systems would also change their architecture completely. Most certainly it is not a trivial problem.[/quote]

If the hardware takes care of most of the details (i.e. a uniform 36-bit address space) then what is left for MacOSX is mostly an optimization problem -- the operating system will want to bind particular processes to particular processors when possible. For a multi-threaded app you're pretty much outta luck on a NUMA architecture since memory pages aren't bound to threads, only to processes. With the large L1/L2 caches, however, the situation isn't any worse than it is today as the processors will trade data across the MPX bus and the memory controller(s) will be running at >1 GHz, not to mention you will effectively have a double-width memory interface (or more if >2 processors).
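In scheduler terms, that optimization problem boils down to something like this toy sketch (the function name and the page-tracking scheme are entirely invented for illustration; real kernels keep per-node page counters rather than a list like this):

```python
from collections import Counter

def preferred_cpu(page_homes):
    """page_homes[i] is the id of the CPU whose local pool holds page i
    of a process. Prefer the CPU owning the most pages: running the
    process there turns the bulk of its memory traffic into fast local
    accesses instead of 850 MB/sec MPX hops."""
    return Counter(page_homes).most_common(1)[0][0]

# A process with six pages homed on CPU 0 and two on CPU 1
# should be scheduled on CPU 0 whenever possible:
print(preferred_cpu([0, 0, 0, 1, 0, 1, 0, 0]))   # 0
```

The hardware stays correct no matter where the process runs; the OS just loses bandwidth if it guesses badly.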

The on-chip memory controller can be fairly independent of the CPU core that it shares the silicon die with -- indeed they are probably connected by a fast/wide internal MPX bus. External memory requests don't need to be serviced by the processor, it just needs to share the memory controller with the rest of the world like it currently does. The memory controller(s) would also have built-in DMA engines -- at least the 8540 has them.

This reminds me: with this architecture >2 processors makes a lot of sense.

Your point about this architecture not being appropriate for the Xserve has some validity. In the Xserve the I/O system has up to 2.1 GB/sec memory bandwidth, whereas in the conjectural machine discussed here it would only have 1 GB/sec.
Providing grist for the rumour mill since 2001.
post #255 of 267
I love reading about the "pro's" that are constantly being held up as some kind of empirical benchmark of sorts. I love reading about other people telling me what I want or am willing to buy. First off, your so-called pros can be just as loyal to their platform of choice, Mac or Wintel, it matters not. The bean counter arguments are a reality that I have experienced in the past, a real corporate plague, no contest. Do me a favor though, let me champion my own cause. I think I can be a little more eloquent about my needs, not using words like "poo" or "fart" to get the point across. In the end though, I get the feeling that you're using the "pro's" as a smoke screen, to hide your need to win some kind of playground pissing contest. Regardless, I am concerned by the number of people stating that Apple won't upgrade the FSB; not good news in my opinion. The Xserve DDR implementation is a disappointing, yet realistic possibility. If that's all we get, then I wouldn't buy it, don't give a damn if it's got 5 GHz chips in it. I would never switch to Wintel based on chip performance; I didn't in '96 and I won't in '02.
sure im an expert, of what i cant remember.
post #256 of 267
[quote]Originally posted by Programmer:
If the new G4 has an on-chip memory controller...each G4 would be able to talk to its own private pool of memory at whatever speed its on-chip controller and the attached RAM is capable of. Since the on-chip controller is probably hooked into the AltiVec cache streaming engine I'd wager that such a G4 would be capable of getting far more bandwidth out of any given memory type than any PC with the same type of memory.[/quote]

An on-chip memory controller was substantially what I had in mind when I mentioned a 'Book-E compliant' G4, and for similar reasons.

[quote]Originally posted by Programmer:
I write speculative code.[/quote]
Well, that might work!
If yer gonna bother with thinking different, swing for the fences.
post #257 of 267
[quote]Originally posted by Programmer:
I think you've made an incorrect assumption about why (some) Macs have only one chip on the motherboard, rather than a setup like the PC north/south bridge architecture. Apple has integrated all the functionality into the single motherboard IC -- the G4s (so far) do not have an on-chip memory controller. All memory operations go through the MPX bus, and this is a shared bus so that all the processors in the system can watch for who currently has what in their private cache. If there were multiple MPX busses in the system there would need to be a device that sat astride all of them and did the job of watching for who has what data at any given time. This would be the job of the motherboard chip which runs at a low clock rate and which Apple would have to design from scratch. Not very likely, and not very efficient.

If the new G4 has an on-chip memory controller then memory requests from other processors or the I/O chip(s) would come across MPX and be handled by the G4 that happens to have the requested address in its own private pool of memory. This means that anybody accessing memory hooked to a different G4 than they are running on still has to go through the 850 MB/sec bottleneck, but each G4 would be able to talk to its own private pool of memory at whatever speed its on-chip controller and the attached RAM is capable of. Since the on-chip controller is probably hooked into the AltiVec cache streaming engine I'd wager that such a G4 would be capable of getting far more bandwidth out of any given memory type than any PC with the same type of memory. Lastly, since far less traffic will be crossing the MPX bus, the 850 MB/sec limit will seem far less crowded than before.

Or all of you rumour mongers could be blowing smoke and we'll just get a speed bump.[/quote]


Well remember, I said I was just pissing in the wind and even commented that I wondered why my shins were wet.

At least the rest of my prediction was pretty spot on.

"During times of universal deceit, telling the truth becomes a revolutionary act." -George Orwell

post #258 of 267
[quote]Originally posted by Programmer:
If the hardware takes care of most of the details (i.e. a uniform 36-bit address space) then what is left for MacOSX is mostly an optimization problem -- the operating system will want to bind particular processes to particular processors when possible. For a multi-threaded app you're pretty much outta luck on a NUMA architecture since memory pages aren't bound to threads, only to processes. With the large L1/L2 caches, however, the situation isn't any worse than it is today as the processors will trade data across the MPX bus and the memory controller(s) will be running at >1 GHz, not to mention you will effectively have a double-width memory interface (or more if >2 processors).[/quote]

Yes, but I was thinking of the hardware complexity; as a programmer I certainly don't want to see that complexity.

[quote]The on-chip memory controller can be fairly independent of the CPU core that it shares the silicon die with -- indeed they are probably connected by a fast/wide internal MPX bus. External memory requests don't need to be serviced by the processor, it just needs to share the memory controller with the rest of the world like it currently does. The memory controller(s) would also have built-in DMA engines -- at least the 8540 has them.[/quote]

But for optimum performance you don't want this: the memory controller can be (partially) moved into the processor's pipeline, significantly reducing the latency. It would certainly be an easier design to produce if you didn't do this, and it would still give major benefits, however.

[quote]This reminds me: with this architecture >2 processors makes a lot of sense.

Your point about this architecture not being appropriate for the Xserve has some validity. In the Xserve the I/O system has up to 2.1 GB/sec memory bandwidth, whereas in the conjectural machine discussed here it would only have 1 GB/sec.[/quote]

Nice to talk to another (literate) programming engineer.

Michael
Sintoo, agora non podo falar.
post #259 of 267
[quote]Originally posted by mmicist:
Nice to talk to another (literate) programming engineer.[/quote]

Ditto.

Yes, I was referring to the software complexity of this model. The hardware side of things would no doubt be complex, but hey, those guys pull off miracles all the time. I mean do you know how small 0.13 microns is? :eek:


BTW: as much as I'd like to see an on-chip memory controller show up in a PowerMac sometime really soon, I don't for a second expect it. I do believe that we might not see better than 133 MHz either. The quote I've seen from a Moto rep on the subject of a 166 MHz version of MPX was fairly recent and of the "might" and "in the future" nature.

A speed bumped Xserve-style machine remains the most likely possibility. I'm hoping Apple buddies up to nVidia and gets the first of the nv30s too.
Providing grist for the rumour mill since 2001.
post #260 of 267
[quote]Originally posted by Programmer:
Ditto.

Yes, I was referring to the software complexity of this model. The hardware side of things would no doubt be complex, but hey, those guys pull off miracles all the time. I mean do you know how small 0.13 microns is? :eek:[/quote]

Yes, I've worked with much smaller. Made my first 30nm transistor more than 10 years ago.

I also don't expect much different at MW, but hope for a lot. It's rather like weather forecasting: saying tomorrow will be rather like today will be right most of the time, but it is occasionally totally wrong.

I need a revamped FPU, or better still, a double precision AltiVec unit. Not a lot to ask for, is it?

Michael
Sintoo, agora non podo falar.
post #261 of 267
[quote]Originally posted by mmicist:
Yes, I've worked with much smaller. Made my first 30nm transistor more than 10 years ago.[/quote]

In a simulator? I'd be interested to know what research lab was making 30nm transistors 10 years ago except by freak accident.
post #262 of 267
[quote]Originally posted by Eskimo:
Not based on any concrete information, but I can't see logically how they could afford not to. Unless Moto wants to withdraw from the semiconductor business completely, which they haven't indicated, they will continue to produce PowerPC chips for the embedded space which is their bread and butter. They simply don't have the resources to devote to cutting edge CPU design and production. Something fewer and fewer companies today are able to do.[/quote]
Ah, that wasn't exactly what I was getting at. In any case, it will become clear in the future whether my information is correct.
post #263 of 267
[quote]Originally posted by Programmer:
If the hardware takes care of most of the details (i.e. a uniform 36-bit address space) then what is left for MacOSX is mostly an optimization problem -- the operating system will want to bind particular processes to particular processors when possible. For a multi-threaded app you're pretty much outta luck on a NUMA architecture since memory pages aren't bound to threads, only to processes. With the large L1/L2 caches, however, the situation isn't any worse than it is today as the processors will trade data across the MPX bus and the memory controller(s) will be running at >1 GHz, not to mention you will effectively have a double-width memory interface (or more if >2 processors).[/quote]

I think OS X should be in good shape for this type of situation. Considering that most apps are Carbon and not Cocoa (and this includes a lot of Apple's stuff), you often have a large number of single-threaded apps running. Toss in a large number of lightweight processes from BSD, and again we have a workable situation. Where OS X will need to be thoughtful is with the well-written multithreaded Cocoa apps. And as you point out, as the processor count grows, this arrangement makes more and more sense.

[quote]Your point about this architecture not being appropriate for the Xserve has some validity. In the Xserve the I/O system has up to 2.1 GB/sec memory bandwidth, whereas in the conjectural machine discussed here it would only have 1 GB/sec.[/quote]

That's what I was wondering as well. Thinking about it and having talked to a number of people looking at making Xserve purchases lately, the Xserve doesn't seem hugely appropriate for large scale numerical clustering where you'll be pounding through large data sets. Sure, it's got the bandwidth to shove those data sets around, but it'll again bind up on that bus when it comes to computation. These really are conventional servers, and ones pretty well suited for media work where you want the I/O bandwidth.

What's being considered here with the memory controller on the CPU makes far more sense for a desktop application and for the big scientific stuff. These are almost always more CPU-memory bound. For scientific work, if the code is factored for clustering, it'll work like a charm in this arrangement.

An Xserve built around this (preferably with a quad arrangement, actually) tips the balance from bandwidth engine to computation engine, which you want in *some* situations. Now we're looking at 168 G4s per rack, all of which have maximum bandwidth from AltiVec to RAM. Sounds like a BLAST machine to me, and it would probably tear the shit out of anything in its price range when it comes to computational power. We've been looking at how our scientific guys would decide between the Xserve and a plain dual desktop, and unless they were building a large cluster or needed the colocation, in most cases the desktop makes more sense since the CPU-RAM performance looks like a wash, and the towers present no other bottleneck to what they do. A quad Xserve in this arrangement changes the situation rather a lot, however. Apple will need to market this distinction well, but it's a unique distinction in a single vendor.
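Back-of-envelope on that 168-per-rack figure: 42 1U Xserves with the hypothetical quad boards. The 2.1 GB/sec per-CPU local rate is my assumption (it just reuses the I/O bandwidth figure quoted above for the current Xserve), so treat the aggregate as a sketch, not a spec:

```python
units_per_rack = 42       # 1U Xserves in a full-height rack
cpus_per_unit = 4         # the hypothetical quad configuration
local_gbs_per_cpu = 2.1   # assumed on-chip DDR controller rate, GB/s

total_cpus = units_per_rack * cpus_per_unit
aggregate_gbs = total_cpus * local_gbs_per_cpu

print(total_cpus)                 # 168
print(round(aggregate_gbs, 1))    # 352.8 GB/s of aggregate local bandwidth
```

The point is that with per-CPU controllers the aggregate bandwidth scales with the CPU count instead of being capped at one shared bus per box.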

What I'm curious about though, is that one of the original (supposed) benefits of AGP was the ability of the graphics card to access main memory. Clearly, this wouldn't make sense with the arrangement that we're describing, but modern cards with 64MB or 128MB of RAM probably aren't going to benefit from this arrangement anyway.

The diagram previously posted by Barto with the RapidIO switch sitting between the CPU and the IC or AGP would almost have to happen before long. If we improve the performance of the G4, and get two of them humming along well, it seems at some point we'll have trouble feeding the AGP, even with most of the memory load off of that bus.

[ 06-30-2002: Message edited by: johnsonwax ]
The plural of 'anecdote' is not 'data'.
post #264 of 267
[quote]Originally posted by Locomotive:
Immediately after last summer's MWNY, I understand our former iCEO launched a top-secret project: "D-Skys". Now if you don't recognize the first person to walk across the stage this year...[/quote]

huh?? english please...
I'm making plastics right now!
post #265 of 267
[quote]Originally posted by johnsonwax:
I think OS X should be in good shape for this type of situation. Considering that most apps are Carbon and not Cocoa (and this includes a lot of Apple's stuff), you often have a large number of single-threaded apps running. Toss in a large number of lightweight processes from BSD, and again we have a workable situation. Where OS X will need to be thoughtful is with the well-written multithreaded Cocoa apps. And as you point out, as the processor count grows, this arrangement makes more and more sense.[/quote]

Actually, even if you don't do anything special, your carbon app will have more than one thread. Several, in fact.

...and it is pretty easy to add additional threads as well.
Andrew Welch / el Presidente / Ambrosia Software, Inc.
Carpe Aqua -- Snapz Pro X 2.0.2 for OS X..... Your digital recording device -- WireTap Pro 1.1.0 for OS X
post #266 of 267
[quote]Originally posted by Eskimo:
In a simulator? I'd be interested to know what research lab was making 30nm transistors 10 years ago except by freak accident.[/quote]

No, real transistors, and repeatable. These were GaAs/AlGaAs heterojunction FETs, not MOS, although others in the group were making 100nm MOS devices. This was at Glasgow University's nanoelectronics research group, although there were/are several other groups in the world that could approach if not meet those values at the time.
Minimum repeatable feature size when I left the group 3 years ago was around 10nm, occasionally 3nm. This was done using e-beam lithography.

Michael
Sintoo, agora non podo falar.
post #267 of 267
Question from Bodhi

[quote]Originally posted by Bodhi:
[quote]Originally posted by Locomotive:
Immediately after last summer's MWNY, I understand our former iCEO launched a top-secret project: "D-Skys". Now if you don't recognize the first person to walk across the stage this year...[/quote]
huh?? english please...[/quote]

D-Skys = disguise. If nothing's ready again, I think we may not recognize Steve.

Locomotive

[ 07-01-2002: Message edited by: Locomotive ]
Yes my child, he closed quite a few threads in his day.

Locomotive