PowerBook G5


Comments

  • Reply 161 of 375
    Quote:

    Originally posted by wizard69

Given the information floating around that Apple and IBM are working on a low-power processor, it is not unreasonable to suspect that this is a path they are following.



    Yes, but that is not the assertion made in this thread. The assertion is not that the next Powerbook will use a previously undisclosed, low power chip developed for that purpose. The contention is that it will use 4 PPC 440 cores on an MCM. Now, if Nr9 can document how the 440 was used to create this new, low power chip I'm all ears. Instead, what he has done is to posit that IBM has added VMX and 440 FPU2 to a 440 core and put four of these on an MCM.



His answer to the lack of SMP support in the 440 core is that the OS will use an MPI implementation instead. (At first, I thought he didn't know what MPI was, but he seems to understand the implications.) Well, that would require retooling the entire OS from the kernel out. Oh, and by the way, while an MPI implementation is appropriate for a highly parallel application, there is some concern that it's not appropriate for general-purpose applications.



    The approach he describes is great for supercomputing applications where you are doing large scale matrix transforms which can be divided across multiple processors and the results later recombined for your solution. But that's not the reality of today's multi-threaded apps. What Nr9 is talking about is splitting a single thread across multiple processors which simply does not work. Every Mac application would have to be re-written from the ground up to make it work, and even then, most would not benefit from a distributed compute environment. (Too many dependencies to be able to split up the problem efficiently.)
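To make that divide-and-recombine pattern concrete, here is a minimal sketch using the standard MPI C API; the doubling loop is only a stand-in for whatever matrix transform is being distributed:

Code:

#include <mpi.h>

#define N 1024  /* rows; assumed divisible by the process count */

int main(int argc, char **argv)
{
    double full[N], chunk[N];
    int rank, nprocs, i;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    if (rank == 0)                      /* root owns the full problem */
        for (i = 0; i < N; i++)
            full[i] = (double)i;

    int per = N / nprocs;

    /* divide: root hands each process its slice */
    MPI_Scatter(full, per, MPI_DOUBLE, chunk, per, MPI_DOUBLE,
                0, MPI_COMM_WORLD);

    for (i = 0; i < per; i++)           /* compute: fully independent work */
        chunk[i] *= 2.0;

    /* recombine: root collects the partial results */
    MPI_Gather(chunk, per, MPI_DOUBLE, full, per, MPI_DOUBLE,
               0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}

Notice that nothing here shares memory between processes. That is exactly why the model fits embarrassingly parallel work and fights today's shared-state, multithreaded applications.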



    So what he is proposing amounts to a mobile computer that's built to run supercomputer applications. Which is plausible, I suppose, if you are willing to cede that government agencies and large academic institutions are going to be porting these applications to Mac laptops. Otherwise, I think it's a lot of speculation piled on top of a lot of wishin' and hopin'.

    Quote:

    Considering the process technologies that IBM has available to tap, one could reasonably believe that a new variant of this processor could hit much higher frequencies.



    Fine, but then it's not a 440 anymore. Sure, Apple and IBM could be working on something that makes sense. But the 440 ain't it.

    Quote:

1. Apple has been forced to spend a great deal of effort to optimize its OS and system libraries to support multithreaded operation. It is to the point now that it is very worthwhile to leverage this in new hardware designs.



And the 440 doesn't do multithreading. It has no SMP support.

    Quote:

2. It does not appear that the 970 will be a viable laptop processor anytime soon. Process shrinks or not, the market is going to demand good performance and long battery life. Intel's Centrino will soon be the benchmark here.



Yes, well, a PPC970 variant is still a viable candidate for a low power, high performance mobile solution IMHO. And Centrino markets a chip set, not a CPU. The CPUs in Centrinos are mobile versions of the P4 IIRC.

    Quote:

4. Lots of public discussion with respect to dual-core G4s coming in the future. This could easily be an alternative for Apple if the R&D effort around this rumored system fizzles out.



    Possibly. But the speculation (that's all there is) has been going on for how many years now? With what to show for it? I don't think this is likely, but you never know.

    Quote:

5. SMP systems offer alternative ways to manage power in laptops and other power-constrained PCs.



    Yes, they do. Too bad the 440 doesn't support SMP.

    Quote:

6. To remain more than competitive Apple will need to cut power usage by more than half. One of the primary motivators behind many Apple laptop purchases has been time on battery for a given size of machine. Intel now has machines that exceed what Apple can deliver here.



    You're right, Apple needs better power management and better battery technology in their laptops. And which do you think you will see first? A completely rewritten OS (that only supports laptops with no available applications) or incremental improvements in battery and power management?

    Quote:

    Maybe not this year or even early next, but certainly in the future.



    Oh there's no doubt that Apple is already working on next generation systems. But that's not the assertion made in this thread.



    Ask yourself this: Why would Apple invest the effort just for the Powerbook line? Do you think they are willing or able to force developers to adapt to a new instruction set just for the Powerbook line? To maintain two separate code bases (one for PBs, one for desktops)?



    Yes, I know it's the year of the Powerbook, but this is still the tail wagging a very large dog.
  • Reply 162 of 375
    Quote:

    Originally posted by Nr9

This is because the PowerBook G5 architecture requires a mini-OS on each core, linked together with message passing, sorta like a mini-cluster. OS X 10.4 is likely to provide the functionality. Most of the user interface will be offloaded to the graphics chip. The overall system architecture is very high bandwidth and low latency, and that is what makes it possible. Some third-party work has already been done in this area, and that should help the implementation.



    You're obviously quoting someone else here. Could you provide a link or source for this?
  • Reply 163 of 375
nr9 · Posts: 182 · member
    Quote:

    Originally posted by Tomb of the Unknown



    So what he is proposing amounts to a mobile computer that's built to run supercomputer applications. Which is plausible, I suppose, if you are willing to cede that government agencies and large academic institutions are going to be porting these applications to Mac laptops. Otherwise, I think it's a lot of speculation piled on top of a lot of wishin' and hopin'.





    or today's applications written in a different programming model.

    Quote:

    Originally posted by Tomb of the Unknown





    Ask yourself this: Why would Apple invest the effort just for the Powerbook line? Do you think they are willing or able to force developers to adapt to a new instruction set just for the Powerbook line? To maintain two separate code bases (one for PBs, one for desktops)?



    Yes, I know it's the year of the Powerbook, but this is still the tail wagging a very large dog.




They are going to do it for future desktops. The reason they start with the PowerBook is power consumption.



    Quote:

    Originally posted by Tomb of the Unknown

    You're obviously quoting someone else here. Could you provide a link or source for this?



What's that supposed to mean?
  • Reply 164 of 375
    Quote:

    Originally posted by Nr9

    or today's applications written in a different programming model.



    Completely different and possibly not suited for general purpose applications. (Is there an echo in here?)

    Quote:

They are going to do it for future desktops. The reason they start with the PowerBook is power consumption.



    Right, and because Powerbook users don't need software.

    Quote:

What's that supposed to mean?



It means the English used in the bit quoted isn't yours. You did not write that. I'd like to know who did and what it was in reference to.
  • Reply 165 of 375
amorph · Posts: 7,112 · member
    Quote:

    Originally posted by Tomb of the Unknown



His answer to the lack of SMP support in the 440 core is that the OS will use an MPI implementation instead. (At first, I thought he didn't know what MPI was, but he seems to understand the implications.) Well, that would require retooling the entire OS from the kernel out. Oh, and by the way, while an MPI implementation is appropriate for a highly parallel application, there is some concern that it's not appropriate for general-purpose applications.




I'm doing a bit of reading on message-passing kernels to determine relevance right now (this is what I love about threads like this). I'm not sure whether or how those messages are adaptable yet, and if anyone does know, I'm all ears.



I share the concern in your "oh and by the way" reminder, but as a counterpoint: IBM and Apple are both putting a fair amount of work into parallel architectures. Some of Apple's reasons have more to do with G4-related exigencies than any theoretical concerns or long-term visions, but work done in the direction of parallelism is work done. There's a general sense that the single big CPU approach isn't going to make sense much longer. (Maybe it will, but I can't blame anyone for looking at alternate solutions.)



    If IBM believes that parallel computing of whatever flavor is the future, and Apple is using IBM as a CPU supplier, they're going to have to prepare for and endure whatever disruption this change entails anyway, at some point. Better sooner than later, frankly. And although the PowerBook seems like an odd choice to be the early adopter, so does everything in Apple's lineup - and at least the PowerBook can exploit the low power aspect of this solution (which IBM has also claimed for Cell, if memory serves).



    Quote:

    What Nr9 is talking about is splitting a single thread across multiple processors which simply does not work. Every Mac application would have to be re-written from the ground up to make it work, and even then, most would not benefit from a distributed compute environment. (Too many dependencies to be able to split up the problem efficiently.)



I missed any claim from him that you could split one thread across multiple processors. At any rate, yes, this is really the Big Problem. But again, if IBM's headed this way, the problem has to be faced one way or another. I'd argue that Apple's in a better position than most to move this way, although I still haven't convinced myself of the feasibility.



    But it's not as if the current approach is without disadvantages. The trick is choosing a solution that has the right disadvantages for any given situation.



    Quote:

Yes, well, a PPC970 variant is still a viable candidate for a low power, high performance mobile solution IMHO. And Centrino markets a chip set, not a CPU. The CPUs in Centrinos are mobile versions of the P4 IIRC.



    The Pentium M has more in common with a P3 than a P4.



    Quote:

    Oh there's no doubt that Apple is already working on next generation systems. But that's not the assertion made in this thread.



    Ask yourself this: Why would Apple invest the effort just for the Powerbook line? Do you think they are willing or able to force developers to adapt to a new instruction set just for the Powerbook line? To maintain two separate code bases (one for PBs, one for desktops)?




    First off, for myself, I'm far less interested in clinging to every assertion made in this thread than I am in the potential of the architecture generally, so whether this appears in a PowerBook late next year or not is only of secondary interest to me.



    Second, I'm not convinced that the 440 is ill-suited to the task. It doesn't do SMP, but if you're using lots of cores linked together with MPI then you don't want to waste silicon on that anyway. Lots of attention has to be paid to the business of passing messages, but it seems to me that the basic insight of RISC design - that load and store instructions should be separate and explicit - lends itself well to that adaptation. As far as the bulk of the PPC instruction set is concerned, nothing outside the register set exists, and so it seems to me that that part of the instruction set won't have to be touched. A multiply-add is a multiply-add; where the data came from is an implementation detail.
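To see why, here is a hedged C sketch: the arithmetic never learns where its operands live. The fetch_remote()/store_remote() helpers are hypothetical stand-ins for the message-passing transport, stubbed with a local array so the example runs:

Code:

#include <stdio.h>

/* Hypothetical fabric transport, stubbed with a local array so the
   sketch runs; a real implementation would send load/store messages. */
static double fabric_mem[4] = { 2.0, 3.0, 1.0, 0.0 };

static double fetch_remote(int addr)           { return fabric_mem[addr]; }
static void   store_remote(int addr, double v) { fabric_mem[addr] = v; }

static void fused_step(int a, int b, int c, int out)
{
    /* explicit loads: the only part that touches the fabric */
    double x = fetch_remote(a);
    double y = fetch_remote(b);
    double z = fetch_remote(c);

    /* a multiply-add is a multiply-add; where the data came
       from is an implementation detail */
    store_remote(out, x * y + z);
}

int main(void)
{
    fused_step(0, 1, 2, 3);
    printf("%g\n", fabric_mem[3]);  /* 2*3 + 1 = 7 */
    return 0;
}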



As for the fabric: A fabric in this context is logical (obviously). Nothing in particular requires it to be uniform; in fact, based on IBM's and Sony's claims, I'd say that Cell is designed to deal with a more fractal-appearing fabric, with a large, high-latency fabric of smaller, lower-latency fabrics of smaller, even lower latency fabrics. It seems to me that "fabric" can cover everything from broadband to CoreConnect, inclusive. Again, we're already seeing baby steps in this direction, but as a fundamental architecture it would be able to cover a lot of ground.

Basically, you could implement a semantic where the smaller the message passed, the smaller the fabric it's passed to for handling. At one extreme, a tiny message could be passed to a single core, and this could be optimized for given messages to be an atomic operation that wrote the response into the same memory used for the message; at the other, a message of arbitrarily large size (carrying a frame in a Pixar film to be rendered, say) would be sent and received across a network between discrete machines.

This approach doesn't require that the fabric be low latency; it merely requires that for any given message, the latency should be appropriate to the size of the message. If you're going to send a message containing eight hours of work to a render farm, the latency of Ethernet is essentially irrelevant. NeXTStep was most of the way toward being able to do all of this anyway - it left implementation details to the programmer, but otherwise all the required technologies were there.
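A hedged sketch of that size-based semantic; the tier names and thresholds below are invented for illustration, not anything IBM or Sony has described:

Code:

#include <stddef.h>

enum fabric_tier {
    TIER_CORE,    /* tiny message: hand to a single core, atomically    */
    TIER_LOCAL,   /* medium: low-latency on-chip or on-MCM fabric       */
    TIER_NETWORK  /* huge: ship between discrete machines (render farm) */
};

/* route by size alone; the header-parameter variant discussed below
   would replace this heuristic with an explicit latency class */
enum fabric_tier route_by_size(size_t msg_bytes)
{
    if (msg_bytes <= 256)             return TIER_CORE;
    if (msg_bytes <= 1024UL * 1024UL) return TIER_LOCAL;
    return TIER_NETWORK;
}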



    If size is too simple a metric (and I confess that it does resemble the silliness of early information theory too closely for my comfort) then perhaps some parameter can be set in the message header identifying the class of latency appropriate to the message. I have much more reading to do.
  • Reply 166 of 375
jbl · Posts: 555 · member
    When is this PowerBook G5 supposed to be released?
  • Reply 167 of 375
    Wow, what a weird thread! It's titled "Powerbook G5", but it's discussing an experimental "cell" multiprocessor technology that would require a completely different programming approach, with its initial commercial deployment in a consumer laptop?!



    Maybe it's just me, but I'd expect the initial deployment of a new technology in a desktop (tower) machine, or even possibly an Xserve.



    And I'd expect IBM to sell it first. Apple does not have the programming resources to support multiple product lines with different programming requirements. That's why Steve killed the Newton, and why he killed Mac OS 9.



Don't get me wrong, it's a fascinating discussion. But remember that consumer computing technology takes years to catch up with experimental trends.
  • Reply 168 of 375
    Quote:

    Originally posted by Nr9

This is because the PowerBook G5 architecture requires a mini-OS on each core, linked together with message passing, sorta like a mini-cluster. OS X 10.4 is likely to provide the functionality. Most of the user interface will be offloaded to the graphics chip. The overall system architecture is very high bandwidth and low latency, and that is what makes it possible. Some third-party work has already been done in this area, and that should help the implementation.



    In my opinion, this quote is the most interesting part of this thread.
  • Reply 169 of 375
    How would xGrid fit in all this?
  • Reply 170 of 375
    Quote:

    Originally posted by Amorph

First off, for myself, I'm far less interested in clinging to every assertion made in this thread than I am in the potential of the architecture generally, so whether this appears in a PowerBook late next year or not is only of secondary interest to me.



    OK.

    Quote:

    Second, I'm not convinced that the 440 is ill-suited to the task. It doesn't do SMP, but if you're using lots of cores linked together with MPI then you don't want to waste silicon on that anyway.



This is true as far as it goes, but the question is: which is more efficient? Which approach uses less silicon? Because you have to spend silicon one way or the other, either on broadband communications and NUMA memory systems or on SMP cache coherency logic.

    Quote:

    Lots of attention has to be paid to the business of passing messages, but it seems to me that the basic insight of RISC design - that load and store instructions should be separate and explicit - lends itself well to that adaptation. As far as the bulk of the PPC instruction set is concerned, nothing outside the register set exists, and so it seems to me that that part of the instruction set won't have to be touched. A multiply-add is a multiply-add; where the data came from is an implementation detail.



But what do you do about branch prediction? How do you handle dependencies that arise out of the execution of instructions? Do you "pre-process instructions" on a subset of cores?

    Quote:

    As for the fabric: A fabric in this context is logical (obviously). Nothing in particular requires it to be uniform; in fact, based on IBM's and Sony's claims, I'd say that Cell is designed to deal with a more fractal-appearing fabric, with a large, high-latency fabric of smaller, lower-latency fabrics of smaller, even lower latency fabrics. It seems to me that "fabric" can cover everything from broadband to CoreConnect, inclusive. Again, we're already seeing baby steps in this direction, but as a fundamental architecture it would be able to cover a lot of ground.



Sure, in theory, this approach lends almost infinite flexibility and capacity. In practice, how do you compile an application to take advantage of this kind of logical fabric? How do you break up an application like Word or PowerPoint so that each runs on more than one core? And then how do you prioritize them? In HPC applications, there are usually "head nodes" that handle this; will you need the same in a 4-core laptop? An eight-core desktop?

    Quote:

a message of arbitrarily large size (carrying a frame in a Pixar film to be rendered, say) would be sent and received across a network between discrete machines.



But this is an example of something with very few dependencies that lends itself to SIMD or MPI -- it's easily divided into chunks that can be processed and stitched back together. What about ray tracing, where operations are more serial? Do you want all your ray tracing done on one slow core?

    Quote:

    if you're going to send a message containing eight hours of work to a render farm, the latency of Ethernet is essentially irrelevant.



    Unless it can't start for an hour because it takes that long to break up and distribute the job.
  • Reply 171 of 375
amorph · Posts: 7,112 · member
    Quote:

    Originally posted by cubist

    Wow, what a weird thread! It's titled "Powerbook G5", but it's discussing an experimental "cell" multiprocessor technology that would require a completely different programming approach, with its initial commercial deployment in a consumer laptop?!



    Welcome to Future Hardware.



    Quote:

    Maybe it's just me, but I'd expect the initial deployment of a new technology in a desktop (tower) machine, or even possibly an Xserve.



    That's true for a technology that's best deployed in a desktop machine or a server. Most new technology of this sort to date has been big and hot and expensive. But if the new technology is, say, wireless networking, where was it deployed first?



    This new tech is all about how much power you can get in how little space for how cheap. That's the whole appeal of clustering: Green Destiny was built on the big computing equivalent of pocket change, it's small, it doesn't require a lot of power, and it can do real work. A big fire-breathing tower or server misses the point; a notebook, on the other hand, is closer to hitting the mark. (So is a blade, or a thin server like the Xserve, except that the notebook has to be more clever about power management.)



    Quote:

    And I'd expect IBM to sell it first.



    IBM's been selling stuff like this for ages. It's not new, it's just new in this space.



    Quote:

    Apple does not have the programming resources to support multiple product lines with different programming requirements. That's why Steve killed the Newton, and why he killed Mac OS 9.



    Right. Except that NewtonOS and OS 9 were utterly alien (and in OS 9's case, incredibly constricting). This doesn't have to be. The technology behind Cocoa has already been there and done this. It will require change, but it will not require everyone to drop everything and start over if Apple does this right.



    On the other hand, there is the Big Problem of what to do with the old monolithic apps. They're common, some of them are bedrock, and some of them are designed that way because that's what makes the most sense for that particular application (although I think most are that way out of some combination of legacy and a need to work around Windows' lousy threading). I really don't see any way around this, and that's not good. At least a dual-processor SMP system runs them well enough.



    Quote:

Don't get me wrong, it's a fascinating discussion. But remember that consumer computing technology takes years to catch up with experimental trends.



    It's been years already. Distributed Objects is how old now? Mach is how old? Objective-C (whose basic paradigm involves passing messages) is how old? The difference is that now we have the ability to make cheap fabrics and tiny but full-featured processor cores. The network is the computer, and the computer is the network.
  • Reply 172 of 375
A little bird told me once that Apple is making a subnotebook in the near future. Maybe this has something to do with it.
  • Reply 173 of 375
nr9 · Posts: 182 · member
    Quote:

    Originally posted by Tomb of the Unknown

It means the English used in the bit quoted isn't yours. You did not write that. I'd like to know who did and what it was in reference to.



Heh. How do you tell that it's not mine? It's mine.



Did Tomb just come from Battlefront?
  • Reply 174 of 375
nr9 · Posts: 182 · member
I think Tomb thinks this is wrong because it is from Nr9.



I'm sorry, Tomb, but Nr9 does have sources. I am originally from Taiwan, and Taiwan is pretty leaky.
  • Reply 175 of 375
You know... life is real funny sometimes.

I told all of you, repeatedly, that Apple would NOT use the 970 chip in a PowerBook because it was too hot.

I asked all of you to think different.

I reminded you all that Apple going with IBM was a Hobson's choice, because although they needed the clock speed and power, they did not need the heat that such a chip would put out.

I even suggested that the rumoured "Mojave" or whatever they are calling it was a possible answer.

I pointed out that the industry is trending towards smaller, lighter, faster in regard to notebooks.

I was called a "troll" by certain longstanding members of this board, which I took in stride.

But then these very people turn around and engage in speculation that was alluded to by me.

I don't get it.

Either you believe or you don't.

I'm not an engineer, although I work in the electronics industry in Silicon Valley.

I'm not an expert.

But it's not too hard, in my opinion, to see where Apple is going.

Remember Dorsal?

He foretold of a "core" being mixed and matched with other components to make a custom CPU. Go look.

And we all know that Dorsal was the greatest AppleInsider to ever post on this board, hands down.

Sometimes I wonder about you guys. Do you really believe?

With Apple it's not usually a matter of if, but when... remember this, because it is very important.

Many of the products that have been released by Apple were rumoured quite a while ago but only recently released.

Apple does things when it suits... THEM.

THINK DIFFERENT!
  • Reply 176 of 375
amorph · Posts: 7,112 · member
    All right! Now we're getting to the really interesting questions. I can't even begin to claim that I can answer them in any absolute way, but I'll give it the old college try. Good post.



    Quote:

    Originally posted by Tomb of the Unknown

This is true as far as it goes, but the question is: which is more efficient? Which approach uses less silicon? Because you have to spend silicon one way or the other, either on broadband communications and NUMA memory systems or on SMP cache coherency logic.



    Right. My point was only that if you know you're going with one, there's no point spending silicon on the other as well. Either is cheaper than both. So it makes sense that there'd be no SMP support in a Cell core.



    Quote:

But what do you do about branch prediction? How do you handle dependencies that arise out of the execution of instructions? Do you "pre-process instructions" on a subset of cores?



    The 440 (or, realistically, any other Cell core) has a short pipeline, so branch prediction failure isn't nearly the problem that it is on a deep-pipelined CPU. You can throw a little silicon at the problem in much the same way the G3 and G4 do, or take advantage of the dual-core arrangement (in this case) and run 'em both. After all, SMT and superscalar designs are just adaptations of dual-core design to really big cores. The same tricks work in both cases.



    Quote:

Sure, in theory, this approach lends almost infinite flexibility and capacity. In practice, how do you compile an application to take advantage of this kind of logical fabric? How do you break up an application like Word or PowerPoint so that each runs on more than one core? And then how do you prioritize them? In HPC applications, there are usually "head nodes" that handle this; will you need the same in a 4-core laptop? An eight-core desktop?



    As I mentioned upthread, IBM's already taken a crack at auto-threading in their compiler. If anyone's tried it, I'd love to hear how well it works. I'd imagine that it's not smart enough to generate mutually dependent threads, but that would actually be an advantage in this context.
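For reference, the kind of loop such a compiler can split without programmer-visible threads is one with no cross-iteration dependencies. The build line below assumes IBM XL C's -qsmp=auto auto-parallelization option; how well it copes beyond trivial loops like this is exactly the open question:

Code:

/* build (assumed IBM XL C flag): xlc -O3 -qsmp=auto saxpy.c -c */
#include <stddef.h>

/* each iteration is independent, so the compiler is free to carve
   the index range into per-core chunks behind the programmer's back */
void saxpy(float a, const float *x, float *y, size_t n)
{
    size_t i;
    for (i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}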



    As far as coordinating it all, I don't know. If head nodes are necessary, they're necessary. If, on this scale, a tasker thread suffices, great. At merely 4 or 8 cores (well short of a typical HPC implementation!), it would be a waste to use an entire core as a manager.



    Quote:

What about ray tracing, where operations are more serial? Do you want all your ray tracing done on one slow core?



    This is going to sound like a punt, but it's really not: Since you're already engaging multiple machines here, and since IBM (for one) isn't dropping their POWER line or their 900 line and putting all their eggs in the Cell basket, it seems to me that someone who has to do a lot of serial work could get a CPU suited to that work and add it to the fabric.



    As for things like Word, I have no idea. I think it would be quite sensible to have a pervasively threaded word processor, but I wouldn't want to be the guy given Word's code base and asked to thread it. This is the Big Problem.
  • Reply 177 of 375
nr9 · Posts: 182 · member
This is not for the iBook. The iBook will continue to use the G4. This is for the PowerBook.
  • Reply 178 of 375
wizard69 · Posts: 13,377 · member
That is good to hear; why don't you pass on some more information?



Everything I've heard up to this point tells me this would be an ideal processor for the iBook line and not the PowerBook. If Apple has its sights on an even smaller portable, this technology would make even more sense.



    Dave





    Quote:

    Originally posted by Splinemodel

A little bird told me once that Apple is making a subnotebook in the near future. Maybe this has something to do with it.



  • Reply 179 of 375
amorph · Posts: 7,112 · member
    Very interesting, Splinemodel.



Yeah, it really does make more sense in an iBook or subnote. Anyone who tries ray tracing on that machine deserves what they get.



    If this variant architecture does happen, I think it's important to remember that Apple doesn't have to make it happen everywhere. One or two 970s will be a better choice in e.g. towers for a good while yet.



    I don't think this architecture style is limited to such light use, though, or Sony wouldn't be interested in building a console around it. Obviously, they've found ways to get some serious juice out of the implementation that I can't guess at - not even given a few minutes and a background in application software.
  • Reply 180 of 375
wizard69 · Posts: 13,377 · member
Hi Tomb, I will see if I can respond in a reasonable manner before running off to work.





    Quote:

    Originally posted by Tomb of the Unknown

    Yes, but that is not the assertion made in this thread. The assertion is not that the next Powerbook will use a previously undisclosed, low power chip developed for that purpose. The contention is that it will use 4 PPC 440 cores on an MCM. Now, if Nr9 can document how the 440 was used to create this new, low power chip I'm all ears. Instead, what he has done is to posit that IBM has added VMX and 440 FPU2 to a 440 core and put four of these on an MCM.







The 400 series is available as a core. You take your design automation tools, tack on a few functional units, compile, and send to the foundry. It's no real mystery, and those functional units can be derived from a library or a few of your own. I would not be surprised at this moment in time to hear that AltiVec exists as source code someplace.



Actually, I thought we started out with 2-core processors, but that doesn't really matter. The only thing that bothers me about MCMs is that they were expensive, but maybe that is not a problem at the volumes that Apple would use.



In any event, I would suspect that the 440 was used for prototype work. There is a good chance that a follow-on to the 400 series may actually make it into the design.

    Quote:



His answer to the lack of SMP support in the 440 core is that the OS will use an MPI implementation instead. (At first, I thought he didn't know what MPI was, but he seems to understand the implications.) Well, that would require retooling the entire OS from the kernel out. Oh, and by the way, while an MPI implementation is appropriate for a highly parallel application, there is some concern that it's not appropriate for general-purpose applications.



Well, that may be his answer, but what would happen if an MMU that supports SMP were tacked onto the core?



This is Unix; there is support for communication between processes already. I don't think you would see a major retooling of the operating system. In any event, I'm leaning toward a more traditional SMP system on a chip.
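A minimal illustration of that existing support, assuming nothing beyond POSIX: a parent process hands a message to a child over a pipe, the same send-bytes/receive-bytes model an MPI-style layer would be built on:

Code:

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                  /* child: the "worker" process */
        char buf[64];
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("worker received: %s\n", buf);
        }
        return 0;
    }

    close(fd[0]);                    /* parent: the "dispatcher" */
    const char *msg = "transform block 7";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);
    return 0;
}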

    Quote:



    The approach he describes is great for supercomputing applications where you are doing large scale matrix transforms which can be divided across multiple processors and the results later recombined for your solution. But that's not the reality of today's multi-threaded apps. What Nr9 is talking about is splitting a single thread across multiple processors which simply does not work. Every Mac application would have to be re-written from the ground up to make it work, and even then, most would not benefit from a distributed compute environment. (Too many dependencies to be able to split up the problem efficiently.)



If that isn't happening with some of today's applications, please explain what is. Frankly, I can't ever recall Nr9 saying that a single thread would run across several processors. I'm reasonably sure that was someone else, because I responded to that specific post.

My position is that it would be easiest for Apple to go the SMP route on the new chip implementation due to the leveraging of existing software. It would certainly give you the best bang for the buck in the short term. But, and it is a big but, SMP does not scale forever, and not all programs can really make use of it. At some point, multiple independent processing units that communicate amongst themselves may be a better idea. In effect, a cluster of SMP units on one motherboard, MCM, or SOC.

    Quote:



    So what he is proposing amounts to a mobile computer that's built to run supercomputer applications. Which is plausible, I suppose, if you are willing to cede that government agencies and large academic institutions are going to be porting these applications to Mac laptops. Otherwise, I think it's a lot of speculation piled on top of a lot of wishin' and hopin'.



This is not a spooky government project. It has the potential to solve a number of issues related to low-power operation and high performance. Why you believe that a bunch of porting will need to be done is beyond me. Sure, some system-level stuff will have to be done, but there is no reason at all that all traditional programming models could not be supported. What you would be doing is evolving the machine, not building a new one.

    Quote:



    Fine, but then it's not a 440 anymore. Sure, Apple and IBM could be working on something that makes sense. But the 440 ain't it.



That is like saying that if the 970 comes out with a larger cache, it's not the 970 anymore. Sure, it is improved, but overall it maintains the same profile. The reason the 440 would not match whatever Apple delivers has more to do with them using a CORE and not the 440 itself. Think of 440 as a reference to the processor series.

    Quote:



And the 440 doesn't do multithreading. It has no SMP support.



OK, explain what multithreading has to do with SMP. Further, do you need SMP to support multithreading? <<<<Trick question>>>>
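A minimal POSIX threads sketch of why it's a trick question: multithreading is a software model that runs fine time-sliced on a single, non-SMP core; SMP merely lets the threads execute simultaneously:

Code:

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    /* on a uniprocessor these threads interleave; on SMP they overlap */
    printf("thread %ld ran\n", (long)arg);
    return NULL;
}

int main(void)
{
    pthread_t t[2];
    long i;
    for (i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    return 0;
}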

    Quote:



    Yes, well, a PPC970 varient is still a viable candidate for a low power, high performance mobile solution IMHO. And Centrino markets a chip set, not a CPU. The CPUs in Centrinos are mobile versions of the P4 IIRC.



I'm becoming less and less a believer that the 970 will ever be put into a laptop. When it first came out I couldn't wait for a 970-based laptop; now that the excitement has calmed down, I don't see it as possible in a true portable laptop. The shrink to 90nm will not drop power usage enough on its own.

    Quote:



    Possibly. But the speculation (that's all there is) has been going on for how many years now? With what to show for it? I don't think this is likely, but you never know.



    Yes, they do. Too bad the 440 doesn't support SMP.



This doesn't mean that Apple/IBM couldn't deliver a variant that does. It also doesn't mean that alternative approaches cannot be used.

    Quote:



    You're right, Apple needs better power management and better battery technology in their laptops. And which do you think you will see first? A completely rewritten OS (that only supports laptops with no available applications) or incremental improvements in battery and power management?



Again, explain the "no available applications for a laptop" claim.

    Quote:



    Oh there's no doubt that Apple is already working on next generation systems. But that's not the assertion made in this thread.



Well, we can't go out and buy any of these systems today. Hell, we may never be able to buy them. These are rumors and wild-ass guesses, you know.

    Quote:



    Ask yourself this: Why would Apple invest the effort just for the Powerbook line? Do you think they are willing or able to force developers to adapt to a new instruction set just for the Powerbook line? To maintain two separate code bases (one for PBs, one for desktops)?



All of this is applicable to the entire product line, especially the new Xserves, which by the way are taking way too long to come out.



Please give up on the separate code base thought, would you? It shows a complete lack of understanding of what is possible. Believe me, many things are possible.



Even worse, what is this talk about a new instruction set!!! We have been talking PPC since the beginning of this thread.

    Quote:



    Yes, I know it's the year of the Powerbook, but this is still the tail wagging a very large dog.



Hey, this is a rumor, with a lot of people exploring the possibilities. Some are open-minded and others are a bit thick; to each his own. There are several sound paths that Apple could take its new hardware down, and this is just one possibility.



    Thanks

    Dave


