PowerBook G5


Comments

  • Reply 321 of 375
    wizard69wizard69 Posts: 13,377member
    Quote:

    Originally posted by Amorph

    C and C++ do not do parallelism. They are designed to assume that there is one CPU. All means to distribute work across multiple CPUs or cores have to be provided outside of those languages, using system libraries. 99% of the time, it's done manually by the programmer, and the threads are very coarse because the application only expects 1 or 2 large, fast CPUs.







    Agreed, C/C++ are not natively threaded, but that has nothing to do with the fact that there are a great many threaded applications available for OS X right now. The number of threads an application generates has nothing to do with the number of CPUs that application will run on. It is not unheard of for an application to generate ten or twenty threads and run on one processor. How effectively additional processors can be used is highly dependent on the problem domain and the OS.
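    The thread-count-versus-CPU-count point can be shown with a minimal POSIX threads sketch (function names here are invented for illustration): the same code runs unchanged whether the machine has one processor or eight, because the OS, not the language, maps threads onto CPUs.

```c
#include <pthread.h>

/* Trivial worker; the OS scheduler decides which CPU (if there are
   several) each thread actually runs on. */
static void *worker(void *arg) {
    (void)arg;
    return NULL;
}

/* Spawn n threads and wait for them all; returns how many were
   created. Nothing here depends on the number of CPUs present. */
int spawn_workers(int n) {
    pthread_t tid[32];
    if (n > 32)
        n = 32;
    for (int i = 0; i < n; i++)
        if (pthread_create(&tid[i], NULL, worker, NULL) != 0)
            return i;   /* creation failed partway through */
    for (int i = 0; i < n; i++)
        pthread_join(tid[i], NULL);
    return n;
}
```

    Twenty threads on a single-CPU machine is perfectly legal; they simply time-slice instead of running in parallel.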

    Quote:



    If you want that application to run on 4 small CPUs (or possibly many more in the case of Cell) a mere recompile would be the best case. The more execution cores you have to target, the more redesign and reimplementation you'd have to do. Because there is no provision in C or in C++ for dividing up work, and historically there's been no need, except in supercomputing applications, this is not easy. In particular, debugging and troubleshooting threaded code is exasperating, and the exasperation increases supralinearly with the number of threads.



    Well, again, the amount of reimplementation that has to be done is highly dependent on the problem at hand. Some applications could use the additional resources immediately. Some developers will never take upon themselves the effort to multithread their application.



    The issues with threaded code and parallel supercomputing are well known. The point is that these techniques do allow the delivery of applications that process information in a timely manner. Yes, at times the effort to debug and deliver multithreaded applications is rather involved, but not all applications need massive and complex threading. Many applications can and do provide significant user-experience improvements with simple threading implementations.
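    A sketch of what "simple threading" buys (all names invented, not from any shipping application): push one long-running job onto a background pthread so the main thread stays responsive, and only block at the point where the result is actually needed.

```c
#include <pthread.h>

/* Long-running job: sum 1..n, writing the result back through arg. */
static void *sum_job(void *arg) {
    long n = *(long *)arg;
    long total = 0;
    for (long i = 1; i <= n; i++)
        total += i;
    *(long *)arg = total;
    return NULL;
}

/* The "UI" thread hands the job off, is free to keep servicing
   events, and collects the answer only when it needs it. */
long run_in_background(long n) {
    long slot = n;
    pthread_t bg;
    if (pthread_create(&bg, NULL, sum_job, &slot) != 0)
        return -1;
    /* ... main thread free to do other work here ... */
    pthread_join(bg, NULL);   /* block only at the point of use */
    return slot;
}
```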



    Even if we have a single-threaded application, that application will still benefit from running on a multiprocessor machine. With other processors available for such things as system functions and display-server code, any application benefits from more than one processor being available on the Mac. Let's face it, the PowerMac before the G5 did a rather credible job of making use of the processors it had available to it. If the 970 is a no-go in the PowerBook, I see no reason why a bit of retrenching to this sort of arrangement won't work. Sure, you may not have the ultimate in single-task performance, but the overall user experience is pretty good.

    Quote:

    It's not an issue now because the only dual-processor machines also happen to use the fastest available chips, so threading is a luxury. On the platform proposed in this thread, it would become a necessity, and that would have a tremendous impact on application design - especially among the big, monolithic Carbon applications originally designed for an OS that had mediocre threading support bolted on late in its life. Furthermore, an application designed to spread itself over a large number of individually weak chips won't run as well on a platform built on one or two fast chips, because threading carries overhead.



    Threading is already a development issue for anybody targeting the professional line. The big but is that this machine does not force threading and parallel coding techniques onto a developer any more than the G5 PowerMacs do. It is a feature that is there to be taken advantage of; many applications will benefit without even knowing it, because system resources use the facilities provided by the additional processors.



    But your argument about the number of processors really holds no weight. What is going to happen when they start to put dual-core chips into the G5 PowerMac and there are 4 real processors to deal with? If those 4 processors support some sort of multithreading, you could end up with 8 or more logical processors on a PowerMac in as little as a year or two. Even with 8 or 16 logical processors, a PowerMac is still going to be able to run single-threaded applications and benefit somewhat from all of that additional processing capability. Granted, this PowerMac is a tightly coupled system, but that does not mean that a loosely coupled system cannot also deliver similar benefits to the user.

    Quote:



    This makes no sense. Professional applications are optimized for professional machines, not "desktops" or "laptops". PowerBooks are professional machines. Apple pitches them that way, and people use them that way. A PowerBook that couldn't run AltiVec-heavy apps like Photoshop, DVD Studio Pro, Final Cut Pro, Logic, etc., would not sell at anywhere near the rate of the current PowerBook.



    Well, I consider AltiVec an imperative also, so I would have to hope that Apple does too. The problem with your position is that there is now a huge gap between the laptop hardware and the desktop hardware. Like it or not, many of the applications you mentioned above do perform remarkably better on an SMP machine of any generation than they do on a laptop. A multiprocessor laptop could help to close that gap a bit. But a multiprocessor machine won't come to market with the current generation of power-hungry chips. This is what makes a 440-derived laptop so interesting: two or four SOCs running at, let's say, 1.2 GHz could make a multiprocessor laptop a reality.



    IBM currently has an 800 MHz 440 SOC that uses about 4 watts of power. Let's say that through a combination of shrinking the die, design improvements, speed increases and feature additions (an AltiVec unit, an FPU) they can deliver a chip running at 2 watts at 1.2 GHz with performance similar to a G4, maybe better. Put a few of these into a laptop and you won't get a lot of complaints.

    Quote:



    The market is moving to notebooks, generally. Apple sold an unheard-of 197,000 PowerBooks last quarter. This is exactly the wrong time to start gimping them.



    Yes, and this is a problem that Apple has to deal with as the current PowerBook ages. I'd love to see a 970 in a PowerBook tomorrow, but I don't think that will happen any time soon, at least not in a machine that will leave me happy about battery life. This Cell-based approach does provide interesting speculation as to a follow-on to the PowerBook. I could just as easily see this approach adapted to the iBook, though, especially if they manage to actually deliver a 970-based PowerBook.

    Quote:



    One more time. Cell, upper-case-C, is a particular implementation shipping from IBM soon. Cell, lower-case-c, is a noun referring to a self-contained entity. Cellular computing is a concept. They are three different, if not unrelated, things.



    I have to disagree. This is not how I interpret IBM's documentation, as thin as it is. As you have stated, though, this should all be cleared up shortly. My take is that Cell in any form is a concept which will be applied to a number of devices.

    Quote:

    As to the suitability of Xgrid: Generally, you can choose between a solution that scales up well and a solution that scales down well. Xgrid is suitable for powerful CPUs connected by (relatively) low bandwidth. Because of that, it is less suitable for weaker CPUs connected by high bandwidth. As soon as you start distributing computations across nodes, you have to become sensitive to both available resources and available bandwidth, because any inefficiency will squander a surprising amount of your available power. A solution for powerful CPUs with low bandwidth interconnects is only suitable for that situation.



    While Xgrid may be designed for the problem domain you describe, there is no reason why it cannot be extended to local computing resources. You still have all the problems you described with the low-bandwidth approach; it is just that your bandwidth is much higher and your resources generally local.
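    The bandwidth sensitivity described above reduces to a back-of-envelope rule (the numbers and function names below are purely hypothetical): farming a chunk of work out to another node only pays off when its compute time plus the transfer time beats doing the work locally.

```c
/* Estimated wall-clock cost of shipping work to a remote node:
   compute time there plus time to move the data over the link. */
double remote_cost(double work_ops, double node_ops_per_sec,
                   double data_bytes, double link_bytes_per_sec) {
    return work_ops / node_ops_per_sec
         + data_bytes / link_bytes_per_sec;
}

/* 1 if distributing beats computing locally, else 0. */
int worth_distributing(double work_ops, double local_ops_per_sec,
                       double node_ops_per_sec,
                       double data_bytes, double link_bytes_per_sec) {
    double local = work_ops / local_ops_per_sec;
    return remote_cost(work_ops, node_ops_per_sec,
                       data_bytes, link_bytes_per_sec) < local;
}
```

    With a fast local interconnect the transfer term shrinks, which is exactly why the same scheme looks better inside one box than across a slow network.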



    It probably would be more sensible to extend the operating system to support scheduling of the local resources directly. As I mentioned in another response, this whole concept reminds me of the Transputer family of chips from years gone by. Unfortunately that was a concept that never took off, but it sounds a lot like Cell as STI documentation at this early stage describes it. It will be interesting to see how STI overcomes the failings that the Transputer had.



    It is pretty much wait and see. IBM has announcements scheduled from the 22nd of January through the middle of February. It will be interesting to see what they have been up to and to see if any of it is Cell related.

  • Reply 322 of 375
    What would make me happy with the next rev PowerBooks?

    I think the G4 has a long life in the portable form factor... remember now... the G4 can go dual as well.

    Now that the iBook has the G4, I think it's about time the PowerBook distinguished itself from the consumer machine. I think with 90nm fabs, it's about time we see dual-processor (G4) PowerBooks, with roughly the same battery life as we get now. Personally I don't think the time is right for a G5 in a portable, and I wouldn't buy one unless it had comparable battery life.



    Dual G4 PowerBooks!!! Gimme gimme gimme...
  • Reply 323 of 375
    zapchudzapchud Posts: 844member
    Quote:

    Originally posted by wizard69

    But your argument about the number of processors really holds no weight. What is going to happen when they start to put dual-core chips into the G5 PowerMac and there are 4 real processors to deal with? If those 4 processors support some sort of multithreading, you could end up with 8 or more logical processors on a PowerMac in as little as a year or two. Even with 8 or 16 logical processors, a PowerMac is still going to be able to run single-threaded applications and benefit somewhat from all of that additional processing capability. Granted, this PowerMac is a tightly coupled system, but that does not mean that a loosely coupled system cannot also deliver similar benefits to the user.





    Yes, the argument actually holds a lot of weight. When you get 8 logical processors in the PowerMac, the processors themselves will be so powerful that the applications won't need to be that heavily threaded. Most problems could easily be solved using one logical processor, and a whole lot of problems would be solved that way. Then there would be a lot of problems in professional applications that'd be solved by two threads. The "8 processor" PowerMac would do that easily, because the processors would be so powerful in the first place.



    A handful of apps would be threaded heavily enough to use four processors. The PowerMac would be extremely fast at these. And this is the point where small quad systems like the proposed PowerBook here would start to be really useful. Heavy threading is a necessity (to steal Amorph's word) for such a machine.



    It would of course, as Nr9 says, require a whole new programming model. And that for a PowerBook? I'm having a hard time believing in this even if they switched all lines of computers to this programming model. Developers have gone through quite a lot already, and so have Apple's customers.



    I could see the programming model being used for some problems, but it would be disastrous for all the apps that'd have to be ported to the platform. They would have to be rewritten, rethought, re-implemented, and re-debugged, and you'd end up with two, or maybe three, very different code bases: one for the original platform, one for the PowerBook, and one for the PowerMac (unless it were made to use the same tech as the PowerBook, but there are a lot of ifs in here already). I don't see how this is feasible.



    So my argument is: to have such a machine work as a professional laptop is supposed to, you'd have to have quite powerful processors to begin with. The G4, more specifically the IBM "750VX", or whatever they choose to call it, is such a beast. Give it a decent bus, add another processor to the equation (if this is even needed), and you'll have this problem quickly solved already.
  • Reply 324 of 375
    wizard69wizard69 Posts: 13,377member
    Quote:

    Originally posted by Zapchud

    Yes, the argument actually holds a lot of weight. When you get 8 logical processors in the PowerMac, the processors themselves will be so powerful that the applications won't need to be that heavily threaded. Most problems could easily be solved using one logical processor, and a whole lot of problems would be solved that way. Then there would be a lot of problems in professional applications that'd be solved by two threads. The "8 processor" PowerMac would do that easily, because the processors would be so powerful in the first place.







    Utter garbage!



    There are classes of problems that we will never have enough power to solve. Granted, many of these problems are not typical of the workload placed on most PCs these days. The difference is that Apple is going after the atypical power user with these machines.



    Further, for general usage many applications are limited by the processor power available to them. The big item here is games, believe it or not.

    Quote:



    A handful of apps would be threaded heavily enough to use four processors. The PowerMac would be extremely fast at these. And this is the point where small quad systems like the proposed PowerBook here would start to be really useful. Heavy threading is a necessity (to steal Amorph's word) for such a machine.



    It would of course, as Nr9 says, require a whole new programming model. And that for a PowerBook? I'm having a hard time believing in this even if they switched all lines of computers to this programming model. Developers have gone through quite a lot already, and so have Apple's customers.



    I could see the programming model being used for some problems, but it would be disastrous for all the apps that'd have to be ported to the platform. They would have to be rewritten, rethought, re-implemented, and re-debugged, and you'd end up with two, or maybe three, very different code bases: one for the original platform, one for the PowerBook, and one for the PowerMac (unless it were made to use the same tech as the PowerBook, but there are a lot of ifs in here already). I don't see how this is feasible.



    So my argument is: to have such a machine work as a professional laptop is supposed to, you'd have to have quite powerful processors to begin with. The G4, more specifically the IBM "750VX", or whatever they choose to call it, is such a beast. Give it a decent bus, add another processor to the equation (if this is even needed), and you'll have this problem quickly solved already.



    I have no doubt that a dual-processor machine would be rather powerful in a portable form factor. The issue becomes whether the power usage will be manageable. As to this sort of machine solving all of the world's problems, I don't buy it, but it would be a remarkable improvement.



    My point has always been that Nr9's described machine is very interesting. It is a machine that I could see Apple having in a development lab. For a variety of reasons I do not see Apple bringing the machine to market, at least not as a PowerBook.



    What I've tried to point out, though, is that multiprocessing is the wave of the future. There have been a lot of pointers to a future of dual-core chips; if Apple were to put dual-core chips into the iMac or its follow-on, the world of single-processor machines would come to a quick halt. Likewise with a multithreaded chip. Parallel execution of multithreaded applications is the future, but there is little reason to expect that all of this will be done exclusively on SMP machines.



    I'm also bothered by the continued one-to-one association of threads with processors. This is not the case, folks; it is very possible to have multiple threads executing on one processor. The benefits may not be the same as having the OS spread the threads across several processors, but the application is still multithreaded, just not executing in parallel.
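    A small sketch of that point (names invented): the program below is equally multithreaded whether the OS runs its threads on one processor or four; only the wall-clock time changes, never the result.

```c
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter;

/* Each thread bumps the shared counter reps times under the mutex. */
static void *bump(void *arg) {
    long reps = (long)arg;
    for (long i = 0; i < reps; i++) {
        pthread_mutex_lock(&lock);
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Correctness is independent of how many CPUs the threads land on;
   the scheduler may interleave them on one processor or run them
   truly in parallel on several. */
long run_counter(int nthreads, long reps) {
    pthread_t tid[16];
    if (nthreads > 16)
        nthreads = 16;
    counter = 0;
    for (int i = 0; i < nthreads; i++)
        pthread_create(&tid[i], NULL, bump, (void *)reps);
    for (int i = 0; i < nthreads; i++)
        pthread_join(tid[i], NULL);
    return counter;
}
```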



  • Reply 325 of 375
    powerdocpowerdoc Posts: 8,123member
    Quote:

    Originally posted by Zapchud

    Yes, the argument actually holds a lot of weight. When you get 8 logical processors in the PowerMac, the processors themselves will be so powerful that the applications won't need to be that heavily threaded. Most problems could easily be solved using one logical processor, and a whole lot of problems would be solved that way. Then there would be a lot of problems in professional applications that'd be solved by two threads. The "8 processor" PowerMac would do that easily, because the processors would be so powerful in the first place.



    A handful of apps would be threaded heavily enough to use four processors. The PowerMac would be extremely fast at these. And this is the point where small quad systems like the proposed PowerBook here would start to be really useful. Heavy threading is a necessity (to steal Amorph's word) for such a machine.



    It would of course, as Nr9 says, require a whole new programming model. And that for a PowerBook? I'm having a hard time believing in this even if they switched all lines of computers to this programming model. Developers have gone through quite a lot already, and so have Apple's customers.



    I could see the programming model being used for some problems, but it would be disastrous for all the apps that'd have to be ported to the platform. They would have to be rewritten, rethought, re-implemented, and re-debugged, and you'd end up with two, or maybe three, very different code bases: one for the original platform, one for the PowerBook, and one for the PowerMac (unless it were made to use the same tech as the PowerBook, but there are a lot of ifs in here already). I don't see how this is feasible.



    So my argument is: to have such a machine work as a professional laptop is supposed to, you'd have to have quite powerful processors to begin with. The G4, more specifically the IBM "750VX", or whatever they choose to call it, is such a beast. Give it a decent bus, add another processor to the equation (if this is even needed), and you'll have this problem quickly solved already.




    Good point. When we see the amount of software coming from the PC world poorly optimised for the Mac (that's why many tests are in favor of the PC), it's obvious that nobody will heavily optimise code (because it's a very special way of programming) for a bunch of laptops.



    In my opinion the 750VX will be a perfect chip for a laptop. My vision of the near future is G5 for desktops (iMac and PowerMac), 750VX for laptops (iBook and PowerBook).



    Dual-core chips will not reach the consumer PC market until the 65 nm process is available.



    Here is my guessed roadmap.



    Towers: G5 90nm for 2004;

    G5 Power5-derived (1MB cache and multithreading) for 2005;

    dual-core version of the latest chip for 2006.



    Laptops: 750VX for 2004 and 2005;

    dual-core 750VX variant or 65nm G5 Power5-derived for 2006.



    iMac: G5 90nm for 2004;

    G5 Power5-derived for 2005;

    dual-core version of the latest chip for 2006.



    The tower line will have an entry single-chip model; the others will be dual.
  • Reply 326 of 375
    Quote:

    Originally posted by wizard69

    Utter garbage!



    There are classes of problems that we will never have enough power to solve. Granted, many of these problems are not typical of the workload placed on most PCs these days. The difference is that Apple is going after the atypical power user with these machines.



    Further, for general usage many applications are limited by the processor power available to them. The big item here is games, believe it or not.





    What part of that was 'utter garbage'?

    I absolutely agree with what you're saying here, I'm not sure what we're arguing about here :-)
  • Reply 327 of 375
    airslufairsluf Posts: 1,861member
    Quote:

    Originally posted by Amorph

    C and C++ do not do parallelism. They are designed to assume that there is one CPU. All means to distribute work across multiple CPUs or cores have to be provided outside of those languages, using system libraries. 99% of the time, it's done manually by the programmer, and the threads are very coarse because the application only expects 1 or 2 large, fast CPUs.



    If you want that application to run on 4 small CPUs (or possibly many more in the case of Cell) a mere recompile would be the best case. The more execution cores you have to target, the more redesign and reimplementation you'd have to do. Because there is no provision in C or in C++ for dividing up work, and historically there's been no need, except in supercomputing applications, this is not easy. In particular, debugging and troubleshooting threaded code is exasperating, and the exasperation increases supralinearly with the number of threads.







    It's not an issue now because the only dual-processor machines also happen to use the fastest available chips, so threading is a luxury. On the platform proposed in this thread, it would become a necessity, and that would have a tremendous impact on application design - especially among the big, monolithic Carbon applications originally designed for an OS that had mediocre threading support bolted on late in its life. Furthermore, an application designed to spread itself over a large number of individually weak chips won't run as well on a platform built on one or two fast chips, because threading carries overhead.





    Amorph, I'll quibble just a tad here. Languages in general do not do parallelism, making the choice of any particular language kind of immaterial. Java is a semi-special case: Java is really walking the tightrope between being a language and an API set. If that is an unobjectionable stance, then C/C++-derived APIs are not far-fetched for multithreading. Just another half baby-step has a non-OS supplier providing a library of these API-like multithreading tools.
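    To illustrate that last half baby-step, here is what such a library-supplied tool might look like in plain C on top of pthreads (a sketch; every name here is invented): an API-level "parallel map" that hides the thread bookkeeping from the caller.

```c
#include <pthread.h>

struct chunk { double *data; int lo, hi; double (*f)(double); };

/* Apply f to one contiguous slice of the array. */
static void *apply_chunk(void *arg) {
    struct chunk *c = arg;
    for (int i = c->lo; i < c->hi; i++)
        c->data[i] = c->f(c->data[i]);
    return NULL;
}

/* API-like tool: map f over data[0..n) using up to nthreads threads,
   one chunk per thread. */
void parallel_map(double *data, int n, int nthreads,
                  double (*f)(double)) {
    pthread_t tid[8];
    struct chunk ck[8];
    if (nthreads > 8)
        nthreads = 8;
    int step = (n + nthreads - 1) / nthreads;
    for (int t = 0; t < nthreads; t++) {
        ck[t].data = data;
        ck[t].f = f;
        ck[t].lo = t * step;
        ck[t].hi = (t + 1) * step < n ? (t + 1) * step : n;
        pthread_create(&tid[t], NULL, apply_chunk, &ck[t]);
    }
    for (int t = 0; t < nthreads; t++)
        pthread_join(tid[t], NULL);
}

/* Example use: double every element of a small array, then sum it. */
static double dbl(double x) { return 2.0 * x; }

double sum_after_doubling(void) {
    double a[4] = { 1, 2, 3, 4 };
    parallel_map(a, 4, 2, dbl);
    return a[0] + a[1] + a[2] + a[3];
}
```

    The caller never touches a pthread_t; that is the sense in which the tooling, not the language, carries the parallelism.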



    The key to making massive parallelism work as a widely used commodity programming paradigm is the set of design tools available, much more so than a particular language, as a capable lower-level language like C can write these tools as well. You know how well suited C is to such a task; just look at C++, Obj-C and Java as the three biggest progeny of Bell Labs' original baby (some directly, others like Java more circuitously).



    Threading is fast leaving the luxury realm and becoming necessary. SMT is barreling towards us from both IBM and Intel; suddenly a dual processor or dual core can look like 4, and that will be the mainstream in the next couple of years.
  • Reply 328 of 375
    wizard69wizard69 Posts: 13,377member
    The implication was that processors will become so powerful that only one or two threads of execution will be needed. It is my position that we will not see in our lifetimes a PC, made by anybody, that is so powerful it would meet the needs of every user. The future has been pretty much laid out before us; it is just a matter of having the hardware delivered. That future will be multiprocessor, with processors supporting multithreaded execution.



    Let's face it, Apple has been optimizing for multiprocessing for years now. Frankly, it is the only good thing that Motorola ever did for Apple: the poor performance of the G4 forced Apple to apply SMP to keep performance on a par with Intel. This resulted in an OS that takes advantage of the 970-series processors to a far greater extent than any comparable desktop OS.



    Even with the fantastic support for multiple processors, OS X and the 970s are still only on a par with Intel hardware. For many applications this is not good enough, as the huge increase in the installation of cluster computers indicates. These days you have everybody from genetic researchers to race-car teams trying to get realtime results from clusters of computers; we are far from having computers that are fast enough. Each time we see an incremental increase in the performance of PCs, or recently the PowerMac, new markets open up for the hardware as the economics change.



    Beyond the issue of us ever having computers that are powerful enough, I'd have to say yes, we agree.



    Dave



    Quote:

    Originally posted by Zapchud

    What part of that was 'utter garbage'?

    I absolutely agree with what you're saying here, I'm not sure what we're arguing about here :-)




  • Reply 329 of 375
    Quote:

    Originally posted by wizard69

    The implication was that processors will become so powerful that only one or two threads of execution will be needed. It is my position that we will not see in our lifetimes a PC, made by anybody, that is so powerful it would meet the needs of every user. The future has been pretty much laid out before us; it is just a matter of having the hardware delivered. That future will be multiprocessor, with processors supporting multithreaded execution.



    Oh, I'm sorry, I might not have been clear enough on that. :-)



    My point was not that the processors will be powerful enough to solve any given problem with only one or two execution threads. The processors will be fast enough to sustain a good enough performance level relative to what is expected of the computer, so that the computer will not be perceived as slow compared to its competition.



    I think we agree on this.
  • Reply 330 of 375
    wizard69wizard69 Posts: 13,377member
    One thing about competition is that it never stands still. It is very hard to project who will be the technology leader 5 years down the road. After all, who would have suspected that Intel would have tripped up with respect to getting out a 90nm processor?



    As to computers, if we don't continue to expect more from them, the market will quickly stagnate. The expectation that one will be able to buy a faster machine every year allows software development technology to continue to move forward. It is the development of software that takes advantage of the latest processor capabilities that drives the marketplace.



    One just has to look at the simple realm of the games industry. Without the development of hardware to allow the deployment of more advanced software tools and applications, there would be little that is new in the field of games. Same with many real industries: processor power allows one to do things in the future that can only be dreamed about today.



    Quote:

    Originally posted by Zapchud

    Oh, I'm sorry, I might not have been clear enough on that. :-)



    My point was not that the processors will be powerful enough to solve any given problem with only one or two execution threads. The processors will be fast enough to sustain a good enough performance level relative to what is expected of the computer, so that the computer will not be perceived as slow compared to its competition.



    I think we agree on this.




  • Reply 331 of 375
    snoopysnoopy Posts: 1,901member
    Quote:

    Originally posted by wizard69





    . . One just has to look at the simple realm of the games industry. Without the development of hardware to allow the deployment of more advanced software tools and applications, there would be little that is new in the field of games. . .









    Interesting that you mention games. If I understand correctly, the Xbox will be using the IBM Power5 derivative, the 975 or whatever it will be called. MS obviously wants a lot of processor power in their next game machine. So I would expect Sony to have a similar goal for the PlayStation 3, and Sony is going with a PPC Cell architecture. It will be interesting to see how these two compare.
  • Reply 332 of 375
    Quote:

    Originally posted by snoopy

    Interesting that you mention games. If I understand correctly, the Xbox will be using the IBM Power5 derivative, the 975 or whatever it will be called. MS obviously wants a lot of processor power in their next game machine. So I would expect Sony to have a similar goal for the PlayStation 3, and Sony is going with a PPC Cell architecture. It will be interesting to see how these two compare.



    I heard the same about Microsoft. Personally I don't think processors are a big factor when it comes to graphics performance for games on consoles. I mean, having a 975/970 in a gaming machine is overkill!! This thing will be able to crunch numbers at a rate that video editors or genetic researchers need, hardly comparable to gaming AI, imo.



    Either way... it's Microfluffy... who gives a rat's arse?
  • Reply 333 of 375
    jubelumjubelum Posts: 4,490member
    Upon using that phrase one too many times, I received a plastic rat's arse in the mail from an anonymous admirer... with a loving note. I, for one, can now "give a rat's arse"...



  • Reply 334 of 375
    henriokhenriok Posts: 537member
    Quote:

    Originally posted by Hawkeye_a

    I mean having a 975/970 in a gaming machine is overkill!!



    Just like a 733 MHz Celeron is overkill when it comes to gaming today? The Xbox is about two years old, and everyone would probably agree that all the game consoles would do better if their CPUs and GPUs were more powerful. Is there anyone who thinks that future games like Doom III and Halo 2 would run smoothly with excellent graphics on today's consoles? And they are just two or three years old.



    Xbox 2 is due for release in 2005/6, i.e. two years from now, so why would a CPU that's already half a year old today be overkill then? And four or five years from now? Will it be overkill then?



    Microsoft would be wise not to choose a CPU that's low-end compared to contemporary CPUs for computers. 4-5 GHz might seem like a lot today, but it won't in 2007-8.
  • Reply 335 of 375
    wizard69wizard69 Posts: 13,377member
    Oh come on now, snap open a game console and look inside. There really isn't much to them other than a processor, a GPU and a little memory. The processor directly affects the performance of the console; there is no other way to look at it.



    Granted, FP performance may not be everything in a game, but it sure does help the graphically intense ones. The whole point, though, is that the extra performance means fewer restrictions for the developers, who can thus provide new functionality. Even a good old game of chess on a console can use a significant amount of new computing resources to provide more challenging play. So even old games can benefit from increasing CPU horsepower.



    The reality is that a 970 or its follow-on in a console is just an incremental step to allow software developers to realize their potential. A 97xyz is not going to be the end of it even if they hit 5 GHz next year. Admittedly, the software developers may lag by a few months with respect to making good use of that power. Nonetheless, after a blockbuster of a game comes out that does use that power, everybody will be clamoring for faster consoles. It's the way of the industry.



    Dave



    Quote:

    Originally posted by Hawkeye_a

    I heard the same about Microsoft. Personally I don't think processors are a big factor when it comes to graphics performance for games on consoles. I mean, having a 975/970 in a gaming machine is overkill!! This thing will be able to crunch numbers at a rate that video editors or genetic researchers need, hardly comparable to gaming AI, imo.



    Either way... it's Microfluffy... who gives a rat's arse?




  • Reply 336 of 375
    Yeah well... good for Xbox devotees then, I guess. I couldn't 'give a rat's arse' anyway. Hehe... MicroFluffy can come out with a console that wipes my arse and I won't buy it. I'm not one of those pale bald dudes from '1984'.



    As long as Microsoft (or any company) subsidizes their product to the extent that it exterminates the competition, they won't get my dollars.



    Besides, if I want games I'll get them from the people who do it best... Nintendo.



    Cheers.
  • Reply 337 of 375
    Sorry if this has all been mentioned; I read the first page or so, then decided to post.



    About this 2- or 4-chip PPC440 system with modular AltiVec, Ethernet, etc., here are my thoughts:



    1. I like it.

    2. It could be a thin client.

    3. If it's for a portable, then maybe they'll drop the Gx series labels for them.

    4. Go back to iBook, PowerBook plain and simple.

    5. Maybe it'll be a total sub notebook or tablet (i know!)

    6. Maybe it'll be a Rendezvous-based server admin device for sysadmins.



    For OS X to handle all that SMP action, don't they just have to get the kernel to arbitrate it all and let the rest sit on top? It's pretty hardware-independent like that.



    Anyway, my 2c worth...
  • Reply 338 of 375
    OK, now I've read a few more postings.



    Response to some PowerBooks/Portables postings:



    "iBook G4 hints at something coming"

    It hints at the fact that the G3 version wasn't selling well enough in the run-up to Christmas.



    "Apple's portables run SO much hotter than Intel/AMD's"

    Really? Ever hear any news stories of burn victims from PowerBooks? No, but you get plenty of Dell victims.



    "IBM have a secret project that they've kept under wraps"

    No. Apple do this. IBM don't. They announced the 970 a year before we saw it. We all knew the phrase GP-UL way back.

    If there is something it'll be based on an existing development stream.



    "Lot's of software work needed to get things running on this new system"

    Nope. Just a kernel recompile and some .kext work. The rest sits on top.



    "January is too soon to launch anything"

    They've surprised us too many times to count.
  • Reply 339 of 375
    Quote:

    Originally posted by Hawkeye_a

    Besides, if I want games I'll get them from the people who do it best... Nintendo.



    Cheers.




    You'll be eating those words.
  • Reply 340 of 375
    amorph Posts: 7,112member
    Quote:

    Originally posted by wizard69

    Agreed, C/C++ is not natively threaded, but that has nothing to do with the fact that there are a great many threaded applications available for OS X right now.



    It does have to do with the way they're threaded, and the number of threads they spawn, though.



    Hardly any of them - if any of them at all - try to split up a computationally intense procedure over multiple processors, for example, which would be necessary to support a platform of multiple CPUs which are individually weak.



    The problem has to do with the fact that there are a lot of obstacles to pervasive threading, and no current incentives. Currently, you can always fall back on a single, powerful CPU.



    Quote:

    Let's face it, the PowerMac before the G5 did a rather credible job of making use of the processors it had available to it.



    The processors were individually powerful. That's the crux of my argument. I'm not talking about parallelism, I'm talking about a paradigm shift from small groups of one or more individually powerful CPUs to large groups of many individually weak CPUs. That change requires a completely different approach to programming that is poorly accommodated now. The research has been done on how to write for that style of architecture, but who uses the result?



    Quote:

    Threading is already a development issue for anybody targeting the professional line. The big "but" is that this machine does not force threading and parallel coding techniques onto a developer any more than the G5 PowerMacs do.



    Yes it does! If you fail to take advantage of the SMP feature in the G5, you get the considerable power of one 970 to play with. If you try that in this putative PowerBook, you get the inadequate power of something weaker than a G3. If Cell (note the cap) uses even smaller cores, as it appears to, this becomes even more painfully true.



    As soon as you switch to an architecture where one CPU is not adequate to the task of powering an application to the expectation of the user, everything changes.



    Quote:

    But your argument about the number of processors really holds no weight. What is going to happen when they start to put dual-core chips into the G5 PowerMac and there are now 4 real processors to deal with?



    That's because my argument has nothing to do with the number of processors, and everything to do with the power of each individual processor. I've focused on dealing with a large number of processors only because the architecture under discussion in this thread uses many processors to make up for the weakness of each one. The issue is that the building block for this architecture is too weak to rely on individually. Threading and multiprocessing become mandatory for decent performance in this case, and that's when the architectural assumptions of the popular languages and of Carbon ports become burdensome.



    I'm not worried about G5 PowerMacs, because of the power of the CPUs. Apple can put as many dual-core SMT POWER-derived CPUs in those towers as they please. I'm concerned about architectures that use multiple weak CPUs in place of one or two powerful CPUs, like the architecture discussed in this thread. Like Cell.