AppleInsider › Forums › Mac Hardware › Future Apple Hardware › PowerBook G5

PowerBook G5 - Page 9  

post #321 of 376
Quote:
Originally posted by snoopy
I believe this makes a lot of sense. Consider that the G5 had to wait for the 90 nm chip, with more power-management features too I believe, to fit into a 1U Xserve. If IBM builds blade servers there will be many CPUs in a box, so it makes sense to have them run as cool as possible. Since this goal is also what is needed for a PowerBook, both companies are highly motivated. Good partnership. Regarding the PowerBook, I too think it will be such a chip, not a cell. The cell idea is fun to think about, however.

Yep, cell is fun to think about. Personally I think the hype is so great right now that the first product delivered that uses "Cell" will be a disappointment.

It is interesting that the little information that has been released reminds me of the Transputer, if people can remember that chip. It never did well in the marketplace either. I'm not saying that cell will be a failure, as I simply do not know that much about it. What I do know is that cell had best be an extension of an industry-standard architecture if it expects to have a chance in hell.

I too believe both companies are highly motivated, but there are limits to technology. When the PowerBook or iBook does move to new technology it will be very interesting to see what that technology is. I just hope that Apple does not try to push a 30-minute wonder onto us.

I know that many people are pushing for the 970 in a laptop; frankly, I don't know why. What would make me happy is a strong performance increase coupled with extended battery time for a new PB revision. The question is whether Apple has access to a chip that can deliver that upgrade.

Dave
post #322 of 376
Quote:
Originally posted by Amorph
C and C++ do not do parallelism. They are designed to assume that there is one CPU. All means to distribute work across multiple CPUs or cores have to be done outside of those languages, using system libraries. 99% of the time, it's done manually by the programmer, and the threads are very coarse because the application only expects 1 or 2 large, fast CPUs.


Agreed, C/C++ are not natively threaded, but that has nothing to do with the fact that there are a great many threaded applications available for OS X right now. The number of threads an application generates has nothing to do with the number of CPUs that application will run on. It is not unheard of for an application to generate ten or twenty threads and run on one processor. How effectively additional processors can be used is highly dependent on the problem domain and the OS.
Quote:

If you want that application to run on 4 small CPUs (or possibly many more in the case of Cell) a mere recompile would be the best case. The more execution cores you have to target, the more redesign and reimplementation you'd have to do. Because there is no provision in C or in C++ for dividing up work, and historically there's been no need, except in supercomputing applications, this is not easy. In particular, debugging and troubleshooting threaded code is exasperating, and the exasperation increases supralinearly with the number of threads.

Well, again, the amount of reimplementation that has to be done is highly dependent on the problem at hand. Some applications could use the additional resources immediately. Some developers will never take upon themselves the effort to multithread their application.

The issues with threaded code and parallel supercomputing are well known. The point is that these techniques do allow the delivery of applications that process information in a timely manner. Yes, at times the effort to debug and deliver multithreaded applications is rather involved, but not all applications need massive and complex threading. Many applications can and do provide significant user-experience improvements with quite simple threading.

Even if we have a single-threaded application, that application will still benefit from running on a multiprocessor machine. With other processors available for such things as system functions and display-server code, any application benefits from more than one processor being available on the Mac. Let's face it, the PowerMac before the G5 did a rather credible job of making use of the processors it had available to it. If the 970 is a no-go in the PowerBook, I see no reason why a bit of retrenching to this sort of arrangement won't work. Sure, you may not have the ultimate in single-task performance, but the overall user experience is pretty good.
Quote:
It's not an issue now because the only dual-processor machines also happen to use the fastest available chips, so threading is a luxury. On the platform proposed in this thread, it would become a necessity, and that would have a tremendous impact on application design - especially among the big, monolithic Carbon applications originally designed for an OS that had mediocre threading support bolted on late in its life. Furthermore, an application designed to spread itself over a large number of individually weak chips won't run as well on a platform built on one or two fast chips, because threading carries overhead.

Threading is already a development issue for anybody targeting the professional line. The big "but" is that this machine does not force threading and parallel coding techniques onto a developer any more than the G5 PowerMacs do. It is a feature that is there to be taken advantage of; many applications will benefit without even knowing it, because system services will use the facilities provided by the additional processors.

But your argument about the number of processors really holds no weight. What is going to happen when they start to put dual-core chips into the G5 PowerMac and there are now 4 real processors to deal with? If those 4 processors support some sort of multithreading, you could end up with 8 or more logical processors on a PowerMac in as little as a year or two. Even with 8 or 16 logical processors, a PowerMac is still going to be able to run single-threaded applications and benefit somewhat from all of that additional processing capability. Granted, this PowerMac is a tightly coupled system, but that does not mean that a loosely coupled system cannot also deliver similar benefits to the user.
Quote:

This makes no sense. Professional applications are optimized for professional machines, not "desktops" or "laptops". PowerBooks are professional machines. Apple pitches them that way, and people use them that way. A PowerBook that couldn't run AltiVec-heavy apps like Photoshop, DVD Studio Pro, Final Cut Pro, Logic, etc., would not sell at anywhere near the rate of the current PowerBook.

Well, I consider AltiVec an imperative also, so I would have to hope that Apple does too. The problem with your position is that there is now a huge gap between the laptop hardware and the desktop hardware. Like it or not, many of the applications you mentioned above perform remarkably better on an SMP machine of any generation than they do on a laptop. A multiprocessor laptop could help to close that gap a bit. But a multiprocessor machine won't come to market with the current generation of power-hungry chips. This is what makes a 440-derived laptop so interesting: two or four such SOCs running at, let's say, 1.2 GHz could make a multiprocessor laptop a reality.

IBM currently has an 800MHz 440 SOC that uses about 4 watts of power. Let's say that through a combination of shrinking the die, design improvements, speed increases and feature additions (AltiVec unit, FPU) they can deliver a chip running at 2 watts at 1.2 GHz, with performance similar to a G4 or maybe better. Put a few of these into a laptop and you won't get a lot of complaints.
Quote:

The market is moving to notebooks, generally. Apple sold an unheard-of 197,000 PowerBooks last quarter. This is exactly the wrong time to start gimping them.

Yes, and this is a problem that Apple has to deal with as the current PowerBook ages. I'd love to see a 970 in a PowerBook tomorrow, but I don't think that will happen any time soon; at least not in a machine that will leave me happy about battery life. This Cell-based approach does provide interesting speculation as to a follow-on to the PowerBook. I could just as easily see this approach adapted to the iBook, though, especially if they manage to actually deliver a 970-based PowerBook.
Quote:

One more time. Cell, upper-case-C, is a particular implementation shipping from IBM soon. Cell, lower-case-c, is a noun referring to a self-contained entity. Cellular computing is a concept. They are three different, if not unrelated, things.

I have to disagree. This is not how I interpret IBM's documentation, as thin as it is. As you have stated, though, this should all be cleared up shortly. My take is that Cell in any form is a concept which will be applied to a number of devices.
Quote:
As to the suitability of Xgrid: Generally, you can choose between a solution that scales up well and a solution that scales down well. Xgrid is suitable for powerful CPUs connected by (relatively) low bandwidth. Because of that, it is less suitable for weaker CPUs connected by high bandwidth. As soon as you start distributing computations across nodes, you have to become sensitive to both available resources and available bandwidth, because any inefficiency will squander a surprising amount of your available power. A solution for powerful CPUs with low bandwidth interconnects is only suitable for that situation.

While Xgrid may be designed for the problem domain you describe, there is no reason why it cannot be extended to local computing resources. You still have all the problems you described with the low-bandwidth approach; it is just that your bandwidth is much higher and your resources generally local.

It probably would be more sensible to extend the operating system to support scheduling of the local resources directly. As I mentioned in another response, this whole concept reminds me of the Transputer family of chips from years gone by. Unfortunately it was a concept that never took off, but it sounds a lot like Cell as the STI documentation at this early stage describes it. It will be interesting to see how STI overcomes the failings that the Transputer had.

It is pretty much wait and see. IBM has announcements scheduled from the 22nd of January through the middle of February. It will be interesting to see what they have been up to and to see if any of it is Cell related.
post #323 of 376
What would make me happy with the next-rev PowerBooks?
I think the G4 has a long life in the portable form factor. Remember now, the G4 can go dual as well.
Now that the iBook has the G4, I think it's about time the PowerBook distinguished itself from the consumer machine. I think with 90nm fabs, it's about time we see dual-processor (G4) PowerBooks, with roughly the same battery life as we get now. Personally I don't think the time is right for a G5 in a portable, and personally I wouldn't buy one unless it had comparable battery life.

dual G4 powerbooks !!! gimme gimme gimme...
post #324 of 376
Quote:
Originally posted by wizard69
But your argument about the number of processors really holds no weight. What is going to happen when they start to put dual-core chips into the G5 PowerMac and there are now 4 real processors to deal with? If those 4 processors support some sort of multithreading, you could end up with 8 or more logical processors on a PowerMac in as little as a year or two. Even with 8 or 16 logical processors, a PowerMac is still going to be able to run single-threaded applications and benefit somewhat from all of that additional processing capability. Granted, this PowerMac is a tightly coupled system, but that does not mean that a loosely coupled system cannot also deliver similar benefits to the user.

Yes, the argument actually holds a lot of weight. When you get 8 logical processors in the PowerMac, the processors themselves will be so powerful, that the applications don't need to be that heavily threaded. Most problems could easily be solved by using one logical processor. And a whole lot of problems would be solved that way. Then there would be a lot of problems in professional applications that'd be solved by two threads. The "8 processor" PowerMac would do that easily, because the processors would be so powerful in the first place.

A handful of apps would be threaded heavily enough to use four processors. The PowerMac would be extremely fast at these. And this is the point where small quad systems like the proposed PowerBook here would start to be really useful. Heavy threading is a necessity (to steal Amorph's word) for such a machine.

It would of course, as Nr9 says, require a whole new programming model. And that for a PowerBook? I'm having a hard time believing in this even if they switched all lines of computers to this programming model. Developers have gone through quite a lot already, and so have Apple's customers.

I could see the programming model being used for some problems, but it would be disastrous for all apps that'd have to be ported to the platform. They would have to be rewritten, rethought, re-implemented, and re-debugged, and you'd end up with two, or maybe three, very different code bases: one for the original platform, one for the PowerBook, and one for the PowerMac (unless it would be made to use the same tech as in the PowerBook, but there are a lot of ifs in here already). I don't see how this is feasible.

So my argument is: to have such a machine work as a professional laptop is supposed to, you'd have to have quite powerful processors to begin with. The G4, or more specifically the IBM "750VX" or whatever they choose to call it, is such a beast. Give it a decent bus, add another processor to the equation (if this is even needed), and you'll have this problem quickly solved.
post #325 of 376
Quote:
Originally posted by Zapchud
Yes, the argument actually holds a lot of weight. When you get 8 logical processors in the PowerMac, the processors themselves will be so powerful, that the applications don't need to be that heavily threaded. Most problems could easily be solved by using one logical processor. And a whole lot of problems would be solved that way. Then there would be a lot of problems in professional applications that'd be solved by two threads. The "8 processor" PowerMac would do that easily, because the processors would be so powerful in the first place.


Utter garbage!

There are classes of problems that we will never have enough power to solve. Granted, many of these problems are not typical of the workload placed on most PCs these days. The difference is that Apple is going after the atypical power user with these machines.

Further, for general usage many applications are limited by the processor power available to them. The big item here is games, believe it or not.
Quote:

A handful of apps would be threaded heavily enough to use four processors. The PowerMac would be extremely fast at these. And this is the point where small quad systems like the proposed PowerBook here would start to be really useful. Heavy threading is a necessity (to steal Amorph's word) for such a machine.

It would of course, as Nr9 says, require a whole new programming model. And that for a PowerBook? I'm having a hard time believing in this even if they switched all lines of computers to this programming model. Developers have gone through quite a lot already, and so have Apple's customers.

I could see the programming model being used for some problems, but it would be disastrous for all apps that'd have to be ported to the platform. They would have to be rewritten, rethought, re-implemented, and re-debugged, and you'd end up with two, or maybe three, very different code bases: one for the original platform, one for the PowerBook, and one for the PowerMac (unless it would be made to use the same tech as in the PowerBook, but there are a lot of ifs in here already). I don't see how this is feasible.

So my argument is: to have such a machine work as a professional laptop is supposed to, you'd have to have quite powerful processors to begin with. The G4, or more specifically the IBM "750VX" or whatever they choose to call it, is such a beast. Give it a decent bus, add another processor to the equation (if this is even needed), and you'll have this problem quickly solved.

I have no doubt that a dual-processor machine would be rather powerful in a portable form factor. The issue becomes whether the power usage will be manageable. As to this sort of machine solving all of the world's problems, I don't buy it, but it would be a remarkable improvement.

My point has always been that Nr9's described machine is very interesting. It is a machine that I could see Apple having in a development lab. For a variety of reasons I do not see Apple bringing the machine to market, at least not as a PowerBook.

What I've tried to point out, though, is that multiprocessing is the wave of the future. There have been a lot of pointers to a future of dual-core chips; if Apple were to put dual-core chips into the iMac or its follow-on, the world of single-processor machines would come to a quick halt. Likewise with a multithreaded chip. Parallel execution of multithreaded applications is the future, but there is little reason to expect that all of this will be done exclusively on SMP machines.

I'm also bothered by the continued one-to-one association of threads to processors. This is not the case, folks; it is very possible to have multiple threads executing on one processor. The benefits may not be the same as having the OS spread the threads across several processors, but the application is still multithreaded, just not executing in parallel.

post #326 of 376
Quote:
Originally posted by Zapchud
Yes, the argument actually holds a lot of weight. When you get 8 logical processors in the PowerMac, the processors themselves will be so powerful, that the applications don't need to be that heavily threaded. Most problems could easily be solved by using one logical processor. And a whole lot of problems would be solved that way. Then there would be a lot of problems in professional applications that'd be solved by two threads. The "8 processor" PowerMac would do that easily, because the processors would be so powerful in the first place.

A handful of apps would be threaded heavily enough to use four processors. The PowerMac would be extremely fast at these. And this is the point where small quad systems like the proposed PowerBook here would start to be really useful. Heavy threading is a necessity (to steal Amorph's word) for such a machine.

It would of course, as Nr9 says, require a whole new programming model. And that for a PowerBook? I'm having a hard time believing in this even if they switched all lines of computers to this programming model. Developers have gone through quite a lot already, and so have Apple's customers.

I could see the programming model being used for some problems, but it would be disastrous for all apps that'd have to be ported to the platform. They would have to be rewritten, rethought, re-implemented, and re-debugged, and you'd end up with two, or maybe three, very different code bases: one for the original platform, one for the PowerBook, and one for the PowerMac (unless it would be made to use the same tech as in the PowerBook, but there are a lot of ifs in here already). I don't see how this is feasible.

So my argument is: to have such a machine work as a professional laptop is supposed to, you'd have to have quite powerful processors to begin with. The G4, or more specifically the IBM "750VX" or whatever they choose to call it, is such a beast. Give it a decent bus, add another processor to the equation (if this is even needed), and you'll have this problem quickly solved.

Good point. When we see how much software coming from the PC world is poorly optimised for the Mac (that's why many tests are in favor of the PC), it's obvious that nobody will optimise heavily (because it's a very special way of programming) for a bunch of laptops.

In my view the 750VX will be a perfect chip for a laptop. My vision of the near future is the G5 for desktops (iMac and PowerMac) and the 750VX for laptops (iBook and PowerBook).

Dual-core chips will not reach the consumer PC market until the 65 nm process is available.

Here is my guessed roadmap.

Towers: G5 90nm for 2004
Power5-derived G5 (1MB cache and multithreading) for 2005
Dual-core version of the latter chip for 2006

Laptops: 750VX for 2004 and 2005
Dual-core 750VX variant or 65 nm Power5-derived G5 for 2006

iMac: G5 90nm for 2004
Power5-derived G5 for 2005
Dual-core version of the latter chip for 2006

The tower line will have entry-level single-chip models; the others will be dual.
post #327 of 376
Quote:
Originally posted by wizard69
Utter garbage!

There are classes of problems that we will never have enough power to solve. Granted, many of these problems are not typical of the workload placed on most PCs these days. The difference is that Apple is going after the atypical power user with these machines.

Further, for general usage many applications are limited by the processor power available to them. The big item here is games, believe it or not.

What part of that was 'utter garbage'?
I absolutely agree with what you're saying here, I'm not sure what we're arguing about here :-)
post #328 of 376
Quote:
Originally posted by Amorph
C and C++ do not do parallelism. They are designed to assume that there is one CPU. All means to distribute work across multiple CPUs or cores have to be done outside of those languages, using system libraries. 99% of the time, it's done manually by the programmer, and the threads are very coarse because the application only expects 1 or 2 large, fast CPUs.

If you want that application to run on 4 small CPUs (or possibly many more in the case of Cell) a mere recompile would be the best case. The more execution cores you have to target, the more redesign and reimplementation you'd have to do. Because there is no provision in C or in C++ for dividing up work, and historically there's been no need, except in supercomputing applications, this is not easy. In particular, debugging and troubleshooting threaded code is exasperating, and the exasperation increases supralinearly with the number of threads.



It's not an issue now because the only dual-processor machines also happen to use the fastest available chips, so threading is a luxury. On the platform proposed in this thread, it would become a necessity, and that would have a tremendous impact on application design - especially among the big, monolithic Carbon applications originally designed for an OS that had mediocre threading support bolted on late in its life. Furthermore, an application designed to spread itself over a large number of individually weak chips won't run as well on a platform built on one or two fast chips, because threading carries overhead.

Amorph, I'll quibble just a tad here. Languages do not do parallelism in general, making the choice of any particular language kind of immaterial. Java is a semi-special case: Java is really walking a tightrope between being a language and an API set. If that is an unobjectionable stance, then C/C++-derived APIs are not far-fetched for multithreading. Just another half baby-step has a non-OS supplier providing a library of these API-like multithreading tools.

The key to making massive parallelism work as a widely used commodity programming paradigm is the set of design tools available, much more so than any particular language, as a capable lower-level language like C can be used to write these tools as well. You know how well suited C is to such a task; just look at C++, Obj-C and Java as the three biggest progeny of Bell Labs' original baby (some directly, others, like Java, more circuitously).

Threading is fast leaving the luxury realm and becoming a necessity. SMT is barreling towards us from both IBM and Intel; suddenly a dual-processor or dual-core machine can look like 4 processors, and that will be the mainstream in the next couple of years.
post #329 of 376
The implication is that processors will become so powerful that only one or two threads of execution will be needed. It is my position that we will not see, in our lifetimes, a PC made by anybody that is so powerful it would meet the needs of every user. The future has been pretty much laid out before us; it is just a matter of having the hardware delivered. That future will be multiprocessor, with processors supporting multithreaded execution.

Let's face it, Apple has been optimizing for multiprocessing for years now. Frankly, it is the only good thing that Motorola ever did for Apple: the poor performance of the G4 forced Apple to apply SMP to keep performance on a par with Intel. This resulted in an OS that takes advantage of the 970-series processors to a far greater extent than any comparable OS for desktop users.

Even with the fantastic support for multiple processors, OS X and the 970s are still only on a par with Intel hardware. For many applications this is not good enough, as the huge increase in the installation of cluster computers indicates. These days you have everybody from genetic researchers to race-car teams trying to get realtime results from clusters of computers; we are far from having computers that are fast enough. Each time we see an incremental increase in the performance of PCs, or recently the PowerMac, new markets open up for the hardware as the economics change.

Beyond the issue of us ever having computers that are powerful enough, I'd have to say yes, we agree.

Dave

Quote:
Originally posted by Zapchud
What part of that was 'utter garbage'?
I absolutely agree with what you're saying here, I'm not sure what we're arguing about here :-)
post #330 of 376
Quote:
Originally posted by wizard69
The implication is that processors will become so powerful that only one or two threads of execution will be needed. It is my position that we will not see, in our lifetimes, a PC made by anybody that is so powerful it would meet the needs of every user. The future has been pretty much laid out before us; it is just a matter of having the hardware delivered. That future will be multiprocessor, with processors supporting multithreaded execution.

Oh, I'm sorry, I might not have been clear enough on that. :-)

My point was not that the processors will be powerful enough to solve any given problem with only one or two execution threads. Rather, the processors will be fast enough to sustain a good enough performance level, compared to what is expected of the computer, that the computer will not be perceived as slow compared to its competition.

I think we agree on this.
post #331 of 376
One thing about competition is that it never stands still. It is very hard to project who will be the technology leader 5 years down the road. After all, who would have suspected that Intel would have tripped up with respect to getting out a 90nm processor?

As to computers, if we don't continue to expect more from them the market will quickly stagnate. The expectation that one will be able to buy a faster machine every year allows software development technology to continue to move forward. It is the development of software that takes advantage of the latest processor capabilities that drives the marketplace.

One just has to look at the simple realm of the games industry. Without the development of hardware to allow the deployment of more advanced software tools and applications, there would be little that is new in the field of games. The same goes for many real industries: processor power allows one to do things in the future that can only be dreamed about today.

Quote:
Originally posted by Zapchud
Oh, I'm sorry, I might not have been clear enough on that. :-)

My point was not that the processors will be powerful enough to solve any given problem with only one or two execution threads. Rather, the processors will be fast enough to sustain a good enough performance level, compared to what is expected of the computer, that the computer will not be perceived as slow compared to its competition.

I think we agree on this.
post #332 of 376
Quote:
Originally posted by wizard69


. . . One just has to look at the simple realm of the games industry. Without the development of hardware to allow the deployment of more advanced software tools and applications, there would be little that is new in the field of games. . .



Interesting that you mention games. If I understand correctly, the Xbox will be using the IBM Power5 derivative, the 975 or whatever it will be called. MS obviously wants a lot of processor power in their next game machine. So I would expect Sony to have a similar goal for the PlayStation 3, and Sony is going with a PPC Cell architecture. It will be interesting to see how these two compare.
post #333 of 376
Quote:
Originally posted by snoopy
Interesting that you mention games. If I understand correctly, the Xbox will be using the IBM Power5 derivative, the 975 or whatever it will be called. MS obviously wants a lot of processor power in their next game machine. So I would expect Sony to have a similar goal for the PlayStation 3, and Sony is going with a PPC Cell architecture. It will be interesting to see how these two compare.

I heard the same about Microsoft. Personally, I don't think processors are a big factor when it comes to graphics performance for games on consoles. I mean, having a 975/970 in a gaming machine is overkill! This thing will be able to crunch numbers at a rate that video editors or genetic researchers need, hardly comparable to gaming AI, IMO.

Either way... it's Microfluffy... who gives a rat's arse?
post #334 of 376
Upon using that phrase one too many times, I received a plastic rat's arse in the mail from an anonymous admirer... with a loving note. I, for one, can now "give a rat's arse"...

"Stand Up for Chuck"
post #335 of 376
Quote:
Originally posted by Hawkeye_a
I mean, having a 975/970 in a gaming machine is overkill!

Just like a 733 MHz Celeron is overkill when it comes to gaming today? The Xbox is about two years old, and everyone would probably agree that all the game consoles would do better if their CPUs and GPUs were more powerful. Is there anyone who thinks that future games like Doom III and Halo 2 would run smoothly with excellent graphics on today's consoles? And they are just two or three years old.

The Xbox 2 is due for release in 2005/6, i.e. two years from now, so why would a CPU that's already half a year old today be overkill then? And four or five years from now? Will it be overkill then?

Microsoft would be wise to choose a CPU that's not low-end compared to contemporary desktop CPUs. 4-5 GHz might seem like a lot today, but it won't in 2007-8.
post #336 of 376
Oh, come on now, snap open a game console and look inside. There really isn't much to them other than a processor, a GPU and a little memory. The processor directly affects the performance of the console; there is no other way to look at it.

Granted, FP performance may not be everything in a game, but it sure does help the graphically intense ones. The whole point, though, is that the extra performance means fewer restrictions for the developers, who can thus provide new functionality. Even a good old game of chess on a console can use a significant amount of new computing resources to provide more challenging play. So even old games can benefit from increasing CPU horsepower.

The reality is that a 970 or its follow-on in a console is just an incremental step to allow software developers to realize their potential. A 97xyz is not going to be the end of it, even if they hit 5GHz next year. Admittedly the software developers may lag by a few months with respect to making good use of that power. Nonetheless, after a blockbuster of a game comes out that does use that power, everybody will be clamoring for faster consoles. It's the way of the industry.

Dave

Quote:
Originally posted by Hawkeye_a
I heard the same about Microsoft. Personally I don't think processors are a big factor when it comes to graphics performance for games on consoles. I mean, having a 975/970 in a gaming machine is overkill!! This thing will be able to crunch numbers at a rate that video editors or genetic researchers need, hardly comparable to gaming AI IMO.

Either way... it's Microfluffy... who gives a rat's arse?
post #337 of 376
Yeah well... good for Xbox devotees then, I guess. I could 'give a rat's arse' anyway. Hehe... MicroFluffy can come out with a console that wipes my arse and I won't buy it. I'm not one of those pale bald dudes from '1984'.

As long as Microsoft (or any company) subsidizes their product to the extent that it exterminates the competition, they won't get my dollars.

Besides, if I want games I'll get them from the people who do it best... Nintendo.

Cheers.
post #338 of 376
Sorry if this has all been mentioned; I read the first page or so, then decided to post.

About this 2- or 4-chip PPC440 system with modular AltiVec, Ethernet etc., here are my thoughts:

1. I like it.
2. It could be a thin client.
3. If it's for a portable, then maybe they'll drop the Gx series labels for them.
4. Go back to iBook and PowerBook, plain and simple.
5. Maybe it'll be a total sub-notebook or tablet (I know!)
6. Maybe it'll be a Rendezvous-based server admin device for sysadmins.

For OS X to handle all that SMP action, don't they just have to get the kernel to arbitrate it all and let the rest sit on top? It's pretty hardware-independent like that.

Anyway, my 2c worth...
na-na na-na na-na na-na
na-na na-na na-na na-na
na-na na-na na-na na-na
Batman!
post #339 of 376
OK, now I've read a few more postings.

Response to some PowerBooks/Portables postings:

"iBook G4 hints at something coming"
It hints at the fact that the G3 version wasn't selling well in the run-up to Christmas.

"Apple's portables run SO much hotter than Intel/AMD's"
Really? Ever hear any news stories about burn victims from PowerBooks? No, but you get plenty of Dell victims.

"IBM have a secret project that they've kept under wraps"
No. Apple do this. IBM don't. They announced the 970 a year before we saw it. We all knew the phrase GP-UL way back.
If there is something it'll be based on an existing development stream.

"Lot's of software work needed to get things running on this new system"
Nope. Just a kernal recompile and some .kext work. The rest sits on top.

"January is too soon to launch anything"
They've surprised us too many times to count.
na-na na-na na-na na-na
na-na na-na na-na na-na
na-na na-na na-na na-na
Batman!
post #340 of 376
Quote:
Originally posted by Hawkeye_a
Besides, if I want games I'll get them from the people who do it best... Nintendo.

Cheers.

You'll be eating those words.
"Many people would sooner die than think; in fact, they do so." - Bertrand Russell
"Many people would sooner die than think; in fact, they do so." - Bertrand Russell
post #341 of 376
Quote:
Originally posted by wizard69
Agreed, C/C++ is not natively threaded, but that has nothing to do with the fact that there are a great many threaded applications available for OS X right now.

It does have to do with the way they're threaded, and the number of threads they spawn, though.

Hardly any of them - if any of them at all - try to split up a computationally intense procedure over multiple processors, for example, which would be necessary to support a platform of multiple CPUs which are individually weak.

The problem has to do with the fact that there are a lot of obstacles to pervasive threading, and no current incentives. Currently, you can always fall back on a single, powerful CPU.

Quote:
Let's face it, the PowerMac before the G5 did a rather credible job of making use of the processors it had available to it.

The processors were individually powerful. That's the crux of my argument. I'm not talking about parallelism, I'm talking about a paradigm shift from small groups of one or more individually powerful CPUs to large groups of many individually weak CPUs. That change requires a completely different approach to programming that is poorly accommodated now. The research has been done on how to write for that style of architecture, but who uses the result?

Quote:
Threading is already a development issue for anybody targeting the professional line. The big "but" is that this machine does not force threading and parallel coding techniques onto a developer any more than the G5 PowerMacs do.

Yes it does! If you fail to take advantage of the SMP feature in the G5, you get the considerable power of one 970 to play with. If you try that in this putative PowerBook, you get the inadequate power of something weaker than a G3. If Cell (note the cap) uses even smaller cores, as it appears to, this becomes even more painfully true.

As soon as you switch to an architecture where one CPU is not adequate to the task of powering an application to the expectation of the user, everything changes.

Quote:
But your argument about the number of processors really holds no weight. What is going to happen when they start to put dual-core chips into the G5 PowerMac and there are 4 real processors to deal with?

That's because my argument has nothing to do with the number of processors, and everything to do with the power of each individual processor. I've focused on dealing with a large number of processors only because the architecture under discussion in this thread uses many processors to make up for the weakness of each one. The issue is that the building block for this architecture is too weak to rely on individually. Threading and multiprocessing become mandatory for decent performance in this case, and that's when the architectural assumptions of the popular languages and of Carbon ports become burdensome.

I'm not worried about G5 PowerMacs, because of the power of the CPUs. Apple can put as many dual-core SMT POWER-derived CPUs in those towers as they please. I'm concerned about architectures that use multiple weak CPUs in place of one or two powerful CPUs, like the architecture discussed in this thread. Like Cell.
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!
post #342 of 376
As for current programming paradigms and the effects of a 440-based solution as it has currently been described, Amorph pretty much has it correct. While parallel processing is not as hard as many make it out to be, we are stuck in the economic rut of having to support a legacy dominated by single-threaded apps, and that is where the 4x440 loses most of its luster.
post #343 of 376
Quote:
Originally posted by Amorph
It does have to do with the way they're threaded, and the number of threads they spawn, though.


The number of threads and the way they are spawned are often an indication of the developer's understanding of the problem. If the developer only has two threads functioning, you still end up with the potential to use two processors, plus any of the processors handling system calls and the windowing system.
Quote:
Hardly any of them - if any of them at all - try to split up a computationally intense procedure over multiple processors, for example, which would be necessary to support a platform of multiple CPUs which are individually weak.

The allocation of processors to specific threads ought to be handled by the OS. The application should never know how many processors it has available to it.
Quote:
The problem has to do with the fact that there are a lot of obstacles to pervasive threading, and no current incentives. Currently, you can always fall back on a single, powerful CPU.

Well, this is where the speculation about the suitability of such a laptop comes into the situation. What if Apple is without a single powerful CPU to move the PowerBook forward? Maybe we will get lucky and see a PowerBook with a 970 in it in the near future; I doubt that will happen, so they need an alternative if the G4 has truly hit the wall.

I'd love to see a PowerBook with a 970 running at 2GHz or even a little slower, but at the moment it does not look like that will be a real possibility. The described laptop does appear to be a solution to the problem of giving the user a credible upgrade. We also have to realize that the 440 series is customizable and still the subject of development. Two or four of these processors running at 1.2GHz would provide current PowerBook users with a reasonable upgrade.
Quote:
The processors were individually powerful. That's the crux of my argument. I'm not talking about parallelism, I'm talking about a paradigm shift from small groups of one or more individually powerful CPUs to large groups of many individually weak CPUs. That change requires a completely different approach to programming that is poorly accommodated now. The research has been done on how to write for that style of architecture, but who uses the result?

I almost believe we are talking about two completely different things. You're talking about massively parallel systems, and I'm talking about extending the performance of the PowerBooks through a design that offers more processors to a system that is already multithreaded.

This is ultimately a take on the dual G4's. Apple was in a bad place performance-wise with the G4, thus moving to SMP. The described PowerBook could be looked at in the same way.
Quote:
Yes it does! If you fail to take advantage of the SMP feature in the G5, you get the considerable power of one 970 to play with. If you try that in this putative PowerBook, you get the inadequate power of something weaker than a G3. If Cell (note the cap) uses even smaller cores, as it appears to, this becomes even more painfully true.

I have to disagree; you are not forced to write multithreaded applications for the G5 any more than you were forced to on the old SMP G4's. Certainly if your application needed the power you did, but not every application is multithreaded. That "considerable power of one 970" consideration applies to every other machine that the application will run on, be it a 603, a 440, a G4 or some other older chip.

The only difference is that a single-threaded application MAY perform better on a multiprocessor machine, due to the rest of the system being threaded and taking advantage of multitasking across the installed processors.
Quote:
As soon as you switch to an architecture where one CPU is not adequate to the task of powering an application to the expectation of the user, everything changes.

This is certainly true. Since we are speculating, we have no idea what the mystery CPU looks like. IBM currently has a bunch of options for this family of CPUs, plus just about anything you can develop yourself. So imagining how the processor would perform is pure speculation.
Quote:

That's because my argument has nothing to do with the number of processors, and everything to do with the power of each individual processor. I've focused on dealing with a large number of processors only because the architecture under discussion in this thread uses many processors to make up for the weakness of each one. The issue is that the building block for this architecture is too weak to rely on individually. Threading and multiprocessing become mandatory for decent performance in this case, and that's when the architectural assumptions of the popular languages and of Carbon ports become burdensome.

Well, my argument is that this is a reasonable line of research if you are in a situation where, one, you don't have a processor to upgrade your line of laptops and, two, you are looking at avenues to cut power usage. If Apple had access to faster G4s or something similar to do a real upgrade of the PowerBooks, that would be one thing; at the moment, though, it appears that Apple does have an issue sourcing suitable processors for the laptop line.

If Apple can get a dual-processor SoC implementation, with a reasonable speed increase over the currently listed 440 implementations, they would be well on their way to solving both of these issues.
Quote:

I'm not worried about G5 PowerMacs, because of the power of the CPUs. Apple can put as many dual-core SMT POWER-derived CPUs in those towers as they please. I'm concerned about architectures that use multiple weak CPUs in place of one or two powerful CPUs, like the architecture discussed in this thread. Like Cell.

We don't know exactly how the 440 would be implemented so we can't really say that CPU will be that weak. But is that realy the issue if there are no other avenues for an upgrade of the current G4 hardware it would certainly be worth while to look into as an alternative.
Now, all of this discussion is really not that productive, as I give more weight to the rumors about IBM building a new 32-bit CPU for Apple. If a reasonable performance upgrade can be had over the current G4s, such a chip would probably satisfy most of Apple's customers until something can be done about the 970.
post #344 of 376
You are right, Wizard; the more I read this thread, the more I am convinced that an IBM G4 would be perfect for the PowerBook.
post #345 of 376
Quote:
Originally posted by wizard69
The number of threads and the way they are spawned are often an indication of the developer's understanding of the problem. If the developer only has two threads functioning, you still end up with the potential to use two processors, plus any of the processors handling system calls and the windowing system.

The question is not whether there are enough threads to apportion. The question is which architecture the threads are designed for.

You don't design for a few powerful CPUs the way you design for a lot of weak CPUs. Period.

Quote:
The allocation of processors to specific threads ought to be handled by the OS. The application should never know how many processors it has available to it.

But the application designer must know generally what sort of platform the application will run on, or it won't run well. Historically, that hasn't been an issue, because there's only been one kind of CPU architecture. This thread introduces another.

This is not a change of design that you can abstract away. It must impact application design. Believe me, I do this for a living. There's no way that it can't.

Quote:
Well, this is where the speculation about the suitability of such a laptop comes into the situation. What if Apple is without a single powerful CPU to move the PowerBook forward? Maybe we will get lucky and see a PowerBook with a 970 in it in the near future; I doubt that will happen, so they need an alternative if the G4 has truly hit the wall.

Apple will do what it has to do. But they know as well as anyone that this decision will have serious ramifications for the way applications are developed; moreso than AltiVec, moreso even than the dual G4. It's not an interim architecture or stopgap. It's a sea change.

If there's nothing ready to replace the G4 right now, they'll just have to limp along on the G4. If there's nothing on the roadmap at all, they're screwed.

Incidentally, there are bits of OS X that can be adapted to this paradigm fairly easily (more easily once the reentrant QuickTime 7 appears). Legacy apps are the problem here, and unfortunately that category includes several platform-critical bread-and-butter applications.

Quote:
I almost believe we are talking about two completely different things. You're talking about massively parallel systems, and I'm talking about extending the performance of the PowerBooks through a design that offers more processors to a system that is already multithreaded.

That's because the design we're both talking about requires threading to an extent that no previous Mac has. Any 400-series processor gets its butt kicked by a G3 (700-series processor). 400-series processors are not targeted at personal computer applications.

This is a step toward significantly (although not massively) parallel computers. Now, I think this step is not only good, but eventually inevitable. That doesn't mean Apple should leap into it, or use what is rapidly becoming their most popular line as a guinea pig. I don't see how they're ready yet.

Quote:
This is ultimately a take on the dual G4's. Apple was in a bad place performance-wise with the G4, thus moving to SMP. The described PowerBook could be looked at in the same way.

Except that the 440 makes the G4 look like a fire-breathing monster, so that it really can't be looked at that way.

Quote:
I have to disagree; you are not forced to write multithreaded applications for the G5 any more than you were forced to on the old SMP G4's.

Are you even reading what I'm writing?! I said precisely, repeatedly and unambiguously that threading on the G5 is a luxury, because you can always fall back on the raw power of one 970.

By contrast, you can't fall back on the anemic power of a 440 without the user experiencing performance worse than an iBook at half or less than half the price.

How many times do I have to repeat that before it sinks in?
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!
post #346 of 376
Having no technical knowledge whatsoever of chip design and whatnot (except for what I understood in this thread and the G5 one a year ago)...

iBook: up to 1400MHz, 1 x 440 (dual core, 700MHz)
PowerBook: up to 2800MHz, 2 x 440 (dual core, 700MHz)

iMac: up to 2000MHz, 1 x 970 (single core, 2000MHz)
PowerMac: up to 5.2GHz, 2 x 970 (dual core, 2600MHz)

Weren't 970s supposed to have the ability to go dual core as well?

Earlier (much earlier in this thread) people were talking about a compiler that automatically multithreaded your programs (albeit not very well). If this compiler were refined enough (and worked, of course, and Apple gave it away for free), wouldn't it be a small matter to recompile apps to work on the new architecture? Apps that are current and still sell would benefit from being recompiled for the new chips; apps that are not too current should run fine on one 700MHz chip. (I use my 600MHz iBook with 10.3, although not right this second.)
post #347 of 376
www.macrumors.com

Looks like the next PowerBook will be a G5 if these numbers are correct.
"People don't want handouts! People want hand jobs!" ~ Connecticut governor William O'Neil at a political rally, followed by riotous applause
"People don't want handouts! People want hand jobs!" ~ Connecticut governor William O'Neil at a political rally, followed by riotous applause
post #348 of 376
Quote:
Originally posted by Algol

Looks like the next PowerBook will be a G5 if these numbers are correct.

Yet we don't know exactly what the alleged PowerTune technology is; it is supposed to cut heat substantially when little processing power is needed, more than the existing bus-slewing feature of the G5 does.
post #349 of 376
Quote:
Originally posted by PB
Yet we don't know exactly what the alleged PowerTune technology is; it is supposed to cut heat substantially when little processing power is needed, more than the existing bus-slewing feature of the G5 does.


A bit more info:

IBM claims massive power cut for 90nm G5
By Tony Smith
Posted: 22/01/2004 at 15:19 GMT


"You can see why Apple waited for the 90nm version of the PowerPC 970 before launching a G5-based Xserve 1U rackmount server: the latter's heat dissipation characteristics.

While Intel continues to have problems with the power consumed by its 90nm 'Prescott' processor - 100W at around 3.2GHz - IBM's own documentation claims the 90nm 970 eats 24.5W at 2GHz. By comparison, the 130nm 970, currently used by Apple in its Power Mac G5 desktop line, consumes 51W at 1.8GHz.

You'd expect the smaller process to yield a power reduction at close clock speeds, but the issue of current leakage at the smaller transistor size can counter that assumption. Certainly that's what Intel has been forced to accept - Prescott consumes more power clock-for-clock than its 130nm predecessor, 'Northwood'.

One crucial difference between IBM's processors and Intel's is the former's use of silicon-on-insulator technology, which undoubtedly helps reduce leakage at the smaller process.

That bodes well for AMD. Its 90nm processors are due later this year. Like the IBM chips, they too utilise SOI. IBM's success lends weight to the claim by American Technology Research analyst Rick Whittington that SOI will be crucial to AMD's transition to 90nm.

The 970FX, meanwhile, consumes a mere 12.3W at 1.4GHz, paving the way for PowerBook G5s. That figure is comparable to the 7.5W consumed at 1GHz by the G4-class Motorola MPC7447 that drives the current PowerBook G4s. The 970FX's SpeedStep-style PowerTune technology will help too. It also lays the foundation for faster desktops, including the 3GHz version Apple CEO Steve Jobs has promised for next summer.

IBM is expected to offer more details of the 970 at the IEEE Solid State Circuits Conference next month. For now, the name and power characteristics are all we know, coming from the company's latest processor Quick Reference Guide. ®"


Looking good?
post #350 of 376
Quote:
Originally posted by \\/\\/ickes
Ummmm.... a PPC 440 is not a G5

Well, could Apple pull a page from the Intel/AMD playbook: a G5 Lite or a G5 M?
You can't quantify how much I don't care -- Bob Kevoian of the Bob and Tom Show.
post #351 of 376
so will the new PB have:

one of the newly announced G5's

or

a 440 (which this thread has been discussing)

?
Trying hard to think of a new signature...
post #352 of 376
The newly announced REAL G5.
post #353 of 376
So Nr9 has been talking out of his...
Trying hard to think of a new signature...
post #354 of 376
Based on the power consumption specs from the IBM page, I don't see why they wouldn't just use the regular G5 chip.
post #355 of 376
g5 powerbooks in april?
post #356 of 376
I'd guess so; sounds reasonable to me... Maybe I'm optimistic, but I think at the LATEST we will get them this summer. My gut feeling is they will be announced in April or May, though.
People that are passionate about what they do, truly believe in their good cause, have a clear vision and understanding of what they want, those people are heroes.
post #357 of 376
Quote:
Originally posted by ipodandimac
Based on the power consumption specs from the IBM page, I don't see why they wouldn't just use the regular G5 chip.

Well, I don't know.

The new Xserve is then using a cooler processor than my G4 Xserve, but they needed to yank a drive bay in order to increase cooling? Something isn't adding up here. Either:

1) Apple seriously overengineered the Xserve so they could put 2.4, 2.8? GHz G5s in there that generate much more heat, or

2) There's something else in the G5 Xserve that is eating up tons of power - and I think it's the memory controller. That same controller will need to be on the Powerbook, and I think that's what's interrupting Apple's plans here.

Apple jumped clean up to a 1GHz FSB, well past Intel, and I think they have incurred a huge hit on the controller. My guess is that a G5 powerbook is more like a dual G4 powerbook to engineer around.
The plural of 'anecdote' is not 'data'.
post #358 of 376
Don't forget, too, that the Xserve has two processors (well, some of them do). We won't see that in G5 PowerBooks, at least for a while...
post #359 of 376
Quote:
Originally posted by johnsonwax
Well, I don't know.

The new Xserve is then using a cooler processor than my G4 Xserve, but they needed to yank a drive bay in order to increase cooling? Something isn't adding up here. Either:

1) Apple seriously overengineered the Xserve so they could put 2.4, 2.8? GHz G5s in there that generate much more heat, or

2) There's something else in the G5 Xserve that is eating up tons of power - and I think it's the memory controller. That same controller will need to be on the Powerbook, and I think that's what's interrupting Apple's plans here.

Apple jumped clean up to a 1GHz FSB, well past Intel, and I think they have incurred a huge hit on the controller. My guess is that a G5 powerbook is more like a dual G4 powerbook to engineer around.

Clocking down the controller to 250MHz (500MHz DDR) and putting the CPU on a 4:1 ratio would help WRT heat, but I also think that the controller used in the Xserve is still made on a 130nm process. Perhaps the 90nm version is not yet in manufacturing. Consider also that the new controller may support some more advanced features with the move to 90nm, like DDR-II support, and Apple wasn't ready to have these included in the Xserve. The PowerBook, however, may include these features by the summer, so if this is the case, we may still see another G4 PowerBook before a G5.
post #360 of 376
Quote:
Originally posted by Outsider
... we may still see another G4 powerbook before a G5.

And we may see a dual G4 Powerbook if that happens.
OSX + Duals, Quads & Octos = World Domination