Cell Processors nearly complete

Posted:
in Future Apple Hardware edited January 2014
http://www.gamesindustry.biz/content_page.php?aid=4447



Hopefully IBM learned a lot from this, and intends to get the PPC back up to speed.

Comments

  • Reply 1 of 42
    I don't expect CELL to be the major speed revolution some people touted it as a few years back.



    And anyway, IBM is also developing the technology, so if it has anything the PPC could need, it sure will get it!
  • Reply 2 of 42
    mello Posts: 555 member
    I remembered reading a story about the cell chip when it was first announced

    that the chip would link up with other cell chips to give your games a graphics

    upgrade (PS3 + Sony TV + Sony DVR). Is that still gonna happen or was that cut?
  • Reply 3 of 42
    Quote:

    Originally posted by mello

    I remembered reading a story about the cell chip when it was first announced

    that the chip would link up with other cell chips to give your games a graphics

    upgrade (PS3 + Sony TV + Sony DVR). Is that still gonna happen or was that cut?




    Probably cut.
  • Reply 4 of 42
    Quote:

    Originally posted by onlooker

    http://www.gamesindustry.biz/content_page.php?aid=4447



    Hopefully IBM learned a lot from this, and intends to get the PPC back up to speed.




    Of the three major chip suppliers, only Intel has higher-clocking parts. The POWER5 is a monster. PPC chips have never really been the speediest clock-for-clock; their strength is the architecture. I have hopes for Cell processors in devices like set-top boxes and game machines; I just hope it's inexpensive to fab.
  • Reply 5 of 42
    How much help do you think it would be if they moved to an iPod hard drive in the PowerBooks?



    This would free up space inside the machine for a larger heat sink and more fans.
  • Reply 6 of 42
    amorph Posts: 7,112 member
    Quote:

    Originally posted by salmonstk

    How much help do you think it would be if they moved to an iPod hard drive in the PowerBooks?



    Not much. Fans can only ever be part of the answer in a space as restricted as a notebook. A lot of the difficulty is in finding ways to wick heat away from hot parts in tight places over to some place which a fan (or ambient air) can cool, and spacing them on the board so that the hot spots aren't clustered together. A smaller hard drive might make this incrementally easier, at the expense of yet another small heat source (iPod drives run pretty hot) and a lethal performance hit every time you needed the hard drive. If you think a 4200 RPM laptop drive is slow... It might work for, say, the 12" iBook or PowerBook, where space is severely constrained and HD capacity is less of an issue.



    I'd also expect iPod drives to be more expensive than laptop drives, just because miniaturization usually comes at a premium, but the sheer volume of iPod production might offset that.



    The optical drive is a worse culprit, especially if it can burn CDs and DVDs (writing to optical media generates lots of heat). But there are limits to how much it can be miniaturized, because it has to be able to accept (relatively large) optical media.
  • Reply 7 of 42
    onlooker Posts: 5,252 member
    To those who didn't get it, or didn't read it (not T'hain Esh Kelch): I never suggested IBM (Apple) use a Cell processor. What I was getting at is whether what was learned in creating the processor could help push the G5's evolution forward.





    Quote:

    I don't expect CELL to be the major speed revolution some people touted it as a few years back.



    And anyway, IBM is also developing the technology, so if it has anything the PPC could need, it sure will get it!



    Toshiba and IBM are united on this processor for the PS3. The article is essentially about IBM; it just headlines Toshiba for some reason.
  • Reply 8 of 42
    bunge Posts: 7,329 member
    Quote:

    Originally posted by T'hain Esh Kelch

    I don't expect CELL to be the major speed revolution some people touted it as a few years back.



    I think it will be revolutionary for games. If you think about a game, it's usually a number of complex things happening simultaneously, and a parallel computer is probably the best way to tackle that: AI off to one core, graphics on another, logic on a third, etc. It's an interesting way to tackle game design and I think it will succeed.
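    The per-core split bunge describes can be caricatured in a few lines. This is a hedged sketch only: the subsystem names (update_ai, update_physics, update_logic) and the state fields are invented for illustration, not taken from any real engine.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical per-frame subsystems; each reads the previous frame's state.
    def update_ai(state):
        return {**state, "ai_ticks": state["ai_ticks"] + 1}

    def update_physics(state):
        return {**state, "positions": [p + v for p, v in zip(state["positions"], state["velocities"])]}

    def update_logic(state):
        return {**state, "score": state["score"] + 10}

    def run_frame(state):
        # "AI off to one core, logic on another": run the subsystems in parallel
        # against a snapshot of the frame, then merge what each one changed.
        with ThreadPoolExecutor(max_workers=3) as pool:
            futures = [pool.submit(f, state) for f in (update_ai, update_physics, update_logic)]
            merged = dict(state)
            for fut in futures:
                # keep only the fields each subsystem actually changed
                merged.update({k: v for k, v in fut.result().items() if state.get(k) != v})
            return merged

    state = {"ai_ticks": 0, "positions": [0.0, 1.0], "velocities": [1.0, 1.0], "score": 0}
    state = run_frame(state)
    print(state["ai_ticks"], state["positions"], state["score"])  # 1 [1.0, 2.0] 10
    ```

    The snapshot-then-merge step is the part that matters: subsystems never write shared state mid-frame, which is what makes the parallel split safe.
    
    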
  • Reply 9 of 42
    amorph Posts: 7,112 member
    Quote:

    Originally posted by bunge

    I think it will be revolutionary for games. If you think about a game, it's usually a number of complex things happening simultaneously, and a parallel computer is probably the best way to tackle that: AI off to one core, graphics on another, logic on a third, etc. It's an interesting way to tackle game design and I think it will succeed.



    It's going to be more interesting than that, because cores in Cell are not like traditional CPU cores. They're fractional cores. Remember that integer-only G3-derived core that IBM clocked all the way up to 1GHz during the Eternal Reign of 500MHz? That was, in hindsight, a technological exercise for Cell. The idea is radical customization. You pick a bunch of cores with limited, specific functions and assemble them all onto a single chip with a high-speed fabric connecting them all, and off you go. The simplicity of each core means they can clock very high. Theoretically, they could each run at independent clock speeds, but I don't know if IBM will pursue that initially (it simplifies the design significantly to have everything running on the same clock).
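    The "limited, specific cores on a high-speed fabric" idea can be modeled as specialized workers talking over message queues. This is a toy sketch under stated assumptions: make_cell, the two handlers, and the queue-based "fabric" are all invented, not any real Cell interface.

    ```python
    import queue
    import threading

    # Toy model of heterogeneous "cells": each worker handles only one kind of
    # work and exchanges messages with the rest of the chip via queues.
    def make_cell(handler):
        inbox, outbox = queue.Queue(), queue.Queue()
        def run():
            while True:
                msg = inbox.get()
                if msg is None:           # shutdown signal
                    break
                outbox.put(handler(msg))  # send the result back over the "fabric"
        threading.Thread(target=run, daemon=True).start()
        return inbox, outbox

    int_in, int_out = make_cell(lambda m: m * 2)             # integer-only cell
    fp_in, fp_out = make_cell(lambda m: round(m ** 0.5, 3))  # floating-point cell

    int_in.put(21)
    fp_in.put(2.0)
    doubled, root = int_out.get(), fp_out.get()
    print(doubled, root)  # 42 1.414
    int_in.put(None)
    fp_in.put(None)
    ```

    The point of the sketch is that neither "cell" knows about the other's capabilities; the simplicity of each worker is what would let the real silicon clock high.
    
    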



    Basically, this is like Book E, only more so, and more scalable to boot.



    But one of the upshots is that every Cell platform will be a new development target. Development will be made somewhat easier to the extent that Cell platforms will share building blocks (and to the extent that the developers are up to speed on the relevant object-oriented design principles), but vendors will have to go all the way through the optimization phase for every Cell platform they target.



    As for performance, it's hard to say. IBM appears to have gone back to one of their monopoly-era bad habits: overpromising and underdelivering.
  • Reply 10 of 42
    wizard69 Posts: 13,377 member
    I have a slightly different take on IBM: as a monopoly, they never learned to compete. That is, they never learned the importance of technology for this group of customers (us).



    IBM's monopoly has never been in technology or computers as such; it is rather a monopoly of sorts in services and in making the customer feel good. IBM's traditional customers were not as concerned about technology as they were about having the service available.



    IBM is attempting to take their in-house design-and-build apparatus and compete with the rest of the world selling technology. The problem is that the expectations of the customers are different. IBM's internal customers think nothing of tacking on a water chiller if that is what is needed to serve their income-generating customers. Thus a different emphasis is placed on semiconductor design, as displayed in IBM's failure with low-power devices.



    Dave





    Quote:

    Originally posted by Amorph

    As for performance, it's hard to say. IBM appears to have gone back to one of their monopoly-era bad habits: overpromising and underdelivering.



  • Reply 11 of 42
    amorph Posts: 7,112 member
    Quote:

    Originally posted by wizard69

    I have a slightly different take on IBM: as a monopoly, they never learned to compete. That is, they never learned the importance of technology for this group of customers (us).



    I understand what you're getting at, but I'd say that they learned to compete by other means. The assumption that "competition" implies "the best product winning in an open market" is naïve at best, and IBM realized that early on.



    In my opinion, their lack of success at low-power and high-yield devices simply comes down to a combination of arrogance and lack of experience. The POWER4 is a great design given its target market, but it teaches you nothing about how to make bushels of CPUs that perform well on a sub-10W budget.



    However, the last year or so of bluster coming from IBM is troubling. In the wake of the 13-year antitrust suit, they'd learned to be (relatively) humble to avoid attracting further attention. I hope they haven't reverted to their old tricks, although given the outcome of the MS antitrust trial, which truly snatched defeat from the jaws of victory, I wouldn't be at all surprised if they've been emboldened again.
  • Reply 12 of 42
    Anyone notice the massive role IBM has in the next generation of consoles? It's producing the Cell technology in the PS3, and it's providing the G5-esque PPC processor for the Xbox 2. They're gonna be rolling in cash in the next few years...
  • Reply 13 of 42
    chagi Posts: 284 member
    Quote:

    Originally posted by wizard69

    I have a slightly different take on IBM: as a monopoly, they never learned to compete. That is, they never learned the importance of technology for this group of customers (us).



    I would disagree with your statement greatly. Microsoft, as the vendor of the operating system on 95% of the computer market, is about as close to a monopoly as you are likely to see in the business world (for now). IBM, on the other hand, must indeed be very competitive in order to maintain their market share in the PC business; if they stop being competitive, their market share is likely to erode rather rapidly.



    My point is that just because a corporation is large does not mean that you can automatically assume it is uncompetitive.
  • Reply 14 of 42
    wizard69 Posts: 13,377 member
    Quote:

    Originally posted by Amorph

    However, the last year or so of bluster coming from IBM is troubling. In the wake of the 13-year antitrust suit, they'd learned to be (relatively) humble to avoid attracting further attention. I hope they haven't reverted to their old tricks, although given the outcome of the MS antitrust trial, which truly snatched defeat from the jaws of victory, I wouldn't be at all surprised if they've been emboldened again.



    At this time I think it is simply a matter of not having a clue when it comes to mass production. As public as the fiasco with the 970FX is, I can't believe there isn't a huge amount of embarrassment at IBM right now. It is not as if it is a secret that IBM couldn't meet the requirements of one of their customers.



    Personally, I hope that Apple has long-term plans to keep Freescale on board. Either that, or they corral somebody else into the PPC fold. If Freescale can't handle high-performance electronics, maybe TI or somebody else can take up the slack. Freescale's offerings for the portable market are pretty good and have been for some time. The problem is that they are just that: pretty good, and no longer examples of the best that can be had. If Freescale can retake that leadership position (high performance, low power) again, I could see Apple staying with them a bit longer.



    Thanks

    dave
  • Reply 15 of 42
    I've been reading a bit here and there about Cell, and I'm pleasantly surprised. The ultimate victory for Cell, in my opinion, would be the death of embedded bloatware. That might not make any sense to you guys, but take a look at a PocketPC. It's the kind of travesty that occurs when businesses think it's too expensive to develop hardware solutions, and instead just get devices that can do everything and write software for them. If getting Cell chips is more or less like getting ASICs, which I think it will be, it will make embedded hardware NOT SUCK.



    Plus, I fully expect that the Macs of the future will use cell or something like it. The current vein of processor design is ready to keel over and die.
  • Reply 16 of 42
    AirSluf Posts: 1,861 member
    Kickaha and Amorph couldn't moderate themselves out of a paper bag. Abdicate responsibility and succumb to idiocy. Two years of letting a member make personal attacks against others, then stepping aside when someone won't put up with it. Not only that, but go ahead and shut down my posting privileges but not the one making the attacks. Not even the common decency to abide by their warning (after three days of absorbing personal attacks with no mods in sight), just shut my posting down and then say it might happen later if a certain line is crossed. Bullshit flag is flying, I won't abide by lying and coddling of liars who go off-site, create accounts differing in a single letter from my handle with the express purpose to deceive and then claim here that I did it. Everyone be warned, kim kap sol is a lying, deceitful poster.



    Now I guess they should have banned me rather than just shut off posting privileges, because Kickaha and Amorph definitely aren't going to like being called to task when they thought they had it all ignored *cough* *cough* I mean under control. Just a couple o' tools.



    Don't worry, as soon as my work resetting my posts is done I'll disappear forever.

  • Reply 17 of 42
    Quote:

    Originally posted by AirSluf

    Cell is going to be a bitch to write software for



    I'd bet money that Cell is much easier to write software for, network structure and all, than x86. As far as I can tell, the min-cost network calculations are in hardware and transparent to the user. Compilers will be made, and Cell has huge potential. The basic idea is clearly the direction of 21st-century computing, regardless of the success of the early modules.



    Quote:

    It is a new paradigm and Sony has not yet put software dev kits in the mainstream.



    Obviously. The chip doesn't exist yet. Furthermore, IBM would probably be sending the dev kits. What's exciting is the idea and the potential. Most people who know agree.



    Quote:

    It is a conscious tradeoff of more, less individually capable units available to the programmers, independently pipelined



    A 5-stage pipeline (or shorter) can be built without any real bloat. A 750 without an FPU or large L2 doesn't dissipate much power. If a Cell has 30 750-class cores like this, that's less than 30W and a lot of computational grunt.
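    That power budget can be sanity-checked with a quick estimate. Every figure below is an assumption for illustration, not a measured spec for any real core:

    ```python
    # Back-of-the-envelope check of the "30 simple cores under 30W" estimate.
    cores = 30
    watts_per_core = 0.8   # assumed draw of a stripped 750-class core (no FPU, small L2)
    fabric_overhead = 1.1  # assumed 10% extra for the on-chip interconnect

    total_watts = cores * watts_per_core * fabric_overhead
    print(round(total_watts, 1))  # 26.4 -- under the 30W ballpark in the post
    ```

    With those assumed numbers the estimate lands comfortably under 30W, though the fabric overhead is the figure most likely to be wrong in practice.
    
    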



    Quote:

    It is a conscious tradeoff of more, less individually capable units available to the programmers, independently pipelined, meaning each unit will be super-linearly slower than you might expect for the Hz too.





    Sure, but you have way more of them, and generally speaking different parts of the same program require different functions of the CPU. So a conditional block, an integer block, and a memory access block working in parallel will be nice.



    Quote:

    It will shine for the specific multi-tasked, multi-simulation and entertainment arena it is targeted at--if you can get enough programmers to care to spend the time to learn how to best use it. That will be hard because there aren't ANY examples to use. That whole area will have to be bootstrapped, and that will take some time.





    The basic PPC instruction set is really pleasant to program in. The min-cost stuff is abstracted. Compilers will follow. Intel wrote great compilers for Itanium. Think of Cell as what Itanium should have been.



    Quote:

    Cell has potential, but that's all it is right now--potential. And that potential has been dumbed down an order of magnitude or more since the original hyperbolic announcements. I will be surprised to see it used much outside the PS 3 and the Sony dev stations for the next 5 years or so. Only if it proves to be vastly superior in real world use then will it have a chance to progress beyond that circle.





    Cell doesn't exist yet. I'm just supremely excited that someone decided to do something different. This would be like reading about MIPS in the early '80s; perhaps even bigger. It's hard to say. People will have to change the software paradigm eventually, since anyone who knows anything about microprocessors will colorfully explain to you that the way we're going now, just pushing more electrons in shorter intervals, is a suboptimal solution. It's a sequential solution. The human brain works in parallel, and is better than any computer today by quite a margin in terms of overall capability. In engineering, much of what we do is mimic nature, and computers shouldn't be an exception.



    As for my excitement about embedded use, it appears that it will be easy for any provider to build many types of custom Cell chips best suited to various tasks. This is great for embedded computing, no doubt about it. Instead of being bottlenecked and having to use, say, a 400MHz XScale even though only one tiny component of my requirements needs that kind of power, I could use a Cell with an encryption block that only draws power when it's in use. That would rock. Using an encryption ASIC is another way to do it, but with Cell it's all on one die, and it's all possible through software. Outside of the embedded realm, power is less of an issue. Lastly, encryption ASICs these days are usually just off-the-shelf MCUs with a program loop in ROM. The Cell solution would be the same from a low-level perspective, but would be easier to implement. (Hardware drivers already done.)







    Just out of curiosity, where are you getting this information? I'm not an expert on CompArch (my focus is on communications, signals, analog, etc.), but generally speaking I do know way more than average about computers, given that I have a degree in EE and have programmed my fair share of embedded devices.
  • Reply 18 of 42
    amorph Posts: 7,112 member
    Quote:

    Originally posted by AirSluf

    Cell is going to be a bitch to write software for. It is a new paradigm and Sony has not yet put software dev kits in the mainstream.



    It doesn't strike me as any newer than, say, Smalltalk.



    The main obstacle will be that, for historical reasons, game designers are used to writing monolithic, single-threaded apps for single-core CPUs. But the idea of massive MP, and the languages and programming techniques suited to massive MP, are fairly old and well-understood. The only real wrench that Cell throws in is that it's asymmetric MP. The main issue would be making the message-passing code as efficient as possible, because the architecture carries a real risk of drowning itself in overhead.



    As I mentioned above, to the extent that individual hardware cells will be standard, developers and platform vendors can build and/or offer software "actors" to match them. Once there, this looks great for games, which characteristically have lots of independent actors simultaneously following relatively simple logic.
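    The "independent actors following simple logic" pattern can be sketched minimally. The Actor class, the mailbox shape, and the attack rule here are all invented for illustration, not any real game or Cell API:

    ```python
    # Each actor owns a mailbox and reacts to its messages on its own turn;
    # there is no shared mutable state between actors.
    class Actor:
        def __init__(self, name, hp=10):
            self.name, self.hp = name, hp
            self.mailbox = []

        def send(self, target, msg):
            target.mailbox.append((self.name, msg))  # pure message passing

        def step(self):
            # The per-tick logic stays simple and local, as the post describes.
            for sender, msg in self.mailbox:
                if msg == "attack":
                    self.hp -= 2
            self.mailbox.clear()

    goblin, knight = Actor("goblin"), Actor("knight")
    knight.send(goblin, "attack")
    knight.send(goblin, "attack")
    goblin.step()
    print(goblin.hp)  # 6
    ```

    Because actors only touch their own state, scheduling them across many small cores becomes a placement problem rather than a locking problem, which is exactly why the model fits an asymmetric MP design.
    
    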
  • Reply 19 of 42
    Quote:

    Originally posted by Amorph

    The main obstacle will be that, for historical reasons, game designers are used to writing monolithic, single-threaded apps for single-core CPUs. But the idea of massive MP, and the languages and programming techniques suited to massive MP, are fairly old and well-understood. The only real wrench that Cell throws in is that it's asymmetric MP. The main issue would be making the message-passing code as efficient as possible, because the architecture carries a real risk of drowning itself in overhead.





    I am recalling something I read a couple of years ago, so please correct me if I am wrong; however, the current PS2 uses more than one processing unit: a DSP chip, a RISC CPU, a GPU, anything else? The difficulty initially was that the APIs required developers to write code for each chip and pass messages (and of course, each chip was a different architecture, requiring even more learning curve).



    I would think that good PS2 development houses would have little trouble with the Cell paradigm.



    EDIT: I forgot to add...



    Back on topic, I really do not see the Cell design as something that will translate to Apple's products. The only benefit from Cell to Apple I can see is what wizard69 referred to: teaching IBM how to deliver mass quantities at low prices.
  • Reply 20 of 42
    amorph Posts: 7,112 member
    Quote:

    Originally posted by atomicham

    I am recalling something I read a couple of years ago, so please correct me if I am wrong; however, the current PS2 uses more than one processing unit: a DSP chip, a RISC CPU, a GPU, anything else? The difficulty initially was that the APIs required developers to write code for each chip and pass messages (and of course, each chip was a different architecture, requiring even more learning curve).



    I would think that good PS2 development houses would have little trouble with the Cell paradigm.




    This is true. Consoles have featured dedicated hardware for years, so this will be nothing new. It'll just be on a smaller scale.



    Quote:

    Back on topic, I really do not see the Cell design as something that will translate to Apple's products. The only benefit from Cell to Apple I can see is what wizard69 referred to: teaching IBM how to deliver mass quantities at low prices.



    The major stumbling block I see is AltiVec, which can't be broken up into pieces without losing a great deal of its power, and which would make a relatively Brobdingnagian cell. Really, it depends on how uptight IBM is about grandfathering a big pile of transistors into their pretty new design (last time this came up, remember, they turned their nose up).



    Besides that, I think this has considerable promise for Apple down the road. They've already hitched their wagon to a message-based OO language and a messaging kernel (even if they've taken some liberties with Mach). If IBM can make AltiVec a "cell" then it shouldn't be hard to assemble a cell-based processor that looks pretty much like a conventional CPU in terms of its aggregate capabilities.



    Where it gets interesting is this: by deconstructing a monolithic core into lots of little cores on a fabric, IBM is requiring any Cell-based software to maintain a fairly high-level logical view of the "CPU" it runs on. From there, it's a small step to distributing the "CPU" over the board as convenient. So, assuming that IBM offers a standard bin of Cell parts, Apple could choose to put a couple of DSP cells on the memory controller. All the software would have to be aware of is that the messages between those cells and the others would be higher latency, and Objective-C and Cocoa already acknowledge low-latency and high-latency messages by analogy (intraprocess and interprocess messaging). Then Apple could offload common transforms to dedicated hardware from a standard parts bin, and offset the increasingly high latency of system buses by distributing the CPU logically.
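    The low- vs high-latency messaging split could be sketched like this. DSPCell, RemoteProxy, and the latency figure are all hypothetical; the point is only that the caller sees one logical interface either way:

    ```python
    import time

    class DSPCell:
        """A local 'cell': messages to it are cheap."""
        def transform(self, samples):
            return [s * 2 for s in samples]

    class RemoteProxy:
        """Same interface, but every message pays a fabric/bus latency."""
        def __init__(self, target, latency_s=0.001):
            self.target, self.latency_s = target, latency_s

        def transform(self, samples):
            time.sleep(self.latency_s)  # model the higher-latency hop
            return self.target.transform(samples)

    local = DSPCell()
    remote = RemoteProxy(DSPCell())
    # Identical results; only the message latency differs.
    out_local = local.transform([1, 2, 3])
    out_remote = remote.transform([1, 2, 3])
    print(out_local == out_remote)  # True
    ```

    This mirrors the Objective-C analogy in the post: the proxy plays the role of interprocess messaging, and software written against the logical interface keeps working when a cell moves off-die.
    
    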



    Maybe I'm barking up the wrong tree, but this could really go somewhere interesting.