News: IBM starts up new chip foundry


Comments

  • Reply 101 of 123
brendon Posts: 642 member
    [quote]Originally posted by Programmer:

    <strong>

    I think you're missing the point. The point is that IBM is talking about building game consoles with 32+ processors -- that means they are less than US$500. A $2000 Apple machine could have 256 processors. And this is in the 2005-6 time frame.

    </strong><hr></blockquote>



    Will 256 or 32 processors be memory bandwidth starved?? <img src="graemlins/oyvey.gif" border="0" alt="[No]" /> <img src="graemlins/bugeye.gif" border="0" alt="[Skeptical]" />
  • Reply 102 of 123
animaniac Posts: 122 member
The whole "no need for power" argument is just there to make us feel better about our performance-lagging Macs. Why don't we knock the power of the typical $2000 Wintel machine? Do people who buy a machine at that price point really _need_ 2.53GHz? DDR333 memory? A GeForce4 Ti 4600? No they don't, but they get it, don't they? I want to have the fastest computer, simple as that.
  • Reply 103 of 123
programmer Posts: 3,467 member
    [quote]Originally posted by Brendon:

    <strong>Will 256 or 32 processors be memory bandwidth starved?? <img src="graemlins/oyvey.gif" border="0" alt="[No]" /> <img src="graemlins/bugeye.gif" border="0" alt="[Skeptical]" /> </strong><hr></blockquote>



    There are lots of possibilities here... especially since they have a couple of years to figure it out. My guess is that we're going to see a NUMA-style architecture with shared on-chip memory controllers (i.e. all the cores on a single die will share a memory controller, and multiple chips will communicate via a high speed bus like RapidIO). Certainly it would be foolish to assume that IBM wouldn't address this issue in such a design.
  • Reply 104 of 123
    [quote] The main thing is what "killer apps" does this enable for the consumer? <hr></blockquote>



Hyper-realistic games.

Realtime raytracing in your 3D apps.



    [quote] There is no such thing as "too much computing power" <hr></blockquote>



    This guy knows what he is talking about. Second that!!!



    [quote] and this kind of power is going to arrive on your desk in the not-too-distant future (at least if IBM can deliver on the promise of its Cell architecture). <hr></blockquote>



    I hope so.



There is a new race, the TeraFlop race. Is Apple going to take on the challenge, or are they just going to make neat little apps?
  • Reply 105 of 123
    [quote]Originally posted by Programmer:

    <strong>

    As for the "average Joe" not needing such computing power... hogwash. The main thing is what "killer apps" does this enable for the consumer? Sure the things you're doing nowadays don't require this kind of computing power, but if this kind of computing horsepower was commonly available what sort of software would come out to use it? I don't have an answer right now, but only because I haven't started thinking about it. </strong><hr></blockquote>



But won't these systems suffer from the same lack of support as other modern CPU enhancements? The problem is that so few programmers and engineers are adequately trained to use, let alone design for, vector units, SMP systems, etc. And the tools for writing this software are weak as well.



    It makes sense for consoles - their software base is so small and controlled. But linux? Dear god, how on earth would you get anything but a handful of useful products out of that community for a 256 processor system?
  • Reply 106 of 123
outsider Posts: 6,008 member
    [quote]Originally posted by Programmer:

    <strong>



    There are lots of possibilities here... especially since they have a couple of years to figure it out. My guess is that we're going to see a NUMA-style architecture with shared on-chip memory controllers (i.e. all the cores on a single die will share a memory controller, and multiple chips will communicate via a high speed bus like RapidIO). Certainly it would be foolish to assume that IBM wouldn't address this issue in such a design.</strong><hr></blockquote>



I think the on-die bus between the cores and the memory controller would be wide, fast and proprietary. I always assumed RIO would be an external (on the motherboard PCB) bus connecting controllers/processors. Come to think of it, IBM already has an on-die connection scheme called CoreConnect that supports a 256-bit-wide bus at high speeds. Most likely multiple Cell modules would be connected via RIO...
  • Reply 107 of 123
outsider Posts: 6,008 member
I think Apple's part in all of this is to make a really optimized compiler to take advantage of all the new cores at its disposal. And OS X making use of a myriad of threads would be helpful too. Imagine an Xserve webserver handling all those simultaneous Java requests with lightning speed. If Apple makes the right tools available to developers, then code can slowly migrate to the heavily parallelized paradigm that will be the future of software design.
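To make the "myriad of threads" idea concrete, here is a toy sketch (Python, purely for illustration; `serve` and the request strings are made up, and obviously not what an Xserve actually runs) of a pool of worker threads draining a shared queue of requests, the way a heavily threaded server could spread work across many cores:

```python
import threading
import queue

def serve(requests, n_workers=4):
    """Drain a queue of requests with a small pool of worker threads."""
    q = queue.Queue()
    for r in requests:
        q.put(r)

    results = []
    lock = threading.Lock()  # protect the shared results list

    def worker():
        while True:
            try:
                r = q.get_nowait()
            except queue.Empty:
                return  # queue drained; this worker is done
            handled = f"handled:{r}"  # stand-in for real request handling
            with lock:
                results.append(handled)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

With more cores (and a runtime that schedules threads onto them), the same code simply gets more workers running at once.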
  • Reply 108 of 123
    Don't get your hopes up.



Keep in mind that this is NOT being designed as a general CPU architecture. It's more like a big DSP or SIMD/Altivec controller. Essentially, what SIT (Sony, IBM, Toshiba) are doing is developing a generalized graphics processor, not a generalized processing unit.



Think of it as one smokin', programmable graphics accelerator (video card). It is not a PPC, or POWER chip, OK?



You wouldn't expect to run a Mac on an NVidia GeForce core, would you? Yet it supports DDR, has HUGE performance (at specialized tasks) and does tremendous FP. So why would you think that a next-generation graphics/media core would be a suitable substitute? Never mind that it hasn't even been taped out yet, much less sampled.



    There's more info here:

<a href="http://arstechnica.com/wankerdesk/3q02/playstation3.html" target="_blank">http://arstechnica.com/wankerdesk/3q02/playstation3.html</a>
  • Reply 109 of 123
outsider Posts: 6,008 member
    [quote]Originally posted by Tomb of the Unknown:

    <strong>

Think of it as one smokin', programmable graphics accelerator (video card). It is not a PPC, or POWER chip, OK?

</strong><hr></blockquote>



And you know this how? I'll tell you what I think: you really don't know a thing about the Cell architecture. And neither does that guy at Ars. And he doesn't pretend to; he just says it is probably based on RISC or VLIW. So don't make assumptions. If anything it WILL be based on assets held by IBM, including but not limited to the PowerPC architecture.
  • Reply 110 of 123
blackcat Posts: 697 member
    [quote]Originally posted by Tomb of the Unknown:

    <strong>Think of it as one smokin', programable graphics accelerator (video card). It is not a PPC, or POWER chip, OK?</strong><hr></blockquote>



But that's the whole point: several sources suggest Cell is indeed a PPC derivative, just as POWER is. IBM is involved, they are PPC builders, Sony & Toshiba have the graphics know-how and such from the PS2, plus Sony has seen what a 466MHz G4 can do in the GameCube.



    Also, making Cell PPC based will make development of test systems easier as the basic ISA is well known within IBM.



    Either way we won't know for 2 years...
  • Reply 111 of 123
programmer Posts: 3,467 member
With all due respect to the guy at Ars Technica, he doesn't know any more than the rest of us. The "Cell" structures discussed in IBM's architectural papers are general-purpose processors, although they needn't necessarily be. And frankly, it doesn't matter a whole lot. With this level of parallelism there really needs to be a paradigm shift where large computations and their data are "described" and then sent off to be computed on whatever resources happen to be available. If (when?) this kind of mechanism is introduced, it doesn't matter if the compute engine is a PowerPC, a GPU, a Cell machine, or a network cluster (real-time requirements notwithstanding). This is the kind of machine we can expect in the next few years, and if Apple is going to survive in the hardware market, they will need to have it in their machines. Since IBM and Apple seem to be on reasonably good terms these days, it doesn't seem impossible that this is the direction things will go. Further, if the Cell is built out of general-purpose processors, IBM's obvious choice of ISA is PowerPC. All considered, Apple is in a pretty good position to ride the next technological wave.



As for software developers jumping on the bandwagon... a funny thing happens as hardware gets more and more capable: it becomes easier to do things in software. Ten years ago it was hard to do 3D graphics; now practically every game has 3D graphics, even if it's not a 3D game. A couple of years ago hardly any software was threaded; now there is quite a bit of it. The tools will get built, and people will figure out what to do with them -- being freed from the old limitations is very empowering. It takes time to learn the new limitations, but a jump from gigaflops to teraflops holds lots of interesting possibilities.



It's interesting to see what people think of when this kind of compute power is discussed. Yes the games will be amazing, and the 3D rendering will be awesome... but that is really just an improvement to stuff we do already. What about all the stuff that we don't currently even try to do that is made possible by a 100x improvement in computation? Look at the largely unfulfilled promise that Sony made for the PS2 -- they put in this thing called the "Emotion Engine" which was supposed to make the games' behaviour and intelligence better. It wasn't even close to enough to do that... but they are trying again with the PS3, and this time it will probably work. There is nothing game-specific about this technology, so it's just a matter of time after you put it into the hands of developers (professional & amateur) before somebody somewhere starts using it in new ways. Or should I say, ways we're not used to? There are thousands of research papers out there about stuff that is going to be revisited in light of the new performance capabilities.



    Wow, I sound like I'm preaching! Really though, we're not talking about another 500 MHz speed bump... this is like the leap from '90 to today, but it builds on what we have today.
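Programmer's "describe the computation and send it off" idea can be sketched in modern terms (Python here, purely for illustration; `shade` is a made-up stand-in for an expensive per-pixel computation): the caller just maps a pure function over its data and lets a pool decide which workers run each piece, without the application code caring where the compute happens.

```python
from concurrent.futures import ThreadPoolExecutor

def shade(pixel):
    """Stand-in for an expensive per-element computation."""
    return (pixel * 31) % 255

def render(pixels, workers=8):
    """Describe the work (function + data); let the pool schedule it."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map preserves input order, so the result reads like the
        # sequential version even though pieces ran concurrently
        return list(pool.map(shade, pixels))
```

The same shape works whether the executor dispatches to threads, processes, or (in principle) a cluster -- which is exactly the point about the compute engine not mattering.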
  • Reply 112 of 123
    "Wow, I sound like I'm preaching! Really though, we're not talking about another 500 MHz speed bump... this is like the leap from '90 to today, but it builds on what we have today."



    Easy, tiger!



    (That's the spirit...)



    Lemon Bon Bon
  • Reply 113 of 123
yevgeny Posts: 1,148 member
    [quote]Originally posted by Outsider:

<strong>I think Apple's part in all of this is to make a really optimized compiler to take advantage of all the new cores at its disposal. And OS X making use of a myriad of threads would be helpful too. Imagine an Xserve webserver handling all those simultaneous Java requests with lightning speed. If Apple makes the right tools available to developers, then code can slowly migrate to the heavily parallelized paradigm that will be the future of software design.</strong><hr></blockquote>



    1. Apple doesn't make compilers and hasn't for some time. OS X 10.2 uses GCC.



2. An optimizing compiler cannot make an application use threads if it doesn't already. This is so far beyond the reach of compiler technology that it is pure fantasy.



3. Apps that are very thread-heavy will see a good speed bump. Apps that simply spin off some worker code as a thread will see a much smaller speed bump. If an app only creates a maximum of three threads, then it doesn't matter whether you have 4 CPUs or 256.



4. Not all code can be parallelized (or easily parallelized). Rendering is a good example of code that can be done in parallel. Not every program can see a speed boost. I wrote a bit about this in another thread (no pun intended).



5. Bandwidth is a serious issue. I assume that if IBM was smart enough to see that bandwidth was a serious issue for the POWER4, then they are smart enough to see that bandwidth is an even more serious issue for Cell.



6. Sony needs to get a clue and ship a real SDK for the PS3, unlike the bare-bones APIs that it ships for the PS2.



    Back to work I go...



[ 08-07-2002: Message edited by: Yevgeny ]
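Yevgeny's points 3 and 4 are essentially Amdahl's law: the serial fraction of a program caps the speedup no matter how many processors you throw at it. A one-liner (Python, for illustration) makes the ceiling concrete:

```python
def speedup(p, n):
    """Amdahl's law: parallel fraction p, n processors.

    The serial fraction (1 - p) limits the benefit regardless of n.
    """
    return 1.0 / ((1.0 - p) + p / n)
```

An app that is only 75% parallelizable tops out below 4x even on 256 processors, while a fully parallel app scales with the processor count.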
  • Reply 114 of 123
amorph Posts: 7,112 member
    *ahem*



I'll point those curious about how to program such a beast at <a href="http://forums.appleinsider.com/cgi-bin/ultimatebb.cgi?ubb=get_topic&f=3&t=001304" target="_blank">my little thread in Software</a>. It looks like it's a solved problem, but developers will have to wean themselves off conventional languages, nearly all of which are essentially linear, with parallelism tacked on as an afterthought.



    But if the hardware guys can get used to building clockless chips, the software guys can get used to writing for massive parallelism. This is the future.
  • Reply 115 of 123
nevyn Posts: 360 member
Apple's compiler is basically GCC. There's been a lot of work to keep it as close to the FSF's main branch as possible.



    The Cell seems cool enough that there'd be a lot of people interested in optimizing the bejeezus out of it. Especially with the source to the compiler available.



    So it seems like it would be more readily adopted/fully utilized than AltiVec was.
  • Reply 116 of 123
programmer Posts: 3,467 member
    [quote]Originally posted by Nevyn:

    <strong>So it seems like it would be more readily adopted/fully utilized than AltiVec was.</strong><hr></blockquote>



    Wouldn't it be cool if the Cell machines turned out to be multi-core PowerPCs with VMX?
  • Reply 117 of 123
    [quote]Originally posted by Programmer:

    <strong>



    I think you're missing the point. The point is that IBM is talking about building game consoles with 32+ processors -- that means they are less than US$500. A $2000 Apple machine could have 256 processors. And this is in the 2005-6 time frame.

    </strong>

    <hr></blockquote>



Sorry, but your logic may be flawed here. Game consoles like the PS2 are typically sold at a loss, but the money they make off the games more than makes up for it. Apple, however, cannot do this.



[ 08-07-2002: Message edited by: apple.otaku ]
  • Reply 118 of 123
davegee Posts: 2,765 member
    [quote]Originally posted by apple.otaku:

    <strong>



    Sorry but your logic is flawed here. Game consoles like the PS2 are actually sold at a loss but the money they make off the games more than makes up for it. Apple cannot do this however.



    [ 08-07-2002: Message edited by: apple.otaku ]</strong><hr></blockquote>



A loss?!?! Okay ***MAYBE*** a slight loss, but even then I doubt it... The way you're talking (where it would make Programmer's logic flawed), the console makers would have to be selling their boxes at a HUGE $500+ per-box loss...



    Do they make GOBS of profit from just a console sale? Heck no but they ain't selling them at a major loss either.



    Logic may be flawed but not where you think.
  • Reply 119 of 123
    [quote]Originally posted by apple.otaku:

    <strong>



    Sorry but your logic is flawed here. Game consoles like the PS2 are actually sold at a loss but the money they make off the games more than makes up for it. Apple cannot do this however.



    [ 08-07-2002: Message edited by: apple.otaku ]</strong><hr></blockquote>



    No flammage but ...



    Before you make blanket statements like this, maybe you should verify the current state of "consoles". Just try a little Google first.



<a href="http://www.actsofgord.com/Proclamations/chapter02.html" target="_blank">Acts of Gord ... The PS2</a>
  • Reply 120 of 123
    [quote]Originally posted by Blackcat:

    <strong>But that's the whole point, several sources suggest Cell is indeed a PPC derivative, just as POWER is.</strong><hr></blockquote>

The PowerPC chip architecture was derived from the POWER spec, not vice versa. And I don't know what sources you're referring to, but at a guess, I'd say they're just guessing themselves. Just because the idea is to use multiple cores doesn't mean it was derived from the POWER spec. Hell, IBM also invented Token Ring; we might as well speculate it's derived from that technology.



    But as described, the Cell architecture is a fairly radical departure from anything that has gone before it, with the exception perhaps of Sun's MAJC. Even if it is "derived from" the POWER architecture, that doesn't make it a CPU.

[quote]<strong>IBM is involved, they are PPC builders, Sony & Toshiba have the graphics know-how and such from the PS2, plus Sony has seen what a 466MHz G4 can do in the GameCube.</strong><hr></blockquote>

    And according to the CNet article they wanted better performance than any of these, so why would they base the design on them?

    [quote]<strong>Also, making Cell PPC based will make development of test systems easier as the basic ISA is well known within IBM.</strong><hr></blockquote>

    That must be why we won't see commercial versions of the chip for two years even though it is almost "taped out" and they've been playing with components of the design for some time now?



    With all due respect to you and Programmer, you seem to be ignoring the fact that what IBM is currently contracted to develop is the heart of the PS3. Despite the tremendous hype around this processor, they aren't touting it as the next generation in general computing, they're bragging about video streaming, network throughput, graphics rendering -- DSP functions for crying out loud.



So OK, this will make for one great chipset or SoC, able to handle everything from 3D graphics to network I/O to video to joysticks and other input devices. But by the time it's available and they've developed APIs for it, etc., your basic Radeon or GeForce may be faster thanks to incremental improvements in their core graphics engines.



    I'm fairly certain it will NOT be a CPU (PPC derivative or not) nonetheless.



[ 08-07-2002: Message edited by: Tomb of the Unknown ]