128-bit CPU chips: is there a need?

Posted in Future Apple Hardware, edited January 2014
From the first 4-bit chip from Intel, the 4004, to the latest G5 or Opteron, the number of bits in a chip has kept increasing.



64-bit chips seem to be the immediate future of the desktop computer (even if Intel says the contrary). The question is: is there a need for 128-bit chips?



From what I have read here, I doubt they'd offer an advantage: 64-bit chips can already address exabytes of memory and deal with very large numbers. The future seems to be based more on multicore chips than anything else. I may be wrong: does someone have something to say on the subject?

Comments

  • Reply 1 of 31
    svin Posts: 30 member
    I believe there will always be a need for accessing larger amounts of memory. Think huge 3D models for virtual reality engines. I think when games and other forms of virtual entertainment apps begin to max out the RAM of 64 bit chips, we will move to 128 bit, and so on.



    What is the maximum memory that can be addressed by 128-bit chips?



    ap
  • Reply 2 of 31
    barto Posts: 2,246 member
    I can't believe someone is asking this...



    Is there a need right now? No. Will there be a need before some other kind of computing arrives (i.e. quantum)? Maybe. My best estimate would be 50/50.



    Barto
  • Reply 3 of 31
    zapchud Posts: 844 member
    Quote:

    Originally posted by svin



    What is the maximum memory that can be addressed by 128-bit chips?





    340.282.000.000.000.000.000.000.000.000.000.000.000 bytes, or

    316.913.000.000.000.000.000.000.000.000 gigabytes.



    That's quite a lot :P
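
    For a quick sanity check, a throwaway Python snippet (purely illustrative) reproduces those figures:

        max_bytes = 2 ** 128                    # bytes reachable with 128 address bits
        max_gib = max_bytes // 2 ** 30          # binary gigabytes (2^30 bytes each)
        print(f"{float(max_bytes):.6e} bytes")  # ~3.402824e+38
        print(f"{float(max_gib):.6e} GiB")      # ~3.169127e+29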
  • Reply 4 of 31
    anders Posts: 6,523 member
    Is there a need for a 128-bit CPU? Like everything associated with computers, the answer is: eventually.



    ...Unless we are talking about a decent replacement for the Newton
  • Reply 5 of 31
    henriok Posts: 537 member
    There is 128-bitness in the G4/G5 already, since their AltiVec units are 128 bit.

    Transmeta's Crusoe processors work with 128-bit VLIW instructions (32-bit int, 80-bit FP and 64-bit memory).

    GPUs have been 128 bit for a long time... newer GPUs from nVidia and ATI are 256 bit, and the Parhelia from Matrox is 512 bit.



    So... there are applications for processors with more than 128 bits, but I really can't see any application for more than 64-bit memory management for a long, long time. 128-bit memory addressing could probably identify each individual bit in every memory module existing today and for the foreseeable future.
  • Reply 6 of 31
    programmer Posts: 3,467 member
    Look at it this way... it took less than 5 years to go from 8-bit to 16-bit, and less than 5 years after that to reach 32-bit. 32-bit (on the desktop) lasted ~15 years. 64-bit allows 4 billion times the addressing space (not the mere 65-thousand-fold increase that 32-bit allowed), so even accounting for the rate of increasing chip density it ought to last considerably more than 15 years... if Moore's Law of doubling every 18 months is followed we're looking at ~50 years (and assuming that this "law" will hold out for even the next 10 years is assuming a lot).
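
    For what it's worth, here is that back-of-the-envelope estimate spelled out (a rough Python sketch, with the 18-month doubling simply assumed to hold):

        extra_bits = 64 - 32        # address bits gained going from 32-bit to 64-bit
        growth = 2 ** extra_bits    # 4,294,967,296 -- the "4 billion times" figure
        years = extra_bits * 1.5    # one doubling of memory needs per 18 months
        print(growth, years)        # 4294967296 48.0 -- roughly the ~50 years above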



    Note that workstations and supercomputers went 64-bit a couple of decades ago (early 80s), but still haven't moved past it.
  • Reply 7 of 31
    shetline Posts: 4,695 member
    Just to put the address range of a 128-bit computer into perspective...



    Imagine a block of iron where every single atom represents a single addressable bit of memory. A full 128-bit address would be able to access 2^128 bytes, or 2^131 bits.



    That's about 2.722*10^39 bits, in this case 2.722*10^39 atoms of iron.



    How much iron would that be? Take 2.722*10^39 and divide by Avogadro's number, 6.022*10^23, to get moles of iron: 4.521*10^15 moles.



    Multiply by the atomic weight of iron, 55.847, to get grams: 2.525*10^17.



    In other words, about 252 billion (thousand million for you Brits out there) metric tons of iron. The density of iron is about 7.6 metric tons per cubic meter, so a 252 billion ton cube of iron would measure about 3.2 kilometers (2 miles) on each side.



    Now that's big iron.



    My particular example of iron atoms isn't that important. The point is that right now, atomic-scale storage where one atom represents one bit is a long way from being anything more than a laboratory curiosity, and practical 2^131-bit memory technology would require packing not just one bit, but TERAbits of data into single atoms.



    To contrast the enormous difference between 64-bit and 128-bit technology, a "mere" 64-bit address space, 2^64 bytes, or 2^67 bits, using one iron atom per bit, would result in a cube of iron roughly 1.2 mm (less than 1/16 of an inch) in size.
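
    If anyone wants to check my arithmetic, here is the whole thing as a small Python script (using the same rounded constants as above, so it's only approximate):

        AVOGADRO = 6.022e23      # atoms per mole
        FE_MOLAR_MASS = 55.847   # grams per mole of iron
        FE_DENSITY = 7.6e6       # grams per cubic metre, the figure used above

        def iron_cube_side(address_bits):
            """Edge length (metres) of an iron cube holding one atom per addressable bit."""
            bits = 2.0 ** (address_bits + 3)         # 2^N bytes -> 2^(N+3) bits
            grams = bits / AVOGADRO * FE_MOLAR_MASS  # atoms -> moles -> grams
            volume = grams / FE_DENSITY              # cubic metres of iron
            return volume ** (1.0 / 3.0)

        print(iron_cube_side(128))   # ~3215 m: a cube a bit over 3.2 km on a side
        print(iron_cube_side(64))    # ~0.0012 m: a cube roughly 1.2 mm on a side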



    Of course, you don't need anywhere near the full range of 128-bit addressing in order to use such addressing. As soon as just a little more than 2^64 bytes is desirable, it might be easier to simply double the number of addressing bits from 64 to 128, even if only a few additional bits are needed.



    I think I can feel fairly confident in saying, without being at too great a risk of the "if man were meant to fly" syndrome, that needing more than 128-bit addressing is a long, long, long way away... perhaps even worthy of that dangerous word "never". I can perhaps imagine wild sci-fi scenarios using that kind of data capacity, but even then, it doesn't seem it would be practical to treat that much memory as a contiguous address space for one processing unit -- some totally different kind of technology would likely apply.



    [Edit: Copied a few numbers into this post incorrectly at first (10^131 in a few places instead of 10^39), but still had the subsequent numbers and final calculations correct.]
  • Reply 8 of 31
    moogs Posts: 4,296 member
    Yes that's right. I was going to explain it in exactly those terms...

    Shetline you're a genius.





    Actually, I was going to reply along the lines of "128 bit CPUs? Are you out of your fookin' gourd?!" but I think the big iron metaphor is more subtle and therefore will appeal to a wider audience.



  • Reply 9 of 31
    bauman Posts: 1,248 member
    Quote:

    Originally posted by shetline

    Just to put the address range of a 128-bit computer into perspective...



    Imagine a block of iron where every single atom represents a single addressable bit of memory. A full 128-bit address would be able to access 2^128 bytes, or 2^131 bits.



    That's about 2.722*10^131 bits, in this case 2.722*10^131 atoms of iron.



    How much iron would that be? Take 2.722*10^131 and divide by Avogadro's number, 6.022*10^23 to get moles of iron: 4.521*10^15 moles.



    Multiply by the atomic weight of iron, 55.847, to get grams: 2.525*10^17.



    In other words, about 252 billion (thousand million for you Brits out there) metric tons of iron. The density of iron is about 7.6 metric tons per cubic meter, so a 252 billion ton cube of iron would measure about 3.2 kilometers (2 miles) on each side.



    ...snip...




    Erm... 2^131 != 2.722*10^131.



    2^131 = 2.722*10^39.



    Going through your calculations again:

    Dividing by Avogadro's number gets you: 4.5205229747e15



    Oh, looks like it was just a typo in your post. You might want to change that
  • Reply 10 of 31
    shetline Posts: 4,695 member
    Quote:

    Originally posted by bauman

    Oh, looks like it was just a typo in your post. You might want to change that



    I found your correction just after I'd already saved my correction... damn you for not waiting another two minutes!
  • Reply 11 of 31
    moki Posts: 551 member
    There's a difference between a CPU having 128 bit registers (as the G4 has had with its AltiVec core for some time) and a CPU having 128 bit general purpose registers and 128 bit memory addressing.



    The former is in-use and useful right now; the latter (128 bit addressing) isn't particularly useful currently, but who knows about the future.
  • Reply 12 of 31
    amorph Posts: 7,112 member
    It's also worth pointing out that while AltiVec deals with data in 128-bit chunks, and GPUs process even larger chunks, this more closely resembles parallelism than anything else: AltiVec cannot perform an operation on more than 32 bits of data, but it can perform four of those operations at once on a total of 4*32=128 bits of data. The same is true for GPUs, which use 256-bit registers to manipulate four 64-bit values (one each for R, G, B, and the alpha channel) simultaneously.
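
    To make that concrete, here is a toy model of the lane behaviour in plain Python (this is not AltiVec code, just an illustration of "four 32-bit operations packed into one 128-bit value"):

        LANES, WIDTH = 4, 32
        MASK = (1 << WIDTH) - 1

        def pack(values):
            """Pack four 32-bit lanes into one 128-bit integer, lane 0 lowest."""
            reg = 0
            for i, v in enumerate(values):
                reg |= (v & MASK) << (i * WIDTH)
            return reg

        def unpack(reg):
            return [(reg >> (i * WIDTH)) & MASK for i in range(LANES)]

        def lanewise_add(a, b):
            """Add lane by lane; each 32-bit lane wraps on its own, no carry between lanes."""
            return pack([(x + y) & MASK for x, y in zip(unpack(a), unpack(b))])

        r = lanewise_add(pack([1, 2, 3, 0xFFFFFFFF]), pack([10, 20, 30, 1]))
        print(unpack(r))    # [11, 22, 33, 0] -- four independent 32-bit results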



    In the terms in which "64 bits" is meaningful - i.e., as a reference to the size of a discrete value - only 80-bit IEEE floating point (which we had in the 68k!) has ever exceeded 64 bits in hardware, even now.
  • Reply 13 of 31
    harald Posts: 2,152 member
    Boy, we are in danger of going off topic here but ...



    1) Check out the work of British Telecom's futurologists; the ability to achieve teleportation with anything other than photons (OK, and some other clever stuff too) would require storing data on every single particle in an object: where it is, what sort of state it's in (don't mention Heisenberg, I'm on a roll).



    You would need -- NEED -- a shitload of memory to do that.



    2) OK, simulations. I read a great New Scientist article recently about how (if you can avoid the sniff of reductio ad absurdum) recent 'good science' theories related to multiverses would suggest it is statistically more likely that we are living in a simulated universe than a "real" one. You'd need fucking TONS of memory to simulate a universe.



    To make this a non-BS point, number-crunching could reach a point where the amount of, uh, RAM or whatever you'd need to enable a hardcore simulation (no, not like that ... well, OK, like that too) would be enormous.



    3) Backing yourself up. All your memories, your personality, your soul. On BT's roadmap for the next hundred years. Google for Ian Pearson and BT and you'll see.



    You'd need a bunch of DDR10e555555 chips for that kind of activity too, I'd suggest.











    I shouldn't have got drunk at lunch.
  • Reply 14 of 31
    amorph Posts: 7,112 member
    Quote:

    Originally posted by Harald

    Boy, we are in danger of going off topic here but ...



    1) Check out the work of British Telecom's futurologists; the ability to achieve teleportation with anything other than photons (OK, and some other clever stuff too) would require storing data on every single particle in an object: where it is, what sort of state it's in (don't mention Heisenberg, I'm on a roll).




    The last time I was in England, BT couldn't even manage a reliable land line connection. Off topic, they need to spend more time on the ground and less time in the clouds. Wouldn't want their teleporter to cut out in mid-sample, would you?



    Back on topic, there is always a need for more precision. Somewhere in Apple's pages are algorithms to do 128 bit, 256 bit, and arbitrary precision FP using AltiVec. This illuminates the issue: It's not that nobody needs to do 128 bit computations, but that not enough people need to do 128 bit computations quickly enough to make it worth the trouble and die space involved in engineering support into hardware. You can do 128 bit FP on a 4 bit CPU, if you're not concerned about living to see your output. Similarly, you can give a process more than 4GB of RAM on a 32 bit CPU, as long as you're willing to accept a substantial drop in performance.
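
    A minimal sketch of that point (my own illustration, nothing to do with Apple's actual AltiVec routines): a 128-bit add carried out with nothing wider than 64-bit operations, the masking standing in for 64-bit registers since Python integers are arbitrary precision anyway.

        M64 = (1 << 64) - 1     # pretend this is the width of our hardware registers

        def add128(a_lo, a_hi, b_lo, b_hi):
            """Add two 128-bit values held as (low, high) 64-bit halves."""
            lo = (a_lo + b_lo) & M64
            carry = 1 if lo < a_lo else 0       # carry out of the low half
            hi = (a_hi + b_hi + carry) & M64    # carry propagated by hand
            return lo, hi

        lo, hi = add128(M64, 0, 1, 0)           # (2^64 - 1) + 1
        print(hex(hi), hex(lo))                 # 0x1 0x0 -- i.e. 2^64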



    The other side of the issue, of course, is that the wider your registers get, the more space is wasted storing the number 5 in memory, hauling it across the bus, storing it in the CPU, etc. If you double your data size, you halve your bandwidth for all data that fit into the smaller data size. This is probably the main reason why it took so long for 64 bit CPUs to reach the desktop: Desktops, traditionally, are starved for bandwidth. That just changed.



    So, as of right now, there isn't enough need to make 128 bit values native, relative to the trouble and the bandwidth penalty of implementing support for them in hardware.
  • Reply 15 of 31
    shetline Posts: 4,695 member
    Quote:

    Originally posted by moki

    There's a difference between a CPU having 128 bit registers (as the G4 has had with its AltiVec core for some time) and a CPU having 128 bit general purpose registers and 128 bit memory addressing.



    The former is in-use and useful right now; the latter (128 bit addressing) isn't particularly useful currently, but who knows about the future.




    I tried to make clear in my post that I was only talking about address bits, and since we already have chips with 128-bit data handling, I figured that 128-bit addressing was the only interesting "future hardware" issue in question.



    Some cryptographic stuff could easily take advantage of 256-bit data, and that kind of technology (which AFAIK might even exist already) is a fairly simple matter of scaling available tech.
  • Reply 16 of 31
    bunge Posts: 7,329 member
    Quote:

    Originally posted by Amorph

    AltiVec cannot perform an operation on more than 32 bits of data, but it can perform four of those operations at once on a total of 4*32=128 bits of data. The same is true for GPUs, which use 256-bit registers to manipulate four 64-bit values (one each for R, G, B, and the alpha channel) simultaneously.



    I had this thought about perhaps a future 970 (980?) or eventually a 128-bit CPU. That is, a properly built CPU could handle 4 32-bit (or 2 64-bit) instructions simultaneously. I think that's how things will evolve rather than emphasizing increased RAM capabilities.



    Unless of course we have a true breakthrough in RAM technology.
  • Reply 17 of 31
    amorph Posts: 7,112 member
    Quote:

    Originally posted by bunge

    I had this thought about perhaps a future 970 (980?) or eventually a 128-bit CPU. That is, a properly built CPU could handle 4 32-bit (or 2 64-bit) instructions simultaneously. I think that's how things will evolve rather than emphasizing increased RAM capabilities.



    Unless of course we have a true breakthrough in RAM technology.




    The problem is that that sort of instruction-level parallelism is extremely difficult to get out of code, especially in current languages, and for common tasks. Autovectorizing compilers are incredibly primitive and inefficient, and the amount of instruction reordering that would be necessary to have the CPU do the autovectorizing instead is staggering. Right now what you're seeing instead is multiple integer or FP units, and simultaneous multithreading to see that they're used more efficiently. Those accomplish much the same thing, and they do so without the headaches and complexity involved.



    One of the reasons the Itanium was so late arriving is that its design punts all the complexity involved in deciding how to run code to the compiler. The 970 bends over backward to accommodate existing compilers instead, and it's a comparable performer built for much less in much less time (even if you account for the tech borrowed from the POWER4). You really don't want to shunt a lot of complexity over to the compiler. They have enough to do already, and the code they produce (or, at least, that the mainstream compilers produce) is already poor enough relative to hand coding.



    I have a feeling that vector code will remain primarily hand-tooled for a while yet. As long as it is, the sort of parallelism used by VPUs will have to remain segregated from the traditional CPU, as AltiVec is now.
  • Reply 18 of 31
    macnn sux Posts: 36 member
    I think the next logical step is increasing the speed while decreasing the physical mass.



    for instance:



    DIMMs with more memory on each.



    Hard drives with more capacity.



    Drives that combine optical solutions.





    I think the future will be cube-sized computers as the norm. Expandability will not be as big an issue as it once was.
  • Reply 19 of 31
    programmer Posts: 3,467 member
    Quote:

    Originally posted by bunge

    I had this thought about perhaps a future 970 (980?) or eventually a 128-bit CPU. That is, a properly built CPU could handle 4 32-bit (or 2 64-bit) instructions simultaneously. I think that's how things will evolve rather than emphasizing increased RAM capabilities.



    Unless of course we have a true breakthrough in RAM technology.




    The 970 already handles >200 instructions simultaneously. Future 9xx chips will allow those instructions to come from multiple threads, which circumvents some of the problems with waiting on external sources that tend to get in the way of doing this much at once. None of this parallelism is related to the "bitness" of a processor.
  • Reply 20 of 31
    This is a fascinating question, and one where history gives us some guidance.



    It could be argued that the first popular 16-bit minicomputer was DEC's PDP-11 (1970), and that the gap to the first mainstream business desktop was 12 years (1982).



    It took 8 years for the minicomputer to step-change to 32-bit with the introduction of VAX (again from DEC), and the business desktop caught up in the mid-eighties with the introduction of the Intel 386 and MC68020, fuelling the near-death of IBM (the PS/2 and its misbegotten MCA architecture), the rise of Compaq (who went 32-bit and didn't try to own the bus standard) and the maturing of the Macintosh line. Now, the gap between server suite and desktop is only some seven or eight years.



    64-bit arrived with DEC's Alpha 21064 in 1992 (what was it with the guys at DEC, was there nothing to do in Maynard, MA of an evening?), a generational gap of 14 years and a five-year gap from the popularisation (?) of the 32-bit desktop and mainstream server. And now we know that, no matter what side of the street you walk, the mainstream - by which I mean affordable - 64-bit desktop becomes a reality in the second half of 2003, a gap of 11 years from Alpha's release.



    If history repeats itself, 64-bit will become a mainstream desktop and server standard by 2006/7. Sadly, rather than being fuelled by Apple's recent announcements, this step change will be generated by the introduction of Longhorn and the retiring of a million PIII and Xeon servers.



    This is when the high-performance computing R&D teams will start to get bored with stuffing more cores on each die and using high-performance bus technologies to join them together.



    By 2008, teams from HP/Intel and IBM - let's assume that MIPS has died by this time - will start to think about how the next generation of computing will pan out, and I would expect it'll be a decade before we gather to hear one team or another announce the next step change for the server world.



    As for 128-bit on the desktop, you'll have a wait - I would have guessed around 2018/2020.



    Of course, by then the entire manner in which we work and collaborate and socialise will have changed beyond all recognition. It's worth remembering that the first generation of analogue cellphones were introduced seventeen years ago, and the First World would be a wholly different place now had that technology not been invented.



    The concept of the server and the client as we understand it will be a distant memory by 2020. A system powering your home will have the ability to interact with hundreds or thousands of services provided by a grid of interconnected "yellow pages" systems that won't store data, but merely point our 128-bit-capable home to a chunk of network-attached memory somewhere in the ether, so that we can watch a film or listen to every version of Beethoven's Fifth streamed to our home over the 155-megabit link that will be installed in each home in the economically developed world as standard.



    So is it required now? The answer would be No!



    Will we deploy the power intelligently in twenty years? To paraphrase Dr Ian Malcolm in Jurassic Park, "Scientists will find a way".