128-bit CPU chip: is there a need?


Comments

  • Reply 21 of 31
    vinney57 Posts: 1,162 member
    Great thread!



    I could foresee a situation where two things occur:

    Massively increased memory capacities via holographic or biological technologies (or whatever), and ubiquitous very high-speed datalinks. This leads to the era of data grids and memory farms. Wouldn't 128-bit addressing allow the mapping of the entire 'datasphere' onto an individual's local machine?
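
    Purely for a sense of scale (an illustration, not part of the original post), a rough back-of-the-envelope sketch in Python of what a flat, byte-addressable 128-bit space could cover:

        # Rough scale of a 128-bit address space, assuming byte addressing.
        addresses = 2 ** 128
        print(f"{addresses:.2e} bytes")               # ~3.40e+38 bytes
        print(f"{addresses / 2**80:.2e} yobibytes")   # ~2.81e+14 YiB
        # For comparison, a full 64-bit space is "only" 16 exbibytes (~1.8e19 bytes).

    So a 128-bit space is vastly larger than any plausible 'datasphere'.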
  • Reply 22 of 31
    bunge Posts: 7,329 member
    Quote:

    Originally posted by Programmer

    None of this parallelism is related to the "bitness" of a processor.



    Naturally. My thinking is about a way to utilize the "bitness" of a processor that hasn't really been considered before.



    As for the current problems with compiling, four programs running simultaneously wouldn't necessarily need to take advantage of the four 32-bit channels if the processor grabbed the instructions independently.



    Just because what I'm saying basically exists in AltiVec doesn't mean it'll come to pass. Just a thought.
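
    For what it's worth, a minimal Python sketch (an illustration, not anything from the posts above) of the packing side of this idea: a 128-bit AltiVec-style register is simply four 32-bit lanes side by side. Whether those lanes could carry four independent instruction streams, rather than one instruction applied to four values, is the open question.

        # Four independent 32-bit values packed into one 128-bit quantity,
        # which is essentially what an AltiVec vector register holds.
        import struct

        a, b, c, d = 1, 2, 3, 4
        vec = struct.pack("<4I", a, b, c, d)   # 16 bytes == 128 bits
        assert len(vec) * 8 == 128
        print(struct.unpack("<4I", vec))       # (1, 2, 3, 4)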
  • Reply 23 of 31
    neutrino23 Posts: 1,562 member
    I suspect it will be a long time until we find enough need to implement a 128-bit address space computer. I suspect that the future of computing will be in large clusters of 64-bit computers like the IBM 970.



    When you start looking at very large data sets like that, the time just to access each point adds up to a very large value.



    Just accessing 2^64 memory words sequentially, with a 1 ns read time per word (rather fast), takes about 1.8e10 seconds, or a little less than 600 years.



    A 128-bit address space computer with the same fast 1 ns memory would take more than the age of the universe to do a sequential memory check.



    Who can wait that long just for a memory check to run before booting the OS?
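
    To sanity-check the figures above, a minimal sketch in Python (assuming a flat 1 ns per word and ignoring bandwidth, parallelism, and caching):

        # Sequential scan time for 2^64 and 2^128 words at 1 ns per word.
        NS_PER_WORD = 1e-9
        SECONDS_PER_YEAR = 3.15e7   # roughly 365 days

        for bits in (64, 128):
            seconds = (2 ** bits) * NS_PER_WORD
            print(f"2^{bits}: {seconds:.2e} s = {seconds / SECONDS_PER_YEAR:.2e} years")

        # 2^64  -> ~1.8e10 s, a little under 600 years
        # 2^128 -> ~3.4e29 s, about 1e22 years, versus a universe roughly 1.4e10 years old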
  • Reply 24 of 31
    bauman Posts: 1,248 member
    OK, I may just be really obtuse, but why is everyone segregating RAM and storage in future models? Isn't RAM there only because mass storage tech isn't fast enough?



    The way I foresee things is solid state storage. You can snag 1 GB CF cards now for under 200 bucks; that's down from 700 two years ago. I could see solid state storage completely removing the need for RAM, allowing for smaller, faster, lower-power computers.



    Of course, this isn't based on anything but the ideas floating around in my head, so I may be missing some important elements. Take it as you will.
  • Reply 25 of 31
    shetline Posts: 4,695 member
    Quote:

    Originally posted by neutrino23

    A 128-bit address space computer with the same fast 1 ns memory would take more than the age of the universe to do a sequential memory check.



    Who can wait that long just for a memory check to run before booting the OS?




    You may have just stumbled upon the deepest philosophical and metaphysical question of our time.
  • Reply 26 of 31
    macnn sux Posts: 36 member
    Quote:

    Originally posted by bauman

    The way I foresee things is solid state storage. You can snag 1 GB CF cards now for under 200 bucks; that's down from 700 two years ago. I could see solid state storage completely removing the need for RAM, allowing for smaller, faster, lower-power computers.



    Of course, this isn't based on anything but the ideas floating around in my head, so I may be missing some important elements. Take it as you will.




    I believe some companies already do this. I read it in an article, but obviously these companies are very high-end and depend on critical performance.



    I think we won't see it soon, because 40 GB is not that much anymore. We routinely expect 100-180 GB drives at a reasonable price. Reverting to a solid state 10 GB machine would be tough.



    HOWEVER, why not just keep the OS entirely in a protected part of RAM? If RAM prices sink so that 8 GB is routine on G5s... why not just install OS X in RAM? Then we would have a truly snappy OS.



    It makes sense to me. Do you folks see any problems with this, though?
  • Reply 27 of 31
    neutrino23 Posts: 1,562 member
    I guess if you installed 8 GB or more of RAM in a G5, OS X would automatically use it to cache the HD. In other words, yes, it would work, and you wouldn't have to do anything more than install the memory.



    Conversely, this is the reason OS X slows down so much when you try to run it in 256 MB or less.



    [edit] Regarding flash memories: they are much slower than DRAM (especially for writes), and they have a limited number of write cycles.



    Possibly in the near future there will be another, faster technology for non-volatile RAM. Several companies claim to be distributing samples, but no one is in production yet.
  • Reply 28 of 31
    Wired magazine had a cover story a while back about crazy new data storage technologies. On the cover was a medicine bottle with some powder in it, and it said something like "this powder can store *insert flabbergastingly huge amount here* of data." Anybody remember this?



    Fascinating thread.
  • Reply 29 of 31
    overtoasty Posts: 439 member
    Quote:

    Originally posted by Amorph



    One of the reasons the Itanium was so late arriving is that its design punts all the complexity involved in deciding how to run code to the compiler.




    ... ya know, I read this guy, I learn stuff ...



    Can you give me an example where things have been inappropriately foisted upon the compiler? And where hand coding is - for now - really the only solution?



    thanks in advance



    Mr. Curious

  • Reply 30 of 31
    programmer Posts: 3,458 member
    Quote:

    Originally posted by OverToasty

    ... ya know, I read this guy, I learn stuff ...



    Can you give me an example where things have been inappropriately foisted upon the compiler? And where hand coding is - for now - really the only solution?





    It's all a matter of opinion, really. There are tradeoffs in every design, and it's hard to attribute problems to specific things, because the alternatives aren't a simple change; they have impacts well beyond the obvious one.



    The Itanium's EPIC design is incredibly complex, despite foisting the additional complexity onto the compiler. Somehow they have managed to achieve both massive hardware and software complexity, and have made it impossible for humans to hand-code the instructions. The compilers are so tightly coupled to the hardware that each revision of the processor requires a new compiler and all of the software to be recompiled -- from what I have read, this is not just to make it efficient, it is to make it correct! Contrast this with the PowerPC line, which is almost completely code-compatible all the way back to the 601 (indeed, to some extent, the original POWER). The new 970 probably does a pretty good job of running the original code.
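
    To make the "foisting onto the compiler" point concrete, here is a toy Python sketch (purely illustrative; not Itanium's actual bundle format or scheduling rules) of the kind of work an EPIC-style compiler must do: find independent operations and pack them into fixed-width issue bundles, because the hardware will not reorder them at run time.

        # Toy compiler-side bundling: group instructions into fixed-width
        # bundles, never bundling an op with one whose result it needs.
        BUNDLE_WIDTH = 3   # illustrative issue width

        # Each "instruction" is (destination register, set of source registers).
        program = [
            ("r1", {"r0"}),        # r1 = f(r0)
            ("r2", {"r0"}),        # r2 = g(r0)   -- independent of r1
            ("r3", {"r1", "r2"}),  # r3 = r1 + r2 -- depends on both
            ("r4", {"r0"}),        # r4 = h(r0)   -- independent again
        ]

        def bundle(instrs, width=BUNDLE_WIDTH):
            bundles, current, written = [], [], set()
            for dest, srcs in instrs:
                # Start a new bundle if this op reads a result produced in
                # the current bundle, or if the bundle is already full.
                if srcs & written or len(current) == width:
                    bundles.append(current)
                    current, written = [], set()
                current.append(dest)
                written.add(dest)
            if current:
                bundles.append(current)
            return bundles

        print(bundle(program))   # [['r1', 'r2'], ['r3', 'r4']]

    Get the grouping wrong on a machine like that and the code isn't merely slow, which matches the "not just efficient but correct" point above.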
  • Reply 31 of 31
    powerdoc Posts: 8,123 member
    Quote:

    Originally posted by shetline

    I tried to make clear in my post that I was only talking about address bits, and since we already have chips with 128-bit data handling, I figured that 128-bit addressing was the only interesting "future hardware" issue in question.







    Yes, the question was about 128-bit data addressing and 128-bit general purpose registers. It did not refer to GPUs either.



    Your iron cube analogy was amazing.