32bit / 64bit too much talk.

Posted in Future Apple Hardware. Edited January 2014.
I have a question. I keep reading articles on new processors (AMD/Intel) and getting confused. This does relate to future Apple hardware.



The articles make comments about the Opteron and new Intel processors not being true 64-bit processors, claiming that all they have done is add 64-bit memory addressing.



So I am not clear on what makes a true 64-bit processor. What does?



Which brings me to the question of the G5. How 64-bit is the new G5?



With Intel announcing the addition of 64-bit extensions to their upcoming releases that are very similar to AMD's extensions, and so much contradiction in what I have been reading, I am at a loss now.



Please help me clarify my thoughts.





P.S. My comparison of the three chips is based solely on the fact that they are all 32-bit / 64-bit processors. Or are they?

Comments

  • Reply 1 of 37
    amorph Posts: 7,112 member
    Assuming that a "true 64-bit processor" is one whose instructions only deal with data in 64-bit chunks, there aren't very many of them because they're not very useful.



    The new x86-64 CPUs from AMD and (now) Intel "bolted on" 64-bit support because if they offered a chip that used a purely 64-bit instruction set, no existing x86 app would run on it! Is that a feature?



    Processor design has moved beyond attempts to implement "pure" anything, because the "pure" models inevitably fail to cope with at least one major aspect of reality, which real-world CPUs can't afford to do. It's useful to have lots of instructions that deal with 32-bit data, so CPUs have lots of instructions that deal with 32-bit data. It's just as simple as that.



    Just to drive the point home, there's no clear definition of what a "64-bit" processor is in the first place. It used to refer to the register size (a register is a very small, very fast bit of memory designed to hold a "word," which is classically the smallest size the CPU is designed to work with; modern designs, again, are too complex to hold to this), but that was back when floating point units were separate and optional support chips.

    The 68040 had 80-bit registers for floating point work, and the G4 has 128-bit registers for AltiVec and 64-bit registers for floating point. So the definition has fallen back on the size of the integer registers. In all modern implementations, addresses in memory are stored as integers, so this metric also measures the amount of virtual memory that the CPU can address efficiently. As a convenient coincidence, I'm not aware of any CPU whose integer registers are larger than its FP or vector or other registers, so this shorthand works pretty well in practice.



    So, after all that, a "64-bit CPU" is, for all practical purposes, a CPU that uses 64-bit addresses into virtual memory. A "pure" 64-bit CPU, one which only deals with data in 64-bit chunks, is a pretty idea with limited practical application. The 970, the Nocona Xeon, the Athlon 64, and the Opteron are hybrid 32/64-bit CPUs because that's what works.
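
    To make the hybrid point concrete, here's a minimal C sketch (my own illustration, assuming an LP64 data model like the one 64-bit Unix systems use; the exact sizes depend on the platform and compiler):

        #include <stdio.h>

        int main(void) {
            /* On an LP64 system, pointers and longs are 8 bytes while
               int stays 4: 32-bit and 64-bit data live side by side. */
            printf("int:     %zu bytes\n", sizeof(int));    /* typically 4 */
            printf("long:    %zu bytes\n", sizeof(long));   /* 8 under LP64 */
            printf("pointer: %zu bytes\n", sizeof(void *)); /* 8 under LP64 */
            return 0;
        }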
  • Reply 2 of 37
    Quote:

    Originally posted by Amorph

    Assuming that a "true 64-bit processor" is one whose instructions only deal with data in 64-bit chunks, there aren't very many of them because they're not very useful.



    A "pure" 64-bit CPU - one which only deals with data in 64-bit chunks - is a pretty idea with limited practical application. The 970, and the Nocona Xeon, and the Athlon 64, and the Opteron, are hybrid 32/64 bit CPUs because that's what works.




    So can any of these four chips handle 64-bit data or not?
  • Reply 3 of 37
    cubist Posts: 954 member
    I'd add that a 64-bit processor must have 64-bit-wide registers on which it can perform all ALU operations.



    Even that's not perfect, because the Zilog Z8000 could do 32-bit operations and had a 32-bit bus, but it was considered a 16-bit processor. Hmm.



    I think it really has something to do with data path widths within the processor, and external data bus (not just address) width to memory.
  • Reply 4 of 37
    Apple's G5 processor Architecture page notes a few 'features'





    2^32 versus 2^64 (these are exponents): 2^32 is about 4.3 billion. Numbers that big are hard to get your head around, but you could compare 32-bit processing to a glass of water, and 64-bit processing to Niagara Falls. This lets the G5 work with larger numbers in the same clock cycle for video effects, scientific and 3D calculations; the 32-bit Pentium must split such numbers across multiple cycles.
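
    To put actual numbers on Apple's glass-of-water analogy, here's a quick C sketch (my own, nothing from Apple's page):

        #include <stdio.h>

        int main(void) {
            /* 2^32 distinct values: the 32-bit range. */
            printf("2^32     = %llu\n", 1ULL << 32);            /* 4,294,967,296 */
            /* 2^64 itself overflows a 64-bit integer, so print
               the largest representable value, 2^64 - 1, instead. */
            printf("2^64 - 1 = %llu\n", 0xFFFFFFFFFFFFFFFFULL); /* ~1.8 x 10^19 */
            /* The 64-bit range is 2^32 (about 4.3 billion) times larger. */
            return 0;
        }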



    Hannibal at ARS has some good technical threads about the G5



    And even if the G5 were considered "pure 64," there is no emulation penalty for 32-bit code, according to IBM (which has designed this legacy support into many of the POWER series chips to allow backwards compatibility while expanding the 64-bit design). We must also remember that OS X 10.3 as it currently stands isn't truly a "64-bit OS"; it's considered a bridge environment (mostly 32-bit, with some 64-bit beginning to dominate) until 32-bit support requirements diminish and migration to a more purely 64-bit OS occurs.



    not sure if that just confuses the issue.
  • Reply 5 of 37
    The real point is that most of the time, the 64 bit-ness of the processor means very little to the average consumer. It's a popular buzzword right now, but not much else. There are some applications where true 64-bit support will matter, but they are not nearly as common as folks' compulsion to dwell on the issue would suggest.



    From a hardware design standpoint, it means just about nothing at all. In speculating about the G5 PowerBook, for example, sometimes people talk about what a challenge it is to "fit a 64-bit processor into a laptop." Completely irrelevant; the 64 bit-ness is hidden inside the chip.



    The move from 16 bits to 32 bits was critical because you very often need to deal with numbers greater than 2^16. The move from 32 bits to 64 bits is much fuzzier, though the ability to address a humongous amount of memory will be very valuable for some.



    My guess is that if the 970 were a 32-bit chip with otherwise the same performance, Apple would still have used it. The 64-bitness gave them a marketing hook that they chose to take advantage of, for better or worse.
  • Reply 6 of 37
    amorph Posts: 7,112 member
    Quote:

    Originally posted by oldmacfan

    So can any of these four chips handle 64-bit data or not?



    All of them can.
  • Reply 7 of 37
    johnq Posts: 2,763 member
    Amorph,



    Am I correct in reading into your post that it's "merely" a chicken and egg problem?



    You seem to be saying that there are limited practical applications for a "pure" 64-bit CPU because no one yet develops software for a purely 64-bit CPU?



    So is it a matter of backwards-compatibility alone?



    I guess I'm asking: barring backwards-compatibility, and barring a lack of 64-bit development tools (assuming that), is there anything else that prevents going truly 64-bit?



    I seem to get from this that 32-bit is good enough for most things and that only certain things will "ever" be 64-bit?
  • Reply 8 of 37
    The clearest illustration I know of the advantages of 64-bit is BLAST word length for DNA sequencing.







    parsing larger chunks of data makes a huge difference after a certain threshold



    from here



    cryptography will benefit from access to vastly larger numbers

    certain mathematical, physics and modeling apps will clearly benefit

    massive databases will benefit from this as well as larger memory address space
  • Reply 9 of 37
    amorph Posts: 7,112 member
    Quote:

    Originally posted by johnq

    Amorph,



    Am I correct in reading into your post that it's "merely" a chicken and egg problem?



    You seem to be saying that there are limited practical applications for a "pure" 64-bit CPU because no one yet develops software for a purely 64-bit CPU?




    More to the point, there's not much reason to develop software for a purely 64-bit CPU.



    The transition from 16-bit to 32-bit was made quickly and eagerly because 16 bits only offers you 65,536 values - usually -32,768 to 32,767. That's 64K of memory, or a painfully small range of integers (FP was a luxury back then). 32-bit offers something over 4 billion possible values, which addresses 4GB of RAM and covers just about every other need pretty well. The only place where more possible values were needed (for precision, not for range) was in floating point, and CPUs just used huge special registers for that - as with the 68040, a 32-bit CPU that could do arithmetic on 80-bit floating point values, or the G4, a 32-bit CPU that can do arithmetic on 64-bit floating point values or 128-bit vectors.
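
    A tiny C illustration of how cramped 16 bits is (my own sketch; the wraparound shown is what two's-complement machines do):

        #include <stdint.h>
        #include <stdio.h>

        int main(void) {
            int16_t count = 32000;            /* near the top of the 16-bit range */
            count = (int16_t)(count + 1000);  /* 33000 doesn't fit in -32768..32767 */
            printf("%d\n", count);            /* prints -32536: the value wrapped */
            return 0;
        }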



    But any application that doesn't need more than 4GB RAM, or doesn't have to deal with more than 4 billion of anything, does not need to be 64 bit. Think about how many apps do not need this, and how many never will. Do you really need to enter 5 billion contacts into Address Book? (You really would have the world at your fingertips, I suppose...)



    I should note here that it is possible for a 32-bit application to deal with larger quantities. There are ways around any such limit. You pay a penalty in performance, but then you probably wouldn't expect 100 billion photos to pop up instantaneously anyway. At least, not any time soon.
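
    This is roughly how those workarounds look in practice: a 32-bit program can carry arithmetic across two 32-bit words, the way bignum libraries do. A minimal sketch (the names here are mine, purely illustrative):

        #include <stdint.h>
        #include <stdio.h>

        /* A 64-bit value held as two 32-bit halves. */
        typedef struct { uint32_t lo, hi; } u64pair;

        static u64pair add64(u64pair a, u64pair b) {
            u64pair r;
            r.lo = a.lo + b.lo;
            r.hi = a.hi + b.hi + (r.lo < a.lo);  /* propagate the carry */
            return r;
        }

        int main(void) {
            u64pair a = { 0xFFFFFFFFu, 0 };  /* 2^32 - 1 */
            u64pair b = { 1, 0 };
            u64pair s = add64(a, b);         /* = 2^32, which needs the high word */
            printf("hi=%u lo=%u\n", s.hi, s.lo);  /* hi=1 lo=0 */
            return 0;
        }

    Each 64-bit add costs a few 32-bit instructions instead of one, which is exactly the performance penalty mentioned above.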



    Quote:

    I seem to get from this that 32-bit is good enough for most things and that only certain things will "ever" be 64-bit?



    Exactly. And if there's always going to be a significant set of applications that run better as 32-bit applications, why not accommodate them? You could force them to pure 64-bit, but why? It's not like 32-bit support is an albatross around the neck of CPU developers or anything. A hybrid CPU can run legacy code, can efficiently run applications that are most comfortable as 32-bit, and can also run applications that are most comfortable as 64-bit. This seems to me like a perfectly acceptable state of affairs.



    Put another way, the applications that do require 64 bits wouldn't run a hair better if 32-bit support were eliminated from the 970, the Nocona, the Athlon 64, and the Opteron. But 32-bit applications would run slower in many cases.
  • Reply 10 of 37
    Quote:

    Originally posted by curiousuburb

    The clearest illustration I know of the advantages of 64-bit is BLAST word length for DNA sequencing.







    parsing larger chunks of data makes a huge difference after a certain threshold



    from here



    cryptography will benefit from access to vastly larger numbers

    certain mathematical, physics and modeling apps will clearly benefit

    massive databases will benefit from this as well as larger memory address space




    Careful what you attribute the G5's better performance to... I believe their BLAST implementation uses AltiVec and is fast for that reason (128-bit data chunks).



    Cryptography can probably benefit from AltiVec more than 64-bit integers, although I might be mistaken on that one. Some modeling algorithms might benefit, but most use double precision floating point and don't require a "64-bit processor" (which is why the 970 has 2 FPUs). Massive databases which use 64-bit address certainly benefit, and that is the clearest demonstration of the 64-bit advantage that you listed.



    Note that the designation "64-bit" has nothing to do with internal data path widths, floating point unit widths, vector unit widths, cache line sizes, external data bus sizes, etc. Those are all implementation details of the particular processor. The designation "64-bit" means the size of the integer registers, and therefore (virtual) memory addresses.
  • Reply 11 of 37
    johnq Posts: 2,763 member
    Thank you Amorph, very clear.



    I've slacked since G4 and my interest in this level of detail has become merely armchair-grade. It is fascinating and exciting nonetheless. I appreciate the somewhat rare adult response. I was fearing "WTF!!!!!! Don't you know 32-bit offers something over 4 billion possible values???? NITWIT!!!!" or some-such (not from anyone in particular!)



    I'm thinking that we 'regular users' (that is, everyone short of true scientists/doctors/researchers and students thereof) are reaching a zenith of performance? I mean that the other shoes (sic) to drop are bottleneck issues: RAM, drives, fiber, etc. Basically everything but CPU?
  • Reply 12 of 37
    johnq Posts: 2,763 member
    double post
  • Reply 13 of 37
    mr. me Posts: 3,221 member
    Quote:

    Originally posted by cubist

    I'd add that a 64-bit processor must have 64-bit-wide registers on which it can perform all ALU operations.



    Even that's not perfect, because the Zilog Z8000 could do 32-bit operations and had a 32-bit bus, but it was considered a 16-bit processor. Hmm.



    I think it really has something to do with data path widths within the processor, and external data bus (not just address) width to memory.




    I think that you are confusing things. The Zilog Z8000 had sixteen 16-bit general-purpose registers. These extremely flexible registers could be combined into eight 32-bit or four 64-bit registers. The processor was later expanded to 32-bit internal registers with a six-stage pipeline; that 32-bit version was the Z80000.
  • Reply 14 of 37
    Quote:

    Originally posted by curiousuburb

    The clearest illustration I know of the advantages of 64-bit is BLAST word length for DNA sequencing.







    parsing larger chunks of data makes a huge difference after a certain threshold



    from {picture}



    cryptography will benefit from access to vastly larger numbers

    certain mathematical, physics and modeling apps will clearly benefit

    massive databases will benefit from this as well as larger memory address space




    It seems that the G5 is a true 64-bit processor with 48-bit (edit: it's actually 42-bit) memory addressing, isn't that right? That means that, should the addressing ever grow to the G5's full 64-bit support, the G5 could have a ceiling of:



    18,446,744,073,709,551,616 bytes of addressable data.



    That's about 18,400 petabytes.

    Or 18.4 million terabytes.



    ======

    I think that could leave one in an incomprehensible state for a while. Wipe the drool from your lip. Aaaaaah... a man can dream. Imagine maxing that beast out with 18,446,744,073.7 (about 18.4 billion) sticks of 1GB DDR RAM... wow.

    ======



    64-bit is there for the future. We might not see a direct need for it now, but it will help us in the long run. I mean, going back to the 1970s, there wasn't a need for a 4-digit date field in some computers.



    One thing that I always see with computers is that they always have a ceiling or maximum. Making that ceiling just a little bit farther away will help us to accomplish more.



    As for the DNA models: that kind of computing is needed by few now, but by more later.



    Avg. RAM:

    1970 : ~000,064,000 (bytes)

    2004 : ~512,000,000 (bytes)



    If this is a linear graph, and I haven't plotted anything, we should see by 2034,



    2034 : 4,096,000,000,000



    Computers coming with 4 TB? I don't think that's too much to hope for.



    G5s would still be well within their limits as far as memory addressing is concerned. Every 32-bit processor would have a bottleneck: itself (edit: if using virtual addressing).



    -walloo.
  • Reply 15 of 37
    Quote:

    Originally posted by willywalloo

    It seems that the G5 is a true 64-bit processor with 48-bit memory addressing, isn't that right? That means that, should the addressing ever grow to the G5's full 64-bit support, the G5 could have a ceiling of:





    The G5 is a "true 64-bit processor", but its frontside bus protocol "limits" it to 42-bit physical addresses. This means you're stuck with a lowly 4096 gigabytes of RAM, maximum. Your virtual address space theoretically can be up to about 18 billion billion bytes. I say theoretically because there is usually some other subtle factor that comes into play which would prevent that in practice... perhaps the virtual memory page tables are too large to fit in the physically addressable memory, or something silly like that. In any case it is going to be academic for quite some time yet, as that much memory would cost a truly astronomical amount.
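
    A quick check of those ceilings in C (my own arithmetic, using the 42-bit figure above):

        #include <stdio.h>

        int main(void) {
            /* 42-bit physical addressing: the G5's frontside bus limit. */
            unsigned long long phys = 1ULL << 42;
            printf("2^42 bytes = %llu (= %llu GB)\n", phys, phys >> 30); /* 4096 GB */
            /* 64-bit virtual addressing: the largest address is 2^64 - 1. */
            printf("2^64 - 1   = %llu\n", ~0ULL); /* 18,446,744,073,709,551,615 */
            return 0;
        }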



    Quote:

    64-bit is there for the future. We might not see a direct need for it now, but it will help us in the long run. I mean, going back to 1970's, there wasn't a need for a 4 digit date register in some computers.



    For a lot of things 64 bits will never be needed... one day all software could be written for 64-bit mode, but that will most likely be because 32-bit mode was discarded completely. By that point, however, things will have changed so much that it is all rendered moot anyhow.



    What 64-bit is interesting for is a bunch of scientific computing applications, for software that manipulates huge amounts of data (i.e. big databases), and for applications that will show up only when this hardware is in many people's hands. This is one of those chicken-and-egg problems... why create software that requires 64-bit hardware when there is no 64-bit hardware? Why create 64-bit hardware when there is no software to use it? Well, they've built the hardware, and in the next few years we'll see who can come up with compelling uses for it. Hopefully it's built for the Mac first.
  • Reply 16 of 37
    pb Posts: 4,255 member
    Quote:

    Originally posted by willywalloo



    Avg. RAM:

    1970 : 000,064,000

    2004 : 512,000,000



    If this is a linear graph, and I haven't plotted anything, we should see by 2034,



    2034 : 4,096,000,000,000



    Computers coming with 4 TB? I don't think that's too much to hope for.







    So, you are suggesting to write an equation of the form



    y = a * x + b



    where y is the average RAM in bytes, x the time in years and a, b constant parameters. Using your data for the years 1970 and 2004 one finds:



    a = 15,056,941 (bytes/year)

    b = -29,662,109,770 (bytes).



    So, in year 2034 you will have average RAM:



    y(2034) = 963,708,224 bytes ~ 963 MB.



    With the same law, the suggested amount of 4 TB will occur in year



    x = 274004.
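
    The same fit as a few lines of C, in case anyone wants to check the arithmetic (a quick sketch of the calculation above):

        #include <stdio.h>

        int main(void) {
            /* Two data points: (1970, 64 KB) and (2004, 512 MB). */
            double a = (512000000.0 - 64000.0) / (2004 - 1970); /* ~15,056,941 bytes/yr */
            double b = 64000.0 - a * 1970;                      /* ~-29,662,109,770 */
            printf("y(2034)   = %.0f bytes\n", a * 2034 + b);   /* ~963,708,224 (~963 MB) */
            printf("4 TB in year %.0f\n", (4.096e12 - b) / a);  /* ~274,004 */
            return 0;
        }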



    Am I missing something?
  • Reply 17 of 37
    johnq Posts: 2,763 member
    Quote:

    Originally posted by PB

    With the same law, the suggested amount of 4 TB will occur in year



    x = 274004.





    Will that be "end of summer" 274,004?



    Should I wait for MacWorld 274,005 before I buy my next Mac?



  • Reply 18 of 37
    pb Posts: 4,255 member
    Quote:

    Originally posted by johnq

    Will that be "end of summer" 274,004?



    Should I wait for MacWorld 274,005 before I buy my next Mac?











    Good idea. But if you need it now, don't wait, buy now.
  • Reply 19 of 37
    luca Posts: 3,833 member
    You're looking at it in a linear fashion. With only two data points, you're going from the coordinates of (0,64000) to (34, 512000000). I chose to use "years after 1970" instead of "year" for the x-axis.



    Anyway, if you approximate that, it's basically saying that every 30-odd years, RAM increases by about 500,000,000 bytes. If we add two or three more data points and fit something like an exponential curve to it, it might make more sense. We could look at how often the amount of RAM has doubled in the high-end machines of their day.
  • Reply 20 of 37
    I hate quoting Moore's Law, but since it basically says the transistor count doubles every 18 months, you can see that this is definitely an exponential curve. With your numbers, 1974 was 2^16 bytes and this year is 2^29, so that is 13 doublings in 30 years (instead of the 20 doublings Moore predicted). To reach a terabyte we need 2,048 times more, which is a further 11 doublings... so at the observed rate that will take another 25 years.





    Note that I take issue with your 1974 number. It wasn't until the early '80s that we really had commercial machines with 64K of memory. This would reduce the time period by about 8 years, meaning there were 13 doublings in 22 years... much closer to Moore's prediction of 14-15 in that time period. If we believe Moore instead of you (no offense), then it'll take only another 16 or so years to reach a terabyte.
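
    The doubling arithmetic, as a C sketch (my own; 18 months per doubling is Moore's figure, and ~2.3 years per doubling is the rate computed from the numbers above):

        #include <stdio.h>

        int main(void) {
            /* From 512 MB (~2^29 bytes) to 1 TB (2^40 bytes). */
            double doublings = 40 - 29;                                      /* 11 doublings */
            printf("at 18 months each:  %.1f years\n", doublings * 1.5);       /* ~16.5 */
            printf("at ~2.3 years each: %.1f years\n", doublings * 30.0 / 13); /* ~25.4 */
            return 0;
        }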