32 bit vs. 64 bit

Posted:
in Current Mac Hardware edited January 2014
Would anyone versed in the subject mind explaining the pros and cons of a 64-bit processor in relation to a 32-bit processor? What, if any, impact would be observed if the processor is a dual-core?



Thanks for helping me clear this up



PS I know next to nothing, so build from the basics

Comments

  • Reply 1 of 23
    dbug7 Posts: 11 member
    64-bit can address more memory than 32-bit?

    Correct me if I'm wrong...
  • Reply 2 of 23
    brussell Posts: 9,812 member
    It's twice as many bits, but each extra bit doubles the address space, so a 64-bit address reaches 2^32 times as much memory, not twice as much.
  • Reply 3 of 23
    telomar Posts: 1,804 member
    64-bit processors can operate on integers up to 64 bits wide, whether for arithmetic or for memory addressing. x86 processors also gain an additional 8 general-purpose registers, but that comes from the architectural changes in IA-32e (x86-64), not from the 64-bit word size itself.



    The downside is increased memory usage, since pointers and pointer-heavy data structures double in size.
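    As a rough, hypothetical sketch (the node layout and the 16-byte payload below are illustrative assumptions, not measurements from any real program), here is why pointer-heavy data grows under 64-bit: every pointer doubles from 4 bytes to 8.

    ```python
    # Illustrative only: estimate memory for a singly linked list whose
    # nodes hold a fixed payload plus one "next" pointer.

    POINTER_32 = 4  # bytes per pointer on a 32-bit machine
    POINTER_64 = 8  # bytes per pointer on a 64-bit machine

    def list_bytes(n_nodes, pointer_size, payload=16):
        """Total bytes for n_nodes nodes of (payload + one pointer)."""
        return n_nodes * (payload + pointer_size)

    million = 1_000_000
    print(list_bytes(million, POINTER_32))  # 20000000
    print(list_bytes(million, POINTER_64))  # 24000000
    ```

    Same data, 20% more memory in this sketch; structures that are mostly pointers fare worse.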
  • Reply 4 of 23
    cosmonut Posts: 4,872 member
    A dual-core chip generally makes more efficient use of the system bus and cache than two separate processors do, from what I understand.
  • Reply 5 of 23
    lundy Posts: 4,466 member
    Quote:

    Originally posted by Animal Farm

    Would anyone versed in the subject mind explaining the pros and cons of a 64-bit processor in relation to a 32-bit processor? What, if any, impact would be observed if the processor is a dual-core?



    Thanks for helping me clear this up



    PS I know next to nothing, so build from the basics




    The "bitness" of a processor refers to



    1) how large an integer it can do arithmetic on at one time

    2) how big a number it can hold to indicate a memory address



    So when we had 8-bit processors (the Apple ][, for example), they could only do arithmetic on numbers from -128 to 127 (2^8 = 256 values) without having to do it in more than one step.



    16-bit processors (IBM PC) could do arithmetic on -32768 to 32767 without having to do it in more than one step.



    32-bit processors can do arithmetic on integers up to 2^32 - 1, and they can also hold the address of any memory location in a 4 GB range.
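    Those ranges are just two's-complement arithmetic, and can be checked in a few lines of Python (the widths are the ones from the examples above):

    ```python
    # Smallest and largest signed integers a w-bit processor can handle
    # in one step, plus the 32-bit address-space limit.

    def signed_range(bits):
        """Two's-complement range for a given register width."""
        return -(2 ** (bits - 1)), 2 ** (bits - 1) - 1

    print(signed_range(8))   # (-128, 127)
    print(signed_range(16))  # (-32768, 32767)
    print(signed_range(32))  # (-2147483648, 2147483647)

    # A 32-bit address reaches 2**32 bytes, i.e. exactly 4 GB:
    print(2 ** 32 // 2 ** 30)  # 4
    ```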



    Since it is very rare to need to do arithmetic on INTEGERS larger than 2^32 (anything larger usually does fine in scientific notation, also known as floating point), the projects that need more than 32 bits are few and far between.



    However, many large-scale scientific and math projects may want to address more than 4 GB of data. In that case, a 64-bit processor will allow them to put 8, 16, 32, etc. GB in the computer and represent the address of any of those memory locations in a single number. In actual practice, the entire 64 bits is never used for the address, as 2^64 bytes is 16 exabytes, far more RAM than any machine will hold for a long time. But using, say, 36 or 42 bits for the address would not be out of the question in the near future.
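    The address-space arithmetic can be checked directly; 36 and 42 bits are the example widths from the paragraph above, and 64 is the theoretical ceiling:

    ```python
    # Memory reachable with a given number of address bits, in GB.

    def addressable_gb(address_bits):
        """Addressable memory in GB (1 GB = 2**30 bytes)."""
        return 2 ** address_bits // 2 ** 30

    print(addressable_gb(32))  # 4
    print(addressable_gb(36))  # 64
    print(addressable_gb(42))  # 4096
    print(addressable_gb(64))  # 17179869184
    ```

    The full 64 bits reach 16 exabytes, which is why real hardware wires up far fewer address lines.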



    In a nutshell, unless you need to do arithmetic on huge integers, or need to address more than 4 GB of RAM, 64 bits does not help anything (except for the 8 bonus registers on x86, as mentioned).
  • Reply 6 of 23
    mzaslove Posts: 519 member
    Thanks, lundy, that was the most concise and easily understood explanation of 32 vs. 64 bitness I've ever read. Normally I'm stymied by this whole thing (not being a programmer and all). Thanks much. I would, personally, still like to address all 16 exabytes someday, though. Perhaps in OS XI?



    MZ
  • Reply 7 of 23
    Great answers everyone, thank you! I think I'm beginning to see the true facets of this issue.



    Not to shove more work on you guys, but I have one remaining question:



    In a comparison between two like processors, with one processor as a 32-bit and the other 64, what differences would one see?



    Someone mentioned 64-bit processors are more memory-intensive. Let's assume we're running with 1.5 GB of RAM, then.



    Processor one:

    2.0GHz 32-bit dual core, 1.5GB RAM



    Processor two:

    2.0GHz 64-bit dual core, 1.5GB RAM



    In running these two pieces of hardware through daily tasks, what differences would be seen? We're talking internet browsing, word processing, playing HD movies, and possibly some graphics editing. I assume power consumption, as well, would differ? How much?



    I don't mean to inundate you all with questions, but you seem quite knowledgeable on the subject and I'd like to clear up my ignorance once and for all



    Thanks again
  • Reply 8 of 23
    lundy Posts: 4,466 member
    Quote:

    Originally posted by Animal Farm

    Great answers everyone, thank you! I think I'm beginning to see the true facets of this issue.



    Not to shove more work on you guys, but I have one remaining question:



    In a comparison between two like processors, with one processor as a 32-bit and the other 64, what differences would one see?



    Someone mentioned 64-bit processors are more memory-intensive. Let's assume we're running with 1.5 GB of RAM, then.



    Processor one:

    2.0GHz 32-bit dual core, 1.5GB RAM



    Processor two:

    2.0GHz 64-bit dual core, 1.5GB RAM



    In running these two pieces of hardware through daily tasks, what differences would be seen? We're talking internet browsing, word processing, playing HD movies, and possibly some graphics editing. I assume power consumption, as well, would differ? How much?



    I don't mean to inundate you all with questions, but you seem quite knowledgeable on the subject and I'd like to clear up my ignorance once and for all



    Thanks again




    LOL - as usual, the answer is "it depends".



    It depends on whether you are using x86 or PPC. I don't know for sure about the 64-bit extensions for x86. But in PPC, I do know how it works.



    Now the other thing it "depends" on in PPC is whether the app was compiled for 64-bit or not. What this means is that the compiler was told that IF it NEEDED to do 64-bit arithmetic, it should use the special 64-bit arithmetic instructions.



    Most likely, the apps used in surfing, email, etc. had no reason for the programmer to specify "OK to use 64-bit" because that would only work on the G5 - it would crash on the G4 or G3.



    So on the PPC, we assume that the compiler compiled code AS IF it were running on a G4, so you would not see any penalty, as the G5 will use 32-bit arithmetic if it is instructed to. The G5 has an "add 32-bit" instruction as well as an "add 64-bit" instruction, so if the compiler uses only the 32-bit instructions, nothing out of the ordinary happens.
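    The "more than one step" fallback can be sketched in Python (an illustration of the idea, not actual PPC code): a 32-bit machine adds two 64-bit numbers by adding the low halves first, then the high halves plus the carry.

    ```python
    # Simulate 64-bit addition using only 32-bit quantities:
    # add the low halves, then propagate the carry into the high halves.

    MASK32 = 0xFFFFFFFF

    def add64_via_32(a, b):
        """Add two unsigned 64-bit values in two 32-bit steps."""
        lo = (a & MASK32) + (b & MASK32)
        carry = lo >> 32                  # did the low half overflow?
        hi = ((a >> 32) + (b >> 32) + carry) & MASK32
        return (hi << 32) | (lo & MASK32)

    x, y = 0x00000001_FFFFFFFF, 0x1
    print(hex(add64_via_32(x, y)))  # 0x200000000
    ```

    A 64-bit processor does the same sum with a single add instruction.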



    I am not sure how it works on x86 - the x86 may have a distinct "64-bit mode" which usually wouldn't be turned on so the situation is probably the same as the PPC.
  • Reply 9 of 23
    xool Posts: 2,460 member
    The only 64-bit app I use is MySQL and I'm only using it for iShits and giggles. I probably could do benchmarks comparing its performance on my G5 dually to that of the 32-bit version.
  • Reply 10 of 23
    Quote:

    Originally posted by lundy

    LOL - as usual, the answer is "it depends".



    It depends on whether you are using x86 or PPC. I don't know for sure about the 64-bit extensions for x86. But in PPC, I do know how it works.



    Now the other thing it "depends" on in PPC is whether the app was compiled for 64-bit or not. What this means is that the compiler was told that IF it NEEDED to do 64-bit arithmetic, it should use the special 64-bit arithmetic instructions.



    Most likely, the apps used in surfing, email, etc. had no reason for the programmer to specify "OK to use 64-bit" because that would only work on the G5 - it would crash on the G4 or G3.



    So on the PPC, we assume that the compiler compiled code AS IF it were running on a G4, so you would not see any penalty, as the G5 will use 32-bit arithmetic if it is instructed to. The G5 has an "add 32-bit" instruction as well as an "add 64-bit" instruction, so if the compiler uses only the 32-bit instructions, nothing out of the ordinary happens.



    I am not sure how it works on x86 - the x86 may have a distinct "64-bit mode" which usually wouldn't be turned on so the situation is probably the same as the PPC.




    You win the Most Helpful Poster award. You've made a previously baffled person relatively knowledgeable (or at least aware) of the subject matter. Good job on that, and thank you



    Xool, it's not really necessary, but I would be curious to see those numbers if you were so inclined in a few minutes of spare time.
  • Reply 11 of 23
    xool Posts: 2,460 member
    Quote:

    Originally posted by Animal Farm

    Xool, it's not really necessary, but I would be curious to see those numbers if you were so inclined in a few minutes of spare time.



    I'm curious too. Theory says that for large data sets and certain queries, 64-bit MySQL should show benefits; in practice this may not be the case. It is still communicating with other 32-bit apps (Apache, for example), although I'm sure I could compile 64-bit versions of those as well. MySQL should still show the most benefit.



    Next time I update my MySQL installation, I'll test both versions and see what I find.
  • Reply 12 of 23
    All I know is... when the original Nintendo came out, it was 8-bit. And then the SNES was 16-bit, and that was huge! Then Sega experimented with the 32X, which magically made the Genesis 32-bit, but mostly we just skipped to 64-bit!!



    ...then they quit measuring game consoles like that. I have no idea how this relates to the topic at all... other than it's what I think of when I read "64-bit". heh.
  • Reply 13 of 23
    ecking Posts: 1,588 member
    Quote:

    Originally posted by MoonShadow

    All I know is... when the original Nintendo came out, it was 8-bit. And then the SNES was 16-bit, and that was huge! Then Sega experimented with the 32X, which magically made the Genesis 32-bit, but mostly we just skipped to 64-bit!!



    ...then they quit measuring game consoles like that. I have no idea how this relates to the topic at all... other than it's what I think of when I read "64-bit". heh.




    I think the original PlayStation was 32-bit.



    And I think they claimed the Dreamcast was 128-bit for a minute, then they shut up about bits and no company mentioned bits again.
  • Reply 14 of 23
    xool Posts: 2,460 member
    Quote:

    Originally posted by MoonShadow

    All I know is... when the original Nintendo came out, it was 8-bit. And then the SNES was 16-bit, and that was huge! Then Sega experimented with the 32X, which magically made the Genesis 32-bit, but mostly we just skipped to 64-bit!!



    ...then they quit measuring game consoles like that. I have no idea how this relates to the topic at all... other than it's what I think of when I read "64-bit". heh.




    The significant improvements in the early days of video gaming have to do with relative gains over the poor quality of early systems.



    The original NES had a limited color palette and sound architecture, whereas the SNES supported far more colors and brought a number of audio improvements. These were very tangible and the differences were obvious. Aside from the processing power permitting the 3D generation, subsequent improvements are far less noticeable.



    For example, going from 16 colors to 256 colors is incredibly noticeable. Likewise going from 256 colors to 65 thousand colors. The jump from 65 thousand to 16.7 million is important but far less significant, and any further improvement is practically negligible for most users.



    It is this point of diminishing returns that we are facing today. There will always be special cases where the improvements will be required, but it will take some new killer app to make future processing power worth it. One could argue that HD is such an improvement, but it is really just an increase in resolution and nothing more.
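    The color counts in that progression are simply powers of two, one per bit of color depth (16-bit color is 65,536 colors and 24-bit is 16.7 million; forum posts often round these loosely):

    ```python
    # Colors representable at common bit depths; each jump matters less
    # perceptually even though the numbers explode.

    def colors(bit_depth):
        return 2 ** bit_depth

    print(colors(4))   # 16
    print(colors(8))   # 256
    print(colors(16))  # 65536
    print(colors(24))  # 16777216
    ```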
  • Reply 15 of 23
    zandros Posts: 537 member
    Quote:

    Originally posted by MoonShadow

    All I know is... when the original Nintendo came out, it was 8-bit. And then the SNES was 16-bit, and that was huge! Then Sega experimented with the 32X, which magically made the Genesis 32-bit, but mostly we just skipped to 64-bit!!



    ...then they quit measuring game consoles like that. I have no idea how this relates to the topic at all... other than it's what I think of when I read "64-bit". heh.




    Yes, because the PlayStation wasn't 32-bit at all? Anyway, they did measure the Xbox, PS2 and GC as 128-bit, but I guess it kind of got dropped after a few months.
  • Reply 16 of 23
    Well, we all know that the Xbox isn't 128-bit...
  • Reply 17 of 23
    kiwimac Posts: 80 member
    Quote:

    Originally posted by Zandros

    Yes, because the PlayStation wasn't 32-bit at all? Anyway, they did measure the Xbox, PS2 and GC as 128-bit, but I guess it kind of got dropped after a few months.



    Doesn't the Xbox have an Intel Pentium 3 chip in it? So that would make it 32-bit?
  • Reply 18 of 23
    zandros Posts: 537 member
    Quote:

    Originally posted by kiwimac

    Doesn't the Xbox have an Intel Pentium 3 chip in it? So that would make it 32-bit?



    Yes, yes it does. It was only the PS2 and Dreamcast that were touted as 128-bit; I was wrong. Though they aren't true 128-bit either.
  • Reply 19 of 23
    Quote:

    Originally posted by Zandros

    Yes, because the PlayStation wasn't 32-bit at all? Anyway, they did measure the Xbox, PS2 and GC as 128-bit, but I guess it kind of got dropped after a few months.



    Eh... I just forgot about the Playstation.
  • Reply 20 of 23
    lundy Posts: 4,466 member
    Marketing-speak can also claim that something is 64-, 128-, or 256-bit when it is in fact referring to some internal bus in the product that has nothing to do with the CPU.



    For example, the G5 can transfer 128 bits simultaneously to the AltiVec unit, and even more to the cache. But that does not make it a 128-bit processor. The bitness of a processor is the size of the largest integer it can do arithmetic on without having to do it in more than one step.