Full 64-bit optimization

Comments

  • Reply 41 of 55
    lundy (Posts: 4,466)
    Quote:

    Originally posted by Programmer

    The only place it really makes a difference is when you have a 64-bit address space.



    Any insight as to why this guy's array declaration at the Apple boards

    Code:

    double A[128][128][128][129];

    won't compile? (long long A[128][128][128][129]; won't compile either.)

    error: size of variable `A' is too large

    Array "A" is just over 2GB in size.

    This is with gcc 3.3 and flags "-mcpu=G5 -mpowerpc64"
  • Reply 42 of 55
    Quote:

    Originally posted by lundy

    Any insight as to why this guy's array declaration at the Apple boards

    Code:

    double A[128][128][128][129];

    won't compile? (long long A[128][128][128][129]; won't compile either.)

    error: size of variable `A' is too large

    Array "A" is just over 2GB in size.

    This is with gcc 3.3 and flags "-mcpu=G5 -mpowerpc64"

    C/C++ (and probably Obj-C) caps the total size of stack variables. I don't remember the exact limit, but I believe it's small, measured in KB. Use dynamic memory allocation and all should be well.
  • Reply 43 of 55
    Uh... because the compiler and OS don't yet support 64-bit pointers, perhaps?
  • Reply 44 of 55
    Amorph (Posts: 7,112)
    Someone would try to allocate 2GB on the stack.
  • Reply 45 of 55
    Quote:

    Originally posted by Amorph

    For an Alpha. Which is where my information comes from.

    Oh, then what you said earlier makes some sense now, assuming it is correct. My experience is limited to PowerPC, Pentium, and lesser architectures.

    Do feel free to fire up Xcode (on Panther) or Project Builder (on Jaguar) and try out some 32-bit versus 64-bit integer code on your own. For those who might not know, "long long" is the data type to use for 64-bit integers. If you can come up with test code where the 64-bit integer version is faster than the 32-bit version on a G5, I would like to know.
  • Reply 46 of 55
    lundy (Posts: 4,466)
    Quote:

    Originally posted by Programmer

    Uh... because the compiler and OS don't yet support 64-bit pointers, perhaps?

    Is that true, Programmer? Do the compiler flags only trigger generation of the long long arithmetic opcodes? If he were to put 8GB of physical RAM in the machine, there would have to be a 64-bit pointer somewhere, wouldn't there?

    I'm off to see if a bunch of smaller arrays would behave any differently.

    You'd think an MS in CS would make this easy for me, but that was 30 years ago...
  • Reply 47 of 55
    kickaha (Posts: 8,760)
    Quote:

    Originally posted by Amorph

    Someone would try to allocate 2GB on the stack.

    Oh dear god, they did, didn't they?

    Please please please tell me this was in a loop...
  • Reply 48 of 55
    Tidris (Posts: 214)
    Quote:

    Originally posted by lundy

    Is that true, Programmer? Do the compiler flags only trigger generation of the long long arithmetic opcodes? If he were to put 8GB of physical RAM in the machine, there would have to be a 64-bit pointer somewhere, wouldn't there?

    According to gcc 3.3, sizeof(char*) is 4 even when using the -fast option.
  • Reply 49 of 55
    Yevgeny (Posts: 1,148)
    Quote:

    Originally posted by lundy

    Any insight as to why this guy's array declaration at the Apple boards

    Code:

    double A[128][128][128][129];

    won't compile? (long long A[128][128][128][129]; won't compile either.)

    error: size of variable `A' is too large

    Array "A" is just over 2GB in size.

    This is with gcc 3.3 and flags "-mcpu=G5 -mpowerpc64"

    Because, duh, you can't allocate 2GB of stack space. Stack space is limited (on Windows, for example, the default stack size for a thread is 1 MB!). Just because you have 64-bit addresses doesn't mean that you can have 2GB of stack space in a nice contiguous block, especially on the call stack! This guy doesn't quite understand how a compiler works.

    Perhaps he should write:

    Code:

    // note: new double[128][128][128][129] yields a pointer to the
    // first 128x128x129 sub-array, not a plain double*
    double (*pA)[128][128][129] = new double[128][128][128][129];

    // do stuff with the big array

    delete [] pA; // don't forget to free the memory!
  • Reply 50 of 55
    Yevgeny (Posts: 1,148)
    Quote:

    Originally posted by lundy

    Is that true, Programmer? Do the compiler flags only trigger generation of the long long arithmetic opcodes? If he were to put 8GB of physical RAM in the machine, there would have to be a 64-bit pointer somewhere, wouldn't there?

    I'm off to see if a bunch of smaller arrays would behave any differently.

    You'd think an MS in CS would make this easy for me, but that was 30 years ago...

    Well, no. I think your motherboard could support 8GB of RAM and the OS could just ignore anything over 4GB. If the OS is actually using all 8GB of RAM, then yes, it is using a 64-bit address somewhere, but it may still only allow a max of 4GB per process.
  • Reply 51 of 55
    amorphamorph Posts: 7,112member
    And that's about how it works. The hardware, and presumably the kernel, can handle up to 42 bits of address space. For the nonce, all pointers available in user space are 32 bits, so the virtual memory available to any application is 4GB regardless of the amount of real RAM.



    None of this has anything to do with how much you can allocate on the stack. 2GB! Man, I haven't laughed that hard in a good while...
  • Reply 52 of 55
    Quote:

    Originally posted by Yevgeny

    Well, no. I think your motherboard could support 8GB of RAM and the OS could just ignore anything over 4GB. If the OS is actually using all 8GB of RAM, then yes, it is using a 64-bit address somewhere, but it may still only allow a max of 4GB per process.

    The portion of the OS that needs 64-bit addresses to support this is very, very small. It might even be written in assembly language at this point. 8 GB of RAM is only 2 million 4K pages, after all.

    No, currently Apple does not support 64-bit pointers. I wish it weren't so, but that is the current situation.
  • Reply 53 of 55
    So, I guess the way to take advantage of more than 4 GB of real memory under OS X today is for my main app to spawn one or more background apps and divide the workload and data set among all of them. Each app would be able to allocate up to 4 GB of private virtual memory, which would be mapped to real memory if there is enough real memory available.
  • Reply 54 of 55
    Quote:

    Originally posted by Tidris

    So, I guess the way to take advantage of more than 4 GB of real memory under OS X today is for my main app to spawn one or more background apps and divide the workload and data set among all of them. Each app would be able to allocate up to 4 GB of private virtual memory, which would be mapped to real memory if there is enough real memory available.

    Sounds good to me. Though I'm pretty sure that if you're just now starting development, 64-bit support will be there by the time you actually need it. Or maybe I'm crazy and you'll have to stick with your idea.
  • Reply 55 of 55
    I don't know the details, but apparently there is a way to use file mapping to access more than 4 GB. Digging around on the Apple developer site might turn up more info.