G5: 64 bits or 32 bits?


Comments

  • Reply 21 of 126
    mmaster Posts: 17 member
    [quote]Originally posted by Programmer:

    If you read the PowerPC 64-bit specification you will see that the instruction word remains at 32-bits, so code size does not change.

    <hr></blockquote>



    yeah, I was not thinking very straight and forgot that most 64-bit architectures still use 32-bit instruction words to conserve space and allow easier compatibility with the 32-bit versions. I even studied the MIPS architecture in school, which has both 32-bit and 64-bit specifications, so I should have remembered that many of the instructions were shared across the various MIPS I-V specifications. IA-64 did not have to maintain backwards compatibility and so went with 41-bit sub-instructions.
  • Reply 22 of 126
    What we have here is a great opportunity for Apple to grab both mind share (for future expansion) and market share (for immediate expansion) with the 64-bit jump ... Apple should do it as soon as they can confidently believe they won't screw it up.



    Why?



    Two reasons:



    A - Intel is completely messed up on the issue. (Which is why I hope Apple doesn't merely add FUD on FUD by also rushing to market before they're ready and screwing up their 64-bit lead.)



    B - There's this huge maturation of needs which a desktop 'puter is just about ready to take over, in the area of bio-tech (BLAST etc.) with their huge memory needs, & Video Arts (Nothing Real, Maya, FCP) with their huge memory needs - BTW these two fields in themselves converge directly together to give you proper 3D protein folding visualization, which in itself is a nice bonus, but I digress ...



    Mix those two together, add water, then stir in the traditional advantages of UNIX with parallelism and this rockin' POOCH software, and you bake up ....



    Racks of 64bit G5 drones in a back room someplace, all wired up with switched Gigabit

    and running POOCH on OSX.2.



    Ya wanna render? Config the job on your machine, and then farm it out ...



    Ya wanna BLAST? Config the job on your machine, and then farm it out ...



    Ya wanna do both? Sure, go ahead, POOCH will just config whoever's available, even buddy's computer down the hall if he's not using it ...



    You started a job at home? But it's way too big for you to handle? No problem, walk in with your PowerBook to a service centre, and send your job to the racks ... just rent the power ... (all you home video junkies think about that one!) ... walk out with a DVD of your results. We're talking Kinko's convenience, clustered supercomputing power, at the price of two large pizzas ...



    What we have here is one hell of a vector processor (altivec), inside a machine that is incredibly easy to cluster (anybody 'round here ever tried clustering Linux using Beowulf? ) ... what's important is the Apple platform itself becomes both a desktop system (it's traditional space), and a whole new method of incredibly powerful modular, scalable computing ... right at the time when it's needed, in exactly the spaces where it's needed, biotech and vid.



    They'd be crazy not to do this.



    The name for this clustering parallel system?



    why "Apple Tree" of course ...



    Steve owes me a latte
  • Reply 23 of 126
    tjm Posts: 367 member
    A bit of semantics speculation:



    Supposedly, there are a bunch of modifications coming to the G4 line which sound rather similar to all the great new technologies the G5 was going to bring. The only thing I didn't see was 64-bits. Could this be the "generational shift" to distinguish the G4 from the G5? I. e. anything 32-bit will remain officially a G4, while the G5s will be the 64-bit line.



    A bit pedestrian compared to most of these other posts, but I'm a chemist. Microprocessor architecture is a ways out of my field.
  • Reply 24 of 126
    billy Posts: 34 member
    What about companies, like Adobe, who just spent all this time optimizing for OS X and for the G4 and AltiVec? Are they going to want to come back again and optimize for 64 bits now?



    What if Apple made an uber-pro line with the 64-bit chip, then bought Maya and optimized that for 64-bit - would that run on the G4 towers that are still around? Or are you going to have to optimize all of your software twice, once for G4 and AltiVec, and again for 64-bit?



    If you have to do that, I think that's a bad idea.
  • Reply 25 of 126
    programmer Posts: 3,458 member
    [quote]Originally posted by Billy:

    <strong>What about companies, like Adobe, who just spent all this time optimizing for OS X and for the G4 and AltiVec? Are they going to want to come back again and optimize for 64 bits now?



    What if Apple made an uber-pro line with the 64-bit chip, then bought Maya and optimized that for 64-bit - would that run on the G4 towers that are still around? Or are you going to have to optimize all of your software twice, once for G4 and AltiVec, and again for 64-bit?



    If you have to do that, I think that's a bad idea.</strong><hr></blockquote>



    The 32-bit software will still run at full speed on a 64-bit machine. I think only very specialized software would take advantage of the 64-bit features, although some apps might ship with two versions of the code (which Mac OS X supports via bundling), one 32-bit and one 64-bit. If the code is written well then it's just a matter of recompiling and using the 64-bitness in key places where you need it.
  • Reply 26 of 126
    programmer Posts: 3,458 member
    [quote]Originally posted by TJM:

    <strong>A bit of semantics speculation:



    Supposedly, there are a bunch of modifications coming to the G4 line which sound rather similar to all the great new technologies the G5 was going to bring. The only thing I didn't see was 64-bits. Could this be the "generational shift" to distinguish the G4 from the G5? I. e. anything 32-bit will remain officially a G4, while the G5s will be the 64-bit line.

    </strong><hr></blockquote>



    I was thinking much the same thing. Not a bad idea really, as it sidesteps the whole notion that the G4 is obsolete... the G5 would be billed as a workstation/server class machine for those who need really advanced techniques. Hopefully there would be a G4 that is almost equivalent except that it is only 32-bit, but has the other advances incorporated in it.
  • Reply 27 of 126
    razzfazz Posts: 728 member
    [quote]Originally posted by Amorph:

    <strong>64 bit makes sense for: Gigantic address spaces (per process, per file and per logical volume), not just system addressable RAM), </strong><hr></blockquote>



    Agreed for the address space, but since the speed of disk accesses is limited by the disk hardware rather than the CPU, being able to do 64-bit ops isn't going to get you much of a speed increase in many cases; the disk hardware is still many orders of magnitude slower than the CPU.





    [quote]<strong>and accelerating code that, whether by intent or by default, relies heavily on double precision floating point.</strong><hr></blockquote>



    All the current 32-bit processors already have full 64-bit FPUs. The 32-/64-bit-ness issue only concerns integer registers, and thus doesn't relate to FP code at all (apart from the larger address space, of course).



    Bye,

    RazzFazz
  • Reply 28 of 126
    razzfazz Posts: 728 member
    [quote]Originally posted by Outsider:

    <strong>The G4 has 36bit addressing but is not considered a 36bit processor. Also the first Alpha 64bit chips has 40-48bit addressing but were still considered 64bit chips.</strong><hr></blockquote>



    That's because the important factor here is the virtual address width, not the physical one. The G4 has 32-bit wide virtual addresses (ignoring segmentation), and all the Alphas have 64-bit wide virtual addresses.



    Bye,

    RazzFazz
  • Reply 29 of 126
    amorph Posts: 7,112 member
    [quote]Originally posted by RazzFazz:

    <strong>Agreed for the address space, but since the speed of disk accesses is limited by the disk hardware rather than the CPU, being able to do 64-bit ops isn't going to get you much of a speed increase in many cases; the disk hardware is still many orders of magnitude slower than the CPU.</strong><hr></blockquote>



    Uhhhh, yeah.



    I was talking about address spaces, not performance. A filesystem is an addressable space. The issue was file and (logical) volume size.



    [ 03-30-2002: Message edited by: Amorph ]
  • Reply 30 of 126
    razzfazz Posts: 728 member
    [quote]Originally posted by Amorph:

    <strong>

    I was talking about address spaces, not performance. A filesystem is an addressable space. The issue was file and (logical) volume size.

    </strong><hr></blockquote>



    It indeed is, but unlike memory address space, you can use 64-bit-filesystems on 32-bit-CPUs (BeOS' BFS does this, for example, and NTFS allows for files exceeding 4GB too; also, I don't think any current filesystem is limited to 4GB volumes).



    The only downside of not having a 64-bit-CPU in that case is that it takes more than one instruction to do a single calculation on a 64-bit-value, but as I stated before, the speed penalty incurred by additional instructions needed to handle those 64-bit-offsets on a 32-bit-CPU doesn't really do a lot of harm as the accompanying disk accesses take multiple orders of magnitude longer anyway.



    Bye,

    RazzFazz
  • Reply 31 of 126
    matsu Posts: 6,558 member
    hrrmmmm...



    Bearing in mind, always, that I know nothing about this, how hard would it be to give PPCs very good 36-bit addressing/functionality? I know that G4's can do this already, but what if they went to an essentially hybrid 32/36-bit design? All the 32-bit code we know and love, but some really fast 36-bit memory addressing/data organization. I think 64GB of 'main memory'/'data set' capacity ought to keep everyone happy for a few years to come, no? At least on the desktop. Or, has the industry just decided that 64 is the next logical step? Or is it a question of: 'if you have to redesign the chip anyway, you may as well just fatten the pipe as much as possible?'



    ??? Remember, I'm totally ignorant of the interior workings of any computer, so go easy on me.
  • Reply 32 of 126
    programmer Posts: 3,458 member
    [quote]Originally posted by Matsu:

    <strong>hrrmmmm...



    Bearing in mind, always, that I know nothing about this, how hard would it be to give PPCs very good 36-bit addressing/functionality? I know that G4's can do this already, but what if they went to an essentially hybrid 32/36-bit design? All the 32-bit code we know and love, but some really fast 36-bit memory addressing/data organization. I think 64GB of 'main memory'/'data set' capacity ought to keep everyone happy for a few years to come, no? At least on the desktop. Or, has the industry just decided that 64 is the next logical step? Or is it a question of: 'if you have to redesign the chip anyway, you may as well just fatten the pipe as much as possible?'



    ??? Remember, I'm totally ignorant of the interior workings of any computer, so go easy on me.</strong><hr></blockquote>



    The main factor in address space size is the number of bits in a pointer. Pointers have to fit in the integer registers (at least on the PowerPC and most other processor designs). Integer registers are typically an even power of 2 bits wide, which is done for a variety of reasons and is a fairly firmly established convention. If they did try to go with a 36-bit integer word the software community would certainly balk at it. Changing the word width is also a significant change, so it is better to make one big change once rather than a series of small leaps. I can easily see applications hitting the 36-bit limit very quickly, whereas a 64-bit address space is probably going to be more than sufficient for a very long time (remember, a 64-bit address space is about 4 billion times larger than a 32-bit address space).
  • Reply 33 of 126
    razzfazz Posts: 728 member
    [quote]Originally posted by Programmer:

    <strong>

    Integer registers are typically an even power of 2 bits wide, which is done for a variety of reasons and is a fairly firmly established convention.</strong><hr></blockquote>



    Sorry for nitpicking, but this should really only be "a power of two" rather than "an even power of two" - at least I was under the impression that 32 (=2^5) bit wide registers weren't really all that uncommon.



    Bye,

    RazzFazz



    [ 03-31-2002: Message edited by: RazzFazz ]
  • Reply 34 of 126
    [quote]Sorry for nitpicking, but this should really only be "a power of two" rather than "an even power of two" - at least I was under the impression that 32 (=2^5) bit wide registers weren't really all that uncommon.<hr></blockquote>



    While I can't speak for him, I would tend to assume that what he meant by "even power of two" was that the bit-width would come out even when written as a power of two, i.e. an exact power with no remainder.



    [code]2^5 = 32        even (an exact power of two)

    2^5 + 4 = 36    not even (not an exact power of two)</pre><hr></blockquote>
  • Reply 35 of 126
    [quote]Originally posted by Programmer:

    <strong>



    I was thinking much the same thing. Not a bad idea really, as it sidesteps the whole notion that the G4 is obsolete... the G5 would be billed as a workstation/server class machine for those who need really advanced techniques. Hopefully there would be a G4 that is almost equivalent except that it is only 32-bit, but has the other advances incorporated in it.</strong><hr></blockquote>



    Funnily enough, this fits in nicely with something from the latest Dorsal rumour. S/he wrote that the processor and RAM were on one big daughter card. The main board was mostly just for all the I/O. This arrangement would be very convenient if you were implementing G4 and G5 systems and wanted to keep a somewhat unified motherboard.



    Grain of salt and all that but it is an interesting speculative convergence.
  • Reply 36 of 126
    programmer Posts: 3,458 member
    [quote]Originally posted by RazzFazz:

    <strong>



    Sorry for nitpicking, but this should really only be "a power of two" rather than "an even power of two" - at least I was under the impression that 32 (=2^5) bit wide registers weren't really all that uncommon.

    </strong><hr></blockquote>



    By "even" I meant "non-fractional", not "even" as opposed to "odd". Sorry for any confusion.
  • Reply 37 of 126
    I'm perplexed by all the 'rumour' I've read on the net re: 'G5'.



    I've checked out loads of links. I don't get the confusion.



    As far as a few years back, the G5 was going to be a 32-bit part initially. Unless Mac OS X is 64-bit, why bother...?



    This probably fits the 7500 idea into some kind of half 64 bit half brother yikes kind of set up.



    Come the end of the year then maybe the full 64 bit implementation of the G5 will be readied for a New Year 2003 intro'. Though maybe due to Apollo's implied 'success' then maybe the '32' bit G5 won't be with us until next year...



    I thought Apollo would only ensure that the G4 would top out at 1 gig or just over. I'd have thought 1.2 optimistic. That was what was initially said some time ago...



    The confusion seems to come from the fact that Motorola have had a bit of joy (insinuation from both Apple and Moto...) in pushing the G4 ceiling up a bit further. Which means 1.2 as a conservative estimate for the top of the pro line come New York Macworld...leaving a crippled...cacheless 1 gig at the bottom end.



    So if you can push the G4 to 1.4 with a few motherboard whistles etc...then that's more cash to be squeezed out of the G4 part.



    This seems to push the G5 '32' bit flavour back a little...unless it is offered in a Powermac 'split line' of Apollo G4s and '32' bit G5s...



    The problem with the 'conservative' option is that it's all very evolutionary. No surprises.



    With a 2.4 gig Pentium imminent, I'm going to take some convincing that a 1.2 Apollo with PC133 can hang against a 2.4 gig Pentium with a GeForce4 Titanium and 512 meg of 333 DDR for half the price!



    Looking at it logically, things look saunteringly casual on the PPC roadmap.



    The 'quiet' seems to imply something more is going to happen. A tweaked motherboard ram and bumped Apollo?



    Yet Dorsal seems to be saying that G5s are on their way back to Apple after being tested/seeded to developers...



    A two tier strategy?



    I hope Apple doesn't label a motherboard-boosted G4 a 'G5'.



    Hmmm. Maybe I do see why there's confusion...



    Lemon Bon Bon



    <img src="graemlins/smokin.gif" border="0" alt="[Chilling]" />
  • Reply 38 of 126
    powerdoc Posts: 8,123 member
    [quote]Originally posted by RazzFazz:

    <strong>



    It indeed is, but unlike memory address space, you can use 64-bit-filesystems on 32-bit-CPUs (BeOS' BFS does this, for example, and NTFS allows for files exceeding 4GB too; also, I don't think any current filesystem is limited to 4GB volumes).



    The only downside of not having a 64-bit-CPU in that case is that it takes more than one instruction to do a single calculation on a 64-bit-value, but as I stated before, the speed penalty incurred by additional instructions needed to handle those 64-bit-offsets on a 32-bit-CPU doesn't really do a lot of harm as the accompanying disk accesses take multiple orders of magnitude longer anyway.



    Bye,

    RazzFazz</strong><hr></blockquote>



    So how, then, do you explain that IBM chose 64-bit CPUs for its high-end servers, with the 64-bit POWER3 and POWER4 chips, if the only important thing is the speed of the HD?
  • Reply 39 of 126
    smircle Posts: 1,035 member
    Originally posted by powerdoc:

    [quote]So how, then, do you explain that IBM chose 64-bit CPUs for its high-end servers, with the 64-bit POWER3 and POWER4 chips, if the only important thing is the speed of the HD?<hr></blockquote>

    1) They are "big iron" machines which do not just have your standard ATA-66 HD inside, but high-performance storage systems.



    2) The argument was about file systems, "bittiness" and speed. Database calculations profit enormously from 64-bit; for average users, 64-bit chips are in fact inferior to 32-bit in many situations.
  • Reply 40 of 126
    123 Posts: 278 member
    64bit addressing can wait for now. What we really need is 64bit FP SIMD (altivec)!





    Amorph

    [quote]In fact, it can slow things down, because the minimum size of the data a 64 bit processor reads is 64 bits, and if you're reading in 8 bit ASCII characters then you're pulling 8 times the bandwidth the data actually requires across the bus, and since the bus is always a bottleneck, this actually hurts performance.<hr></blockquote>



    Today, if you read 8 bits from memory, you will already read 64 bits, because that's the bus's width. Actually, you'll even read 4x64 (or 8x64, i.e. 8 quads, on DDR boards), because:



    - SDRAM bursts are cheap.

    - entire cache blocks are read at once.





    Programmer

    [quote]Context switching is the single largest data size cost because the size of the integer register file has doubled and must be saved on every function call and thread switch.<hr></blockquote>



    Actually, this depends a lot on the processor design and calling convention. Most processors have gp registers that are not saved for simple function calls. Then, there are leaf procedures... As for regular function calls, if you have a machine with multiple register windows (SPARC), you don't have to save your registers at all (well, most of them, most of the time). However, I entirely agree that generally stack size will increase, but how much is that?



    [ 04-02-2002: Message edited by: 123 ]


