G5: 64 bits or 32 bits?

Posted:
in Future Apple Hardware edited January 2014
According to many rumors, and from the roadmap, the G5 will be a 64-bit chip.

The question is: is there any real interest in such a chip for us average Apple consumers?

64-bit chips are useful for servers, like the POWER3 and POWER4 chips, but are they worth the price for graphics and multimedia use?



Wouldn't it be more interesting to spend that extra transistor budget on other features (more FPU units, for example)?



Your opinions are welcome.

Comments

  • Reply 1 of 126
    steves Posts: 108 member
    [quote]Originally posted by powerdoc:

    Wouldn't it be more interesting to spend that extra transistor budget on other features (more FPU units, for example)?



    Your opinions are welcome.[/quote]



    How many more transistors does it take? I'm a software engineer, not a hardware engineer, but I'd guess not many. 64-bit essentially means both 64-bit integers and 64-bit addressing. So, yes, there will be extra storage involved for the integer registers, etc., but relative to a chip's overall design, I'd consider that very marginal.



    As for Apple, I don't think they'd take advantage of 64bits right away. For starters, there is no immediate need for it. Additionally, it will take some time before G5's make their way to iMacs, I'd imagine.



    Finally, as you mentioned, outside of large database servers, there aren't many algorithms that come to mind that would benefit from 64 bits. Chess is one algorithm that would be enhanced by native 64-bit hardware.



    Still, it's sort of a chicken-and-egg deal: you need the 64-bit hardware in order to migrate the software to it.



    Steve
  • Reply 2 of 126
    onlooker Posts: 5,252 member
    I think it's a step in the right direction of real computing. If you're really asking whether I think it's premature, I'd say no. Unless just bringing in the 64-bit architecture would cause the G5 to scale poorly, or cause another G4 speed-cap situation.



    You yourself mentioned servers, and the words Unix and server go together like peanut butter and jelly. Ruuun Foorrest!!! So 64-bit seems like a natural step towards a brighter future if you ask me.
  • Reply 3 of 126
    All other things being equal, 64-bit processors are slower than 32-bit ones because code size increases; you can fit less code into chip caches, and memory requirements double.



    However, all other things are not typically equal -- 64-bit processors are usually equipped with larger caches, more exotic memory controllers, and more system bandwidth to make up for the increase in code size.



    If you're not going to use the primary advantage of 64-bit processors -- 64-bit flat memory addressing -- you're sacrificing performance unnecessarily. And very few programs need more than the 4GB that 32-bit addressing provides; databases and large-scale scientific computing are two that do. To the vast majority of Apple's customer base, 64-bit addressing is pointless.
  • Reply 4 of 126
    outsider Posts: 6,008 member
    The G4 has 36-bit addressing but is not considered a 36-bit processor. Also, the first 64-bit Alpha chips had 40-48-bit addressing but were still considered 64-bit chips. I think most of the industry considers a chip 64-bit if it can handle 64-bit ints.
  • Reply 5 of 126
    amorph Posts: 7,112 member
    64-bit makes sense for two things: gigantic address spaces (per process, per file, and per logical volume, not just system-addressable RAM), and accelerating code that, whether by intent or by default, relies heavily on double-precision floating point.



    For desktop purposes, it's (currently) well-nigh useless. For scientific and engineering tasks, and for running servers (and the sort of Great Big Applications that run on servers, like Oracle) it's invaluable. So the question really is, just how serious is Apple about pushing into web broadcasting, high-end 3D, (back) into science and higher ed, and (once more) into UNIX's traditional stronghold? They've certainly been taking action on all these fronts (although they partnered with Sun at QT Live! to get the big server hardware needed for that purpose). I'm not suggesting that they're going to start making minicomputers, but they could offer a platform that was robust enough that, when clustered, it would scale well enough to do some real heavy lifting.



    Unless Apple has some software in the works that will make 64-bit as necessary as the iApps have made vector processing (another tech that was not considered general purpose), it won't be of interest to most desktop/workstation users for a few years yet. The most pressing need I could see would be 64-bit color, which Apple's huge base of artists and designers would probably appreciate.



    Wild thought: Apple is working with Motorola to blend AltiVec into a 64-bit hybrid processor, where each execution unit can perform an op on one 64-bit, two 32-bit, or four 16-bit chunks. Given some spiffy compiler support, Apple could then help to obviate the code and data bloat that 64-bit architectures introduce, and significantly accelerate 32-bit performance. That might work, and it might become AIM's very own Itanium. I don't know enough about low-level architectural issues to say.
  • Reply 6 of 126
    mattyj Posts: 898 member
    [quote]Originally posted by Amorph:

    For desktop purposes, it's (currently) well-nigh useless. For scientific and engineering tasks, and for running servers (and the sort of Great Big Applications that run on servers, like Oracle) it's invaluable. So the question really is, just how serious is Apple about pushing into web broadcasting, high-end 3D, (back) into science and higher ed, and (once more) into UNIX's traditional stronghold? They've certainly been taking action on all these fronts (although they partnered with Sun at QT Live! to get the big server hardware needed for that purpose).[/quote]



    Would this mean that the NAB show would be the best time to release the G5, if it is really powerful? [Hmmm]
  • Reply 7 of 126
    amorph Posts: 7,112 member
    [quote]Originally posted by mattyj:

    Would this mean that the NAB show would be the best time to release the G5, if it is really powerful? [Hmmm][/quote]



    It wouldn't be a bad time, hypothetically speaking, but since there are no Big Apples keynoting NAB, it's not gonna happen. Not unless Apple has decided to take their patented stealth marketing to the next level, anyway.



    MWNY would be a good venue. So would Seybold. WWDC wouldn't be a bad one, but what would be better IMO if Apple wants to target WWDC (a good idea if you're rolling out a serious revision to the platform!) is to unveil the machines beforehand at an Apple Event, saturate the press, and then show off the new kit at WWDC once everyone's been able to read up on it. That way, Apple can concentrate on the kind and depth of detail that developers prefer without making things difficult for the mainstream press.



    However, I have a hunch that WWDC is too early for that to happen, so I'll say it's MWNY, Seybold SF, or an Apple Event sometime this summer. Keep in mind that, by my lights, this is when a wholly redesigned platform will roll out, not necessarily anything called a "G5." If the G5 appears, especially in a 32-bit flavor, I'd expect it around the beginning of next year. Just a hunch.



    I realize that's not narrowing things down much, but that's what happens when all I have to go on is rampant speculation.



    [ 03-27-2002: Message edited by: Amorph ]
  • Reply 8 of 126
    randycat99 Posts: 1,919 member
    Interesting topic! So what about the Itanium and its uber-balls floating point performance? Wouldn't that alone be a compelling benefit to the demanding 3D/CAD/multimedia user (assuming appropriate 64-bit software was to follow)? ...or would this not bear out in general practice within the confines of the desktop user environment, or is it just a needlessly inefficient way to achieve better floating point performance anyway?
  • Reply 9 of 126
    I think that the roadmap used to have two "G5" chips, the 75XX and 85XX. I think that the 7500 was a 32-bit chip while the 8500 was a 64-bit chip. Dorsal M posted a message about Apple using the 7500 in current prototypes, so it looks like the first "G5" we see will be 32-bit. For most consumer apps today, 64 bits won't be an advantage.
  • Reply 10 of 126
    snoopy Posts: 1,901 member
    Every time processor word size has doubled, there has been a dramatic increase in performance. Processors went from 8 bit to 16 bit. The software quickly took advantage of it. Processors then went from 16 bit to 32 bit, and performance increased. So, why wouldn't this happen again when a 64-bit processor becomes available? If the hardware is there, I believe the software will soon take advantage of it. And don't forget, the G5 is said to run 32-bit software just fine too, while it's waiting for the 64-bit stuff to come along.
  • Reply 11 of 126
    amorph Posts: 7,112 member
    [quote]Originally posted by snoopy:

    Every time processor word size has doubled, there has been a dramatic increase in performance. Processors went from 8 bit to 16 bit. The software quickly took advantage of it. Processors then went from 16 bit to 32 bit, and performance increased. So, why wouldn't this happen again when a 64-bit processor becomes available? If the hardware is there, I believe the software will soon take advantage of it. And don't forget, the G5 is said to run 32-bit software just fine too, while it's waiting for the 64-bit stuff to come along.[/quote]



    The hardware is there. 64 bit platforms are not at all a new idea. The software that can take advantage of it is mature at this point.



    There were marked improvements from 8 -> 16 -> 32 bits (and the other variants - there were 9-bit machines too, back in the day) because bit depths of 8 and 16 don't offer a wide enough range to be useful. As soon as you need a number greater than 65535 (for example, if you'd like to address more than 64K of memory), 16 bits is no longer adequate, and you have to break the number into 16-bit (or 8-bit!) pieces and work on one at a time, which is costly. 32 bits covers a range of 4 billion and change, which has proven sufficient for almost all tasks - RAM hasn't come close to 4GB in most machines, files larger than 4GB are extremely rare, and very few software applications need to handle numbers in the billions. They're just not part of most people's (or most businesses') everyday lives. As a consequence, the mind-boggling range covered by 64 bits is largely superfluous, and for the most part it does not offer any speed increase. In fact, it can slow things down: because the minimum size of the data a 64-bit processor reads is 64 bits, if you're reading in 8-bit ASCII characters you're pulling 8 times the bandwidth the data actually requires across the bus, and since the bus is always a bottleneck, this actually hurts performance. A 32-bit processor only pulls 4 times as much as it needs in that case.



    This is why I proposed a weird hybrid architecture, where all the execution units were essentially SIMD units capable of applying the same operation to one 64-bit, two 32-bit, or four 16-bit piece(s) of data at once. Then a clever compiler could pack small pieces of data in memory, and there wouldn't be a performance hit for unpacking the data. It wouldn't speed everything up all the time, but it would provide a significant boost in a number of common circumstances. AltiVec would remain on board to handle more advanced needs, and also to offer greater potential acceleration with its 128-bit registers.



    [ 03-28-2002: Message edited by: Amorph ]
  • Reply 12 of 126
    wfzelle Posts: 137 member
    Anandtech has a very good FAQ on 64 bit CPU's:

    <a href="http://www.anandtech.com/guides/viewfaq.html?i=112" target="_blank">The myths and realities of 64-bit computing</a>



    Some interesting points that the story makes:

    - Programmers rarely (need to) use 64 bit integers in performance critical code.

    - Because of extra cache misses, overall performance goes down by about 10-15%.

    - x86-64 will increase performance, but mostly because they fixed x86 flaws (the lack of registers is an age-old one).



    Personally, I'd rather see Apple wait a year or two before switching to 64-bit. Although in the interest of marketing it might be better to take the speed hit so you can boast about 64-bitness. But it will not benefit us much, in contrast to a better bus and a low-latency onboard memory controller that interfaces with fast DDR DRAM.
  • Reply 13 of 126
    [quote]Interesting topic! So what about the Itanium and its uber-balls floating point performance?[/quote]



    Itanium, and future IA64 processors in general, will excel at FP code for a number of reasons.



    1. 128 dedicated hardware registers for FP.

    2. FP code is easily predictable, and IA64 processors use full predication.

    3. IA64 processors will generally be gifted with large, full-speed low-latency caches, so a lot of code can fit in them and be accessed quickly.

    4. IA64 processors will in general be gifted with multiple scalar FP execution units.



    Itanium itself comes with either 2MB or 4MB of off-die but full-speed L3, running at either 733MHz or 800MHz, the two clock speeds Itaniums are sold at. McKinley, Itanium's successor, will come with either 1.5MB or 3MB of full-speed, on-die, lower-latency L3. It will also run at or above 1GHz. Its performance in both FP and INT is expected to be greatly superior to Itanium's. In fact, if it's not, Intel's got some splainin' to do.
  • Reply 14 of 126
    programmer Posts: 3,458 member
    There certainly seems to be a lot of misinformation about what having a 64-bit processor means.



    - Code size does not increase, data size increases.

    - Context switching is the single largest data size cost because the size of the integer register file has doubled and must be saved on every function call and thread switch.

    - Most data structures won't get any larger; only those which actually use 64-bit ints will, and if they need 64-bit ints then they would have had to be that big even on a 32-bit processor. Sloppy programming will contribute a bit of bloat (using larger-than-necessary integers).

    - As mentioned above, the gains from doubling the word size this time will not be nearly as significant as going from 8 -> 16 or 16 -> 32 bits. The range grows exponentially with each doubling, so we've already gained the most useful increase in integer values and address spaces.

    - The PowerPC architecture defines a 32-bit mode so that existing software runs at full speed, and new software will continue to be delivered in 32-bit mode if it doesn't need to use 64-bit to run. The existing user base isn't going away so developers will want to support it.

    - It doesn't take that many transistors, so it is probably worth it for the extra capabilities that can potentially be used. Since WIntel/AMD are going this way, I'm sure more software will start appearing that wants to use 64-bit mode.

    - The G4's 36-bit addressing is physical. A single process' address space is limited to 32-bits because pointers are only 32-bits. This ignores the segmentation support which will never be used.

    - The suggestion to add SIMD in the 64-bit registers is fairly pointless. This is essentially what MMX is, and it is half as capable as AltiVec and there is extremely little compiler technology that can take advantage of this. If code is going to be parallelized, use the AltiVec unit to get 128-bit registers... and leave the integer units to handle the counting, addressing and looping in parallel with the vector execution. Adding SIMD to the integer unit would just bloat the instruction set and needlessly complicate the integer unit.

    - The Itanium's fast FPU has little to do with the fact that it is a 64-bit chip. They chose to share the register file, but that's not what gave them excellent performance. The PowerPC could improve its FPU (and have multiple of them) to achieve equivalent or better floating point performance. Having 128 registers doesn't buy that much, since compilers already underutilize the existing 32 PowerPC registers.



    I think it is definitely worthwhile to go 64-bit on the PowerPC, and it will probably happen in 2003. There are probably some killer apps waiting for 64-bit, and they may very well be consumer/desktop apps.
  • Reply 15 of 126
    amorph Posts: 7,112 member
    [quote]Originally posted by Programmer:

    - The suggestion to add SIMD in the 64-bit registers is fairly pointless. This is essentially what MMX is, and it is half as capable as AltiVec and there is extremely little compiler technology that can take advantage of this. If code is going to be parallelized, use the AltiVec unit to get 128-bit registers... and leave the integer units to handle the counting, addressing and looping in parallel with the vector execution. Adding SIMD to the integer unit would just bloat the instruction set and needlessly complicate the integer unit.[/quote]



    Ah well. It was just a thought.
  • Reply 16 of 126
    powerdoc Posts: 8,123 member
    If my memory serves, I read several years ago that the Itanium would have a special integer unit (like Amorph describes) able to handle one 64-bit, two 32-bit, four 16-bit, or one 32-bit and two 16-bit operations per cycle. It was not a SIMD unit; I've forgotten the name of the technology (I read the article four years ago), but I think a limited sort of this integer unit would be fine (just one 64-bit or two 32-bit operations per cycle).
  • Reply 17 of 126
    Call me crazy, but I think there will be two places for the G5 immediately, at least for a while.



    The first will be in the server (rackmount PLEASE) line. Hopefully this will make it more distinct from the professional line, and second it will make it more like a server... that's what 64 bits were born to do!



    The second is at the very top of the professional line. Basically... the machine built to run the Nothing Real software and Maya from Alias/Wavefront. Call it a creative workstation. These are the things that need 64-bit currently... leave the rest of the professional line on the G4 for as long as it will scale.



    Now I know that I'm always posting about the servers needing to be more like "REAL" servers, but that's just because OS X can run a server really well - and they're absolutely essential for some key markets... particularly education.



    Let the G5 kick into the professional line in 1 to 1.5 years... until then, use an interim chip like they did with the first G4s. This will save everyone money and let the server line make a name for itself.
  • Reply 18 of 126
    mmaster Posts: 17 member
    [quote]- Code size does not increase, data size increases.

    - Most data structures won't get any larger; only those which actually use 64-bit ints will, and if they need 64-bit ints then they would have had to be that big even on a 32-bit processor. Sloppy programming will contribute a bit of bloat (using larger-than-necessary integers).[/quote]



    This is not quite true. Code size does increase, especially in a RISC architecture where all instructions have gone from being 32-bits to 64-bits.



    Also, data structure sizes might change even if they originally used 32-bit ints. For one, the chip might not support a 32-bit mode or native 32-bit ints. Also, on most 64-bit architectures, an int in C will be 64 bits, just as an int on a 16-bit machine is 16 bits. This is why making a program work on a 64-bit chip usually takes more than just a recompile.
  • Reply 19 of 126
    programmer Posts: 3,458 member
    [quote]Originally posted by mmaster:

    This is not quite true. Code size does increase, especially in a RISC architecture where all instructions have gone from being 32 bits to 64 bits.



    Also, data structure sizes might change even if they originally used 32-bit ints. For one, the chip might not support a 32-bit mode or native 32-bit ints. Also, on most 64-bit architectures, an int in C will be 64 bits, just as an int on a 16-bit machine is 16 bits. This is why making a program work on a 64-bit chip usually takes more than just a recompile.[/quote]



    If you read the PowerPC 64-bit specification, you will see that the instruction word remains at 32 bits, so code size does not change.



    Most compilers will continue to use 32-bit "int" and "long int" types, while "long long int" will be the 64-bit type. This is likely to be the case on the Mac in order to remain as compatible as possible with older code. Like I said in my previous message, however, structures which use 64-bit ints will be larger (regardless of exactly which types are used to get a 64-bit integer). Your point is probably that code not carefully/explicitly typed will have its data structures grow, and with that I would agree.
  • Reply 20 of 126
    airsluf Posts: 1,861 member