G5 : 64 bits or 32 bits ?

post #1 of 127
Thread Starter 
According to many rumors, and from the roadmap, the G5 will be a 64-bit chip.
The question is: is there any benefit in such a chip for us average Apple consumers?
64-bit chips are useful for servers, like the POWER3 and POWER4, but are they worth the price for graphics and multimedia use?

Wouldn't it be more interesting to spend the extra transistors on other features (more FPU units, for example)?

Your opinions are welcome.
post #2 of 127
[quote]Originally posted by powerdoc:
Wouldn't it be more interesting to spend the extra transistors on other features (more FPU units, for example)?

Your opinions are welcome.[/quote]

How many more transistors does it take? I'm a software engineer, not a hardware engineer, but I'd guess not many. 64-bit essentially means both 64-bit integers and 64-bit addressing. So yes, there will be extra storage involved for the integer registers and so on, but relative to a chip's overall design I'd consider that very marginal.

As for Apple, I don't think they'd take advantage of 64 bits right away. For starters, there is no immediate need for it. Additionally, it will take some time before G5s make their way into iMacs, I'd imagine.

Finally, as you mentioned, outside of large database servers there aren't many algorithms that come to mind that would benefit from 64 bits. Chess is one algorithm that would be enhanced by native 64-bit hardware.
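
For instance, the standard trick, sketched minimally here (my illustration, not from the original post): chess engines pack the board into a "bitboard", one bit per each of the 64 squares, which fits exactly in a 64-bit integer, so whole-board operations become single instructions on native 64-bit hardware.

[code]#include <stdint.h>

/* A chess board has 64 squares, so a 64-bit word can hold one bit
   per square (a "bitboard").  On 64-bit hardware each operation
   below is a single register instruction; a 32-bit CPU needs a
   multi-instruction sequence for every one of them. */
typedef uint64_t bitboard;

/* Squares attacked by a king on square 'sq' (0..63); board-edge
   wrap-around is ignored to keep the sketch short. */
bitboard king_attacks(int sq)
{
    bitboard k = (bitboard)1 << sq;
    return (k << 1) | (k >> 1)       /* east, west   */
         | (k << 8) | (k >> 8)       /* north, south */
         | (k << 9) | (k << 7)       /* NE, NW       */
         | (k >> 9) | (k >> 7);      /* SE, SW       */
}[/code]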

Still, it's sort of like the chicken or the egg deal, you need the 64bit hardware in order to migrate the software to it.

Steve
post #3 of 127
I think it's a step in the right direction of real computing. If you're really asking whether I think it's premature, I'd say no. Unless just bringing in the 64-bit architecture would cause the G5 to scale poorly, or cause another G4-style speed cap situation.

You yourself mentioned servers, and the words Unix and server go together like peanut butter and jelly. Ruuun Foorrest!!! So 64-bit seems like a natural step towards a brighter future, if you ask me.
onlooker
http://www.apple.com/feedback/macpro.html
post #4 of 127
All other things being equal, 64-bit processors are slower than 32-bit ones because code size increases; you can fit less code into the chip's caches, and memory requirements double.

However, all other things are not typically equal: 64-bit processors are usually equipped with larger caches, and have more exotic memory controllers and more system bandwidth to make up for the increase in code size.

If you're not going to use the primary advantage of 64-bit processors, namely 64-bit flat memory addressing, you're wasting time and performance unnecessarily. And very few programs need more than the 4GB that 32-bit addressing provides; databases and large-scale scientific computing algorithms are two of the exceptions. To the vast majority of Apple's customer base, 64-bit addressing is pointless.
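
To make the memory-growth claim concrete, here is a hedged sketch (mine, not the poster's): a structure with no 64-bit integers in it still grows on a 64-bit system, because each pointer doubles in size and alignment padding follows.

[code]#include <stdio.h>

/* No 64-bit integers here, yet the struct grows on a typical
   64-bit (LP64) target: each pointer goes from 4 to 8 bytes,
   and the compiler pads after 'count' to keep pointers aligned. */
struct node {
    struct node *next;   /* 4 bytes on ILP32, 8 on LP64 */
    struct node *prev;   /* 4 bytes on ILP32, 8 on LP64 */
    int          count;  /* 4 bytes on both             */
};

int main(void)
{
    /* Typically prints 12 on a 32-bit build and 24 on LP64. */
    printf("sizeof(struct node) = %zu\n", sizeof(struct node));
    return 0;
}[/code]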
post #5 of 127
The G4 has 36-bit addressing but is not considered a 36-bit processor. Also, the first Alpha 64-bit chips had only 40-48 bits of addressing but were still considered 64-bit chips. I think most of the industry considers a chip 64-bit if it can handle 64-bit ints.
post #6 of 127
64-bit makes sense for: gigantic address spaces (per process, per file, and per logical volume, not just system-addressable RAM), and accelerating code that, whether by intent or by default, relies heavily on double precision floating point.

For desktop purposes, it's (currently) well-nigh useless. For scientific and engineering tasks, and for running servers (and the sort of Great Big Applications that run on servers, like Oracle), it's invaluable. So the question really is: just how serious is Apple about pushing into web broadcasting, high-end 3D, (back) into science and higher ed, and (once more) into UNIX's traditional strongholds? They've certainly been taking action on all these fronts (although they partnered with Sun at QT Live! to get the big server hardware needed for that purpose). I'm not suggesting that they're going to start making minicomputers, but they could offer a platform robust enough that, when clustered, it would scale well enough to do some real heavy lifting.

Unless Apple has some software in the works that will make 64-bit as necessary as the iApps have made vector processing (another technology that was not considered general purpose), it won't be of interest to most desktop/workstation users for a few years yet. The most pressing need I could see would be 64-bit color, which Apple's huge base of artists and designers would probably appreciate.

Wild thought: Apple is working with Motorola to blend AltiVec into a 64-bit hybrid processor, where each execution unit can perform an op on one 64-bit, two 32-bit, or four 16-bit chunks. Given some spiffy compiler support, Apple could then help to obviate the code and data bloat that 64-bit architectures introduce, and significantly accelerate 32-bit performance. That might work, and it might become AIM's very own Itanium. I don't know enough about low-level architectural issues to say.
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!
post #7 of 127
[quote]Originally posted by Amorph:
For desktop purposes, it's (currently) well-nigh useless. For scientific and engineering tasks, and for running servers (and the sort of Great Big Applications that run on servers, like Oracle), it's invaluable. So the question really is: just how serious is Apple about pushing into web broadcasting, high-end 3D, (back) into science and higher ed, and (once more) into UNIX's traditional strongholds? They've certainly been taking action on all these fronts (although they partnered with Sun at QT Live! to get the big server hardware needed for that purpose).[/quote]

Would this mean that the NAB show would be the best time to release the G5, if it is really powerful? [Hmmm]
Abhor the Stereotype, respect the Individual.
1.33Ghz 15" Powerbook: 80GB HD, 1GB RAM, OSX.4.7, Soundsticks II, 320GB LaCie FW800 EXT HD, iPod 20GB 4G
post #8 of 127
[quote]Originally posted by mattyj:
Would this mean that the NAB show would be the best time to release the G5, if it is really powerful? [Hmmm][/quote]

It wouldn't be a bad time, hypothetically speaking, but since there are no Big Apples keynoting NAB, it's not gonna happen. Not unless Apple has decided to take their patented stealth marketing to the next level, anyway.

MWNY would be a good venue. So would Seybold. WWDC wouldn't be a bad one, but what would be better, IMO, if Apple wants to target WWDC (a good idea if you're rolling out a serious revision to the platform!) is to unveil the machines beforehand at an Apple Event, saturate the press, and then show off the new kit at WWDC once everyone's been able to read up on it. That way, Apple can concentrate on the kind and depth of detail that developers prefer without making things difficult for the mainstream press.

However, I have a hunch that WWDC is too early for that to happen, so I'll say it's MWNY, Seybold SF, or an Apple Event sometime this summer. Keep in mind that, by my lights, this is when a wholly redesigned platform will roll out, not necessarily anything called a "G5." If the G5 appears, especially in a 32-bit flavor, I'd expect it around the beginning of next year. Just a hunch.

I realize that's not narrowing things down much, but that's what happens when all I have to go on is rampant speculation.

[ 03-27-2002: Message edited by: Amorph ]
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!
post #9 of 127
Interesting topic! So what about the Itanium and its uber-balls floating point performance? Wouldn't that alone be a compelling benefit for the demanding 3D/CAD/multimedia user (assuming appropriate 64-bit software were to follow)? Or would this not bear out in general practice within the confines of the desktop user environment, or is it just a needlessly inefficient way to achieve better floating point performance anyway?
Lauren Sanchez? That kinda hotness is just plain unnatural.
post #10 of 127
I think the roadmap used to have two "G5" chips, the 75XX and the 85XX. I think the 7500 was a 32-bit chip, while the 8500 was a 64-bit chip. Dorsal M posted a message about Apple using the 7500 in current prototypes, so it looks like the first "G5" we see will be 32-bit. For most consumer apps today, 64 bits won't be an advantage.
post #11 of 127
Every time the processor word size has doubled, there has been a dramatic increase in performance. Processors went from 8-bit to 16-bit, and software quickly took advantage of it. Processors then went from 16-bit to 32-bit, and performance increased. So why wouldn't this happen again when a 64-bit processor becomes available? If the hardware is there, I believe the software will soon take advantage of it. And don't forget, the G5 is said to run 32-bit software just fine too, while it's waiting for the 64-bit stuff to come along.
post #12 of 127
[quote]Originally posted by snoopy:
Every time the processor word size has doubled, there has been a dramatic increase in performance. Processors went from 8-bit to 16-bit, and software quickly took advantage of it. Processors then went from 16-bit to 32-bit, and performance increased. So why wouldn't this happen again when a 64-bit processor becomes available? If the hardware is there, I believe the software will soon take advantage of it. And don't forget, the G5 is said to run 32-bit software just fine too, while it's waiting for the 64-bit stuff to come along.[/quote]

The hardware is there. 64-bit platforms are not at all a new idea. The software that can take advantage of them is mature at this point.

There were marked improvements from 8 -> 16 -> 32 bits (and the other variants; there were 9-bit machines too, back in the day) because bit depths of 8 and 16 don't offer a wide enough range to be useful. As soon as you need a number greater than 65,535 (for example, if you'd like to address more than 64K of memory), 16 bits is no longer adequate, and you have to break the number into 16-bit (or 8-bit!) pieces and work on one piece at a time, which is costly. 32 bits covers a range of 4 billion and change, which has proven sufficient for almost all tasks: RAM hasn't come close to 4GB in most machines, files larger than 4GB are extremely rare, and very few software applications need to handle numbers in the billions. They're just not part of most people's (or most businesses') everyday lives.

As a consequence, the mind-boggling range covered by 64 bits is largely superfluous, and for the most part it does not offer any speed increase. In fact, it can slow things down: the minimum piece of data a 64-bit processor reads is 64 bits, and if you're reading in 8-bit ASCII characters then you're pulling 8 times the bandwidth the data actually requires across the bus. Since the bus is always a bottleneck, this actually hurts performance. A 32-bit processor only pulls 4 times as much as it needs in that case.
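
To make the "work on one piece at a time" cost concrete, here is a minimal sketch (mine, not Amorph's) of what a compiler has to emit when 64-bit arithmetic runs on a 32-bit machine: every add becomes two adds plus explicit carry handling.

[code]#include <stdint.h>

/* 64-bit addition the way a 32-bit CPU has to do it: add the low
   halves, detect the carry, then fold the carry into the high
   halves.  A 64-bit CPU does all of this in one instruction. */
void add64(uint32_t a_hi, uint32_t a_lo,
           uint32_t b_hi, uint32_t b_lo,
           uint32_t *r_hi, uint32_t *r_lo)
{
    uint32_t lo    = a_lo + b_lo;
    uint32_t carry = (lo < a_lo);    /* unsigned wrap => carry out */
    *r_lo = lo;
    *r_hi = a_hi + b_hi + carry;
}[/code]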

This is why I proposed a weird hybrid architecture, where all the execution units are essentially SIMD units capable of applying the same operation to one 64-bit, two 32-bit, or four 16-bit piece(s) of data at once. Then a clever compiler could pack small pieces of data in memory, and there wouldn't be a performance hit for unpacking the data. It wouldn't speed everything up all the time, but it would provide a significant boost in a number of common circumstances. AltiVec would remain on board to handle more advanced needs, and also to offer greater potential acceleration with its 128-bit registers.

[ 03-28-2002: Message edited by: Amorph ]
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!
post #13 of 127
Anandtech has a very good FAQ on 64-bit CPUs:
The myths and realities of 64-bit computing: http://www.anandtech.com/guides/viewfaq.html?i=112

Some interesting points the story makes:
- Programmers rarely (need to) use 64-bit integers in performance-critical code.
- Because of extra cache misses, overall performance goes down by about 10-15%.
- x86-64 will increase performance, but mostly because it fixes x86 flaws (the lack of registers is an age-old one).

Personally, I'd rather see Apple wait a year or two before they switch to 64-bit. Although in the interest of marketing it might be better to take the speed hit so you can boast about 64-bitness. But it will not benefit us much, in contrast to a better bus and a low-latency onboard memory controller that interfaces with fast DDR DRAM.
post #14 of 127
[quote]Interesting topic! So what about the Itanium and its uber-balls floating point performance?[/quote]

Itanium, and future IA-64 processors in general, will excel at FP code for a number of reasons.

1. 128 dedicated hardware registers for FP.
2. FP code is easily predictable, and IA-64 processors use full predication.
3. IA-64 processors will generally be gifted with large, full-speed, low-latency caches, so a lot of code can fit in them and be accessed quickly.
4. IA-64 processors will in general be gifted with multiple scalar FP execution units.

Itanium itself comes with either 2MB or 4MB of off-die but full-speed L3, running at either 733MHz or 800MHz, the two clock speeds Itanium is sold at. McKinley, Itanium's successor, will come with either 1.5MB or 3MB of full-speed, on-die, lower-latency L3. It will also run at or above 1GHz. Its performance in both FP and INT is expected to be greatly superior to Itanium's. In fact, if it's not, Intel's got some 'splainin' to do.
post #15 of 127
There certainly seems to be a lot of misinformation about what having a 64-bit processor means.

- Code size does not increase; data size increases.
- Context switching is the single largest data-size cost, because the integer register file has doubled in size and must be saved on every function call and thread switch.
- Most data structures won't get any larger; only those which actually use 64-bit ints will, and if they need 64-bit ints then they would have had to be that big even on a 32-bit processor. Sloppy programming will contribute a bit of bloat (using larger-than-necessary integers).
- As mentioned above, the gains from doubling the word size this time will not be nearly as significant as going from 8 -> 16 or 16 -> 32. This is an exponential effect, so we've already gained the most useful increases in integer range and address space.
- The PowerPC architecture defines a 32-bit mode so that existing software runs at full speed, and new software will continue to be delivered in 32-bit mode if it doesn't need 64 bits to run. The existing user base isn't going away, so developers will want to support it.
- It doesn't take that many transistors, so it is probably worth it for the extra capabilities that can potentially be used. Since Wintel/AMD are going this way, I'm sure more software will start appearing that wants to use 64-bit mode.
- The G4's 36-bit addressing is physical. A single process's address space is limited to 32 bits because pointers are only 32 bits. This ignores the segmentation support, which will never be used.
- The suggestion to add SIMD in the 64-bit registers is fairly pointless. This is essentially what MMX is; it is half as capable as AltiVec, and there is extremely little compiler technology that can take advantage of it. If code is going to be parallelized, use the AltiVec unit to get 128-bit registers, and leave the integer units to handle the counting, addressing, and looping in parallel with the vector execution. Adding SIMD to the integer unit would just bloat the instruction set and needlessly complicate the integer unit.
- The Itanium's fast FPU has little to do with the fact that it is a 64-bit chip. They chose to share the register file, but that's not what gave them excellent performance. The PowerPC could improve its FPU (and have multiples of them) to achieve equivalent or better floating point performance. Having 128 registers doesn't buy that much, since compilers already under-utilize the existing 32 PowerPC registers.

I think it is definitely worthwhile to go 64-bit on the PowerPC, and it will probably happen in 2003. There are probably some killer apps waiting for 64-bit, and they may very well be consumer/desktop apps.
Providing grist for the rumour mill since 2001.
post #16 of 127
[quote]Originally posted by Programmer:
- The suggestion to add SIMD in the 64-bit registers is fairly pointless. This is essentially what MMX is; it is half as capable as AltiVec, and there is extremely little compiler technology that can take advantage of it. If code is going to be parallelized, use the AltiVec unit to get 128-bit registers, and leave the integer units to handle the counting, addressing, and looping in parallel with the vector execution. Adding SIMD to the integer unit would just bloat the instruction set and needlessly complicate the integer unit.[/quote]

Ah well. It was just a thought.
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!
post #17 of 127
Thread Starter 
If my memory is good, I read several years ago that the Itanium would have a special integer unit (like the one Amorph describes) able to deal with one 64-bit, two 32-bit, four 16-bit, or one 32-bit and two 16-bit instructions per cycle. It was not a SIMD unit; I have forgotten the name of the technology (I read the article four years ago), but I think a limited form of this integer unit would be fine (just one 64-bit or two 32-bit instructions per cycle).
post #18 of 127
Call me crazy, but I think that, at least for a while, there will be two places for the G5 immediately.

The first will be in the server (rackmount, PLEASE) line. Hopefully this will make it more distinct from the professional line, and it will make it more like a server... that's what 64 bits were born to do!

The second is at the very top of the professional line. Basically, the machine built to run the Nothing Real software and Maya from Alias/Wavefront. Call it a creative workstation. These are the things that need 64-bit currently... leave the rest of the professional line on the G4 for as long as it will scale.

Now I know I'm always posting about the servers needing to be more like "REAL" servers, but that's just because OS X can run a server really well, and they're absolutely essential for some key markets... particularly education.

Let the G5 kick into the professional line in 1 to 1.5 years... until then, use an interim chip like they did with the first G4s. This will save everyone money and let the server line make a name for itself.
My comic, <a href="http://www.rhcomic.com" target="_blank">Reality High </a> has more hits from AppleInsider than anywhere else. Thanks for supporting your fellow mac artist!
Later Days!
post #19 of 127
[quote]- Code size does not increase; data size increases.
- Most data structures won't get any larger; only those which actually use 64-bit ints will, and if they need 64-bit ints then they would have had to be that big even on a 32-bit processor. Sloppy programming will contribute a bit of bloat (using larger-than-necessary integers).[/quote]

This is not quite true. Code size does increase, especially in a RISC architecture where all instructions go from being 32 bits to 64 bits.

Also, data structure sizes might change even if they originally used 32-bit ints. For one, the chip might not support a 32-bit mode or native 32-bit ints. Also, on most 64-bit architectures an int in C will be 64 bits, just as an int on a 16-bit machine is 16 bits. This is why making a program work on a 64-bit chip usually takes more than just a recompile.
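
A hedged sketch of the classic code that breaks in such a port (the names are illustrative, not from any real program):

[code]#include <stdint.h>
#include <stdio.h>

/* The classic 64-bit porting bug: code that assumes a pointer
   fits in an 'int'.  Fine when both are 32 bits; on a 64-bit
   system the cast silently throws away the top half of the
   address. */
int main(void)
{
    int      x    = 42;
    int      bad  = (int)(intptr_t)&x;   /* truncates on 64-bit  */
    intptr_t good = (intptr_t)&x;        /* integer sized to fit */

    printf("ptr=%zu bytes, int=%zu bytes\n",
           sizeof(void *), sizeof(int));
    (void)bad; (void)good;
    return 0;
}[/code]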
post #20 of 127
[quote]Originally posted by mmaster:
This is not quite true. Code size does increase, especially in a RISC architecture where all instructions go from being 32 bits to 64 bits.

Also, data structure sizes might change even if they originally used 32-bit ints. For one, the chip might not support a 32-bit mode or native 32-bit ints. Also, on most 64-bit architectures an int in C will be 64 bits, just as an int on a 16-bit machine is 16 bits. This is why making a program work on a 64-bit chip usually takes more than just a recompile.[/quote]

If you read the PowerPC 64-bit specification you will see that the instruction word remains 32 bits, so code size does not change.

Most compilers will continue to use 32-bit "int" and "long int" types, while "long long int" will be the 64-bit type. This is likely to be the case on the Mac in order to remain as compatible as possible with older code. Like I said in my previous message, however, structures which use 64-bit ints will be larger (regardless of exactly which types are needed to get a 64-bit integer). Your point is probably that code not carefully/explicitly typed will have its data structures grow, and with that I would agree.
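
A quick way to see which sizes a given compiler picked (a trivial sketch; the outputs in the comments are just the two common conventions):

[code]#include <stdio.h>

/* Print the compiler's integer "data model".
   Typical 32-bit (ILP32) output: int=4 long=4 long long=8 ptr=4
   Typical 64-bit (LP64) output:  int=4 long=8 long long=8 ptr=8 */
int main(void)
{
    printf("int=%zu long=%zu long long=%zu ptr=%zu\n",
           sizeof(int), sizeof(long),
           sizeof(long long), sizeof(void *));
    return 0;
}[/code]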
Providing grist for the rumour mill since 2001.
post #22 of 127
[quote]Originally posted by Programmer:
If you read the PowerPC 64-bit specification you will see that the instruction word remains 32 bits, so code size does not change.[/quote]

Yeah, I was not thinking very straight and forgot that most 64-bit architectures still use 32-bit instruction words, to conserve space and allow easier compatibility with the 32-bit versions. I even studied the MIPS architecture in school, which has both 32-bit and 64-bit specifications, so I should have remembered that many of the instructions are shared across the various MIPS I-V specifications. IA-64 did not have to maintain that backwards compatibility, and so went with 41-bit sub-instructions.
post #23 of 127
What we have here is a great opportunity for Apple to grab both mind share (for future expansion) and market share (for immediate expansion) with the 64-bit jump... Apple should do it as soon as they can confidently believe they won't screw it up.

Why?

Two reasons:

A - Intel is completely messed up on the issue (which is why I hope Apple doesn't merely add FUD on FUD by also rushing to market before they're ready and screwing up their 64-bit lead).

B - There's a huge maturation of needs that a desktop 'puter is just about ready to take over, in the areas of biotech (BLAST, etc.) with its huge memory needs, and video arts (Nothing Real, Maya, FCP) with their huge memory needs. BTW, these two fields converge directly to give you proper 3D protein folding visualization, which in itself is a nice bonus, but I digress...

Mix those two together, add water, then stir in the traditional advantages of UNIX with parallelism and this rockin' POOCH software, and you bake up...

Racks of 64-bit G5 drones in a back room someplace, all wired up with switched Gigabit and running POOCH on OS X.2.

Ya wanna render? Config the job on your machine, and then farm it out...

Ya wanna BLAST? Config the job on your machine, and then farm it out...

Ya wanna do both? Sure, go ahead; POOCH will just config whoever's available, even your buddy's computer down the hall if he's not using it...

You started a job at home, but it's way too big for you to handle? No problem: walk into a service centre with your PowerBook and send your job to the racks... just rent the power... (all you home video junkies, think about that one!)... walk out with a DVD of your results. We're talking Kinko's convenience and clustered supercomputing power, at the price of two large pizzas...

What we have here is one hell of a vector processor (AltiVec) inside a machine that is incredibly easy to cluster (anybody 'round here ever tried clustering Linux using Beowulf?)... what's important is that the Apple platform becomes both a desktop system (its traditional space) and a whole new method of incredibly powerful modular, scalable computing... right at the time when it's needed, in exactly the spaces where it's needed: biotech and vid.

They'd be crazy not to do this.

The name for this clustering parallel system?

Why, "Apple Tree" of course...

Steve owes me a latte
In life, as in chess, the moves that hurt the most, are the ones you didn't see ...
post #24 of 127
A bit of semantics speculation:

Supposedly, there are a bunch of modifications coming to the G4 line which sound rather similar to all the great new technologies the G5 was going to bring. The only thing I didn't see was 64 bits. Could this be the "generational shift" that distinguishes the G4 from the G5? I.e., anything 32-bit will remain officially a G4, while the G5s will be the 64-bit line.

A bit pedestrian compared to most of these other posts, but I'm a chemist; microprocessor architecture is a ways outside my field.
"Mathematics is the language with which God has written the Universe" - Galileo Galilei
post #25 of 127
What about companies like Adobe, who just spent all this time optimizing for OS X and for the G4 and AltiVec? Are they going to want to come back again and optimize for 64 bits now?

What if Apple made an uber-pro line with the 64-bit chip, then bought Maya and optimized it for 64-bit? Would that run on the G4 towers that are still around? Or are you going to have to optimize all of your software twice: once for the G4 and AltiVec, and again for 64-bit?

If you have to do that, I think that's a bad idea.
post #26 of 127
[quote]Originally posted by Billy:
What about companies like Adobe, who just spent all this time optimizing for OS X and for the G4 and AltiVec? Are they going to want to come back again and optimize for 64 bits now?

What if Apple made an uber-pro line with the 64-bit chip, then bought Maya and optimized it for 64-bit? Would that run on the G4 towers that are still around? Or are you going to have to optimize all of your software twice: once for the G4 and AltiVec, and again for 64-bit?

If you have to do that, I think that's a bad idea.[/quote]

The 32-bit software will still run at full speed on a 64-bit machine. I think only very specialized software would take advantage of the 64-bit features, although some apps might ship with two versions of the code (which Mac OS X supports via bundling), one 32-bit and one 64-bit. If the code is written well then it's just a matter of recompiling and using the 64-bitness in key places where you need it.
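
One common shape for "using the 64-bitness in key places" from a single source base is a compile-time word-size switch. A sketch, assuming a GCC-style compiler (which predefines __LP64__ on 64-bit targets):

[code]#include <stddef.h>
#include <stdint.h>

/* One source file, two builds: the 64-bit build walks the buffer
   in 8-byte chunks, the 32-bit build in 4-byte chunks.  __LP64__
   is the macro GCC-family compilers predefine on 64-bit targets. */
#if defined(__LP64__)
typedef uint64_t word_t;
#else
typedef uint32_t word_t;
#endif

word_t xor_checksum(const word_t *buf, size_t nwords)
{
    word_t acc = 0;
    for (size_t i = 0; i < nwords; i++)
        acc ^= buf[i];    /* one native-width operation per word */
    return acc;
}[/code]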
Providing grist for the rumour mill since 2001.
post #27 of 127
[quote]Originally posted by TJM:
A bit of semantics speculation:

Supposedly, there are a bunch of modifications coming to the G4 line which sound rather similar to all the great new technologies the G5 was going to bring. The only thing I didn't see was 64 bits. Could this be the "generational shift" that distinguishes the G4 from the G5? I.e., anything 32-bit will remain officially a G4, while the G5s will be the 64-bit line.[/quote]

I was thinking much the same thing. Not a bad idea, really, as it sidesteps the whole notion that the G4 is obsolete... the G5 would be billed as a workstation/server-class machine for those who need really advanced techniques. Hopefully there would be a G4 that is almost equivalent, except that it is only 32-bit but has the other advances incorporated in it.
Providing grist for the rumour mill since 2001.
post #28 of 127
[quote]Originally posted by Amorph:
64-bit makes sense for: gigantic address spaces (per process, per file, and per logical volume, not just system-addressable RAM),[/quote]

Agreed for the address space, but since the speed of disk access is limited by the disk hardware rather than the CPU, being able to do 64-bit ops in many cases isn't going to get you much of a speed increase, because the disk hardware is still many, many orders of magnitude slower than the CPU.

[quote]and accelerating code that, whether by intent or by default, relies heavily on double precision floating point.[/quote]

All current 32-bit processors already have full 64-bit FPUs. The 32-/64-bit issue only concerns the integer registers, and thus doesn't relate to FP code at all (apart from the larger address space, of course).
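
This is easy to verify from C (a trivial sketch): double is a 64-bit IEEE type even in a 32-bit build, precisely because the FPU registers are already 64 bits wide.

[code]#include <stdio.h>

/* Even compiled for a 32-bit PowerPC or x86, 'double' is a 64-bit
   IEEE type computed natively by the FPU; only the integer
   registers are 32 bits wide. */
int main(void)
{
    printf("sizeof(double) = %zu\n", sizeof(double)); /* 8 everywhere     */
    printf("sizeof(long)   = %zu\n", sizeof(long));   /* 4 on ILP32 build */
    return 0;
}[/code]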

Bye,
RazzFazz
post #29 of 127
[quote]Originally posted by Outsider:
The G4 has 36-bit addressing but is not considered a 36-bit processor. Also, the first Alpha 64-bit chips had only 40-48 bits of addressing but were still considered 64-bit chips.[/quote]

That's because the important factor here is the virtual address width, not the physical one. The G4 has 32-bit-wide virtual addresses (ignoring segmentation), and all the Alphas have 64-bit-wide virtual addresses.

Bye,
RazzFazz
post #30 of 127
[quote]Originally posted by RazzFazz:
Agreed for the address space, but since the speed of disk access is limited by the disk hardware rather than the CPU, being able to do 64-bit ops in many cases isn't going to get you much of a speed increase, because the disk hardware is still many, many orders of magnitude slower than the CPU.[/quote]

Uhhhh, yeah.

I was talking about address spaces, not performance. A filesystem is an addressable space. The issue was file and (logical) volume size.

[ 03-30-2002: Message edited by: Amorph ]
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!
post #31 of 127
[quote]Originally posted by Amorph:
I was talking about address spaces, not performance. A filesystem is an addressable space. The issue was file and (logical) volume size.[/quote]

It indeed is, but unlike the memory address space, you can use 64-bit filesystems on 32-bit CPUs (BeOS's BFS does this, for example, and NTFS allows files exceeding 4GB too; also, I don't think any current filesystem is limited to 4GB volumes).

The only downside of not having a 64-bit CPU in that case is that it takes more than one instruction to do a single calculation on a 64-bit value, but as I stated before, the speed penalty incurred by the additional instructions needed to handle those 64-bit offsets on a 32-bit CPU doesn't really do a lot of harm, as the accompanying disk accesses take multiple orders of magnitude longer anyway.
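
POSIX large-file support is a concrete instance of this (a sketch; it assumes a Unix-style platform where defining _FILE_OFFSET_BITS=64 yields a 64-bit off_t and fseeko):

[code]/* Must come before any #include: request a 64-bit off_t
   even in a 32-bit build. */
#define _FILE_OFFSET_BITS 64

#include <stdio.h>
#include <sys/types.h>

/* Seek past the 4GB mark from a 32-bit process: off_t is a 64-bit
   integer here, and the compiler simply emits multi-instruction
   64-bit arithmetic, which is negligible next to the cost of the
   disk access itself. */
int seek_past_4gb(FILE *f)
{
    off_t five_gb = (off_t)5 * 1024 * 1024 * 1024;
    return fseeko(f, five_gb, SEEK_SET);
}[/code]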

Bye,
RazzFazz
post #32 of 127
hrrmmmm...

Bearing in mind, always, that I know nothing about this: how hard would it be to give PPCs very good 36-bit addressing/functionality? I know G4s can do this already, but what if they went to an essentially hybrid 32/36-bit design? All the 32-bit code we know and love, but some really fast 36-bit memory addressing/data organization. I think 64GB of 'main memory'/'data set' capacity ought to keep everyone happy for a few years to come, no? At least on the desktop. Or has the industry just decided that 64 is the next logical step? Or is it a question of 'if you have to redesign the chip anyway, you may as well just fatten the pipe as much as possible'?

??? Remember, I'm totally ignorant of the interior workings of any computer, so go easy on me.
IBL!
post #33 of 127
[quote]Originally posted by Matsu:
hrrmmmm...

Bearing in mind, always, that I know nothing about this: how hard would it be to give PPCs very good 36-bit addressing/functionality? I know G4s can do this already, but what if they went to an essentially hybrid 32/36-bit design? All the 32-bit code we know and love, but some really fast 36-bit memory addressing/data organization. I think 64GB of 'main memory'/'data set' capacity ought to keep everyone happy for a few years to come, no? At least on the desktop. Or has the industry just decided that 64 is the next logical step? Or is it a question of 'if you have to redesign the chip anyway, you may as well just fatten the pipe as much as possible'?

??? Remember, I'm totally ignorant of the interior workings of any computer, so go easy on me.[/quote]

The main factor in address space size is the number of bits in a pointer. Pointers have to fit in the integer registers (at least on the PowerPC and most other processor designs). Integer registers are typically an even power of 2 bits wide, which is done for a variety of reasons and is a fairly firmly established convention. If they tried to go with a 36-bit integer word, the software community would certainly balk at it. Changing the word width is also a significant change, so it is better to make one big change once rather than a series of small leaps. I can easily see applications hitting the 36-bit limit very quickly, whereas a 64-bit address space is probably going to be more than sufficient for a very long time (remember, a 64-bit address space is about 4 billion times larger than a 32-bit address space).
Providing grist for the rumour mill since 2001.
post #34 of 127
[quote]Originally posted by Programmer:
Integer registers are typically an even power of 2 bits wide, which is done for a variety of reasons and is a fairly firmly established convention.[/quote]

Sorry for nitpicking, but this should really be just "a power of two" rather than "an even power of two" - at least I was under the impression that 32 (=2^5) bit wide registers weren't really all that uncommon.

Bye,
RazzFazz

[ 03-31-2002: Message edited by: RazzFazz ]
post #35 of 127
[quote]Sorry for nitpicking, but this should really be just "a power of two" rather than "an even power of two" - at least I was under the impression that 32 (=2^5) bit wide registers weren't really all that uncommon.[/quote]

While I can't speak for him, I would tend to assume that what he meant by "even power of two" was that the bit width would be an exact power of two, i.e., no remainder:

[code]2^5     = 32   even (an exact power of two)
2^5 + 4 = 36   not even[/code]
post #36 of 127
[quote]Originally posted by Programmer:
I was thinking much the same thing. Not a bad idea, really, as it sidesteps the whole notion that the G4 is obsolete... the G5 would be billed as a workstation/server-class machine for those who need really advanced techniques. Hopefully there would be a G4 that is almost equivalent, except that it is only 32-bit but has the other advances incorporated in it.[/quote]

Funnily enough, this fits in nicely with something from the latest Dorsal rumour. S/he wrote that the processor and RAM were on one big daughtercard, and the main board was mostly just for all the I/O. This arrangement would be very convenient if you were implementing G4 and G5 systems and wanted to keep a somewhat unified motherboard.

Grain of salt and all that, but it is an interesting speculative convergence.
post #37 of 127
[quote]Originally posted by RazzFazz:
Sorry for nitpicking, but this should really be just "a power of two" rather than "an even power of two" - at least I was under the impression that 32 (=2^5) bit wide registers weren't really all that uncommon.[/quote]

By "even" I meant "non-fractional", not "even" as opposed to "odd". Sorry for any confusion.
Providing grist for the rumour mill since 2001.
post #38 of 127
I'm perplexed by all the 'rumour' I've read on the net re: the 'G5'.

I've checked out loads of links. I don't get the confusion.

As far back as a few years ago, the G5 was going to be a 32-bit part initially. Unless Mac OS X is 64-bit, why bother...?

This probably fits the 7500 idea into some kind of half-64-bit half-brother (yikes) kind of setup.

Come the end of the year, then, maybe the full 64-bit implementation of the G5 will be readied for a New Year 2003 intro'. Though maybe, due to Apollo's implied 'success', the '32-bit' G5 won't be with us until next year...

I thought Apollo would only ensure that the G4 topped out at 1 gig or just over. I'd have thought 1.2 optimistic. That was what was initially said some time ago...

The confusion seems to come from the fact that Motorola have had a bit of joy (insinuation from both Apple and Moto...) in pushing the G4 ceiling up a bit further. Which means 1.2 as a conservative estimate for the top of the pro line come New York Macworld... leaving a crippled, cacheless 1 gig at the bottom end.

So if you can push the G4 to 1.4 with a few motherboard whistles etc., then that's more cash to be squeezed out of the G4 part.

This seems to push the '32-bit' G5 flavour back a little... unless it is offered in a Power Mac 'split line' of Apollo G4s and '32-bit' G5s...

The problem with the 'conservative' option is that it's all very evolutionary. No surprises.

With a 2.4 gig Pentium imminent, I'm going to take some convincing that a 1.2 Apollo with PC133 can hang against a 2.4 gig Pentium with a GeForce4 Titanium and 512 meg of 333 DDR, for half the price!

Looking at it logically, things look saunteringly casual on the PPC roadmap.

The 'quiet' seems to imply something more is going to happen. A tweaked motherboard, RAM, and a bumped Apollo?

Yet Dorsal seems to be saying that G5s are on their way back to Apple after being tested/seeded to developers...

A two-tier strategy?

I hope Apple doesn't label a motherboard-boosted G4 a 'G5'.

Hmmm. Maybe I do see why there's confusion...

Lemon Bon Bon

[Chilling]
We do it because Steve Jobs is the supreme defender of the Macintosh faith, someone who led Apple back from the brink of extinction just four years ago. And we do it because his annual keynote is...
post #39 of 127
Thread Starter 
[quote]Originally posted by RazzFazz:
It indeed is, but unlike the memory address space, you can use 64-bit filesystems on 32-bit CPUs (BeOS's BFS does this, for example, and NTFS allows files exceeding 4GB too; also, I don't think any current filesystem is limited to 4GB volumes).

The only downside of not having a 64-bit CPU in that case is that it takes more than one instruction to do a single calculation on a 64-bit value, but as I stated before, the speed penalty incurred by the additional instructions needed to handle those 64-bit offsets on a 32-bit CPU doesn't really do a lot of harm, as the accompanying disk accesses take multiple orders of magnitude longer anyway.

Bye,
RazzFazz[/quote]

So how do you explain that IBM chose 64-bit CPUs for its high-end servers, with the 64-bit POWER3 and POWER4 chips, if the only important thing is the speed of the HD?
post #40 of 127
[quote]Originally posted by powerdoc:
So how do you explain that IBM chose 64-bit CPUs for its high-end servers, with the 64-bit POWER3 and POWER4 chips, if the only important thing is the speed of the HD?[/quote]

1) They are "big iron" machines that don't just have your standard ATA-66 HD inside, but high-performance storage systems.

2) The argument was about filesystems, "bittiness", and speed. Database calculations profit enormously from 64-bit; for average users, 64-bit chips are in fact inferior to 32-bit in many situations.