
32bit / 64bit too much talk.

post #1 of 38
Thread Starter 
I have a question. I keep reading articles on new processors (AMD/Intel)
and getting confused. This does relate to Future Apple Hardware.

The articles make comments about the Opteron and new Intel processors not being true 64-bit processors, saying that all they have done is add 64-bit memory addressing.

So I am not clear on what makes a true 64 bit processor. What does?

Which brings me to the question of the G5. How 64bit is the new G5?

With Intel announcing the addition of 64-bit extensions to their upcoming releases that are very similar to AMD's, and with so much contradiction in what I have been reading, I am at a loss now.

Please help me clarify my thoughts.


PS: My comparison of the three chips is based solely on the fact that they are all 32-bit / 64-bit processors. Or are they?
post #2 of 38
Assuming that a "true 64-bit processor" is one whose instructions only deal with data in 64-bit chunks, there aren't very many of them because they're not very useful.

The new x86-64 CPUs from AMD and (now) Intel "bolted on" 64-bit support because if they offered a chip that used a purely 64-bit instruction set, no existing x86 app would run on it! Is that a feature?

Processor design has moved beyond attempts to implement "pure" anything, because the "pure" models inevitably fail to cope with at least one major aspect of reality, and real-world CPUs can't. It's useful to have lots of instructions that deal with 32-bit data, so CPUs have lots of instructions that deal with 32-bit data. It's just as simple as that.

Just to drive the point home, there's no clear definition of what a "64 bit" processor is in the first place. It used to refer to the register size (a register is a very small, very fast bit of memory designed to hold a "word," which is classically the smallest size the CPU is designed to work with - modern designs, again, are too complex to hold to this), but that was back when floating point units were separate and optional support chips. The 68040 had an 80 bit register for floating point work, and the G4 has 128-bit registers for AltiVec and 64-bit registers for floating point. So that definition has fallen back on the size of the integer registers. In all modern implementations, addresses in memory are stored as integers, so this metric also measures the amount of virtual memory that the CPU can address efficiently. As a convenient coincidence, I'm not aware of any CPU where the integer registers are larger than the FP or vector or other registers, so this shorthand works pretty well in practice.

So, after all that, a "64 bit CPU" is a CPU that uses 64 bit addresses into virtual memory, for all practical purposes. A "pure" 64-bit CPU - one which only deals with data in 64-bit chunks - is a pretty idea with limited practical application. The 970, and the Nocona Xeon, and the Athlon 64, and the Opteron, are hybrid 32/64 bit CPUs because that's what works.
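
If it helps to make "hybrid" concrete, here's a tiny C sketch (my own illustration using C99 fixed-width types, not anything vendor-specific): the same program freely mixes 32-bit and 64-bit integer work, and a hybrid CPU like the ones above runs both natively.

Code:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    int32_t small = 1000000;         /* fits easily in 32 bits */
    int64_t big   = 5000000000LL;    /* bigger than 2^32, needs 64 bits */

    /* A hybrid 32/64-bit CPU runs both of these as ordinary, native
       integer arithmetic; neither one is emulated. */
    printf("32-bit result: %" PRId32 "\n", small * 42);
    printf("64-bit result: %" PRId64 "\n", big + small);
    return 0;
}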
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!
post #3 of 38
Thread Starter 
Quote:
Originally posted by Amorph
Assuming that a "true 64-bit processor" is one whose instructions only deal with data in 64-bit chunks, there aren't very many of them because they're not very useful.

A "pure" 64-bit CPU - one which only deals with data in 64-bit chunks - is a pretty idea with limited practical application. The 970, and the Nocona Xeon, and the Athlon 64, and the Opteron, are hybrid 32/64 bit CPUs because that's what works.

So can any of these four chips handle 64 bit data or not?
post #4 of 38
I'd add that a 64-bit processor must have 64-bit-wide registers on which it can perform all ALU operations.

Even that's not perfect, because the Zilog Z8000 could do 32-bit operations and had a 32-bit bus, but it was considered a 16-bit processor. Hmm.

I think it really has something to do with data path widths within the processor, and external data bus (not just address) width to memory.
post #5 of 38
Apple's G5 Processor Architecture page notes a few 'features':

2^32 vs. 2^64 (those are exponents): 2^32 is about 4.3 billion, 2^64 about 18 billion billion. Numbers that big are hard to get your head around, but you could compare 32-bit processing to a glass of water, and 64-bit processing to Niagara Falls. This lets the G5 work with larger numbers in the same clock cycle for video effects, scientific and 3D calculations; the 32-bit Pentium must split such numbers across multiple cycles.

Hannibal at ARS has some good technical threads about the G5

And even if the G5 were considered "pure 64" (despite the fact that there is _no emulation penalty_ for 32-bit code, according to IBM, who have designed this legacy support into many of the POWER series chips to allow backwards compatibility while expanding the 64-bit design), we must remember that OS X 10.3 as it currently stands isn't truly a "64-bit OS". It's considered a bridge environment (mostly 32-bit, with 64-bit beginning to take over) until 32-bit support requirements diminish and migration to a more "pure 64" OS occurs.

not sure if that just confuses the issue.
"I do not fear computers. I fear the lack of them" -Isaac Asimov
post #6 of 38
The real point is that most of the time, the 64 bit-ness of the processor means very little to the average consumer. It's a popular buzzword right now, but not much else. There are some applications where true 64-bit support will matter, but they are not nearly as common as folks' compulsion to dwell on the issue would suggest.

From a hardware design standpoint, it means just about nothing at all. In speculating about the G5 PowerBook, for example, sometimes people talk about what a challenge it is to "fit a 64-bit processor into a laptop." Completely irrelevant; the 64 bit-ness is hidden inside the chip.

The move from 16 bits to 32 bits was critical because you very often need to deal with numbers greater than 2^16. The move from 32 bits to 64 bits is much fuzzier, though the ability to address a humongous amount of memory will be very valuable for some.

My guess is that if the 970 were a 32-bit chip with otherwise the same performance, Apple would still have used it. The 64-bitness gave them a marketing hook that they chose to take advantage of, for better or worse.
post #7 of 38
Quote:
Originally posted by oldmacfan
So can any of these four chips handle 64 bit data or not?

All of them can.
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!
post #8 of 38
Amorph,

Am I correct in reading into your post that it's "merely" a chicken and egg problem?

You seem to be saying that there are limited practical applications for a "pure" 64-bit CPU because no one yet develops software for a purely 64-bit CPU?

So is it a matter of backwards-compatibility alone?

I guess I'm asking: barring backwards compatibility, and barring a lack of 64-bit development tools (assuming that's the case), is there anything else that prevents going truly 64-bit?

I seem to get from this that 32-bit is good enough for most things and that only certain things will "ever" be 64-bit?
"The Roots of Violence: wealth without work, pleasure without conscience, knowledge without character, commerce without morality, science without humanity, worship without sacrifice, politics...
post #9 of 38
the clearest illustration I know of the advantages of 64 bit are BLAST word length for DNA Sequencing



parsing larger chunks of data makes a huge difference after a certain threshold

from here

cryptography will benefit from access to vastly larger numbers
certain mathematical, physics and modeling apps will clearly benefit
massive databases will benefit from this as well as larger memory address space
"I do not fear computers. I fear the lack of them" -Isaac Asimov
post #10 of 38
Quote:
Originally posted by johnq
Amorph,

Am I correct in reading into your post that it's "merely" a chicken and egg problem?

You seem to be saying that there are limited practical applications for a "pure" 64-bit CPU because no one yet develops software for a purely 64-bit CPU?

More to the point, there's not much reason to develop software for a purely 64-bit CPU.

The transition from 16-bit to 32-bit was made quickly and eagerly because 16 bits only offers you 65,536 values - usually -32,768 to 32,767. That's 64K of memory, or a painfully small range of integers (FP was a luxury back then). 32-bit offers something over 4 billion possible values, which addresses 4GB of RAM and covers just about every other need pretty well. The only place where more possible values were needed (for precision, not for range) was in floating point, and CPUs just used huge special registers for that - as with the 68040, a 32-bit CPU that could do arithmetic on 80-bit floating point values, or the G4, a 32-bit CPU that can do arithmetic on 64-bit floating point values or 128-bit vectors.

But any application that doesn't need more than 4GB RAM, or doesn't have to deal with more than 4 billion of anything, does not need to be 64 bit. Think about how many apps do not need this, and how many never will. Do you really need to enter 5 billion contacts into Address Book? (You really would have the world at your fingertips, I suppose...)

I should note here that it is possible for a 32-bit application to deal with larger quantities. There are ways around any such limit. You pay a penalty in performance, but then you probably wouldn't expect 100 billion photos to pop up instantaneously anyway. At least, not any time soon.
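
To make that concrete, here's a toy C sketch of one such workaround: doing a 64-bit addition with a pair of 32-bit words and a manual carry, which is roughly the kind of thing a compiler has to emit for big integers on a 32-bit-only target. (Just my own illustration, nothing specific to any particular compiler.)

Code:

#include <stdio.h>
#include <stdint.h>

/* A 64-bit quantity held in two 32-bit halves. */
typedef struct { uint32_t lo, hi; } u64pair;

/* Add two of them using only 32-bit operations, propagating the carry by hand. */
static u64pair add64(u64pair a, u64pair b) {
    u64pair r;
    r.lo = a.lo + b.lo;                    /* low words wrap modulo 2^32 */
    r.hi = a.hi + b.hi + (r.lo < a.lo);    /* carry out of the low word  */
    return r;
}

int main(void) {
    u64pair a = { 0xFFFFFFFFu, 0 };        /* 4,294,967,295: the 32-bit ceiling */
    u64pair b = { 1, 0 };
    u64pair sum = add64(a, b);             /* 4,294,967,296 = 2^32 */
    printf("hi = %u, lo = %u\n", sum.hi, sum.lo);   /* prints hi = 1, lo = 0 */
    return 0;
}

Two adds and a compare instead of one add: that's the sort of penalty I'm talking about.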

Quote:
I seem to get from this that 32-bit is good enough for most things and that only certain things will "ever" be 64-bit?

Exactly. And if there's always going to be a significant set of applications that run better as 32-bit applications, why not accommodate them? You could force them to pure 64-bit, but why? It's not like the 32-bit support is an albatross on the neck of CPU developers or anything. A hybrid CPU can run legacy code, can efficiently run applications that are most comfortable as 32 bit, and they can also run applications that are most comfortable at 64 bit. This seems to me like a perfectly acceptable state of affairs.

Put another way, the applications that do require 64 bits wouldn't run a hair better if 32-bit support was eliminated from the 970, and the Nocona, and the Athlon 64, and the Opteron. But 32-bit applications would run slower in many cases.
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!
post #11 of 38
Quote:
Originally posted by curiousuburb
the clearest illustration I know of the advantages of 64 bit are BLAST word length for DNA Sequencing



parsing larger chunks of data makes a huge difference after a certain threshold

from here

cryptography will benefit from access to vastly larger numbers
certain mathematical, physics and modeling apps will clearly benefit
massive databases will benefit from this as well as larger memory address space

Careful what you attribute the G5's better performance to... I believe their BLAST implementation uses AltiVec and is fast for that reason (128-bit data chunks).

Cryptography can probably benefit from AltiVec more than 64-bit integers, although I might be mistaken on that one. Some modeling algorithms might benefit, but most use double precision floating point and don't require a "64-bit processor" (which is why the 970 has 2 FPUs). Massive databases which use 64-bit address certainly benefit, and that is the clearest demonstration of the 64-bit advantage that you listed.

Note that the designation "64-bit" has nothing to do with internal data path widths, floating point unit widths, vector unit widths, cache line sizes, external data bus sizes, etc. Those are all implementation details of the particular processor. The designation "64-bit" means the size of the integer registers, and therefore (virtual) memory addresses.
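
A quick way to see which of these your compiler and OS actually target (just a sketch; the exact sizes depend on the ABI, e.g. LP64 vs. ILP32):

Code:

#include <stdio.h>

int main(void) {
    /* The pointer size tracks the virtual address width the ABI exposes;
       FP and vector widths are separate implementation details. */
    printf("int:    %zu bytes\n", sizeof(int));
    printf("long:   %zu bytes\n", sizeof(long));
    printf("void *: %zu bytes\n", sizeof(void *));
    printf("double: %zu bytes\n", sizeof(double));
    return 0;
}

A typical 64-bit (LP64) build prints 4, 8, 8, 8; a typical 32-bit build prints 4, 4, 4, 8.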
Providing grist for the rumour mill since 2001.
post #12 of 38
Thank you Amorph, very clear.

I've slacked since G4 and my interest in this level of detail has become merely armchair-grade. It is fascinating and exciting nonetheless. I appreciate the somewhat rare adult response. I was fearing "WTF!!!!!! Don't you know 32-bit offers something over 4 billion possible values???? NITWIT!!!!" or some-such (not from anyone in particular!)

I'm thinking that we 'regular users' (that is, everyone short of true scientists/doctors/researchers and students thereof) are reaching a zenith of performance? I mean that the other shoes (sic) to drop are bottleneck issues: RAM, drives, fiber, etc., basically everything but the CPU?
"The Roots of Violence: wealth without work, pleasure without conscience, knowledge without character, commerce without morality, science without humanity, worship without sacrifice, politics...
post #13 of 38
double post
"The Roots of Violence: wealth without work, pleasure without conscience, knowledge without character, commerce without morality, science without humanity, worship without sacrifice, politics...
post #14 of 38
Quote:
Originally posted by cubist
I'd add that a 64-bit processor must have 64-bit-wide registers on which it can perform all ALU operations.

Even that's not perfect, because the Zilog Z8000 could do 32-bit operations and had a 32-bit bus, but it was considered a 16-bit processor. Hmm.

I think it really has something to do with data path widths within the processor, and external data bus (not just address) width to memory.

I think that you are confusing things. The Zilog Z8000 had sixteen 16-bit general-purpose registers. These extremely flexible registers could be combined into eight 32-bit or four 64-bit registers. The internal registers were later expanded to 32 bits, with a six-stage pipeline; that 32-bit version of the processor was the Z80000.
post #15 of 38
Quote:
Originally posted by curiousuburb
the clearest illustration I know of the advantages of 64 bit are BLAST word length for DNA Sequencing



parsing larger chunks of data makes a huge difference after a certain threshold

from {picture}

cryptography will benefit from access to vastly larger numbers
certain mathematical, physics and modeling apps will clearly benefit
massive databases will benefit from this as well as larger memory address space

It seems that the G5 is a true 64-bit processor with 48-bit (edit: it's actually 42-bit) memory addressing, isn't that right? That means that, should the memory addressing increase to the G5's full 64-bit support, the G5 could have a ceiling of:

18,446,744,073,709,551,616 bytes of addressable data.

That's 18,000 Petabytes.
Or that's 18 million Terabytes.

======
I think that could leave one in an incomprehensible state for a while. Wipe the drool from your lip. Aaaaaah...a man can dream. Imagine if you were to stick 18,446,744,073.7 (about 18 billion) sticks of 1 GB DDR RAM in there to max that beast out...wow.
======

64-bit is there for the future. We might not see a direct need for it now, but it will help us in the long run. I mean, going back to the 1970s, there wasn't a need for a 4-digit date field in some computers.

One thing that I always see with computers is that they always have a ceiling or maximum. Making that ceiling just a little bit farther away will help us to accomplish more.

As for the DNA models: that kind of computing is needed by few now, but will be needed by more later.

Avg. RAM:
1970 : ~000,064,000 (bytes)
2004 : ~512,000,000 (bytes)

If this is a linear graph, and I haven't plotted anything, we should see by 2034,

2034 : 4,096,000,000,000

Computers coming with 4 TB? I don't think that's too much to hope for.

The G5 would still be well within its limits as far as memory addressing is concerned. Every 32-bit processor would have a bottleneck: itself (edit: if using virtual addressing).

-walloo.
WILLYWALLOO'S: MostlyMacly: Rumors. Read about the timeline beyond our time.
PENFIFTEENPRODUCTIONS: We like what we do.
post #16 of 38
Quote:
Originally posted by willywalloo
It seems that the G5 is a true 64-bit processor with 48-bit memory addressing, isn't that right? That means that, should the memory addressor increase to the G5's outright bit support, the G5 could introduce a ceiling of :

The G5 is a "true 64-bit processor", but its frontside bus protocol "limits" it to 42-bit physical addresses. This means you're stuck with a lowly 4096 gigabytes of RAM, maximum. Your virtual address space theoretically can be up to 16 billion billion bytes. I say theoretically because there is usually some other subtle factor that comes into play which would prevent that in practice... perhaps the virtual memory page tables are too large to fit in the physically addressable memory, or something silly like that. In any case it is going to be academic for quite some time yet as that much memory would cost a truly astronomical amount.

Quote:
64-bit is there for the future. We might not see a direct need for it now, but it will help us in the long run. I mean, going back to 1970's, there wasn't a need for a 4 digit date register in some computers.

For a lot of things 64-bits will never be needed... one day all software could be written for 64-bit mode, but that will most likely be because 32-bit mode was discarded completely. By that point, however, things will have changed so much that it is all rendered moot anyhow.

What 64-bit is interesting for is a bunch of scientific computing applications, for software that manipulates huge amounts of data (i.e. big databases), and for applications that will show up only when this hardware is in many people's hands. This is one of those chicken-and-the-egg problems ... why create software that requires 64-bit hardware when there is no 64-bit hardware? Why create 64-bit hardware when there is no software to use it? Well, they've built the hardware and in the next few years we'll see who can come up with compelling uses for it. Hopefully it's built for the Mac first.
Providing grist for the rumour mill since 2001.
post #17 of 38
Quote:
Originally posted by willywalloo

Avg. RAM:
1970 : 000,064,000
2004 : 512,000,000

If this is a linear graph, and I haven't plotted anything, we should see by 2034,

2034 : 4,096,000,000,000

Computers coming with 4 TB? I don't think that's too much to hope for.


So, you are suggesting to write an equation of the form

y = a * x + b

where y is the average RAM in bytes, x the time in years and a, b constant parameters. Using your data for the years 1970 and 2004 one finds:

a = 15056941 (bytes/year)
b = -29662109770 (bytes).

So, in year 2034 you will have average RAM:

y( 2034 ) = 963708224 Bytes ~ 963 MB.

With the same law, the suggested amount of 4 TB, will occur at year

x = 274004.

Am I missing something?
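
Here is the same two-point fit as a throwaway C program, for anyone who wants to fiddle with the numbers (it just refits the line y = a*x + b from the 1970 and 2004 figures above; the constants come out the same up to rounding):

Code:

#include <stdio.h>

int main(void) {
    /* The two data points from above: (1970, ~64 KB) and (2004, ~512 MB). */
    double x1 = 1970, y1 = 64000;
    double x2 = 2004, y2 = 512000000;

    double a = (y2 - y1) / (x2 - x1);    /* ~15 million bytes per year */
    double b = y1 - a * x1;

    printf("y(2034) = %.0f bytes (about %.0f MB)\n",
           a * 2034 + b, (a * 2034 + b) / 1e6);
    printf("4 TB arrives around the year %.0f\n", (4096e9 - b) / a);
    return 0;
}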
post #18 of 38
Quote:
Originally posted by PB
With the same law, the suggested amount of 4 TB, will occur at year

x = 274004.

Will that be "end of summer" 274,004?

Should I wait for MacWorld 274,005 before I buy my next Mac?

"The Roots of Violence: wealth without work, pleasure without conscience, knowledge without character, commerce without morality, science without humanity, worship without sacrifice, politics...
post #19 of 38
Quote:
Originally posted by johnq
Will that be "end of summer" 274,004?

Should I wait for MacWorld 274,005 before I buy my next Mac?




Good idea. But if you need it now, don't wait, buy now.
post #20 of 38
You're looking at it in a linear fashion. With only two data points, you're going from the coordinates of (0,64000) to (34, 512000000). I chose to use "years after 1970" instead of "year" for the x-axis.

Anyway, if you approximate that, it's basically saying that every 30-odd years, RAM increases by about 500,000,000 bytes. If we add two or three more data points and fit something like an exponential curve to it, it might make more sense. We could look at how often the amount of RAM has doubled in the high-end machines of their day.
post #21 of 38
I hate quoting Moore's Law, but since it basically says the transistor count doubles every 18 months, you can see that this is definitely an exponential curve. With your numbers, 1974 was 2^16 and this year is 2^29, so that is 13 doublings in 30 years (instead of the 20 doublings Moore predicted). To reach a terabyte we need 2048 times as much, which is a further 11 doublings... so that will take another 25 years.


Note that I take issue with your 1974 number. It wasn't until the early 80's that we really had commercial machines with 64K of memory. This would reduce the time period by about 8 years, meaning there were 13 doublings in 22 years... much closer to Moore's prediction of 14-15 in that time period. If we believe Moore instead of you (no offense) then it'll take only another 16 or so years to reach a terabyte.
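
Same back-of-the-envelope arithmetic as a throwaway C sketch (the 64K / 512 MB / 30-year figures are just the ones from this exchange, nothing more rigorous than that):

Code:

#include <stdio.h>
#include <math.h>   /* link with -lm on most systems */

int main(void) {
    double start  = 64e3;    /* ~64 KB  */
    double now    = 512e6;   /* ~512 MB */
    double target = 1e12;    /* 1 TB    */
    double years  = 30.0;

    double done = log2(now / start);     /* ~13 doublings so far */
    double rate = years / done;          /* years per doubling   */
    double left = log2(target / now);    /* ~11 doublings to go  */

    printf("%.0f doublings in %.0f years, %.0f to go: about %.0f more years\n",
           done, years, left, left * rate);
    return 0;
}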
Providing grist for the rumour mill since 2001.
post #22 of 38
Quote:
Originally posted by Programmer
Note that I take issue with your 1974 number. It wasn't until the early 80's that we really had commercial machines with 64K of memory.

True. If you want to go back to personal computers in '74, the original Altair came with 512 bytes, or 1/2 a kilobyte, of RAM. I'm not sure how that fits into the calculations, though.
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!
post #23 of 38
Quote:
Programmer said: For a lot of things 64-bits will never be needed... one day all software could be written for 64-bit mode, but that will most likely be because 32-bit mode was discarded completely.

It's maybe worth pointing out that 64 bit uptake will likely not be driven by technological factors. Marketing will have a big say in the matter, and (imho) will cause a much faster shift.

In this entertaining presentation (it really is enjoyable, but ya gotta have broadband...), the former chief of processor architecture at Intel pushes this view, along with lots of other interesting observations.

It's at the bottom of the Inquirer page:

http://www.theinquirer.net/?article=14310

Sorry if y'all are already familiar with it. It's also being discussed over at ars.
post #24 of 38
According to Apple-history.com, various Apple computers had these amounts of RAM:

Apple I (1976) - 8 kb standard, 32 kb max
Apple II (1977) - 4 kb standard, 64 kb max
Apple IIplus (1979) - 48 kb standard, 64 kb max
Apple III (1980) - 128 kb
Apple III+ (1981) - 256 kb
Apple IIe (1983) - 64 kb standard, 128 kb max
Lisa (1983) - 1 MB
Apple IIc/IIc+ (1984) - 128 kb standard, 1 MB max
Apple IIgs (1986) - 256 kb standard, 8 MB max

And the IIgs was the end of the line for the Apple II series. Also, remember that the original Macintosh's 128 kb of RAM was considered really stingy for its time, and the 512k addressed that. In 1986, the Mac Plus came with 1 MB of RAM standard and could take up to 4 MB, which was really considered a lot since the IBM PC could only have 640 kb. After that, things really took off, with the expandable Mac II series.
post #25 of 38
Quote:
Originally posted by Amorph
True. If you want to go back to personal computers in '74, the original Altair came with 512 bytes, or 1/2 a kilobyte, of RAM. I'm not sure how that fits into the calculations, though.

First of all, please don't take these calculations too seriously because (if nothing else) we're not holding price constant and this is fundamentally a bytes/$ calculation.

Since 512 is 2^9, that is 7 doublings before 64K. As a result we have 20 doublings in 30 years... which just happens to be a doubling every 18 months which is exactly what Moore predicted. Thank you Amorph, I think I'll stop there...
Providing grist for the rumour mill since 2001.
post #26 of 38
An interesting point is that diminishing returns on the practical, real world technical capability of computers may be occurring.
The challenge that remains is the user interface. Something that allows a complete beginner to use the total capacity of their computer without much or any learning curve does not exist. Yet.
Current systems work a lot like artificial limbs. Useful, if needed, but not intuitive.
I'd like a computer to work like my arm.
post #27 of 38
Quote:
Originally posted by shawk
An interesting point is that diminishing returns on the practical, real world technical capability of computers may be occurring.
The challenge that remains is the user interface. Something that allows a complete beginner to use the total capacity of their computer without much or any learning curve does not exist. Yet.
Current systems work a lot like artificial limbs. Useful, if needed, but not intuitive.
I'd like a computer to work like my arm.

Babies take quite a while to learn to use their arms. It is unrealistic to expect to have some new ______ (insert whatever you want in the blank) which requires no learning at all to use. Even direct neural interfaces will have to be learned.

For most decent programs these days the GUI is reasonably well worked out; all you have to do to see this is watch somebody experienced with the software doing actual work. In this department I find that the Mac has always been farther ahead. Not perfect yet, but really it is just a matter of refinement.


I think a bigger issue these days is finding new compelling uses for computers. It's been quite a while since the last "killer app" came along, especially one which required bleeding-edge hardware. This is why games have taken over the role of pushing the performance envelope (followed closely by featuritis-infected bloatware that doesn't get better, it just gets different). The killer app that has been looming for a while, but not fully delivered yet, is the infamous "digital hub".
Providing grist for the rumour mill since 2001.
post #28 of 38
Quote:
Originally posted by Programmer
The G5 is a "true 64-bit processor", but its frontside bus protocol "limits" it to 42-bit physical addresses.

Shoot, 42-bit memory addressing... 48 was in my mind from sound encoding (48 kHz). (I'm going to edit that.)

2^42 (that's 2 states per bit, 1 or 0, to the power of how many bits, 42):

4,398,046,511,104 bits, i.e. 4,398 Gb (b = bits, B = bytes)
549,755,813,888 bytes (8 bits to a byte)

Code:

getting it to GB:

549,755,813,888 bytes       1 GB
---------------------  x  ----------------------  =  512 GB RAM
          1                1,073,741,824 bytes


Had to do the calc...
-walloo.
WILLYWALLOO'S: MostlyMacly: Rumors. Read about the timeline beyond our time.
PENFIFTEENPRODUCTIONS: We like what we do.
post #29 of 38
42-bit addressing means access to 2^42 bytes, not bits. That is 2^10 times as much as 32-bit addressing allows.

2^10 = 1024
2^32 = 4 GB

therefore 2^42 = 4096 GB of RAM.
Providing grist for the rumour mill since 2001.
post #30 of 38
That's something they could do actually: stick in individual bit addressing (some embedded systems have it already). It would be pretty cool for certain applications, and with 2^64 addresses, I doubt anyone will care if they can only stick 2,097,152 terabytes of RAM in their machine instead of 16,777,216.

To be honest I doubt many people would get upset about only being able to use 512 Gig of ram at the moment instead of 4 Terabytes, although that will change no doubt as new applications are invented (or more likely, as people get used to rendering cinema-quality Final Cut movies on the fly).

Socrates
"There's no chance that the iPhone is going to get any significant market share. No chance" - Steve Ballmer
post #31 of 38
One thing is sure: we will not see a 128-bit CPU (128-bit integer registers). In another thread someone calculated what the physical size of a 2^128 memory would have to be. I forget the details, but basically it could not fit in any desktop computer.

The future is multicore chips. All the big players seem to be turning this way.
post #32 of 38
The extra address space does not come for free. Dealing with a sparse virtual address space of that size requires some imaginative software to keep the memory blocks sane. Right now Darwin uses a pretty clever skip-list algorithm to keep performance up while being able to address (a little more than) 32 bits of address space. If you actually wanted to address 42 bits, you're expanding the space by enough to have a real performance impact unless you develop new memory management algorithms.
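
For anyone who hasn't met a skip list, here's a toy C sketch of the idea as it applies to a VM map: regions keyed by start address, with randomized "express lanes" so lookups stay around O(log n) even when the map is large and sparse. To be clear, this is just my own illustration of the general technique, not Darwin's actual vm_map code.

Code:

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define MAX_LEVEL 16

/* One entry in a toy "VM map": a region with a start address and a length. */
typedef struct node {
    uint64_t start, len;
    struct node *next[MAX_LEVEL];
} node_t;

typedef struct {
    node_t head;    /* sentinel; its start/len are never examined */
    int level;      /* highest level currently in use */
} skiplist_t;

static void sl_init(skiplist_t *sl) {
    sl->level = 1;
    for (int i = 0; i < MAX_LEVEL; i++)
        sl->head.next[i] = NULL;
}

/* Flip coins: each node appears in level i+1 with probability 1/2. */
static int random_level(void) {
    int lvl = 1;
    while (lvl < MAX_LEVEL && (rand() & 1))
        lvl++;
    return lvl;
}

/* Insert a region, keyed (and kept sorted) by its start address. */
static void sl_insert(skiplist_t *sl, uint64_t start, uint64_t len) {
    node_t *update[MAX_LEVEL];
    node_t *x = &sl->head;

    for (int i = sl->level - 1; i >= 0; i--) {
        while (x->next[i] && x->next[i]->start < start)
            x = x->next[i];
        update[i] = x;
    }

    int lvl = random_level();
    if (lvl > sl->level) {
        for (int i = sl->level; i < lvl; i++)
            update[i] = &sl->head;
        sl->level = lvl;
    }

    node_t *n = malloc(sizeof *n);
    n->start = start;
    n->len = len;
    for (int i = 0; i < lvl; i++) {
        n->next[i] = update[i]->next[i];
        update[i]->next[i] = n;
    }
}

/* Find the region containing addr, if any: expected O(log n) hops. */
static node_t *sl_lookup(skiplist_t *sl, uint64_t addr) {
    node_t *x = &sl->head;
    for (int i = sl->level - 1; i >= 0; i--)
        while (x->next[i] && x->next[i]->start <= addr)
            x = x->next[i];
    if (x != &sl->head && addr < x->start + x->len)
        return x;
    return NULL;
}

int main(void) {
    skiplist_t sl;
    sl_init(&sl);
    sl_insert(&sl, 0x2000ULL, 0x1000);          /* a small region low in memory */
    sl_insert(&sl, 0x100000000ULL, 0x10000);    /* a 64 KB region above 4 GB    */

    node_t *r = sl_lookup(&sl, 0x100000042ULL);
    if (r)
        printf("0x100000042 falls in the region at 0x%llx\n", (unsigned long long)r->start);
    else
        printf("no region found\n");
    return 0;
}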

Anyway, my point is that while the CPU can switch between 32 bit and 64 bit with no performance penalty to instruction execution speed, all kinds of things, from having to load twice as much data per operation to dealing with the extra address space, can slow down the system if it's made truly "64 bit clean". If you don't need 64 bits (and most people don't), it's probably better not to have it right now.

That being said, we're essentially financing Apple's R&D for the machines and software they'll make 5-8 years from now when 64 bits actually will be useful and the slowdown will be worth it. And optimizing for the G5 will offset and possibly overtake the performance loss, yielding a net gain... but it still would probably have been faster as a 32 bit system. It's almost funny to have *users* clamoring for Apple to go pure 64 bits, since it will essentially just hurt them while benefiting Apple down the road.
post #33 of 38
Apollo 11 anniversary and all...

Humans went to the Moon with 8KB (filling a room at NASA)
Later Apollo flights carried portable programmable calculators for backup.

Google found NASA's Office of Logic Design Apollo Guidance Computer Historical References as well as this student paper PDF detailing plenty of historical computer specs, and a few future spec(ulation)s

Quote:
from History.Nasa.Gov
Choosing a computer for Shuttle was an interesting task in itself. The logical choice at first appeared to be the Autonetics D-216, a 16-bit computer with 16 kilowords of memory that cost approximately $85,000 each. This was the computer being used in the North American B-1A prototypes, but the small word size and limited memory led Space Shuttle officials to continue their search. A meeting on 5 October 1972 to discuss proposed modifications to the DFBW F-8 brought to light two computers that could possibly be used on Shuttle. The 32-bit Singer-Kearfott SKC-2000 had floating-point arithmetic (unusual in those days) and could be expanded to 24 kilowords of memory. Its major drawbacks were that it used 430 watts of power, weighed 90 pounds, and cost $185,000 each. During November 1972, the Shuttle program "discovered" the IBM AP-101, a variant of the same 4Pi computer that had flown on Skylab. This 32-bit machine had 32 kilowords of memory and a floating-point instruction set, consumed 370 watts of power, weighed slightly less than 50 pounds, and cost $87,000 each.8

It should be noted that no off-the-shelf microprocessors (no Z80s, 80x86s, 680x0s, etc.) were then available, and large-scale integrated circuit technology was emerging but years away from maturity. Since little, if anything, was known about the effects of lightning or radiation on high-density solid-state circuitry, ferrite-core memory was the only reasonably available choice for the orbiter computers. Therefore, memory size was limited by the power, weight, and heat constraints associated with core memory.

For modern comparison, Spirit and Opportunity rovers each pack 256MB of Flash and 128MB of Hardened RAM (which had early hiccoughs with file structure exceeding limits the RAM could manage - since solved by not letting the file count get too high before d/l and purging RAM). Each rover runs a radiation hardened version of the RS/6000 chip.

Cassini packs a Solid State Recorder with a capacity of 4 Gigabits before it needs purging.
"I do not fear computers. I fear the lack of them" -Isaac Asimov
post #34 of 38
Now I know I'm getting old. In my first computer course in college they passed memory around - in a cup. Little magnetic disks, or "bits", ready for hardwiring. Programming, on the largest Xerox computer available, was with punch cards, and students' programs averaged 10 - 15 cards, in Fortran.

Now I'm waiting for the G5 iMac. Almost went with the 20" iMac in January, but decided to wait for the simple reason that I thought OS X, and some programs, would move more to that over the next few years. I'll also probably get a gig of memory and be irritated if the G5 is not 2.0 GHz, with a 1.0 GHz FSB. (In my defense, this will probably be the last home computer I buy before retiring.)

The advancement of computers has been far faster than in any other field. Why do they do it? Because they can. It is amazing, but this explosion has been financed by Bubba. Old Bubba goes into the discount store (or calls Dell) and walks out with a $500 - $600 computer. There are a lot of Bubbas, and the money has been flowing in for developing more power - which sends Bubba back for another computer in 2 - 3 years. The Bubba factor (and its impact on economies of scale) is as amazing as the speed of technology development.

Today computers are cheaper and there are more Bubbas in the market. It used to be that Bubba lived in the US, UK or some other advanced economy. Now there are Bubbas everywhere from Boston to Bangkok. More Bubbas mean more money for R&D and things are going to continue to explode.

I probably won't live to see all of the technology that today's engineers are dreaming about, but I've had one hell of a ride.
Ken
post #35 of 38
Quote:
Originally posted by kenaustus
....

The advancement of computers has been far faster than in any other field. Why do they do it? Because they can. It is amazing, but this explosion has been financed by Bubba. Old Bubba goes into the discount store (or calls Dell) and walks out with a $500 - $600 computer. There are a lot of Bubbas, and the money has been flowing in for developing more power - which sends Bubba back for another computer in 2 - 3 years. The Bubba factor (and its impact on economies of scale) is as amazing as the speed of technology development.

Today computers are cheaper and there are more Bubbas in the market. It used to be that Bubba lived in the US, UK or some other advanced economy. Now there are Bubbas everywhere from Boston to Bangkok. More Bubbas mean more money for R&D and things are going to continue to explode.

I probably won't live to see all of the technology that today's engineers are dreaming about, but I've had one hell of a ride.

I disagree. Bubba is not unimportant, but he does not move the market, except for games. The games market is dominated not so much by Bubba, but by his children. The rest of the personal computer market is dominated by businesses. The microcomputer makes computer power available to small businesses and even businesses run out of the home, a resource available only to large businesses when you were in college. When you were in college, even large firms may have had only one computer. Walk around an office floor today. You may see more computers there than in your entire neighborhood. On the software side, Microsoft has achieved its dominance by targeting big business. You simply don't see anything like an equivalent to Microsoft in the small business or home market arena.
post #36 of 38
Quote:
Originally posted by Amorph
More to the point, there's not much reason to develop software for a purely 64-bit CPU.

Maybe for a "purely 64-bit CPU" but there are many reasons to develop for 64 bit machines in general.
Quote:

The transition from 16-bit to 32-bit was made quickly and eagerly because 16 bits only offers you 65,536 values - usually -32,768 to 32,767. That's 64K of memory, or a painfully small range of integers (FP was a luxury back then). 32-bit offers something over 4 billion possible values, which addresses 4GB of RAM and covers just about every other need pretty well. The only place where more possible values were needed (for precision, not for range) was in floating point, and CPUs just used huge special registers for that - as with the 68040, a 32-bit CPU that could do arithmetic on 80-bit floating point values, or the G4, a 32-bit CPU that can do arithmetic on 64-bit floating point values or 128-bit vectors.

There are a huge number of problems that do not fit well into the 32 bit register. While it is certainly true that there are far fewer problems that need a full 64 bit register, 64 bits is the next logical register size.
Quote:

But any application that doesn't need more than 4GB RAM, or doesn't have to deal with more than 4 billion of anything, does not need to be 64 bit. Think about how many apps do not need this, and how many never will. Do you really need to enter 5 billion contacts into Address Book? (You really would have the world at your fingertips, I suppose...)

Instead of thinking about the apps that could or couldn't use 64 bits, one should focus on the capabilities that this extends to the OS. Addressable memory is huge, in more ways than one, and has allowed Apple to expand the range of addressable memory for 32-bit apps. Even at this early stage, 64-bit processors have had a positive impact on 32-bit software. In any event, on PPC this whole discussion is not worth anybody's time, as there is currently no penalty at all for running 32-bit software on the processor. I suspect that this will remain so into the future, so applications that benefit from a 64-bit address space will continue to work alongside those that don't.
Quote:

I should note here that it is possible for a 32-bit application to deal with larger quantities. There are ways around any such limit. You pay a penalty in performance, but then you probably wouldn't expect 100 billion photos to pop up instantaneously anyway. At least, not any time soon.

It is not impossible to tax the current address space on 32-bit machines with photo-editing software, and that does not take into account other apps one may want to run. Sure, some of that taxing is the result of poor programming, but even then, if one wants to work on extremely large digital images, performance can become an issue. Some of the factors that lead to these problems can be addressed via a larger address space.
Quote:



Exactly. And if there's always going to be a significant set of applications that run better as 32-bit applications, why not accommodate them? You could force them to pure 64-bit, but why? It's not like the 32-bit support is an albatross on the neck of CPU developers or anything. A hybrid CPU can run legacy code, can efficiently run applications that are most comfortable as 32 bit, and they can also run applications that are most comfortable at 64 bit. This seems to me like a perfectly acceptable state of affairs.

Put another way, the applications that do require 64 bits wouldn't run a hair better if 32-bit support was eliminated from the 970, and the Nocona, and the Athlon 64, and the Opteron. But 32-bit applications would run slower in many cases.

It is interesting that many 32-bit applications have seen speedups on AMD64 hardware. In any event, we did not give up 8-bit and 16-bit support when moving to 32-bit hardware; there is good reason to support such data types. The interesting thing with PPC is that by starting out with a 32-bit address range and planning for 64 bits at the very beginning, the PPC does not have to support the excessive number of addressing modes that some processors do. This results in a much smaller chip.

As to the issue of apps running better if 32-bit addressing were eliminated, that is an open question. If the elimination of the logic for 32-bit support simplifies things, it could allow for speed improvements. Further, for a given process technology you suddenly have more transistors to do new things with. For some of the processors mentioned, there could be an argument made that elimination of all those extra addressing modes would lead to a much faster design.


Dave
post #37 of 38
Quote:
Originally posted by Booga
If you actually wanted to address 42 bits, you're expanding the space by enough to have a real performance impact unless you develop new memory management algorithms.

This is not really a huge problem, as software technology grows along with the hardware's capability to realize these large address spaces.
Quote:

Anyway, my point is that while the CPU can switch between 32 bit and 64 bit with no performance penalty to instruction execution speed, all kinds of things, from having to load twice as much data per operation to dealing with the extra address space, can slow down the system if it's made truly "64 bit clean". If you don't need 64 bits (and most people don't), it's probably better not to have it right now.

This is incredible: first you say there is no performance penalty, which is true, then you go on about how it might be a problem, which it isn't. To top it off you say it is "probably better not to have it right now", which makes no sense at all; having 64-bit capability does not impact your 32-bit applications at all on PPC.

There are only advantages to having a 64-bit capable platform at this stage. The world isn't ready to take full advantage of that platform, but it isn't sitting still either.
Quote:

That being said, we're essentially financing Apple's R&D for the machines and software they'll make 5-8 years from now when 64 bits actually will be useful and the slowdown will be worth it. And optimizing for the G5 will offset and possibly overtake the performance loss, yielding a net gain... but it still would probably have been faster as a 32 bit system. It's almost funny to have *users* clamoring for Apple to go pure 64 bits, since it will essentially just hurt them while benefiting Apple down the road.

The idea that there is a huge performance loss for 64-bit applications, or for an OS that fully supports 64 bits, has me puzzled. For some applications 64 bits provides a huge boost right off the bat. For those applications that simply don't need that address range, there is no disadvantage. The reality is that it is a huge advantage for the OS to be able to manage all that addressable memory.

Maybe I'm missing something here but some of the stuff I've seen in this thread relative to 64 bit on PPC is really strange.

Thanks
Dave
post #38 of 38
Aaah! Thread necromancy! Run away!

Quote:
Originally posted by wizard69
Maybe for a "purely 64-bit CPU" but there are many reasons to develop for 64 bit machines in general.

You'll get no argument from me on that point.

My specific goal in this thread was to debunk the idea that there was anything virtuous, or even meaningful, about a "pure 64 bit" CPU. I'm not pooh-poohing 64 bit support in the CPU at all. I've worked on 64 bit machines for nearly a decade now, so I'm quite familiar with their advantages.

However, in the current landscape (and for some time to come), hybrid designs make the most sense.

Quote:
As to the issue of apps running better if 32-bit addressing were eliminated, that is an open question.

It's impossible to answer in the general case, but it can be quite dramatic in particular instances. Try accessing memory in 32-bit increments on an early Alpha. Your performance will go straight to hell.

Quote:
If the elimination of the logic for 32-bit support simplifies things, it could allow for speed improvements. Further, for a given process technology you suddenly have more transistors to do new things with. For some of the processors mentioned, there could be an argument made that elimination of all those extra addressing modes would lead to a much faster design.

I sincerely doubt that the improvement would be of any significant magnitude. In a CPU the size of the 970fx, we're talking about a fairly trivial transistor count to support 32-bit instructions (I don't have numbers right now, and I'd appreciate it if someone can dredge some up, but I'd guess it requires a single-digit percentage of the total). This will get more true, not less true, as CPUs get more and more ambitious. Furthermore, CPUs have to function in the real world, and the real world has an enormous 32-bit legacy. There's absolutely no reason to abandon or even compromise support for that legacy any time soon, especially with the cost in transistors as low as it is.
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!