Unfortunately, a low-power, low-cost 32-bit CPU that has a lot of other uses may not be a CPU that you want in a computer. It might make a great printer CPU. For example, the G4 is a really good low-cost, low-power CPU for embedded applications, which is why it is still using MaxBus. Moto isn't sinking any additional money into keeping the G4 usable for Apple. Unless a company is willing to keep making a 32-bit desktop CPU, that CPU will be as out of place in a Mac as the G4. There is a difference between desktop and embedded CPUs.
Besides, if Apple went mostly G5, it would help with economies of scale and lower their costs across all their machines (or, more likely, increase Apple's profit margins).
Quote:
Unfortunately, a low-power, low-cost 32-bit CPU that has a lot of other uses may not be a CPU that you want in a computer. It might make a great printer CPU. For example, the G4 is a really good low-cost, low-power CPU for embedded applications, which is why it is still using MaxBus. Moto isn't sinking any additional money into keeping the G4 usable for Apple. Unless a company is willing to keep making a 32-bit desktop CPU, that CPU will be as out of place in a Mac as the G4. There is a difference between desktop and embedded CPUs. . .
This is Mojave I am talking about, the 750VX. This is what some people are calling the G3 with AltiVec, which makes it a G4, but let's bypass that discussion. It is being developed with different tradeoffs than the 970. Cost and power will be kept low. With IBM working closely with Apple, I feel confident that Apple is a prime customer and their needs for a desktop CPU will be addressed in this chip. The fact that IBM can make it useful in the embedded market is to IBM's benefit and will not detract from Apple's needs, in my opinion. Mojave should be an ideal CPU for the iBook, eMac, and whatever else Apple comes up with in the low end. Likely it will keep 32-bit Macs around for a long time, or until IBM can produce a 64-bit CPU just as cheap and stingy with watts.
Quote:
This is Mojave I am talking about, the 750VX. This is what some people are calling the G3 with AltiVec, which makes it a G4, but let's bypass that discussion. It is being developed with different tradeoffs than the 970. Cost and power will be kept low. With IBM working closely with Apple, I feel confident that Apple is a prime customer and their needs for a desktop CPU will be addressed in this chip. The fact that IBM can make it useful in the embedded market is to IBM's benefit and will not detract from Apple's needs, in my opinion. Mojave should be an ideal CPU for the iBook, eMac, and whatever else Apple comes up with in the low end. Likely it will keep 32-bit Macs around for a long time, or until IBM can produce a 64-bit CPU just as cheap and stingy with watts.
I think that the 750VX will probably be a future iBook chip. It is supposed to be available a few months before the die-shrunk G5 seems likely to be. Of course, 1.0 GHz 90nm G5s would also be very cheap, since they are essentially the scraps of the wafer. Obviously, if Apple wants to go with the 750VX, then they can go for it.
P.S. G3 + AltiVec + MERSI = G4. G3 + AltiVec = G3 with AltiVec, not a G4. It is a small point, I know, but if the G4s hadn't had MERSI, then Apple would have been doomed (MERSI lets you use more than one CPU in a box).
Quote:
. . . P.S. G3 + AltiVec + MERSI = G4. G3 + AltiVec = G3 with AltiVec, not a G4. It is a small point, I know, but if the G4s hadn't had MERSI, then Apple would have been doomed (MERSI lets you use more than one CPU in a box).
The 750VX may be multiprocessor capable, according to several rumor sources. But even if it is not, Apple can call it a G4 if they wish to. We might argue about technical points of the processor and what makes it a G3 or a G4, but the only thing a customer cares about is what software it can run. The 750VX will run all software that calls for a G4 in the system requirements, so it's a G4 to the customer. If a customer buys a single-processor G4 Mac today, I doubt he or she cares about its multiprocessor capability.
By the way, I usually see SMP used for this capability; I'm not familiar with the term MERSI.
Quote:
The 750VX may be multiprocessor capable, according to several rumor sources. But even if it is not, Apple can call it a G4 if they wish to. We might argue about technical points of the processor and what makes it a G3 or a G4, but the only thing a customer cares about is what software it can run. The 750VX will run all software that calls for a G4 in the system requirements, so it's a G4 to the customer. If a customer buys a single-processor G4 Mac today, I doubt he or she cares about its multiprocessor capability.
By the way, I usually see SMP used for this capability; I'm not familiar with the term MERSI.
A lot of Wintel people still see a G4, heck, a G5, and say it's too slow because it doesn't have as high a clock speed. You're right in saying that they don't care about multiprocessor capabilities.
Quote:
A lot of Wintel people still see a G4, heck, a G5, and say it's too slow because it doesn't have as high a clock speed. You're right in saying that they don't care about multiprocessor capabilities.
Unless they are power users and know better. Of course, with the debut of Centrino's Pentium M (which runs at a lower clock speed but gets more done per clock), they will have to change their minds. As for "Joe eMachines End User", yes, they don't have a clue and will do whatever the Best Buy salesman tells them to do.
Comments
Originally posted by CubeDude
Um, I was being sarcastic.
S'ok. Use smilies or proper XML sarcasm tags <sarcasm> </sarcasm> to denote sarcasm. One-line sentences can pretty easily be mistaken for serious comments by someone who doesn't know what they're talking about (there are plenty of those here already, and we don't need more).
Originally posted by CubeDude
Um, I was being sarcastic.
You mean I can stop banging my head on my desk? Really?
(Looks around with a dazed look).
Just to make sure, one more time:
Doubling exponents (as in CPU-bitness) <> Doubling size (as in RAM).
Edit added: Sheesh, the real 'does not equal sign' is illegal. Pheh.
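Nevyn's point is easy to check with a couple of lines of Python (a quick illustration, not from the thread): adding address bits multiplies the address space rather than adding to it.

```python
# Going from 32 to 64 address bits does not double the address space;
# it multiplies it by 2**32 (about four billion).
space_32 = 2 ** 32            # 4 GiB of addresses
space_64 = 2 ** 64            # 16 EiB of addresses

print(space_32)               # 4294967296
print(space_64 // space_32)   # 4294967296 -- the space grew 2**32-fold
```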
Originally posted by Nevyn
You mean I can stop banging my head on my desk? Really?
(Looks around with a dazed look).
Just to make sure, one more time:
Doubling exponents (as in CPU-bitness) <> Doubling size (as in RAM).
Edit added: Sheesh, the real 'does not equal sign' is illegal. Pheh.
<> means not equal in Pascal, VB
!= means not equal in C, C++, Java, C#
≠ == <> == !=
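For completeness, a tiny Python illustration (Python 2 also accepted the Pascal-style `<>`, but Python 3 only takes `!=`):

```python
# '!=' is the C-family spelling of the mathematical ≠.
a, b = 3, 5
print(a != b)        # True
print(not (a == b))  # True -- the long-hand equivalent
```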
Originally posted by Yevgeny
No, I meant that five years down the road, the OS will be a full 64 bit OS, and not be backwards compatible for 32 bit machines. Of course the OS would still run 32 bit apps.
Thanks for the clarification; I misunderstood. However, I'll still have to disagree with your guesses about Apple only shipping 64-bit Macs in five years, and that OS X will then only work on 64-bit Macs. Taking the last one first, I think Apple would like to keep that extra revenue from selling OS updates to those with older, 32-bit Macs. And from what I understand, it is not that difficult to make an install CD for both 64 and 32 bit CPUs. Likely, you know more about these technical details than I do.
Regarding all 64-bit Macs in five years, I don't think so. It would be more like 10 or 20 years in my opinion. The 970 and those that follow it are not appropriate for lower-end Macs. This series of chips will always cost more and dissipate more heat than a chip designed specifically for the low-end market. For many years, it will also cost less to make that chip 32 bits. I believe the Mojave and those that follow it will be that low-end 32-bit chip. IBM is designing it for the embedded market too, from what I hear. It will be lower cost and lower power.
Originally posted by snoopy
Thanks for the clarification; I misunderstood. However, I'll still have to disagree with your guesses about Apple only shipping 64-bit Macs in five years, and that OS X will then only work on 64-bit Macs. Taking the last one first, I think Apple would like to keep that extra revenue from selling OS updates to those with older, 32-bit Macs. And from what I understand, it is not that difficult to make an install CD for both 64 and 32 bit CPUs. Likely, you know more about these technical details than I do.
Regarding all 64-bit Macs in five years, I don't think so. It would be more like 10 or 20 years in my opinion. The 970 and those that follow it are not appropriate for lower-end Macs. This series of chips will always cost more and dissipate more heat than a chip designed specifically for the low-end market. For many years, it will also cost less to make that chip 32 bits. I believe the Mojave and those that follow it will be that low-end 32-bit chip. IBM is designing it for the embedded market too, from what I hear. It will be lower cost and lower power.
Unless Moto gets their act together, which I really doubt, IBM's 970 is the only way out for Apple. The 7457 doesn't appear to be coming any time soon.
Originally posted by snoopy
Thanks for the clarification; I misunderstood. However, I'll still have to disagree with your guesses about Apple only shipping 64-bit Macs in five years, and that OS X will then only work on 64-bit Macs. Taking the last one first, I think Apple would like to keep that extra revenue from selling OS updates to those with older, 32-bit Macs. And from what I understand, it is not that difficult to make an install CD for both 64 and 32 bit CPUs. Likely, you know more about these technical details than I do.
I think that Apple would more likely use newer OS versions as an incentive for people to upgrade their hardware. "OS 11.2 only runs on 64 bit CPUs and has all these cool features? Well, it must be time for me to abandon my Dual G4!". Apple does use the OS as a cudgel to force users to buy new hardware.
Regarding all 64-bit Macs in five years, I don't think so. It would be more like 10 or 20 years in my opinion. The 970 and those that follow it are not appropriate for lower-end Macs. This series of chips will always cost more and dissipate more heat than a chip designed specifically for the low-end market. For many years, it will also cost less to make that chip 32 bits. I believe the Mojave and those that follow it will be that low-end 32-bit chip. IBM is designing it for the embedded market too, from what I hear. It will be lower cost and lower power.
The problem is that it takes money to continue to develop newer 32-bit CPUs. I don't see Apple doing this unless they think that their 64-bit CPUs are inadequate for a certain task (e.g. permanently incapable of going into a notebook). Is IBM willing to develop a 32-bit CPU years down the road? (I guess this is asking what kind of future the 750VX has.) In five years, when every machine has 4GB of RAM as a base configuration, how long is it going to be until Apple just decides to go 64-bit all the way? Why couldn't IBM design a low-power 64-bit CPU? There is no technical reason.
Originally posted by AirSluf
Think parallel computation and locally based references: you get almost the same addressing benefits as a 128-bit address space, but aren't penalized by having to keep track of the holes in the sparse matrix, or forced to deal with all information in 128-bit chunks.
Sparseness can be good; address bits are good for addressing more than DRAM. An address width much wider in bits than plausible physical memory allows the use of virtual memory tricks for a number of useful things. Consider:
Currently, modern OSes try to treat requests to open a disk file as a request to map the file into memory; this allows the virtual memory manager to perform some optimization on process memory use, and a number of other interesting tricks. However, disk files can be huge; larger than the entire 32-bit address space of a 32-bit processor, and certainly larger than any of the unused holes you're likely to see in any given process' address space after you've taken out chunks for the code, data, kernel overlay, a couple of dozen shared libraries... So, the IO library has to be willing to either fall back on the traditional strategy for huge files, or it has to try to map huge files in pieces (and hope the application doesn't do something pathological like seeking back and forth across some convenient boundary). With a 128-bit address space and 64-bit addressed files, however, you have plenty of unused space to map in multiple disk files with no cleverness needed.
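The mapping idea is visible even from user space; here is a minimal sketch with Python's `mmap` module (the file name is made up for illustration):

```python
import mmap
import os

# Create a small file to map (name chosen just for this example).
path = "demo.bin"
with open(path, "wb") as f:
    f.write(b"hello, mapped world")

# Map the file into the process's address space: bytes are then read
# through ordinary memory accesses, paged in on demand by the VM system.
with open(path, "rb") as f, mmap.mmap(f.fileno(), 0,
                                      access=mmap.ACCESS_READ) as m:
    print(m[:5])              # b'hello' -- indexing, not read() calls
    print(m.find(b"mapped"))  # 7

os.remove(path)
```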
Or consider this: a virtual-memory manager which maps other computers' address spaces into your own (and vice versa), to form a giant shared-memory NUMA parallel processor. If virtual addresses are only "a little" wider than potential physical addresses, a lot of trickery is needed to do the mapping; but with 128-bit addressing and (say) 36 bits of physical address per machine, you could embed the IP address of each mapped computer in its virtual address range and still have enough unused virtual address bits to keep out of the way of disk file trickery.
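A back-of-the-envelope sketch of that embedding in Python integers (the field widths here are made-up illustrations, not anyone's real scheme): put a 36-bit local address in the low bits and a 32-bit IPv4-style machine id higher up, leaving the rest of a 128-bit address free.

```python
LOCAL_BITS = 36        # assumed per-machine physical address width
MACHINE_SHIFT = 64     # assumed bit position of the machine-id field

def pack(machine_id: int, local_addr: int) -> int:
    """Build a wide virtual address from a machine id and a local address."""
    assert 0 <= local_addr < (1 << LOCAL_BITS)
    return (machine_id << MACHINE_SHIFT) | local_addr

def unpack(va: int) -> tuple[int, int]:
    """Split a wide virtual address back into (machine_id, local_addr)."""
    return va >> MACHINE_SHIFT, va & ((1 << MACHINE_SHIFT) - 1)

ip = 0xC0A80001                      # 192.168.0.1 as a 32-bit integer
va = pack(ip, 0x12345)
print(unpack(va) == (ip, 0x12345))   # True
```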
On the other hand, with IPv6 coming (and 128-bit IPv6 addresses) I guess we'll need 256 bit virtual addresses to do that without cleverness.
But in general -- address bits can address more than just physical RAM, and as "spare" address bits become more plentiful, more intriguing uses of virtual addressing will become possible.
Originally posted by dfiler
I think he meant:
≠ == <> == !=
I feel compelled to note that AltiVec really isn't a 128-bit unit for the purposes of this discussion, because the 128-bit-wide registers are just a particular way of parallelizing 32-bit and smaller values. In terms of the actual computations it's capable of, AltiVec is 32-bit, although I don't know what you'd consider vector permute. It does operate on all 128 bits as a unit, but it's also a semantic no-op.

Aha! (I was going to ask how, but there it is in the reply.)
Yes, I meant "the real not equal" ≠ , you know, the one used in the obscure language "Math"
Yevgeny: Although machines will have 4GB RAM soon, that's still not the same thing as every application on the machine being better served by 64-bit addressing. Of course the programs that can use more RAM will do so. But the programs that can't really shouldn't even try. 5 years from now there will still be 32-bit applications that run _substantially_ faster on identical hardware in a '32-bit mode'. There will also be 64-bit apps that _crush_ their 32-bit facsimiles. It isn't clear if the chips will still be there, but both 'modes' have their uses.
Look at it this way maybe.
The "old mac os" has 32k text limits. Now we're talking 4GB limits. Is there _any_ sane reason for something like 'NSString' to be forced to use 64-bit addressing? What % of _your_ text strings are longer than 4GB? How about the 'time' commands? How many of them will be >4GB in five years? The same number! How can it be?!?
Amorph: You can perform 'left-shift-8' and 'right-shift-8', which are multiply and divide (with masking), with permute -> so I think it is a non-general-purpose 128-bit unit. Still a _co_processor though, so still not part of the discussion.
Quote:
Aha! (I was going to ask how, but there it is in the reply.)
Yes, I meant "the real not equal" ≠ , you know, the one used in the obscure language "Math"
I have heard of math, but have yet to find a good compiler for it. I only made it three years into my original major (physics) before I decided to leave and become a programmer (I took it as a hint to change majors when a grad student TA had to admit that he didn't know how to solve the math problems in our physics homework). Never take a class called "methodology of math physics". Actually, in my last quarter of school, I found out that I am really good at abstract math. Too bad I was about to graduate.
Quote:
Look at it this way maybe.
The "old mac os" has 32k text limits. Now we're talking 4GB limits. Is there _any_ sane reason for something like 'NSString' to be forced to use 64-bit addressing? What % of _your_ text strings are longer than 4GB? How about the 'time' commands? How many of them will be >4GB in five years? The same number! How can it be?!?
Address size doesn't really have much to do with strings. NSString (from what I can see) is just an object wrapper over an immutable string. I guess you could use a 32-bit or 64-bit integer for the character index in the string (e.g. get me the ith character, where "i" is a 32-bit number). Of course, most people don't need 32-bit strings. Who has a >32k-character immutable string resource? Most people could get along just fine with 16 bits, but because machines have been 32-bit for so long (OK, at least on the Mac side since System 7), we all use 32-bit numbers for the index even though it is obviously ridiculous to have a 4-billion-character string. I think that this string example actually shows how the "bitness" of the CPU creeps into places where you would think it is just excessive. If CPUs go 64-bit for a while, then you will start to see 64-bit APIs developing. Mind you, this will take quite some time, because Apple would have to be willing to say "this piece of code can't run on a G4 because it is too slow". For example, the Pixlet codec that Steve demoed can't run on G3s because it uses 48 bits per pixel and G3s are too slow. New APIs aren't always backwards compatible.
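The "bitness creep" in the index type is easy to demonstrate with Python's `struct` module: a Pascal-style 16-bit length field simply cannot describe a longer string.

```python
import struct

length = 70_000           # longer than a 16-bit field can express

# 'H' is an unsigned 16-bit field: packing 70,000 into it fails.
try:
    struct.pack("<H", length)
except struct.error:
    print("16-bit length field overflows at 65,535")

# 'I' is an unsigned 32-bit field: the same length fits with room to spare.
print(struct.pack("<I", length))
```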
Kickaha and Amorph couldn't moderate themselves out of a paper bag. Abdicate responsibility and succumb to idiocy. Two years of letting a member make personal attacks against others, then stepping aside when someone won't put up with it. Not only that, but go ahead and shut down my posting privileges but not the one making the attacks. Not even the common decency to abide by their warning (after three days of absorbing personal attacks with no mods in sight); just shut my posting down and then say it might happen later if a certain line is crossed. Bullshit flag is flying, I won't abide by lying and coddling of liars who go off-site, create accounts differing in a single letter from my handle with the express purpose to deceive, and then claim here that I did it. Everyone be warned, kim kap sol is a lying, deceitful poster.

Now I guess they should have banned me rather than just shut off posting privileges, because kickaha and Amorph definitely aren't going to like being called to task when they thought they had it all ignored *cough* *cough* I mean under control. Just a couple o' tools.

Don't worry, as soon as my work resetting my posts is done I'll disappear forever.
Originally posted by Yevgeny
I think that Apple would more likely use newer OS versions as an incentive for people to upgrade their hardware. "OS 11.2 only runs on 64 bit CPUs and has all these cool features? Well, it must be time for me to abandon my Dual G4!". Apple does use the OS as a cudgel to force users to buy new hardware.
Okay, I agree, but it may not happen for five years after the last 32-bit Mac is produced. For example, Apple stopped making the beige G3 about five years ago, and I hear the beige G3 may not be able to run Panther.
The problem is that it takes money to continue to develop newer 32-bit CPUs. I don't see Apple doing this unless they think that their 64-bit CPUs are inadequate for a certain task (e.g. permanently incapable of going into a notebook). Is IBM willing to develop a 32-bit CPU years down the road? (I guess this is asking what kind of future the 750VX has.) In five years, when every machine has 4GB of RAM as a base configuration, how long is it going to be until Apple just decides to go 64-bit all the way? Why couldn't IBM design a low-power 64-bit CPU? There is no technical reason.
These CPUs have a variety of uses, and are not developed just for Apple. Apple may have been a big factor in starting up the 970 project at IBM, but IBM has plans for using this chip too. Likewise Mojave may make a great chip for low end Macs, but IBM likely plans to market this chip to the vast embedded market too. As long as there is a strong market for a 32-bit low cost, low power CPU, IBM will probably make it. True, at some point there will be no cost or power advantage to staying 32-bits, so these chips will become 64 bits too. It's just a question of when, and I think it may take ten or more years.
Edit Add: Well, I actually can see some reasons it may happen sooner. With the PPC, there is no penalty at all running a 32-bit application on a 64-bit CPU. So Apple and IBM may go for 64 bits in the low end chips, regardless of a slight increase in cost and power. It may become a marketing advantage to have 64-bit CPUs, even if most customers do not take advantage of it.
Originally posted by snoopy
These CPUs have a variety of uses, and are not developed just for Apple. Apple may have been a big factor in starting up the 970 project at IBM, but IBM has plans for using this chip too. Likewise Mojave may make a great chip for low end Macs, but IBM likely plans to market this chip to the vast embedded market too. As long as there is a strong market for a 32-bit low cost, low power CPU, IBM will probably make it. True, at some point there will be no cost or power advantage to staying 32-bits, so these chips will become 64 bits too. It's just a question of when, and I think it may take ten or more years.
Unfortunately, a low power, low cost 32-bit CPU that has a lot of other uses may not be a CPU that you want to use in a computer. It might make a great printer CPU. For example, the G4 is a really good low cost, low power CPU for embedded applications. This is why it is still using MaxBus. Moto isn't sinking any additional money into keeping the G4 usable for Apple. Unless a company is willing to keep making a 32-bit desktop CPU, then said CPU will be as out of place in a Mac as the G4. There is a difference between desktop and embedded CPUs.
Besides if Apple were to go mostly G5's, it would help with the economy of scale thing and lower their costs for all their machines (or more likely, increase Apple's profit margins).
Originally posted by Yevgeny
Unfortunately, a low power, low cost 32-bit CPU that has a lot of other uses may not be a CPU that you want to use in a computer. It might make a great printer CPU. For example, the G4 is a really good low cost, low power CPU for embedded applications. This is why it is still using MaxBus. Moto isn't sinking any additional money into keeping the G4 usable for Apple. Unless a company is willing to keep making a 32-bit desktop CPU, then said CPU will be as out of place in a Mac as the G4. There is a difference between desktop and embedded CPUs. . .
This is Mojave I am talking about, the 750 VX. This is what some people are calling the G3 with AltiVec, which makes it a G4 but let's bypass that discussion. It is being developed with different tradeoffs than the 970. Cost and power will be kept low. With IBM working closely with Apple, I feel confident that Apple is a prime customer and their needs for a desktop CPU will be addressed in this chip. The fact that IBM can make it useful in the embedded market is to IBM's benefit and will not detract from Apple's needs, in my opinion. Mojave should be an ideal CPU for the iBook, eMac and whatever else Apple comes up with in the low end. Likely it will keep 32-bit Macs around for a long time, or until IBM can produce a 64-bit CPU just as cheap and stingy with Watts.
Originally posted by snoopy
This is Mojave I am talking about, the 750 VX. This is what some people are calling the G3 with AltiVec, which makes it a G4 but let's bypass that discussion. It is being developed with different tradeoffs than the 970. Cost and power will be kept low. With IBM working closely with Apple, I feel confident that Apple is a prime customer and their needs for a desktop CPU will be addressed in this chip. The fact that IBM can make it useful in the embedded market is to IBM's benefit and will not detract from Apple's needs, in my opinion. Mojave should be an ideal CPU for the iBook, eMac and whatever else Apple comes up with in the low end. Likely it will keep 32-bit Macs around for a long time, or until IBM can produce a 64-bit CPU just as cheap and stingy with Watts.
I think that the 750VX will probably be a future iBook chip. It is supposed to be available a few months before the die-shrunk G5, from what it seems. Of course, 1.0 GHz 90nm G5's would also be very cheap, since they are essentially the scraps of the wafer. Obviously, if Apple wants to go with the 750VX, then they can go for it.
P.S. G3 + Altivec + MERSI = G4. G3 + Altivec = G3 with Altivec, not a G4. It is a small point I know, but if the G4's hadn't had MERSI, then Apple would have been doomed (MERSI lets you use more than one CPU in a box)
Originally posted by Yevgeny
. . . P.S. G3 + Altivec + MERSI = G4. G3 + Altivec = G3 with Altivec, not a G4. It is a small point I know, but if the G4's hadn't had MERSI, then Apple would have been doomed (MERSI lets you use more than one CPU in a box)
The 750 VX may be multiprocessor capable according to several rumor sources. But even if it is not, Apple can call it a G4 if they wish to. We might argue about technical points of the processor and what makes it a G3 or a G4. The only thing a customer cares about is what software it can run. The 750 VX will run all software that calls for a G4 in the system requirements. So it's a G4 to the customer. If a customer buys a single processor G4 Mac today, I doubt he or she cares about its multiprocessor capability.
By the way, I usually read SMP for this capability. I'm not familiar with the term MERSI.
Originally posted by snoopy
The 750 VX may be multiprocessor capable according to several rumor sources. But even if it is not, Apple can call it a G4 if they wish to. We might argue about technical points of the processor and what makes it a G3 or a G4. The only thing a customer cares about is what software it can run. The 750 VX will run all software that calls for a G4 in the system requirements. So it's a G4 to the customer. If a customer buys a single processor G4 Mac today, I doubt he or she cares about its multiprocessor capability.
By the way, I usually read SMP for this capability. I'm not familiar with the term MERSI.
A lot of Wintel people still see a G4, heck, a G5, and say it's too slow because it doesn't have as high a clock speed. You're right in saying that they don't care about multiprocessor capabilities.
Originally posted by CubeDude
A lot of Wintel people still see a G4, heck, a G5, and say it's too slow because it doesn't have as high a clock speed. You're right in saying that they don't care about multiprocessor capabilities.
Unless they are power users and know better. Of course, with the debut of Centrino (whose Pentium M runs at a lower clock speed but gets more done per clock), they will have to change their minds. As for "Joe eMachines End User", yes, they don't have a clue and will do whatever the Best Buy salesman tells them to do.