I think 64-bit support in the OS would be for (a) the server market and (b) the geek and academic markets. I know I can't wait to play on a 64-bit machine. These are important markets to Apple, especially for building cool stuff for the future, when all their machines are 64-bit. The sooner they get support out there, the better.
Hmmm... X1? I liiiike it... But seriously, folks, what will the naming scheme be? I doubt it will be a "PowerMac 970". Apple learned that nothing confuses the consumer more than giving products monikers like "6052i". Heck, BMW should take note. But Porsche names their flagship the "911"... I'll shut up now. Personally, I'm against the whole "Xtreme" thing, but X1 might be OK... as long as it kicks ass. I hope Apple sees how badly their machines are doing even in Photoshop, and that they are working to improve.
Naw, it should feature a Panther ripping a Dell workstation into many teenie, ickle pieces, with the Dell's guts pouring out onto the floor, all bloody and... I'll stop now. Either that, or something with a Pixar tie-in...
Comments
Originally posted by hypoluxa
I'm curious: I'm not a programmer or chip guru by any means, but would the 970 chip basically be the often-rumored G5, or is that the wrong term? There are just too many threads to read on this here.
Motorola was supposed to be developing (then subsequently stopped developing) an MPC8500 chip known as the G5. Their plans changed, and they are now focused on using that chip and its iterations for embedded appliances. No more desktop chips, apparently.
The 970 is IBM's latest 64-bit chip, derived from the Power4 series of server CPUs.
Originally posted by Vvmp
Not sure what the actual PPC 970 will be called, but the marketing campaign should be along the lines of "SHOCK & AWE!!!!"
Originally posted by Vvmp
Not sure what the actual PPC 970 will be called, but the marketing campaign should be along the lines of "SHOCK & AWE!!!!"
What? Baseless, without proper justification and a boring media event?
Originally posted by hypoluxa
I'm curious: I'm not a programmer or chip guru by any means, but would the 970 chip basically be the often-rumored G5, or is that the wrong term? There are just too many threads to read on this here.
Well, nobody really knows for sure... but as far as anybody's been able to guess, it probably goes something like this (I will be corrected, but given the degree of speculation about this, that's to be expected, so here goes):
Motorola got some sort of deal to come out with THEIR version of a G5 waaaay back when.
Apparently, prototypes were created and released over a year ago now.
Apparently, these prototypes were mind-blowingly fast.
For some reason, the whole thing got canned; nobody knows for sure why. Maybe it was a permanent cancellation; maybe Moto simply said they couldn't afford to build a whole new fab in this economic climate just to make these monsters cost-effective. Who knows? Either way, Moto's G5 didn't make an appearance in 2002.
While all that was going on ...
IBM was working on scaling down their incredibly powerful Power4 chip ...
Apple seems to have gotten a hand in the making of the chip ...
IBM came out with the 970, which is way faster than the G4 ... and considering the benefits, it can easily carry the "G5" moniker even though it may not have originally been intended to fly that flag.
There's still a vague, vague, vague hope floating around that Moto - who apparently designed the original "G5" chip - might work out a deal with AMD to fab it... so there is a very slim chance that we might still get a Moto G5, or perhaps a Moto G5 and an IBM 970 G5 at the same time, but it's looking less and less likely by the day.
OK, let the corrections begin!
Originally posted by hypoluxa
Oh yeah... I always wanted to ask this. What does the G in G3, G4... stand for, anyway? Anyone?
Gee
"From now on I will be known a Homer...JAY Simpson!"
Originally posted by atomicham
--SNIP--
(especially while still trying to get people to move to OS X -- I cannot believe there are holdouts).
--SNIP--
Some of us are holding out, waiting for the 970 to take the X plunge. I keep hearing how X is too slow on the old G3s, and I think my rev A G3/266 just won't be able to handle it. Plus, why spend $130 on a new OS when I will soon be buying a new Mac that will have X on it?
Dual 2.5 GHz 970 and 30-inch Cinema Display!
32 bits of addressing (4 GB) is only a few minutes' worth of uncompressed video. Compression is fine, but for scrubbing or game playing, where you need random access to a lot of data, there is nothing like uncompressed video.
OpenGL is a great example of this. A 64-bit version of OpenGL could easily give a 3D game access to very high-resolution rooms, environments, etc.
A single picture of Earth at 100 m resolution is an image of about 400,000 x 800,000 pixels, which is over 1000 GB. One way of storing this image is as thousands of sub-images with an elaborate load/unload mechanism for all the tiles; the other, very simple way is to use a single 64-bit memory-mapped object for the graphic. The OS then loads the parts you need into real memory as required.
It is easy to think of multi-GB images/image sets: look around the room you are in and imagine a scene where each object bigger than a pea has six or more texture maps taken for it, all at 72 dpi. That is one room, and it would take many GB of textures to describe. Imagine an airport.
32-bit is at the end of the road. 4 GB of RAM costs less than a decent processor.
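The single-memory-mapped-object idea above can be sketched with an ordinary memory map. This is just an illustration in Python, not any specific graphics API; the function name and tile layout are made up for the example. The key point is that the OS pages in only the regions you actually touch, so with a 64-bit address space the same code works even when the file dwarfs physical RAM.

```python
import mmap

def read_tile(path, row_start, row_len, row_bytes):
    """Read one horizontal strip of a huge raw image without
    loading the whole file: the OS pages in only what we touch."""
    with open(path, "rb") as f:
        # length=0 maps the entire file; on a 64-bit OS this works
        # even for files far larger than physical RAM.
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            offset = row_start * row_bytes
            return mm[offset : offset + row_len * row_bytes]
```

Under 32-bit addressing, a map like this fails once the file outgrows the few GB of virtual address space a process has, which is exactly the limit the post is describing; the tiled load/unload scheme exists to work around it.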
Originally posted by moki
Given that my app (and 99.9% of the apps out there) would not benefit even in the slightest by being 64 bit (and indeed, the performance could slightly degrade, and my app would use slightly more memory), no, I wouldn't feel bad about having a 32 bit app to sell.
But hey, whatever -- selling a "64-bit" machine is something I expect Apple to leverage in their marketing campaign, and perhaps it'll work to their advantage for a change.
I guess the point I was trying to make, and obviously didn't make very well, was that when OS X is solidly 64-bit and other companies are introducing competing 64-bit apps, a software company would not want to be labeled "behind the times", whether or not the app sees a performance gain.
That was my point about the 16-bit to 32-bit transition as well. I guess I clicked submit before the thought was clearly written out. An editor I am not, alas...
Originally posted by knobs
64 bits biggest win? Video.
32 bits of addressing (4 GB) is only a few minutes' worth of uncompressed video. Compression is fine, but for scrubbing or game playing, where you need random access to a lot of data, there is nothing like uncompressed video.
OpenGL is a great example of this. A 64-bit version of OpenGL could easily give a 3D game access to very high-resolution rooms, environments, etc.
A single picture of Earth at 100 m resolution is an image of about 400,000 x 800,000 pixels, which is over 1000 GB. One way of storing this image is as thousands of sub-images with an elaborate load/unload mechanism for all the tiles; the other, very simple way is to use a single 64-bit memory-mapped object for the graphic. The OS then loads the parts you need into real memory as required.
It is easy to think of multi-GB images/image sets: look around the room you are in and imagine a scene where each object bigger than a pea has six or more texture maps taken for it, all at 72 dpi. That is one room, and it would take many GB of textures to describe. Imagine an airport.
32-bit is at the end of the road. 4 GB of RAM costs less than a decent processor.
Remember, though, that video is simply frames. A crapload of frames, but frames nonetheless. It is my understanding that when rendering such things, the 4 GB limit isn't hit; a frame of video is only a couple of MB, so I am not sure that is necessarily the case. I could be wrong, though. Anyone know for sure?
Originally posted by Rhumgod
Remember, though, that video is simply frames. A crapload of frames, but frames nonetheless. It is my understanding that when rendering such things, the 4 GB limit isn't hit; a frame of video is only a couple of MB, so I am not sure that is necessarily the case. I could be wrong, though. Anyone know for sure?
Yes, you are correct. Video systems are all designed to stream data in and out of memory efficiently, so it simply isn't necessary to have all the frames loaded, and 4 GB is a lot of frames. Having more than 4 GB doesn't help as much as you might think while editing, because it takes a long time to load and save that much data, and you really don't want to keep that much changed, unsaved data in memory. So 64-bit will be a bit of an advantage, but not a huge and revolutionary leap forward in video editing.
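The streaming behaviour described here can be sketched as a fixed-size window over the frame sequence: however long the clip, only a handful of frames is ever resident in memory. A minimal illustration (the frame source is a stand-in iterable, not any real video API):

```python
from collections import deque

def process_stream(frames, window=4):
    """Touch every frame while holding at most `window` frames
    in memory at once, regardless of clip length."""
    buf = deque(maxlen=window)   # bounded buffer: old frames fall out
    processed = 0
    for frame in frames:         # frames arrive one at a time
        buf.append(frame)
        processed += 1           # stand-in for real per-frame work
    return processed, len(buf)
```

A "two-hour clip" of hundreds of thousands of frames streams through this just fine on a 32-bit machine, which is why video editors didn't hit the 4 GB wall in practice.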
Originally posted by Rhumgod
Remember, though, that video is simply frames. A crapload of frames, but frames nonetheless. It is my understanding that when rendering such things, the 4 GB limit isn't hit; a frame of video is only a couple of MB, so I am not sure that is necessarily the case. I could be wrong, though. Anyone know for sure?
A 1080i HDTV (16:9) image will consume about 4 MB per decompressed field at 32-bit color, roughly 8 MB per full 1920x1080 frame. At around 30 frames per second, a fully decompressed two-hour HDTV video runs to well over a terabyte, far more than you would ever hold in memory at once, which is why it is streamed. So that is not where 64-bit addressing will play a significant role.
There is, however, a very mundane but important benefit of 64-bit addressing: mapping your hard disk. A 32-bit OS can directly address only about 4 billion sectors on a hard disk. A 64-bit OS allows about 1.8E19 sectors to be addressed directly. That is enough space to map every conduction electron in a semiconductor.
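The arithmetic behind those sector counts is worth making explicit. A quick check, assuming the common 512-byte sector size of the era:

```python
SECTOR = 512                      # bytes per sector (typical at the time)

max_sectors_32 = 2**32            # ~4.29 billion sectors
max_sectors_64 = 2**64            # ~1.8e19 sectors

# Largest disk each scheme can address directly:
cap_32 = max_sectors_32 * SECTOR  # 2 TiB
cap_64 = max_sectors_64 * SECTOR  # ~8 ZiB

print(cap_32 // 2**40, "TiB")     # → 2 TiB with 32-bit sector numbers
```

So 32-bit sector numbers top out at a 2 TiB disk, a limit that was already visible on the horizon for RAID arrays, while 64-bit sector numbers are effectively inexhaustible.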
Originally posted by knobs
64 bits biggest win? Video.
Databases and most of all scientific/engineering modelling. Most scientific models rely on high precision calculations and a lot of memory.
Originally posted by Mr. Me
There is, however, a very mundane but important benefit of 64-bit addressing: mapping your hard disk. A 32-bit OS can directly address only about 4 billion sectors on a hard disk. A 64-bit OS allows about 1.8E19 sectors to be addressed directly. That is enough space to map every conduction electron in a semiconductor.
This doesn't require 64-bit hardware, however. Any 32-bit machine can do 64-bit arithmetic; it's just less efficient. Since processors are many orders of magnitude faster than disks, that inefficiency is irrelevant. 64-bit hardware is only required for memory addressing, not disk sector addresses.
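The point that 32-bit machines can do 64-bit arithmetic in software can be illustrated with the classic technique: operate on two 32-bit halves and propagate the carry between them. This is a sketch in Python purely for illustration; a compiler lowers a 64-bit add on 32-bit hardware to the same pattern in a couple of machine instructions.

```python
MASK32 = 0xFFFFFFFF

def add64_on_32bit(a_lo, a_hi, b_lo, b_hi):
    """Add two 64-bit numbers given as (lo, hi) 32-bit halves,
    the way a compiler lowers 64-bit adds for 32-bit hardware."""
    lo = (a_lo + b_lo) & MASK32
    carry = 1 if a_lo + b_lo > MASK32 else 0   # carry out of the low word
    hi = (a_hi + b_hi + carry) & MASK32        # overflow past 64 bits wraps
    return lo, hi
```

Two extra instructions per operation is nothing next to a disk seek, which is why 64-bit sector numbers never needed 64-bit CPUs.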