<strong>
Yeah, I just saw a simplified diagram in an IBM PDF that included the companion chip, and it hit me that they'd have to have one for their own use. Apple could either use IBM's or design its own, depending on which suited it best. I still wonder if Apple pi will show up?</strong><hr></blockquote>
Yup
<strong>Oh, anyone know what a VSP interface is?</strong><hr></blockquote>
I just realized that we were completely barking up the wrong tree with the VSP thing. In this context it does not mean "Virtual Single Processor" at all... it means "Vector Signal Processor". Big difference.
For those who remember, this is probably the DSP-like features in the chipset that Moki was talking about last summer: four vector units, much like you find in GPUs, game consoles, or other dedicated vector processors. Sitting in the chipset, they would have direct high-speed access to system memory and the AGP bus. Audio, Quartz rendering, OpenGL, QuickTime, and custom signal-processing algorithms should all be possible on these VSPs.
Apple has tried this kind of thing before, way back in the 68040 days, with the DSPs in the 660AV and 840AV. Unfortunately, back then they weren't very powerful and media processing was much less common. Things are quite different now. The main question is whether Apple can commit to keeping these things in its machines going forward; the '040 AV machines were like orphaned children. Integrating the vector units into the chipset keeps the component count (and cost) down and lets Apple tailor them specifically to its needs. There is also a lot of technology floating around these days for keeping the code that runs on these things independent of the hardware details (e.g. OpenGL's shader technologies). The integrated nature of the beast might allow it to show up at the low end as well as the high end, bringing substantially improved performance to G4-based computers.
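Just to make the idea concrete, here's a rough sketch of the kind of kernel such units chew through: a plain-C FIR filter. Everything here is made up for illustration (ordinary portable C, not any actual Apple or IBM API); the point is that a vector unit could compute several output samples per instruction instead of one at a time.
[code]
/*
 * Purely illustrative: a scalar FIR filter, the sort of signal-processing
 * kernel a chipset-level "Vector Signal Processor" (or AltiVec, or a GPU)
 * could run many samples at a time. Tap count and sample values are invented.
 */
#include <stdio.h>

#define NUM_TAPS    4
#define NUM_SAMPLES 8

/* y[n] = sum over k of h[k] * x[n - k], a basic convolution */
static void fir_filter(const float *x, const float *h, float *y,
                       int num_samples, int num_taps)
{
    for (int n = 0; n < num_samples; n++) {
        float acc = 0.0f;
        for (int k = 0; k < num_taps; k++) {
            if (n - k >= 0)
                acc += h[k] * x[n - k];
        }
        y[n] = acc;   /* a vector unit would produce several y[n] at once */
    }
}

int main(void)
{
    const float x[NUM_SAMPLES] = { 1, 2, 3, 4, 5, 6, 7, 8 };     /* input signal   */
    const float h[NUM_TAPS]    = { 0.25f, 0.25f, 0.25f, 0.25f }; /* moving average */
    float y[NUM_SAMPLES];

    fir_filter(x, h, y, NUM_SAMPLES, NUM_TAPS);

    for (int n = 0; n < NUM_SAMPLES; n++)
        printf("y[%d] = %.2f\n", n, y[n]);
    return 0;
}
[/code]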
Could be very cool things in store...
<strong>From my understanding of the situation, it was Apple's job to build the companion chip. As of the end of 2001, they were behind.</strong><hr></blockquote>
Well, that was a year ago.
<strong>
I have a sneaking suspicion that IBM is under-promising on the 970, and Apple hasn't promised anything. I know what you're saying about not delivering the bleeding edge, my point is simply that 1.8 GHz may very well not be the bleeding edge of the 970.</strong><hr></blockquote>
If I recall correctly, when IBM announced the Gekko chip with Nintendo its speed was 400 MHz, but the shipping systems have a 485 MHz chip in them, according to Nintendo. That is a precedent for IBM delivering more than promised. IBM has also said that the 970 will scale rapidly to 2 GHz with a die shrink, and the actual shipping chips might reliably handle more than what has been announced, just as the G4 reliably handled less than was promised. I'm sure IBM and Apple do not want to repeat Moto's mistake there.
<strong>...just like hard drive manufacturers can get away with calling an 18.62 GB hard drive a 20 Gig (like the one in my iBook). I'm just curious how these numbers actually come about. Is there someone out there that can explain this stuff without needing an advanced degree in quantum mechanics? Thanks.</strong><hr></blockquote>
Hmm, one at a time, I guess:
HD size: This one really is a hoodwink. HD manufacturers do (sort of) fib about the size of their units. They quote GB as "billions of bytes" (so 1 GB = 1,000,000,000 bytes) instead of 2^30 (1,073,741,824 bytes), which is what a GB means to any programmer or engineer. It's just like how 1K in the computer world really means 1024 (2^10), not 1000. That is the standard, and has been for decades.
See <a href="http://www.itd.umich.edu/~doc/Digest/1195/feat03side01.html" target="_blank">this</a> for more info.
[ 02-12-2003: Message edited by: Transcendental Octothorpe ]
<strong>Just a quick question here to clear something up for me. How do these MHz / GHz ratings come about in the first place? Do these fabs have a computer without a CPU standing by that runs the equivalent of iBench with these processors, or do they have a separate machine they drop the CPU into that says, "Yup, it's a 1.6." How do they know what a chip runs at and how much can they get away with lying about? Like a chip in their machine is actually a 1.4, but they say, "Screw it, let's call it a 1.6"...</strong><hr></blockquote>
Now, about CPU clock speed: it depends on what you are talking about. PowerPC (and Intel) chips are rated at an actual clock speed measured with automated test equipment. Sets of test data are run through the parts at varying speeds, and some parts (within the same production batch) will run those data sets correctly at higher speeds than others. This is called "speed binning," and it is done with memory as well (there's a rough sketch of the idea at the end of this post).
If you are talking about SPEC numbers, that is a test done in software measuring how fast that CPU can perform certain calculations.
If you are talking about AMD's numbers, that's another <a href="http://archive.infoworld.com/articles/hn/xml/01/10/15/011015hnathlonxp.xml" target="_blank">story</a>.
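Here is that rough sketch: a toy model of how binning assigns a marketing grade. Every frequency and grade in it is invented for illustration, and real production test flows are obviously far more involved; the idea is just "label the part with the fastest grade it still passes."
[code]
/* Toy model of speed binning; all numbers are invented for illustration. */
#include <stdio.h>

#define NUM_GRADES 3
#define NUM_PARTS  4

int main(void)
{
    /* marketing grades a part can be sold as, in MHz */
    const int grades[NUM_GRADES] = { 1400, 1600, 1800 };

    /* pretend maximum stable frequency the tester found for each part */
    const int max_stable_mhz[NUM_PARTS] = { 1450, 1630, 1810, 1590 };

    for (int p = 0; p < NUM_PARTS; p++) {
        int bin = -1;
        /* pick the fastest grade the part can still pass */
        for (int g = 0; g < NUM_GRADES; g++) {
            if (max_stable_mhz[p] >= grades[g])
                bin = grades[g];
        }
        if (bin > 0)
            printf("part %d: stable to %d MHz -> sold as %d MHz\n",
                   p, max_stable_mhz[p], bin);
        else
            printf("part %d: stable to %d MHz -> downgraded or scrapped\n",
                   p, max_stable_mhz[p]);
    }
    return 0;
}
[/code]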
[ 02-12-2003: Message edited by: Transcendental Octothorpe ]
(Please forgive any forthcoming condescending tone.)
Shoot, well, while we're at it, a few notes about semantics:
A CPU is not "a 1.4 GHz", nor does a chip "have 1.6 GHz".
The chip runs at 1.6 GHz.
Also:
A chip does not "have 42 watts". It draws power, which is measured in watts.
The chip uses, draws, or dissipates 42 watts.
I've heard people here and elsewhere comment, "Not only does this chip run cooler, it also draws less battery, making it perfect for PowerBooks!" While true, it is pointing out the obvious. Power drawn (for CPUs, at least) is always the same as the heat dissipated; conservation of energy and all that. A lower-wattage device will always produce less heat and draw less from the battery than a higher-wattage part. They aren't two separate goals, though of course both are good.
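To put some made-up numbers on that (a hypothetical 50 Wh battery and two hypothetical chip power figures, ignoring everything else in the machine that draws power):
[code]
/* Toy arithmetic with invented figures. The point is just that the watts
 * drawn and the watts of heat are the same number, so "runs cooler" and
 * "longer battery life" come as a pair. */
#include <stdio.h>

int main(void)
{
    const double battery_wh  = 50.0;            /* hypothetical battery capacity */
    const double cpu_watts[] = { 42.0, 20.0 };  /* hypothetical CPU power draws  */

    for (int i = 0; i < 2; i++) {
        printf("%.0f W chip: about %.1f hours on the battery, %.0f W of heat\n",
               cpu_watts[i], battery_wh / cpu_watts[i], cpu_watts[i]);
    }
    return 0;
}
[/code]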
Correct as necessary...
[ 02-12-2003: Message edited by: Transcendental Octothorpe ]
A car is powered by a V8 engine.
So why don't you go tell the millions of car-freaks out there that their grammar is all wrong?
<strong>So why don't you go tell the millions of car-freaks out there that their grammar is all wrong?</strong><hr></blockquote>
Really, who gives a sh!t. Not I. As long as it does what I want it to, and doesn't slow me down in the process.
Nice to see you back, JYD? On vacation in sunny Aruba or something?
<strong>
Really, who gives a sh!t. Not I. As long as it does what I want it to, and doesn't slow me down in the process.
Nice to see you back, JYD? On vacation in sunny Aruba or something?</strong><hr></blockquote>
Hey, sorry. I was just in informative mode.
I'll try to never be informative again.
<strong>
Hey, sorry. I was just in informative mode.
I'll try to never be informative again.</strong><hr></blockquote>
I wasn't trying to be abrasive, just pointing out that I think a lot of computer terminology only matters to a small portion of the computer-buying public. As long as it's fast, yada, yada. Sorry if I appeared disgruntled.
<a href="http://news.com.com/2100-1001-984353.html?tag=fd_top" target="_blank">http://news.com.com/2100-1001-984353.html?tag=fd_top</a>
<strong>
Hmm, one at a time, I guess:
HD size: This one really is a hoodwink. HD manufacturers do (sort of) fib about the size of their units. They quote GB as "billions of bytes" (so 1 GB = 1,000,000,000 bytes) instead of 2^30 (1,073,741,824 bytes), which is what a GB means to any programmer or engineer. It's just like how 1K in the computer world really means 1024 (2^10), not 1000. That is the standard, and has been for decades.</strong><hr></blockquote>
Close but very wrong. HD manufacturers and most ISPs actually have it right: 1 GB is 1,000,000,000 bytes. It comes from the prefix giga, which is formally defined as 10^9. That really is the standard as defined by the SI system (don't they teach that in high school?).
The problem is that in the early days of computing there were no prefixes for binary quantities, so people just borrowed the nearest SI prefix and said, "That's close enough." The trouble is that as the orders of magnitude increase, the error gets substantially larger and it no longer is close enough.
So where is this leading? Well, there is in fact another set of prefixes for binary quantities that you can read all about over <a href="http://www.romulus2.com/articles/guides/misc/bitsbytes.shtml" target="_blank">here.</a> The problem is that the system makers seem to refuse to use them, but at the end of the day the HD manufacturers have it right, so three cheers for the HD manufacturers.
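For the drive mentioned earlier, the gap works out like this (a quick back-of-the-envelope sketch, taking the "20 GB" label at its decimal face value):
[code]
/* Small sketch of the decimal-vs-binary prefix gap, using the "20 GB"
 * iBook drive mentioned above as the example. */
#include <stdio.h>

int main(void)
{
    const double drive_bytes = 20e9;                      /* "20 GB" as the maker counts it: 20 * 10^9 */
    const double gib         = 1024.0 * 1024.0 * 1024.0;  /* 2^30 bytes, one "binary GB" (GiB)         */

    printf("SI:     1 GB  = %.0f bytes\n", 1e9);
    printf("Binary: 1 GiB = %.0f bytes\n", gib);
    printf("A 20 GB drive = %.2f GiB\n", drive_bytes / gib);  /* about 18.63, i.e. the 18.6-ish figure Finder reports */
    return 0;
}
[/code]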
Edit: <a href="http://physics.nist.gov/cuu/Units/binary.html" target="_blank">Or another link.</a>
[ 02-12-2003: Message edited by: Telomar ]
<strong>...</strong><hr></blockquote>
<img src="graemlins/oyvey.gif" border="0" alt="[oyvey]" />
I wasn't wrong. It is standard in any computer-related field to use powers of 2 in steps of ten (2^10, 2^20, 2^30, and so on).
Of course you are right that the real scientific standard is that mega = 10^6 etc., but we are in base two here, not base ten, and the convention as I have stated it has been in place since at least the '60s.
[Edit: I just read your second link. Interesting, but it'll be a while before they change the accepted jargon of the industry. I still maintain that the HD manufacturers' notation is intentionally misleading, and not some noble attempt to be faithful to SI units.]
But, as others have said: who cares? We all just want a kick-ass computer
(and I believe that we will be getting one from Apple before fall)
[ Edit: DOH! You'd think I'd know mega from giga. [embarrassed] ]
[ 02-12-2003: Message edited by: Transcendental Octothorpe ]
[quote]Of course you are right the the real Scientific standard is that mega = 10^9 etc., <hr></blockquote>
Oh, Boy! Now it's my turn to be hyper-picky!
kilo = 10^3
mega = 10^6
giga = 10^9
tera = 10^12
<strong>Just another story to confirm the arrival date of the 970. A NEWS article on new Blade Servers coming from IBM using the PPC 970.
<a href="http://news.com.com/2100-1001-984353.html?tag=fd_top" target="_blank">http://news.com.com/2100-1001-984353.html?tag=fd_top</a></strong><hr></blockquote>
From that article: [quote]In the longer term, though, IBM has even grander ambitions. It's possible that a special link could join two four-processor blade servers into an eight-processor system. The approach would use a variant of IBM's EXA "Summit" chipset, which can use high-speed cables to link four-processor groups into eight-, 12- and 16-processor x440 servers, Benck said in an interview.
"We have the technology to create blades that plug in to create larger symmetrical multiprocessor servers," he said.<hr></blockquote>moki and others have talked about IBM may be sandbagging a bit, and others have talked about technologies to treat multiple chips as a single chip. I wonder if this is important and whether it fits into Apple's plans.
<strong>Just another story to confirm the arrival date of the 970. A NEWS article on new Blade Servers coming from IBM using the PPC 970.
<a href="http://news.com.com/2100-1001-984353.html?tag=fd_top" target="_blank">http://news.com.com/2100-1001-984353.html?tag=fd_top</a></strong><hr></blockquote>
I like this line:
[quote] ...will use the PowerPC 970 processor. The 1.8GHz processor is expected to arrive later this year, IBM said. <hr></blockquote>
So I think that bodes well for getting the 970 this year and not next.
[ 02-12-2003: Message edited by: KidRed ]
The potential problem with Macs bearing 970s is not IBM getting the chip out; it's whether Apple has the design competence to create the companion chip and motherboard that the 970 needs.
It's not a trivial undertaking, and there are real questions about whether Apple will be capable of creating the technology to use the 970 effectively.
[Hoping for October 2003, expecting March 2004]
[ 02-13-2003: Message edited by: Tom West ]