Comments
Originally posted by MacJedai
Now the "multiple clients of the memory system", I buy. I wish I knew if that was part of "Apple PI" or not. Unfortunately, my sources don't give details, but instead refer to secondary sources that talk about "PI". Sure wish that they would talk to me.
Thanks for making me think about it more!
Most of the systems I work with suffer from issues that arise from multiple processors, DMA engines, or GPUs. The differences in speed between all of the various buses means that some amount of buffering is going to have to happen anyhow, so an L3 becomes a natural extension of that. Whether it is cost-effective or not is a job for the engineers building the chipset.
Originally posted by Kurt
I may have just missed his posts but I haven't seen Moki on these boards for a while. No comments on the MB rumors for dates and benchmarks. You don't suppose he has a test machine and is under a NDA do you?
No, he doesn't (but he wishes he did)
As for the benchmarks, they are fabricated. As for the release at the end of May... don't hold your breath.
WWDC, on the other hand, will no doubt be interesting.
Originally posted by barbarella
So true! But look at the picture for the ImpactRT 3100 http://www.mc.com/press_room/image_l...tegory=Systems and you'll see a motherboard of sorts leaning against the machine. That MB looks mighty familiar: wasn't something like that shopped around as a future Apple PM MB or the IBM 970 blade MB a few months back? Anybody who has these old spy pictures might want to take a look?
Not even close:
Mercury blade:
IBM Blade:
Originally posted by MozillaMan
Wait....
I'm sure this has been covered before, but if Apple sticks to 32-bit processors in their laptops/low end Macs, would a 64-bit OS work on them? If not, what is the likelihood of Apple sustaining two OS's like that? I was under the impression that Jobs didn't like that (e.g. Newton OS, Rhapsody/OS X Server).
Since there is a great deal of common code, I would suspect a scenario similar to the "fat binaries" that allowed previous migrations. The installer would examine the hardware and install the processor-specific code appropriate to 32-bit or 64-bit machines. Not all of the code will be different. I am sure programmer or one of the other real experts can provide a better explanation of how it might be done.
Originally posted by Shaktai
Since there is a great deal of common code, I would suspect a scenario similar to the "fat binaries" that allowed previous migrations. The installer would examine the hardware and install the processor-specific code appropriate to 32-bit or 64-bit machines. Not all of the code will be different. I am sure programmer or one of the other real experts can provide a better explanation of how it might be done.
The 32-bit APIs in the OS have to be maintained for all the existing software anyhow, and the other differences are quite minor in terms of how much code is involved. Maintaining both should not be an issue.
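The fat-binary idea sketched above can be illustrated roughly like this. The slice table, names, and loader logic below are purely illustrative (not real Mach-O internals): the point is just that one shipped file can carry both code images, and the machine picks the right one at launch.

```python
# Hypothetical sketch of a multi-architecture ("fat") binary: one file
# carries a code image per CPU type, and at launch the loader picks the
# slice that matches the machine. Names here are illustrative only.
FAT_SLICES = {
    "ppc": "32-bit PowerPC code image",
    "ppc64": "64-bit PowerPC code image",
}

def pick_slice(machine_arch, slices):
    """Return the best code image for this CPU, falling back to 32-bit."""
    if machine_arch in slices:
        return slices[machine_arch]
    # A 64-bit PowerPC can still run the 32-bit code; the reverse is not true.
    return slices["ppc"]

print(pick_slice("ppc64", FAT_SLICES))  # -> 64-bit PowerPC code image
print(pick_slice("ppc", FAT_SLICES))    # -> 32-bit PowerPC code image
```

With something like this, there is no separate "32-bit install" and "64-bit install" to get wrong; the same file works everywhere.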
Originally posted by moki
No, he doesn't (but he wishes he did)
As for the benchmarks, they are fabricated. As for the release at the end of May... don't hold your breath.
WWDC, on the other hand, will no doubt be interesting.
Hey, it's Moki! Been busy?
I still say August-September for the hardware and the OS.
Originally posted by Shaktai
Since there is a great deal of common code, I would suspect a scenario similar to the "fat binaries" that allowed previous migrations. The installer would examine the hardware and install the processor-specific code appropriate to 32-bit or 64-bit machines. Not all of the code will be different. I am sure programmer or one of the other real experts can provide a better explanation of how it might be done.
I think different software installs could cause a lot of complaints if people ever try to upgrade from the 32-bit version to a 64-bit Mac, or downgrade vice versa. I foresee a much simpler "fat binaries" scenario: if the hardware is there, the 64-bit extensions would activate; if not, they will remain dormant. It's the same with the Bluetooth System Preferences pane, and a host of other OS X thingies.
I'm guessing it could be something like a kernel extension (I'm not an expert) that dynamically loads at boot depending on the hardware attached. Think of how plugging a virgin 20" Apple Cinema Display into the ADC connector on your Mac automatically comes up at 1680 x 1050 resolution. Also, think of AltiVec-enhanced code: it doesn't run at AltiVec speeds on a G3, but it still runs, don't it?
Apple will no doubt come up with an elegant solution, not just because they can, but because it saves them loads of cash that would otherwise be wasted on tech support calls.
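The "activate if the hardware is there, stay dormant otherwise" idea above can be sketched like so. The feature names and functions are illustrative, not real OS X APIs: the pattern is just probing the CPU once at startup and routing calls to the enhanced path only when the feature exists, which is essentially how AltiVec-aware code degrades gracefully on a G3.

```python
# Hypothetical sketch of runtime feature dispatch: same installed code
# everywhere, with the fast path selected only on capable hardware.

def sum_scalar(values):
    """Plain path that runs on any CPU (the G3 case)."""
    total = 0
    for v in values:
        total += v
    return total

def sum_enhanced(values):
    """Stand-in for an AltiVec/64-bit accelerated path (same answer)."""
    return sum(values)

def make_summer(cpu_features):
    """Choose the implementation once, based on detected hardware."""
    return sum_enhanced if "altivec" in cpu_features else sum_scalar

g4_sum = make_summer({"altivec"})  # enhanced path activates
g3_sum = make_summer(set())        # enhanced path stays dormant
print(g4_sum([1, 2, 3]), g3_sum([1, 2, 3]))  # -> 6 6
```

Both machines get a correct answer from the same install; only the speed differs, so there is nothing for a user to upgrade or downgrade between.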
Originally posted by moki
No, he doesn't (but he wishes he did)
Oh, well. I thought it might be true. You were quiet on these forums for a while.
If you think the benchmarks are bogus, do you think they are close to real life or are they too high?
Thanks, glad to see you back.
posted today over at looprumors:
We received word that two large shipments of PowerPC 970 processors went to Foxconn in Taiwan under a purchase order from Apple Computer. Twenty thousand 1.4GHz PPC 970s and forty thousand 1.6GHz PPC 970s have already arrived in their hands. IBM's inventory contains fifty thousand 1.8GHz PPC 970s, of which forty thousand are destined for Foxconn tomorrow (Wednesday).
IBM has also listed 2GHz parts as pending, which means they will be in inventory within a month if the fab in East Fishkill produces sufficient volumes of them; from what we hear they should be in stock by mid-June. Apple has stated that they need a minimum of forty thousand in order to make a production run, and from what we understand this is for dual-processor machines, because normally their production runs are twenty thousand units. It is not IBM's policy to comment on other vendors' unreleased products. We have also been briefed that the PPC 970 will come in 2.3GHz and 2.5GHz configurations by the end of the year, as well as on some preliminary specs for the upcoming 980 processor, which is a POWER5 derivative.
Can you imagine the smirk on Steve's face when he starts walking over to the Dell sitting next to the Mac?
It will be a great day! His keynote will be like a Churchill WW2 speech!
"History will be kind to me for I intend to write it. "
Originally posted by elizaeffect
Yeah, I have a feeling these things are coming a lot faster than many expect. IBM will produce any volume that Apple wants, and do it promptly. They have the experience and the resources to back it up.
If you believe this story is true then it says that IBM cannot produce sufficient quantities currently to meet Apple's needs.
Anyone got any ideas how one can find out IBM's inventory levels?
Originally posted by kroehl
<nitpick>
You are IN a state - not at one. So that would be:
'Unfortunately, that is the state our society is in.'
</nitpick>
Kroehl
<nonsense alert>
You seem to be trying to fit some word count into a sentence to make some sense grammatically. Unfortunately what you really need to do is rewrite the sentence to make some sense to start with:
"Unfortunately, that is the state of our society."
</nonsense alert>
Originally posted by Clive
<nonsense alert>
You seem to be trying to fit some word count into a sentence to make some sense grammatically. Unfortunately what you really need to do is rewrite the sentence to make some sense to start with:
"Unfortunately, that is the state of our society."
</nonsense alert>
Whose society, sorry don't live in that world...
Originally posted by Bigc
Whose society, sorry don't live in that world...
Clearly not the one where the lifeforms have green heads and smoking things dangling from their monoline mouths.
Originally posted by Clive
Clearly not the one where the lifeforms have green heads and smoking things dangling from their monoline mouths.
Au contraire, that is the world I live in.
I poked around on their site downloading & reading things...
Although it says that their 'blade' has RapidIO and a G4 on it, it doesn't ever say that the _G4_ has RapidIO. RapidIO is a scalable bus that can be used for chip-other-than-CPU to talk to some-other-chip if that's the way it is set up. (<-Look, a preposition at the end of a run-on sentence! Saved someone a post or two.)
So I'm not convinced that the PPC chips on there necessarily have RapidIO. Particularly in a blade situation, where you might like a bus/backplane for the separate blades to talk to/amongst themselves.