PPC 7448D


Comments

  • Reply 21 of 54
    wizard69wizard69 Posts: 13,377member
    Quote:

    Originally posted by Amorph

    Remember, the 7448D would essentially be a different packaging of the same architecture used in the dual G4 PowerMacs. There's nothing new or interesting about hanging two G4 cores off the same bus. This packaging would simply reduce the cost of implementing a dual G4.







    I'm not sure "reduce" is the word here; it's more like a dual for free. Sure, Freescale may charge a bit more for a dual-core chip, but I don't think it will be as bad as some think.



    What is interesting is that this has the potential to be better than any of the old dual G4s. I think we'd agree that not many would complain about that much of a performance increase in, say, a Mini.

    Quote:



    In the latter days of the PowerMac G4, duallies were about 40% faster on average than singles. If this chip is replacing single G4s, I don't see anyone complaining about a 40% average jump at the same clock speed (and much higher gains in specific circumstances), and the more "smooth" feeling of multitasking under SMP.



    Given the potential to increase the cache along with the FSB and core clock rates, I don't see many people complaining either.

    Quote:

    If they simultaneously bump MaxBus up to 200MHz, even better. At the very least, it can tide Apple over until IBM and Freescale roll out the low-power designs they're currently working on.



    Well, this is what I'm wondering about. That is, what does Apple think of the 32-bit market, and where do they expect to go with that market? If 32 bit is a short-term market, then Apple is very likely to go with something like this in products soon. On the other hand, the long-term strategy should be to adopt new technology fast.
  • Reply 22 of 54
    amorphamorph Posts: 7,112member
    Quote:

    Originally posted by wizard69

    I'm not sure "reduce" is the word here; it's more like a dual for free. Sure, Freescale may charge a bit more for a dual-core chip, but I don't think it will be as bad as some think.



    I was being conservative. I doubt it'll be free, but it will be significantly less expensive.



    Quote:

    Well, this is what I'm wondering about. That is, what does Apple think of the 32-bit market, and where do they expect to go with that market? If 32 bit is a short-term market, then Apple is very likely to go with something like this in products soon. On the other hand, the long-term strategy should be to adopt new technology fast.



    The "32 bit market" consists of 99.999% of the Mac's existing application base. I don't see any rush to 64 bits in the development community. When hardware support for pure 64 bit code is useful, it's indispensable. But in the common case, even for many professional applications, it's not.



    I expect IBM to lead the charge to 64 bit simply because they offer hardware that's high-end and specialized enough that support for pure 64-bit code is not optional. This certainly won't hurt Apple at all, as long as the chips they're interested in continue to offer hybrid 32/64-bit support, as the 970 does. But speaking as a developer, I see vanishingly little pressure on Apple to go all 64 bit in the near term (i.e., the next few years). They might, but if so it'll be a result of the hardware support anticipating demand, not the other way round.
  • Reply 23 of 54
    programmerprogrammer Posts: 3,457member
    A dual-core 7448 might actually perform quite a bit better than 2 7455s on a shared MPX bus. The L2 cache is purported to be 1MB per core, and the on-chip bus would handle most of the snooping traffic. The MPX bus on the motherboard would no longer have to handle 2 processors and therefore might have some opportunities for optimization (i.e. at least 200 MHz, if not more). It has to handle the load from 2 G4s, but at least that load will be better coordinated through the same bus interface and none of the inter-core traffic will be there. If they achieve a decent clock rate this thing could actually be reasonably fast, especially on programs with relatively small working sets and lots of branchy integer code. Wizard was right about it being a win for the mini, at least.
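    Programmer's bandwidth argument can be put in rough numbers. Here is a back-of-envelope sketch in Python; the 64-bit (8-byte) MPX/MaxBus data path and the 167/200 MHz clocks are this thread's speculation, not confirmed Freescale specifications:

```python
# Back-of-envelope MPX bus bandwidth model. The 64-bit (8-byte) data
# path and the 167/200 MHz clocks are assumptions from this thread,
# not confirmed Freescale specifications.

BUS_WIDTH_BYTES = 8  # MPX/MaxBus data path: 64 bits

def peak_bandwidth_gbs(bus_mhz: float) -> float:
    """Theoretical peak bus bandwidth in GB/s, ignoring protocol overhead."""
    return bus_mhz * 1e6 * BUS_WIDTH_BYTES / 1e9

for mhz in (167, 200):
    total = peak_bandwidth_gbs(mhz)
    # Two cores behind one bus interface unit share the external bandwidth.
    print(f"{mhz} MHz bus: {total:.2f} GB/s total, {total / 2:.2f} GB/s per core")
```

    Even the optimistic 200 MHz case leaves each core well under 1 GB/s of peak external bandwidth, which is why keeping the snoop traffic on-chip matters so much.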



    The 3rd party upgrade companies will love it as well. Maybe I'd upgrade my dual 1 GHz G4 to a quad 2 GHz G4.
  • Reply 24 of 54
    amorphamorph Posts: 7,112member
    Quote:

    Originally posted by Programmer

    A dual-core 7448 might actually perform quite a bit better than 2 7455s on a shared MPX bus.



    I don't doubt it. But I picked the 40% number precisely because, even though it's a lowball estimate for two 7455s sharing a MaxBus, it's a remarkable jump for an upgrade, far more than we've seen from clock-speed increases in a very long time.
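    As an aside, the 40% figure can be turned around with Amdahl's law to see what workload mix it would imply. This is purely illustrative; the 1.4x speedup is the thread's rough average, not a benchmark:

```python
# Illustrative only: if a dual averaged ~40% faster than a single at the
# same clock, Amdahl's law gives the parallel fraction that average would
# imply. The 1.4x figure is this thread's estimate, not a benchmark.

def amdahl_speedup(parallel_fraction: float, n_cores: int) -> float:
    """Speedup on n cores per Amdahl's law."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

def implied_parallel_fraction(speedup: float, n_cores: int) -> float:
    """Invert Amdahl's law: parallel fraction needed for a given speedup."""
    return (1.0 - 1.0 / speedup) / (1.0 - 1.0 / n_cores)

p = implied_parallel_fraction(1.4, 2)
print(f"A 40% average speedup on 2 cores implies ~{p:.0%} parallelizable work")
print(f"Sanity check: speedup at that fraction = {amdahl_speedup(p, 2):.2f}x")
```

    A parallel fraction in the high 50s is believable for mixed desktop workloads under SMP, which is why the 40% average holds up as a conservative figure.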



    Quote:

    Wizard was right about it being a win for the mini, at least.



    No question. It shouldn't be long before it appears in iBooks, either. Mmmm, dual core 12" iBook...
  • Reply 25 of 54
    I don't see this hitting Minis and iBooks until there is something faster in the PowerBooks. Say maybe a year? Apple is going to want any "new" chip in a pro machine, with its higher margins, first anyway, if they can help it.
  • Reply 26 of 54
    wizard69wizard69 Posts: 13,377member
    Quote:

    Originally posted by Amorph

    I was being conservative. I doubt it'll be free, but it will be significantly less expensive.







    It will probably depend on how aggressive Apple and Freescale want to be with sales and market-share increases, especially for implementation in the Mini. Apple could very well lead the market in small-form-factor machines for a good part of a year if they could get a dual-core chip into that machine. It looks like it will take Intel that long to get a dual-core Centrino out.

    Quote:





    The "32 bit market" consists of 99.999% of the Mac's existing application base. I don't see any rush to 64 bits in the development community. When hardware support for pure 64 bit code is useful, it's indispensable. But in the common case, even for many professional applications, it's not.



    For many applications it is not, that's true. From the system standpoint, though, having access to all that memory can be very useful.



    As to applications in general, I suspect we will see a split where some simply never get ported to the 64-bit IA. On the other hand, there will soon be (or already is) a lot of new development going on with respect to 64-bit PPC that may get ported back to the Mac. The reality of all gaming platforms moving to 64 bit should not be underestimated. The balance of new applications coming out for 64-bit systems could shift dramatically toward 64 bit.



    In the end, what I see is 32-bit HARDWARE being more of a problem for Apple to support than it's worth, especially if 64-bit hardware ends up being as cheap as or cheaper than the 32-bit hardware. With PPC, software is not an issue, since it runs transparently on 64-bit hardware.

    Quote:



    I expect IBM to lead the charge to 64 bit simply because they offer hardware that's high-end and specialized enough that support for pure 64 bit code is not an option. This certainly won't hurt Apple at all, as long as the chips they're interested in continue to support hybrid 32/64 bit support, as the 970 does. But speaking as a developer, I see vanishingly little pressure on Apple to go all 64 bit in the near term (i.e., the next few years). They might, but if so it'll be a result of the hardware support anticipating demand, not the other way round.



    Actually, I never thought of it as IBM leading the charge; rather, AMD has really had an impact with respect to 64-bit hardware at the user level. AMD has an advantage on that architecture because the extensions provide performance increases unrelated to the fact that the processor is now 64 bit. On the PPC side, 32-bit software for the most part runs as fast as its 64-bit variant. PPC really only takes advantage of 64 bit in large-memory systems and at the system level, especially in the case of OS X at the moment.



    An all-64-bit move by Apple in the near future presupposes 32/64-bit support in OS X for some time, if not forever. The pressure on Apple will likely only come from certain developers who can realize a performance increase over their 32-bit applications, and from the users who can really benefit from a 64-bit environment; the latter group is likely to be much more significant than the developer demand.



    A great deal of the user demand is likely to come from the almost unlimited need for more RAM; maybe not on a per-application basis, but certainly for the overall system. In any event, Apple's current 32/64-bit OS X is an advantage to many users of 32-bit applications in that they now have access to a larger 32-bit space. Which brings us back to the idea that 64 bit is a system enhancement.



    I really do wonder if Apple has made projections on how long they expect to be selling 32-bit hardware. I suspect that if we see them using things like the 7448 in the Mini instead of a more highly integrated e600, then we will know the expectation is that 32 bit won't be around long.



    Dave
  • Reply 27 of 54
    wizard69wizard69 Posts: 13,377member
    Well, with the Mini I think it would almost be an imperative, especially if the unit is selling as well as it appears to be. The issue is simply that Apple needs to acquire market share, and hardware capable of multiprocessing will be significant from the marketing standpoint very soon. An SMP Mini is a way for Apple to stay ahead of the curve here. Of course, they could do this with 'a' highly integrated e600 running at a much higher clock also. The 'a' is in quotes with the idea that an e600 is coming that would be better suited to Apple's needs than what we currently know about.



    I still see the PowerBooks going 64 bit as soon as Apple can get working hardware. There really wouldn't be a comparison performance-wise, and I really don't think Apple gets that wrapped up in these discussions about the speeds of the portables impacting each other. It is far easier to distinguish the machines by other features. Interestingly enough, the processor for this PowerBook is likely to be something only rumored about also. My guess is something PPE-derived, but at this point there is far too little information to even worry about it.



    Either way the future looks bright for Apple fans.



    Dave





    Quote:

    Originally posted by ChevalierMalFet

    I don't see this hitting Minis and iBooks until there is something faster in the PowerBooks. Say maybe a year? Apple is going to want any "new" chip in a pro machine, with its higher margins, first anyway, if they can help it.



  • Reply 28 of 54
    wizard69wizard69 Posts: 13,377member
    Quote:

    Originally posted by Programmer

    A dual-core 7448 might actually perform quite a bit better than 2 7455s on a shared MPX bus.



    If they are able to run the internal MPX bus much faster than the external one, that would be nice also. The key to success here is certainly in the way the interface between the three devices (core 1, core 2, and the bus interface unit) is handled.

    Quote:



    The L2 cache is purported to be 1MB per core, and the on-chip bus would handle most of the snooping traffic. The MPX bus on the motherboard would no longer have to handle 2 processors and therefore might have some opportunities for optimization (i.e. at least 200 MHz, if not more).



    There seems to be enough information floating about to indicate that 200MHz is doable. Still not as much as I'd like to see for modern hardware, but certainly a quantifiable increase that would impact the system.

    Quote:

    It has to handle the load from 2 G4s, but at least that load will be better coordinated through the same bus interface and none of the inter-core traffic will be there. If they achieve a decent clock rate this thing could actually be reasonably fast, especially on programs with relatively small working sets and lots of branchy integer code.



    The fear I would have is that any clock-rate increase in the core would quickly saturate the gains made in the FSB. This would probably do well up to about 2.2 GHz, taking into account the much larger caches and general-purpose workloads.

    Quote:

    Wizard was right about it being a win for the mini, at least.



    Hopefully Apple sees the Mini as an important part of their business going forward and understands the need for SMP hardware at that level of equipment.

    Quote:

    The 3rd party upgrade companies will love it as well. Maybe I'd upgrade by dual 1 GHz G4 to a quad 2 GHz G4.



    It is interesting that when the 7448 was announced there did not seem to be much interest in the possibility of a dual-core unit, even though it was obviously possible technically given the other e600 announcements. I guess it is a question of engineering value; I'd like to think that a 7448D makes sense, but the rest of the e600 product line has me thinking otherwise. The reality is that Apple could build a Mini for next year with PCI Express video on an e600 platform. That seems compelling right there, so it would appear that the lifespan of a 7448D would be tied to upgrades and usage outside of Apple.



    Dave
  • Reply 29 of 54
    amorphamorph Posts: 7,112member
    Quote:

    Originally posted by wizard69

    If they are able to run the internal MPX bus much faster than the external one, that would be nice also. The key to success here is certainly in the way the interface between the three devices (core 1, core 2, and the bus interface unit) is handled.



    MaxBus wouldn't be on board in this design. The onboard MaxBus connects the cores to an onboard memory controller, which this (speculative) design doesn't have.



    Programmer's claim is based on the assumption? hope? that Freescale moved the inter-core communications (snoop/snarf, etc.) on die, and it's only using MaxBus to talk to the northbridge. If they didn't, then this design will still perform better (because of the bigger cache on die) and cost less.



    Freescale's main problem as I see it isn't so much ambition as it is talent. Motorola management treated its senior CPU engineers poorly enough that they all jumped ship and went to Intel right around the release of the 7450. Motorola, and now Freescale, is limping along with the remaining crew. They seem to have gotten comfortable and experienced with this core by now, and if Freescale is able to reassert itself as both competitive and a good place to work then they'll be able to start attracting talent again. (The odds that they'll ever get their old team back are essentially negligible: Intel is a famously good company to work for.)



    Quote:

    The fear I would have is that any clock-rate increase in the core would quickly saturate the gains made in the FSB. This would probably do well up to about 2.2 GHz, taking into account the much larger caches and general-purpose workloads.



    I've been saying this for a while now: The main reason Freescale has been slow with clock speed updates is that there's hardly any point. The CPU/bus clock ratio is already right near the maximum for balanced performance.



    Once MaxBus goes on die, I think we'll be surprised at how well the e600 core scales. It will never threaten the P4 or Cell, but it should be able to acquit itself nicely. Freescale has spent a lot of time hand-tooling it to run very efficiently.



    Quote:

    Hopefully Apple sees the Mini as an important part of their business going forward and understands the need for SMP hardware at that level of equipment.



    Speaking as one of the more vocal "headless Mac" naysayers, pre-MacWorld: If they don't, I will personally fly out to Cupertino and smack Jobs around until he does understand it. I haven't seen this level of interest in a Mac since... well, I don't know when. Even the iMac was mostly seen as energizing the core base and reviving the company.



    Quote:

    The reality is that Apple could build a Mini for next year with PCI-Express Video on an e600 platform. That seems compelling right there, so it would appear that the life span of a 7448D would be tied to upgrades and usage outside of Apple.



    On the other hand, depending on the exact implementation, the cost of dropping a dual-core 7448 into an existing G4 motherboard might be low enough to make it worthwhile even for only one or two upgrade cycles. That would also give Apple time to really nail the next-generation motherboard design, assuming that they use Freescale's 86xx CPUs.



    I don't see why they wouldn't, actually, given that Freescale's performance/watt numbers look pretty damn good right now.
  • Reply 30 of 54
    shawkshawk Posts: 116member
    Another potentially interesting application might be for HDTV.

    Say, on demand from a satellite.

    Say, using H.264.



    Maybe in a Mac Mini case.

    Say with a dual DVI and 128 meg graphics card.



    Say, whatever happened to the rumored 44" LCD with fast pixel switching?

    Or, for that matter, the 60" LCD?

    Oh yeah, wasn't there some Apple HDTV projector rumored?

    And whatever happened to that small handheld controller that was mistaken for an iPhone?



    Not that I know anything.
  • Reply 31 of 54
    matsumatsu Posts: 6,558member
    Quote:

    Originally posted by Amorph





    Speaking as one of the more vocal "headless Mac" naysayers, pre-MacWorld: If they don't, I will personally fly out to Cupertino and smack Jobs around until he does understand it. I haven't seen this level of interest in a Mac since... well, I don't know when. Even the iMac was mostly seen as energizing the core base and reviving the company.





    Interesting times indeed. I know schools that are looking at the Mini for labs, even though the eMac was supposed to be the product for that space. Lots of people I know are looking at it to replace aging towers -- believe it or not, there are still schools out there soldiering along with beige towers and B&W G3s!



    In any case, the main problem with Apple's AIOs is that they cost too much, and when you actually look at it, Minis cost too much too -- they just create a better immediate impression, $499 vs. $1K+.



    If the Mini does really well, we might see the demise of the iMac AIO, replaced by a more functional headless/cube/mini: one with a G5, better GPUs, two RAM slots, and a desktop HDD that you can actually get at!
  • Reply 32 of 54
    wmfwmf Posts: 1,164member
    Quote:

    Originally posted by shawk

    Another potentially interesting application might be for HDTV.

    Say, on demand from a satellite.

    Say, using H.264.



    Maybe in a Mac Mini case.

    Say with a dual DVI and 128 meg graphics card.




    That's total overkill. An HDTV set top box is more likely to use a chip like the Sigma Designs SMP8630, which can do everything on one chip. (Of course, you can't run OS X on it.)
  • Reply 33 of 54
    webmailwebmail Posts: 639member
    Your "friend" is talking out of his ass.





    Quote:

    Originally posted by Smircle

    Absolutely, and I won't be able to provide any further details, not least due to Apple's legal stormtroopers forcing rumor sites to hand over logfiles and the like. I am by no means sure this has any meaning for future Apple hardware - after all, the guy might be talking out of his ass, the 7448D might be an abortive project for some technical reason, thermal characteristics might prove inadequate for a notebook computer, Apple might go with IBM, etc.



    I wouldn't have bothered posting but for my gut feeling that Apple would love to take the wind out of Intel's dual-core plans by introducing the first dual-core notebook computer in early summer. And having a drop-in replacement for the current single-core chips (some minor modifications to the RAM controller and cooling might be necessary, but much less than for an 8641D or "Antares" 970GX) would surely make the decision easier. If the chip is what my mate claims, Apple could go with the 7448 for the 12" PowerBook (power/heat concerns) and the 7448D for the 15" and 17" with one mainboard design.



    Anyhow, I sure hope someone with connections into Freescale and/or Apple takes the hint and pumps his sources for confirmation.




  • Reply 34 of 54
    programmerprogrammer Posts: 3,457member
    Quote:

    Originally posted by Amorph

    Programmer's claim is based on the assumption? hope? that Freescale moved the inter-core communications (snoop/snarf, etc.) on die, and it's only using MaxBus to talk to the northbridge. If they didn't, then this design will still perform better (because of the bigger cache on die) and cost less.



    Well since this hypothetical chip was described as pin compatible and dual core, that means there is only one MPX bus connection coming off of the chip. To be able to do that they'd pretty much have to share one set of external bus drivers between two cores, and that would pretty much necessitate some form of on-chip arbitration. I'd be astonished if they got that far and didn't allow direct core-to-core communications.



    On the other hand, I don't actually believe that the thing exists, so it's all moot anyhow.
  • Reply 35 of 54
    amorphamorph Posts: 7,112member
    Quote:

    Originally posted by Programmer

    Well since this hypothetical chip was described as pin compatible and dual core, that means there is only one MPX bus connection coming off of the chip. To be able to do that they'd pretty much have to share one set of external bus drivers between two cores, and that would pretty much necessitate some form of on-chip arbitration. I'd be astonished if they got that far and didn't allow direct core-to-core communications.



    I can't answer this substantively without straying even farther into uncharted (for me) waters, but I don't believe any such measure was necessary for two single-core G4s on a single MaxBus. MaxBus is all set up to handle low-latency synchronization between up to 8 (single-core) CPUs.



    I figured that the simplest thing to do is conventional, MaxBus-enabled SMP between two cores that happen to share the same die. There's not much point wasting a lot of engineering on this design if it exists, given that MaxBus is a dead end.



    Quote:

    On the other hand, I don't actually believe that the thing exists, so it's all moot anyhow.



    I'm not convinced that it does either, but it's an interesting speculative exercise.
  • Reply 36 of 54
    programmerprogrammer Posts: 3,457member
    Quote:

    Originally posted by Amorph

    I can't answer this substantively without straying even farther into uncharted (for me) waters, but I don't believe any such measure was necessary for two single-core G4s on a single MaxBus. MaxBus is all set up to handle low-latency synchronization between up to 8 (single-core) CPUs.



    I figured that the simplest thing to do is conventional, MaxBus-enabled SMP between two cores that happen to share the same die. There's not much point wasting a lot of engineering on this design if it exists, given that MaxBus is a dead end.




    You're right about MPX supporting multiple processors... but things get a little tricky when you put two of those processors on one chip running at 10x the bus clock and expect them to share the pins connecting them to the motherboard bus. The part of the core that drives those pins has to be shared, so now you have 2 cores connected to a functional unit driving the external pins. That bus interface unit can run at the external bus's clock rate, with each core having the logic to send at 10% of its clock, or you can run the BIU at the core's rate and have it deal with the clock difference. Solving it in one place seems easier, so now you have an on-chip MPX bus (presumably) running at the chip's full clock rate, and one of the three things on this bus is a new device that you have to design mostly from scratch anyhow (an MPX-to-MPX bridge). If you're designing such a thing and want to keep it simple, what better way to do that than to leave off the support for forwarding all the snoop/snarf traffic onto the external bus? Now you have a dual-core chip which doesn't share any of its core-to-core traffic with the outside world, and you saved yourself a bunch of work because it was easier to design.



    See?
  • Reply 37 of 54
    thttht Posts: 5,421member
    You guys should note that the 8641D does not have the MPX bus running at the e600 core clock. Rather, it runs at "up to 667 MHz" and has an "MPX Coherency Module" to bridge the e600 cores to the processor's on-chip I/O bus, or whatever the bus that connects all of the I/O together is called.



    If there is a 7448D, the easiest solution is probably to take the 8641D and drop all of the SoC features (on-chip memory controller, PCIe, Ethernet, et al.) except for the 2 e600 cores and the MPX coherency module. Said MPX coherency module would presumably bridge the external <200 MHz MPX bus to whatever the internal MPX bus clock is. Considering the "up to 667 MHz" in the 8641D, a hypothetical 7448D's internal MPX bus isn't going to be higher than 667 MHz.
  • Reply 38 of 54
    mattyjmattyj Posts: 898member
    That would be damn nice in a laptop, that's all I can say.
  • Reply 39 of 54
    1337_5l4xx0r1337_5l4xx0r Posts: 1,558member
    The fact that people are getting excited about a chip that may have up to a 200 MHz bus is pretty sad, IMHO.



    A 200 MHz bus, divided by two (for two cores), minus whatever overhead is associated with sending data to the appropriate cores.



    That's F%$king sad.



    I'm not sure what sort of performance you're all expecting from a 2 GHz chip on an effective 100 MHz bus (remember, that's hypothetical; it may be a 167 MHz bus divided by two cores!).



    Five years on, and G4s are still bandwidth starved. Ridiculous.
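    The starvation complaint can be quantified as bytes of external bus traffic available per core clock cycle. A quick sketch using the hypothetical numbers from this thread (64-bit MPX data path and the 2 GHz core clock are assumptions, not specs):

```python
# Quantifying "bandwidth starved": peak external-bus bytes available per
# core clock cycle. Core and bus clocks are this thread's hypotheticals;
# the 8-byte width is the 64-bit MPX data path.

def bytes_per_core_cycle(core_ghz: float, bus_mhz: float,
                         n_cores: int = 2, bus_bytes: int = 8) -> float:
    """Peak bus bytes per core cycle when n_cores share one external bus."""
    per_core_bytes_per_sec = bus_mhz * 1e6 * bus_bytes / n_cores
    return per_core_bytes_per_sec / (core_ghz * 1e9)

for bus_mhz in (167, 200):
    bpc = bytes_per_core_cycle(2.0, bus_mhz)
    print(f"2 GHz core, {bus_mhz} MHz shared bus: {bpc:.3f} bytes/cycle per core")
```

    Well under half a byte per cycle per core means any miss-heavy workload stalls on the bus, which is exactly the poster's point.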
  • Reply 40 of 54
    wmfwmf Posts: 1,164member
    Quote:

    Originally posted by 1337_5L4Xx0R

    I'm not sure what sort of performance you're all expecting from a 2 GHz chip on an effective 100 MHz bus...



    We're expecting anything better than the current PowerBooks. If the alternative is being stuck at 1.67GHz for 18 months, I'll take a starved processor any day.