Finally an interesting G5 story


Comments

  • Reply 101 of 440
    [quote]Originally posted by blue2kdave:

    In his words, it "will get the Internet back to what it was originally intended to be."[/quote]

    What, a way to provide the military with redundant command and control infrastructures?
  • Reply 102 of 440
    Back on topic to the G5 (or G4)!

    The G5 is scrapped, the G4 7457 has poor yields above 1.3 GHz, and Motorola has problems migrating to the 130 nm process. This is the same Motorola that, together with Philips, will supposedly become the first to manufacture at 90 nm, according to themselves.



    The via dolorosa that started in the autumn of 1999 will hopefully end in the autumn of 2003.



    If by divine intervention Motorola is blessed with the ability to actually manufacture what they design (go forth and multiply, this time in silico instead of carbon), we will have G4s running at 2 GHz in January; then the 970 will slowly be adopted in the servers and high-end towers.



    However, if the G4 progresses along the last three years' track record of a speed increase from 450 to 1250 MHz, that is a 40% increase per year, or about 3% a month.

    The 2 GHz barrier would then instead be broken in April 2004: so January at 1.35 GHz and 1.6 GHz in the summer.
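
    A quick sanity check of that extrapolation (a back-of-the-envelope sketch; the 450-to-1250 MHz history and the starting date are taken from the reasoning above):

    [code]
    from math import log

    # Historical ramp claimed above: 450 MHz to 1250 MHz over three years.
    start_mhz, end_mhz, years = 450, 1250, 3
    annual = (end_mhz / start_mhz) ** (1 / years) - 1
    print(f"annual growth: {annual:.0%}")  # ~40%

    # Months from 1250 MHz to 2 GHz at that compound rate.
    months = log(2000 / 1250) / log((1 + annual) ** (1 / 12))
    print(f"months to 2 GHz: {months:.0f}")  # ~17, i.e. Nov 2002 -> Apr 2004
    [/code]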
  • Reply 103 of 440
    algol (Posts: 833, member)
    Some people have been talking about Apple licensing their server version of Mac OS X 10.2 to IBM. After thinking about this for a while, I have decided that this strategy would bring great success for Apple. I also agree with those who say Apple should adopt the 970 in as many Macs as possible, so as to secure additional sales for their partnership with IBM.



    If the people who claim Apple received samples of the 970 as early as 2000 are correct, I doubt we would have to wait until 2Q 2003 for the 970. I would not doubt a release of 970 Power Macs at MWSF at all.





    Also, I found this at <a href="http://www.spymac.com" target="_blank">www.spymac.com</a>. It's just part of the article; you can go to the site to read the whole thing if you want.



    "Due to massive resources being put towards R&D along with the economic slump, [Apple] has not been able to give raises to deserving employees for the last 2 years," writes our source after hearing about the announcement. "They're struggling to maintain their innovation without having to cut jobs."



    Jobs attempts to remain positive throughout the disappointing memo, and states his enthusiasm for the upcoming year.



    "[2003 will] have the best new products I've ever seen," concludes Jobs.



    I thought this lends credence to the rumors of a 970 early in the year.
  • Reply 104 of 440
    [quote]Originally posted by JCG:

    ArchAngel, I agree with a lot of what you say. I personally think that Apple should license their OS to IBM for PowerPC corporate sales. This would include servers and desktops, but not 3rd-party PowerPC computers. There is a little overlap in product between IBM and Apple, but they could work around and with this through design.[/quote]



    I like your thoughts along the lines of Apple and IBM collaborating more than simply buying and selling chips. But I think that to really understand what has to be done, we have to clarify a few things.



    When we talk about corporate computer sales, we have to qualify the markets. Think about it: the average corporation has a ton of workers writing memos and sending email all day, with an occasional spreadsheet or so. Dell makes perfect sense for these types of shops because it is cost-effective. A 1.5 GHz PIII and a decent amount of memory looks great to the VP of Finance, and honestly, that is all such a worker needs. The Velocity Engine and SMP are wasted on such a worker.



    Now consider the publishing, genetics, chemical engineering, and video markets; there the Velocity Engine/VMX and Unix begin to look good. Apple's big pitch for the 970 is really for the corporation or family that has hybrid needs.



    Apple could, in my opinion, run a campaign on the mere fact that while the Mac may cost more than any one Wintel computer, it might not cost more than the three computers and monitors it's being used to replace. Okay, it might =).



    Many corporations are like ant or bee hives, but they don't know it. Get all the drones Dells. Then the worker who needs to do everything the drones do and interact with the drones, but also handle video, number crunching, gene splicing, or coding, needs OS X, because it meets a multitude of needs and plays well in a Wintel environment. This is one reason many serious programmers are getting Mac gear: they can summon a terminal window and engage Pico or GCC, then end the session and sign out, and their spouse or children are in OS X getting recipes, doing homework, or watching DVDs (which can be hell in Linux). Then, when everyone has gone to bed, they can go back and code in Java, C, or Cocoa, or write a Perl or some other type of script, and stop for a minute to read a document in MS Word, no less.



    That is a hell of a lot of integration. While not everyone needs a Mac, with the 970 a Mac can be nearly all things to all people.



    As for IBM, Linux on top of the 970 makes sense for the complete opposite end of the spectrum. The person or corporation that does not need a serious GUI, or MS Word, PowerPoint, or DVD playback (the easy way, mind you), or that just really, really likes terminal windows, is going to be perfectly at home on IBM's 970 platform. In my opinion, Unix never made sense on Intel hardware anyway; Unix on MIPS or other RISC has always been the way of things. Thus, while Linux has done wonderfully on low-spec Intel hardware, it is capable of far more, but it needs some serious iron to accomplish the truly heavy lifting it can do. That's why the 970 from IBM is the perfect solution. I'm assuming, due to IBM's link to Apple, that the 970 is going to be comfortably affordable. Of course, that is one tall assumption.



    In clarifying the markets, it has to be accepted that Wintel really makes sense for one end of the buying spectrum (low-spec, cheap hardware; a solely graphics-driven OS for mundane, routine, non-processor-intensive tasks; no true code-development capability). Macintosh can comfortably take the middle (higher-spec hardware; Unix [a serious terminal window] underneath; true coding can be done; OS X [a serious GUI] on top, for routine as well as seriously processor-intensive tasks). And then you have Linux on the 970 by IBM, for corporations that need cheaper server alternatives to the Power4 and other big-iron solutions without the restrictions of Windows, or for the hermit/He-Man who only uses his own code and would rather spend more time writing his own drivers and programs (to get his DVD player working with the OS so he can watch the freaking DVDs) than doing the things said code is being written for.



    Obviously, the overlap or middle ground that Apple has created is brilliant. The Mac-on-970 combo is compelling because the Mac can now play the middle well, but also function in hived environments and do it well.



    In the markets as I see them, Apple and IBM would not be competing head-on. What they both can do is establish a clear message about the capabilities of the 970, 980, and whatever follows, and build a serious brand behind that chip family. Honestly, I think it would be in their best interest to co-market each other's products and send each other customers based on needs. For instance, IBM and Apple could develop a product grid that encompasses all of the 9XX family of products from both companies and then train their sales staff so that regardless of which company the client initially approaches, they recommend products based on needs as identified by the grid.



    This ends up looking like this: a client approaches IBM needing several low-end servers and feels the 9XX family is what they need. IBM could sell the servers and one or two high-spec Macs, because the client could use the Macs to administer the headless servers while sending email, creating docs and flow charts in MS Office, and interacting with the rest of the company. Assuming that Apple and Oracle get a decent client running on the Mac, the same person could even interface with the company purchasing system, all from the same Macintosh.



    As for the Xserve: if the client is not really tech-savvy and doesn't have time to learn or train on Linux, but wants servers up and running, they get the Xserve. The same goes for Apple: if they have a seriously tech-astute customer who knows Linux and needs some Macs and some servers, sell them the Macs (for office productivity and server administration), but sell them the IBM 9XX servers.



    I could be completely out to lunch on this, but that could work.



    [ 11-27-2002: Message edited by: ArkAngel ]
  • Reply 105 of 440
    zaz (Posts: 177, member)
    Well, here is a bit of news I thought was interesting:



    [quote]Looking at Intel's CPU roadmap, it's interesting to note that Intel will not be scaling clock speeds next year as rapidly as they did in 2002. This makes a lot of sense considering how hot the new 3.06GHz Pentium 4 is running. Intel's current roadmap shows them not breaking the 3.06GHz barrier until the second quarter with the 3.2GHz Pentium 4. The last two quarters of 2003 are currently listed as > 3.20GHz and > 3.40GHz, mostly because Intel isn't exactly sure how high they can push their 0.13-micron Northwood cores.



    The relatively slow (compared to this year) clock speed ramp next year gives AMD a chance to regain some of their lost performance ground. The slow clockspeed ramp also puts pressure on Intel to introduce their 90nm Prescott core as soon as possible, however currently it is scheduled for a Q4 2003 release.



    According to Intel's latest roadmap (current as of last Friday), Prescott will debut at least at 3.20GHz and will be made available with a 1MB L2 cache.



    In order to make up for a lack of clock speed improvements, Intel will be introducing the 800MHz FSB (200MHz quad-pumped) on the 0.13-micron Northwood Pentium 4 processors before Prescott's release. The release will happen in Q2 2003 and instead of offering higher speed CPUs, Intel will go back and offer 800MHz FSB versions of CPUs as slow as 2.4GHz. These new 800MHz CPUs will also have Hyper-Threading support, which should make them very attractive purchases. Prescott will obviously support the 800MHz FSB as well.[/quote]



    Taken from an Anandtech.com article from Comdex 2002, <a href="http://www.anandtech.com/showdoc.html?i=1752" target="_blank">here</a>.



    If our hero the 970 kicks in and does as well as it should, Apple/IBM may not have that long of a road to hoe.



    A sudden neck-and-neck performance race would be very good for Apple. Coupled with X, its potential speed benefits, and the 'other significant announcements in 2003' we keep being teased with by Apple, it may really be "the most important year in Apple's history."



    AMD, while somewhat important in the PC universe, is actually wholly irrelevant to Apple. Apple is compared with, and more closely priced against, Intel. Intel is their perceived 'performance nemesis'. If that is where they are judged, that is where they need to market.



    Anyhow, Intel parked at 3.06 GHz until Q2 2003, and only improving to 3.4 GHz by the end of 2003, is better than the 4.8 GHz horror show we worry about.



    {edit in fear of the Grammar Gestapo}



    [ 11-27-2002: Message edited by: zaz ]
  • Reply 106 of 440
    kidred (Posts: 2,402, member)
    Yeah, but I honestly couldn't care less if Intel were at 5 GHz right now, mainly because we've reached a threshold, at least for the majority of users, who are close to being completely satisfied with speed. There are those who need speed, and will always need speed, but for the most part we are fine with the path we are currently on. I have a dual gig and work intensively in Photoshop and Dreamweaver; my old G4 450 was dragging along, but now I can go quite a while with the speed I have, and I think most others with newer 1 GHz+ machines can as well.



    'Course, I'm still gonna get the 970 just for latest-tech kicks
  • Reply 107 of 440
    I haven't read ALL the posts, so I'm not sure if anyone has said this yet, but I REALLY don't think this set of rumours has any merit.



    It seems to me that it 'connects the dots' or 'fills in the blanks' for a bunch of rumours we've already heard (like the G5 being cancelled at Motorola) and attempts to explain why. The rest of the stuff about future processor speeds, Mac on Intel, a 2U server, etc. is all completely plausible speculation, but doesn't seem to be rooted in anything more than an attempt to see how far a hoax can go.



    I agree with other skeptics here that it's just far too much knowledge for one person to know, and frankly, it just ties up some hanging questions (i.e., where the hell is the Moto G5?) a little too conveniently. The original author of this rumour might make it more palatable in the future by releasing this bunk in stages so our BS alerts are taken by surprise!



    Anyway, I think it's all bunk.



    tM
  • Reply 108 of 440
    Outsider (Posts: 6,008, member)
    I wonder how soon IBM will have a PPC 975 or 970FX that bumps the L2 to a full 1MB after the initial release of the 970.
  • Reply 109 of 440
    [quote]Originally posted by Outsider:

    I wonder how soon IBM will have a PPC 975 or 970FX that bumps the L2 to a full 1MB after the initial release of the 970.[/quote]

    Oh, prolly never. Large caches are poor substitutes for high bandwidth. The 970 is designed around a fast FSB, so spending large amounts of die real estate on cache offers less benefit than, say, a second core.
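
    One way to frame that tradeoff is the textbook average-memory-access-time formula; the numbers below are illustrative assumptions, not 970 measurements:

    [code]
    # AMAT = hit time + miss rate * miss penalty (textbook formula).
    # All numbers are made-up assumptions for illustration only.
    def amat(hit_cycles, miss_rate, miss_penalty_cycles):
        return hit_cycles + miss_rate * miss_penalty_cycles

    # A bigger cache shaves the miss rate a little...
    bigger_cache = amat(hit_cycles=8, miss_rate=0.02, miss_penalty_cycles=300)
    # ...while a faster bus attacks the miss penalty directly.
    faster_bus = amat(hit_cycles=8, miss_rate=0.03, miss_penalty_cycles=150)

    print(f"bigger cache: {bigger_cache:.1f} cycles")  # 14.0
    print(f"faster bus:   {faster_bus:.1f} cycles")    # 12.5
    [/code]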
  • Reply 110 of 440
    Just add more CPUs, as I have said in my sig since 1999.
  • Reply 111 of 440
    ...not to belabor it...but you Montana city slickers don't know nothing 'bout farmin' 'parently...



    [quote]If our hero the 970 kicks in and does as well as it should, Apple/IBM may not have that long of a road to hoe.[/quote]



    ....I bleeve round here, we do all our hoeing on rows.



  • Reply 112 of 440
    stoo (Posts: 1,490, member)
    Is there a "Pentium 5" in the works? (next generation IA32 CPU)
  • Reply 113 of 440
    [quote]Originally posted by Tomb of the Unknown:

    Oh, prolly never. Large caches are poor substitutes for high bandwidth. The 970 is designed around a fast FSB, so spending large amounts of die real estate on cache offers less benefit than, say, a second core.[/quote]



    Total agreement. I wouldn't be surprised to see a second core when they go to 90 nm.
  • Reply 114 of 440
    snoopy (Posts: 1,901, member)
    [quote]Originally posted by MacJedai:

    Total agreement. I wouldn't be surprised to see a second core when they go to 90 nm.[/quote]



    I've been wondering about the wisdom of multiple cores. Maybe this was already covered by someone who knows a great deal about processors, but here are my concerns.



    1) It might get sticky trying to share one SIMD engine, so do we need two of these also on one chip?



    2) Two cores will take almost twice the power of one. Sure, if you have dual processor chips it takes twice the power, but you have two separate processor chips and packages to dissipate the heat, so it is not difficult to cool each chip. You have to move more air, but chip temperature is not as high as it would be with two cores on one chip. The other alternative is a much larger, better heat-dissipating package. A package like this increases cost, which is the next concern.



    3) Chip cost is higher: since the die is larger, fewer can be made on a wafer, and yields will be lower.



    I wonder whether it might not be wise to stick with single cores and just use more of them. This avoids the cost-increasing factors given in 3 above, plus the price goes down with the number purchased. It also makes heat management easier. All in all, there may be nothing to be gained from two cores. Even performance may not differ much for chips with the high-speed processor bus of the 970.
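
    Concern 3 can be made concrete with the classic Poisson yield model; the defect density and die areas below are illustrative assumptions, not real 970 figures:

    [code]
    import math

    # Yield ~ exp(-defect_density * die_area): bigger dice yield worse.
    defect_density = 0.5            # defects per cm^2 (assumed)
    wafer_area = math.pi * 15 ** 2  # 300 mm wafer, in cm^2 (ignores edge loss)

    for name, die_area in [("single core", 1.2), ("dual core", 2.0)]:
        dies = wafer_area / die_area
        yield_fraction = math.exp(-defect_density * die_area)
        print(f"{name}: ~{dies * yield_fraction:.0f} good dies/wafer "
              f"(yield {yield_fraction:.0%})")
    [/code]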
  • Reply 115 of 440
    [quote]Originally posted by phishy:

    Marklar is even more of a going concern than ever. ... Likely it will be released in the event that Microsoft chooses to stop developing for the Mac platform altogether.[/quote]



    They should rename it "Hail Mary."
  • Reply 116 of 440
    [quote]Originally posted by snoopy:




    I've been wondering about the wisdom of multiple cores. Maybe this was already covered by someone who knows a great deal about processors, but here are my concerns.



    1) It might get sticky trying to share one SIMD engine, so do we need two of these also on one chip?



    2) Two cores will take almost twice the power of one. Sure, if you have dual processor chips it takes twice the power, but you have two separate processor chips and packages to dissipate the heat, so it is not difficult to cool each chip. You have to move more air, but chip temperature is not as high as it would be with two cores on one chip. The other alternative is a much larger, better heat-dissipating package. A package like this increases cost, which is the next concern.



    3) Chip cost is higher: since the die is larger, fewer can be made on a wafer, and yields will be lower.



    I wonder whether it might not be wise to stick with single cores and just use more of them. This avoids the cost-increasing factors given in 3 above, plus the price goes down with the number purchased. It also makes heat management easier. All in all, there may be nothing to be gained from two cores. Even performance may not differ much for chips with the high-speed processor bus of the 970.[/quote]



    1) Why share? Just put two full 970s on one die. The L2/L3 cache could be shared, and the bus interface is shared.

    2) As feature size goes down, transistor counts go up, and the chip designers need to figure out how to use this extra density. To some extent it makes sense to stay small, but there is a lower limit on the physical size of the die from a practicality point of view as well as a power dissipation perspective. Multiple cores leverage existing design work, keep the complexity down, allow asynchronous operation of different parts of the die, etc.

    3) This isn't true if adding cores is done when a die shrink happens -- they might be able to keep nearly the same die size but have two (or more) cores.



    Throwing transistors at a single core has diminishing returns. Processors seem to do pretty well with 50-60 million transistors, and if you have a multithreaded OS then I suspect you'd do better having four cores instead of one 200-million-transistor core. In a couple of years we're going to be looking at 1 billion transistors -- which will be 16-32 cores per die. If they are all multithreaded then one die might be able to support 32-64 simultaneous threads of execution. Pretty darn cool, and it's going to change the way software is built.
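
    To make "change the way software is built" concrete, here is a minimal sketch of the fan-out style that a 32-64 thread die would reward; the worker function and thread count are hypothetical:

    [code]
    from concurrent.futures import ThreadPoolExecutor

    # Carve one big job into slices and hand them to a pool of threads,
    # one per hardware thread the die exposes (32 assumed here).
    # (In CPython the GIL limits true parallelism for CPU-bound work;
    # this only sketches the structure of the fan-out.)
    def render_slice(slice_id):
        return sum(i * i for i in range(slice_id * 100_000,
                                        (slice_id + 1) * 100_000))

    with ThreadPoolExecutor(max_workers=32) as pool:
        results = list(pool.map(render_slice, range(64)))
    print(f"processed {len(results)} slices")
    [/code]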
  • Reply 117 of 440
    Telomar (Posts: 1,804, member)
    Keep in mind that every major processor architecture currently has a roadmap to multi-core functionality. AMD is planning it around 2004, when they move to a 90 nm process. Intel is looking at four-core Itaniums when it hits the 1-billion-transistor mark. Sun has plans for dual-core processors, and IBM's inclination towards multiple cores is pretty well known.



    As manufacturing processes have advanced, multiple cores have become more appealing than trying to push single cores ever upwards. Dual cores can in theory be cheaper than two separate processors, as there are parts of the chip that can be shared. It also simplifies motherboard design, so costs can be recouped in other areas.



    As for your questions, it would use one SIMD unit for each core.



    Heat would be within manageable limits, especially if it comes with an upgrade in the manufacturing process.



    Keep in mind that two PPC 970 chips are around 100 million transistors; a dual core would come in closer to the 80 million mark. The actual core doesn't account for everything on the chip, so from a size perspective you'd likely end up ahead.
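
    A quick check of that arithmetic (the ~52M transistors per 970 is an assumed figure based on published estimates of the era, and the shared-logic count is a pure guess for illustration):

    [code]
    single_chip = 52_000_000   # one complete 970: core, VMX, caches, bus interface (assumed)
    shared_logic = 24_000_000  # bus/cache logic a second core could reuse (guess)

    two_chips = 2 * single_chip                 # two discrete processors
    dual_core = 2 * single_chip - shared_logic  # second core shares common blocks

    print(f"two chips: {two_chips / 1e6:.0f}M transistors")  # ~104M
    print(f"dual core: {dual_core / 1e6:.0f}M transistors")  # ~80M
    [/code]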



    Where a problem does arise is when a single core fails, which leads to lower yields, but you can just sell those parts in lower-end products.



    At the end of the day, adding additional cores to a chip holds more value for money than trying to scale the clock frequency on to infinity. There's just a point where multiple cores offer better performance benefits.



    [ 11-28-2002: Message edited by: Telomar ]
  • Reply 118 of 440
    Good posts about the hardware side. But let us not forget the software side. When Be was around, everyone was impressed with their performance, particularly with multimedia. This was in large part due to its multithreading and MP support.



    As the idea of the digital hub becomes more ingrained in the computer market, PCs will be required to do more and more at the same time, from playing MP3s and video to word processing and rendering. This is going to make multithreading and multiprocessing more important than raw speed in the near future. At the same time, market forces are demanding less expensive computers.



    Now, as I see it, multi-core chips fit here better than single-core chips. They might cost a little more to produce and design. However, they require less space and fewer components on the motherboard in the end product (saving cost). Also, since the processors reside on the same chip, they communicate with each other more efficiently than if they were communicating through an external "junction."
  • Reply 119 of 440
    [quote]Originally posted by Programmer:

    If they are all multithreaded then one die might be able to support 32-64 simultaneous threads of execution. Pretty darn cool, and it's going to change the way software is built.[/quote]

    By this I presume you are referring to simultaneous multithreading or hyperthreading, right? How would that work with the 970, which groups instructions in an attempt to optimize pipeline throughput? Would a dual-core 970 use HT instead of the current group/branch-predict strategy? Or could you HT grouped instructions?



    (So many technologies, so little time...)
  • Reply 120 of 440
    Clive (Posts: 720, member)
    [quote]Originally posted by xype:

    Of course it might be that _you_ know better about him than I do. Yeah, that will be it.[/quote]



    Of course I do, Xype's mate, from Innsbruck.



    [ 11-28-2002: Message edited by: Clive ]