Apple to fire up Penryn-based Mac Pros


Comments

  • Reply 321 of 398
    Quote:
    Originally Posted by Lemon Bon Bon.


    That's incredible. Intel must be sandbagging on the Penryn. ... Shouldn't we really be at 4 gig by now anyhow? The design of the CPU is much better, and the process is much smaller and more efficient...



    ...so how come the speed grades are essentially the same as the last generation's?





    The high-clock-rate demo was no doubt a hand-picked chip, and it hasn't been running all that long. At 45nm and below, issues of longevity are going to become serious, and running at high power/heat levels will impact chip lifetime in a major way. I expect that Intel is getting some small yield of chips that run crazy fast, and they are probably sandbagging a bit because they want to maximize their yields (and thus profits). That will leave them in a position 6-12 months from now to ramp the clock rates, as long as they are confident that their next-generation processors won't force them to drop back down again. Doing this lets them introduce "Super ultra extreme extreme" versions at stupidly high prices (and margins).







    Whoops, sorry about the double post -- I don't even know how it happened.
  • Reply 322 of 398
    Quote:

    The high-clock-rate demo was no doubt a hand-picked chip, and it hasn't been running all that long. At 45nm and below, issues of longevity are going to become serious, and running at high power/heat levels will impact chip lifetime in a major way. I expect that Intel is getting some small yield of chips that run crazy fast, and they are probably sandbagging a bit because they want to maximize their yields (and thus profits). That will leave them in a position 6-12 months from now to ramp the clock rates, as long as they are confident that their next-generation processors won't force them to drop back down again. Doing this lets them introduce "Super ultra extreme extreme" versions at stupidly high prices (and margins).



    Hmm. Well, AMD are in no position to push them. The race-to-1-GHz days are over for AMD at the moment? Also, there is R&D to claw back. But...I can't see Intel getting to 3.4-4.0 GHz on Penryn when Nehalem is imminent? Why not just push those speeds when Nehalem launches?



    I suppose if Nehalem is launching in late 2008...then there's room for a Penryn bump to 3.4/3.6/3.8 in mid-2008.



    Nehalem sounds like the more exciting chip because of the memory controller stuff.



    I'll leave it to the sage-like Programmer to speculate on what kind of performance improvements we'll get over Penryn. But for the most part, Penryn, outside of the SSE4 instructions, is a mere incremental improvement over the current stuff. Cooler, I suppose.



    I'm not sure if I can wait. If Apple are going all-octo for the Penryn bump and you can get a 9900 card from Nvidia, what's Nehalem going to offer that's worth waiting a year for?



    I hear rumours of an 8-core version of Nehalem? Which means a dual octo? A 16-core render beast? Will we REALLY see that inside 2008?



    Lemon Bon Bon.
  • Reply 323 of 398
    Quote:

    Whoops, sorry about the double post -- I don't even know how it happened.



    Seeing as it is your 1st time, we'll let you off...



    Lemon Bon Bon.
  • Reply 324 of 398
    mjteix Posts: 563
    Quote:
    Originally Posted by Lemon Bon Bon.


    I hear rumours of an 8-core version of Nehalem? Which means a dual octo? A 16-core render beast? Will we REALLY see that inside 2008?



    Lemon Bon Bon.



    8-core CPUs will be for MP servers at launch.

    While some CPUs are expected in late 2008 (Bloomfield/Gainestown),

    others are expected in mid-2009 (Havendale/Lynnfield and Auburndale/Clarksfield)

    or in late 2009 (8-core Beckton).

    Expect servers with up to 64 cores (up to 128 threads) in 2010 (XServe "Ultra").







    For the Mac Pro, it will certainly use dual quad-core Gainestowns (16 threads; thread math sketched below) with DDR3 RAM and a QPI interface to the IOH.







    For the mobile/hybrid parts: high-end quad-core Clarksfield (45/55W) or low-end dual-core-plus-GPU Auburndale (35/45W), with an integrated memory controller, DDR3 RAM, PCIe Gen2, and a DMI interface to the PCH (platform controller hub, with USB, SATA, Ethernet, WiFi, etc.).



    While those TDPs seem higher than the current and Penryn ones, remember that there will be no more northbridge chip (most of it is integrated into the CPU).
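
    As a quick check on those thread counts, here's a minimal sketch (the socket counts and the 2-threads-per-core HyperThreading figure are my assumptions for typical configurations, not confirmed specs):

```python
# Hedged sketch: hardware-thread math for the systems discussed above.
# Assumptions (mine, not confirmed specs): 2 sockets in a Mac Pro,
# 8 sockets in a high-end MP server, 2 threads/core via HyperThreading.
def total_threads(sockets, cores_per_socket, threads_per_core=2):
    """Total hardware threads in a multi-socket system."""
    return sockets * cores_per_socket * threads_per_core

# Dual quad-core Gainestown Mac Pro: 2 x 4 x 2
print(total_threads(sockets=2, cores_per_socket=4))  # 16 threads

# Hypothetical 8-socket, 8-core Beckton server for 2010: 8 x 8 x 2
print(total_threads(sockets=8, cores_per_socket=8))  # 128 threads
```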



  • Reply 325 of 398
    Quote:
    Originally Posted by Lemon Bon Bon.


    But...I can't see Intel getting to 3.4-4.0 GHz on Penryn when Nehalem is imminent? Why not just push those speeds when Nehalem launches?



    It's going to be purely a function of how well they are yielding because, as you point out, it doesn't look like it'll be driven by competition from AMD. The mid-3 GHz range seems achievable, though.



    Quote:

    Nehalem sounds like the more exciting chip because of the memory controller stuff.



    The fact that it's a whole new microarchitecture is the exciting part. Everybody has been clamoring for on-chip memory controllers for ages, but really I'm not sure how big a win it is in terms of performance in desktop systems. It should be a significant cost and TDP win since there are fewer parts, and it should reduce the amount of L2 cache they are obliged to include... but in terms of real-world performance due to the on-die memory controller alone, I don't think it'll live up to the hype around the issue. The new chip interconnect will be a nice addition at the Mac Pro level -- finally a competitor to HyperTransport... but again, is it going to be a big win for most software? The return of HyperThreading is also interesting for software that needs lots of threads, but really... with 4+ cores already, are most users going to see a difference? Perhaps as software evolves over the next few years, but on the day these things ship, I doubt it.
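
    To put the "are most users going to see a difference?" question in rough numbers, here's a minimal Amdahl's-law sketch (my illustration, not anything from Intel or the posters here; the parallel fractions are made-up examples):

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the
# fraction of the workload that parallelizes and n is the core count.
def amdahl_speedup(parallel_fraction, cores):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# A typical desktop app with, say, half its work parallelizable barely
# gains anything from doubling 4 cores to 8:
print(round(amdahl_speedup(0.5, 4), 2))   # 1.6
print(round(amdahl_speedup(0.5, 8), 2))   # 1.78

# A heavily threaded renderer (95% parallel) keeps scaling:
print(round(amdahl_speedup(0.95, 4), 2))  # 3.48
print(round(amdahl_speedup(0.95, 8), 2))  # 5.93
```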



    The GPU core tightly coupled to the CPU may be the most interesting technology. Intel has come a long way with their integrated GPUs, and I would guess that this thing is the next generation after the current X3100 and it will benefit from high speed interconnects with the CPU and the CPU's on-chip memory controller. There is no doubt that all the high-end GPU fans will slag this mercilessly just like they do the X3100 and its predecessors... but the fact remains that for the iMac and MacBook this will be a big step up. Possibly a very big step up.



    Quote:

    What's Nehalem going to offer that's worth waiting a year for?



    Yeah, that's my point. With Penryn the transition to 45nm is a win, along with various architectural refinements... Nehalem is going to bring a bunch of new forward-looking stuff, but it's not clear how big an improvement will be delivered on day 1.





    Addendum: I just came across an Intel page discussing Nehalem, and the stuff it talks about is mostly of interest to Intel and PC makers. It is aimed at creating a chip family that scales from the low end to the high end, and from low power to high power. There is no discussion of specific performance enhancements or new technologies that should make a buyer wait for it. It looks like laptops and AIO machines will benefit the most.
  • Reply 326 of 398
    2009 and only one x16 2.0 slot plus up to 8 x1 slots, and no link for an Nvidia chipset on the desktop CPU? I hope you can use 4 of them as an x4 slot. Also, no dedicated video RAM, only a link?



    ATI is working on onboard video with its OWN RAM in an upcoming chipset.



    By then we may have AMD CPUs with built-in video plus a CrossFire link to a video card.



    As well as Nvidia chipsets with built-in video plus an SLI link to a video card.
  • Reply 327 of 398
    jeffdm Posts: 12,951
    Quote:
    Originally Posted by Joe_the_dragon


    2009 and only one x16 2.0 slot plus up to 8 x1 slots, and no link for an Nvidia chipset on the desktop CPU? I hope you can use 4 of them as an x4 slot. Also, no dedicated video RAM, only a link?



    ATI is working on onboard video with its OWN RAM in an upcoming chipset.



    By then we may have AMD CPUs with built-in video plus a CrossFire link to a video card.



    As well as Nvidia chipsets with built-in video plus an SLI link to a video card.



    Why? Do you use SLI? Even now, I get the feeling that SLI is one of those flagship things that not many people really use. I know you like to harp on lane counts, but by every practical calculation, it's really hard to use up the PCIe bandwidth that's in the Mac Pro. Even "x16" video cards barely use half of that; the benefit of more than x8 is very marginal.
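
    For anyone who wants the arithmetic behind that, here's a quick sketch of peak PCIe bandwidth per direction (standard per-lane rates for PCIe 1.x and 2.0; the "barely use half" figure above is JeffDM's characterization, not a measurement of mine):

```python
# PCIe 1.x signals at 2.5 GT/s with 8b/10b encoding, i.e. 250 MB/s per
# lane per direction; PCIe 2.0 doubles that to 500 MB/s per lane.
PER_LANE_MB_S = {"1.x": 250, "2.0": 500}

def slot_bandwidth_mb_s(gen, lanes):
    """Peak one-directional bandwidth of a PCIe slot, in MB/s."""
    return PER_LANE_MB_S[gen] * lanes

print(slot_bandwidth_mb_s("1.x", 16))  # 4000 MB/s -- Gen1 x16
print(slot_bandwidth_mb_s("1.x", 8))   # 2000 MB/s -- Gen1 x8
print(slot_bandwidth_mb_s("2.0", 16))  # 8000 MB/s -- Gen2 x16
# A card that "barely uses half" of Gen1 x16 fits in x8 with little loss.
```
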
  • Reply 328 of 398
    Quote:
    Originally Posted by Joe_the_dragon


    2009 and only one x16 2.0 slot plus up to 8 x1 slots, and no link for an Nvidia chipset on the desktop CPU? I hope you can use 4 of them as an x4 slot. Also, no dedicated video RAM, only a link?



    ATI is working on onboard video with its OWN RAM in an upcoming chipset.



    By then we may have AMD CPUs with built-in video plus a CrossFire link to a video card.



    As well as Nvidia chipsets with built-in video plus an SLI link to a video card.





    Are you only looking at the bottom image in mjteix's message? That's the low-end CPU and chipset. There's no need for dedicated memory, multiple x16 lanes, etc. And don't get all fired up over dedicated video memory; it has its disadvantages. A unified memory model is more flexible and avoids some of the bottlenecks around moving data to and from VRAM.
  • Reply 329 of 398
    onlooker Posts: 5,252
    Quote:
    Originally Posted by JeffDM


    Why? Do you use SLI? Even now, I get the feeling that SLI is one of those flagship things that not many people really use. I know you like to harp on lanes, but on every practical calculation, it's really hard to use up the PCIe bandwidth that's in the Mac Pro. Even the "x16" video cards barely use half of that, the benefit of more than x8 is very marginal.



    A lot of people use SLI, although you obviously can't under the Mac OS due to the lack of a driver for it. But you will definitely need an x16 lane with any new Nvidia card from the 8600 series and up, or a Quadro, or any X2 card (especially a Quadro); the GeForce 7950 GX2 is a prime example, and there will be manufacturers making GX2 cards for the 8800 series too. The 8800 Ultra will easily outpower an x8 lane. Actually, that's what you should be asking: what will outpower an x8 PCIe lane? Answer: many cards.
  • Reply 330 of 398
    onlooker Posts: 5,252
    And for those who do like to game, Nvidia just broke new ground, which in turn puts Apple two generations behind in graphics capabilities.



    Using three GPUs, 3-way NVIDIA SLI takes extreme gaming to a whole new level.

    Friday, 14 December 2007



    Extreme gaming just got a whole lot better. NVIDIA has extended its SLI technology, which enables the use of multiple graphics processing units (GPUs) in a single computer, allowing up to three GeForce graphics cards to be used in one machine.



    Now graphics-intensive titles such as Call of Duty 4, Company of Heroes: Opposing Fronts, Enemy Territory: Quake Wars, and Unreal Tournament 3 can be played at the highest resolution possible, with all the graphics settings cranked to the max and antialiasing applied for the first time. Also, for the serious production user, the rendering speed that SLI delivers is nothing less than brilliant.



    NVIDIA's new 3-way SLI delivers up to a 2.8x performance increase over a single-GPU system, giving high-end gamers 60 frames per second at resolutions as high as 2560x1600 with 8x antialiasing. 3-way SLI technology means there'll be no more dialing back the image-quality settings on the newest PC games. For example, gamers with 3-way SLI can play Call of Duty 4 at high resolutions such as 1920x1200 with all the advanced DirectX 10 effects, such as motion blur, ambient occlusion, and soft shadows, turned on. It's the full experience that the producers and artists wanted players to see.



    The heart of a 3-way SLI system is an NVIDIA nForce® 680 SLI MCP motherboard and three GeForce 8800 GTX or GeForce 8800 Ultra graphics cards. With 3-way SLI, gamers can harness the power of 384 stream processors, a 110+ gigatexel per second texture fill rate, and over two gigabytes of graphics memory for no-compromise gaming performance.



    3-way SLI gives gamers the flexibility to scale their graphics processing power with one, two, or three GeForce GPUs, depending on their desired price and system configuration. 3-way SLI systems are available from leading gaming PC system builders and the components needed to build your own 3-way SLI system are available from leading retailers.



    --------------------
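
    As a sanity check on the aggregate figures in that release, here's a small sketch (assuming roughly stock GeForce 8800 GTX per-card specs of 128 stream processors, 768 MB of VRAM, and about a 36.8-gigatexel/s fill rate; treat these per-card numbers as approximations):

```python
# Hedged arithmetic: how the 3-way SLI marketing numbers decompose
# into approximate per-card GeForce 8800 GTX specs.
cards = 3
stream_processors, vram_mb, fill_gigatexels = 128, 768, 36.8

print(cards * stream_processors)          # 384, as quoted
print(cards * vram_mb)                    # 2304 MB -- "over two gigabytes"
print(round(cards * fill_gigatexels, 1))  # 110.4 -- the "110+" figure

# The claimed 2.8x speedup over one GPU implies ~93% scaling efficiency:
print(round(2.8 / cards, 2))              # 0.93
```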



    And for the production artist, it looks like real-time rendering of shaders, textures, bump mapping, and lighting on an 11-million-poly model, on the fly, is getting closer at hand.
  • Reply 331 of 398
    jeffdm Posts: 12,951
    Quote:
    Originally Posted by onlooker


    A lot of people use SLI, although you obviously can't under the Mac OS due to the lack of a driver for it. But you can definitely saturate an x16 lane with any new Nvidia card from the 8600 series and up, or a Quadro, or any X2 card (especially a Quadro); the GeForce 7950 GX2 is a prime example, and there will be manufacturers making GX2 cards for the 8800 series too. The 8800 Ultra will easily outpower an x8 lane. Actually, that's what you should be asking: what will outpower an x8 PCIe lane? Answer: many cards.



    Then please give me some actual numbers. How many people use SLI? I've yet to meet anyone who does. Show me the tests of the speed difference between an x8 and an x16 slot with that card that suggest it can saturate x16. The only tests I've seen show a low-single-digit percentage difference.
  • Reply 332 of 398
    onlooker Posts: 5,252
    Quote:
    Originally Posted by JeffDM


    Then please give me some actual numbers. How many people use SLI? I've yet to meet anyone who does. Show me the tests of the speed difference between an x8 and an x16 slot with that card that suggest it can saturate x16. The only tests I've seen show a low-single-digit percentage difference.



    Probably because Mac users don't have it.



    And go find them yourself if you're that curious. What are your fingers, painted on?
  • Reply 333 of 398
    jeffdm Posts: 12,951
    Quote:
    Originally Posted by onlooker


    Probably because Mac users don't have it.



    My "circle" isn't made up of just Mac users. I don't know that many Mac users either, except on the Internet. I don't know any other Mac Pro users. That's not the problem.



    Quote:

    And go find them yourself if you're that curious. What are your fingers, painted on?



    All I'm asking is that you back up your claim. A claim is pretty worthless if it can't be backed up when requested.



    As it is, based on Steam numbers, only 9% of the users have an 8800 of any kind:



    http://www.steampowered.com/status/survey.html



    It's the most popular model, but as a whole it's not the majority that the Internet bullhorn suggests. A surprising number of users are on very old systems. SLI isn't even covered in the survey; that's the closest I can find.
  • Reply 334 of 398
    onlooker Posts: 5,252
    Quote:
    Originally Posted by JeffDM


    No. You make a claim, you back it up.



    I have. Try using the search. n00b!
  • Reply 335 of 398
    jeffdm Posts: 12,951
    Quote:
    Originally Posted by onlooker


    I have. Try using the search. n00b!



    Who are you, tekstud? Saying "it's out there" is not backing it up. You made the claim; the onus is on you to back it up by pointing specifically to the evidence when asked. A claim is worthless if you try to make other people prove it for you. Short of that, it's just a religious chant on your part.
  • Reply 336 of 398
    onlooker Posts: 5,252
    Quote:
    Originally Posted by JeffDM


    Who are you, tekstud? Saying "it's out there" is not backing it up. You made the claim; the onus is on you to back it up by pointing specifically to the evidence when asked. A claim is worthless if you try to make other people prove it for you.



    I've done the research in here before. You should know you need to use search before asking any question, and I don't need to do it again. But I know the benefits lie mainly in the memory. It's not that an x16 lane can be fully saturated; it's that an x8 can be slightly bottlenecked, and that was a while ago, with last-generation cards. But if one were using something along the lines of a Quadro FX 5600, you would obviously see more degradation, and I would imagine that any X2 Nvidia card, such as a Quadro FX 4500 X2, would do the same.
  • Reply 337 of 398
    jeffdm Posts: 12,951
    http://www.diy-street.com/forum/show...2&postcount=28



    This one shows that x2 on a 7800 gives 80% of the speed of x16 on the same card, and x8 gives 94% of the speed of x16. I really doubt that an 8800 can show that x8 would make the card half as fast as it would be on x16. I'm not even sure any similar tests with the 8800 exist. If they're out there, they're buried in the noise.
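
    Put another way, those percentages argue that link bandwidth isn't the bottleneck for that card. A minimal sketch of the implied speedups, using the figures from the linked test:

```python
# Relative performance from the linked 7800 test: fraction of the x16
# result achieved at each link width.
rel_perf = {16: 1.00, 8: 0.94, 2: 0.80}

# If the card truly saturated x16, doubling x8 to x16 would approach a
# 2x speedup; instead it's about 6%.
print(round(rel_perf[16] / rel_perf[8], 2))  # 1.06
print(round(rel_perf[16] / rel_perf[2], 2))  # 1.25, even from x2 to x16
```
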
  • Reply 338 of 398
    onlooker Posts: 5,252
    Quote:
    Originally Posted by JeffDM


    http://www.diy-street.com/forum/show...2&postcount=28



    This one shows that x2 on a 7800 gives 80% of the speed of x16 on the same card, and x8 gives 94% of the speed of x16. I really doubt that an 8800 can show that x8 would make the card half as fast as it would be on x16.



    That I never said, so I don't know why you're claiming that.
  • Reply 339 of 398
    onlooker Posts: 5,252
    Also, your x2 is an x2 lane, not a dual-GPU single card like the X2 I'm talking about. Just to clarify.
  • Reply 340 of 398
    jeffdm Posts: 12,951
    Quote:
    Originally Posted by onlooker


    That I never said, so I don't know why you're claiming that.



    If a card could truly saturate x16, it would be severely throttled at x8. A 7800 *might* be said to saturate an x2-lane system. I doubt that an 8600 or 8800 can consume four times that data rate. Maybe the information is out there, but frankly, it's buried in the noise.