Nvidia nForce4 for Opteron available (SLI Opterons). When will Apple adopt?

13 Comments

  • Reply 41 of 70
    dfiler Posts: 3,420, member
    Quote:

    Originally posted by Protostar

    If I'm going to spend over $2000 on a Mac I want it to have a high end graphics chipset like a 256MB 5900 Ultra.



    Don't you mean a high end gaming chipset?



    Professionals will certainly find a way to put faster GPUs to work when they're eventually offered by Apple. Yet... it's only the gamers who are complaining. The pros, or the vast majority of them, aren't clamoring for faster GPUs.



    Not that gaming isn't a large market, but let's not delude ourselves as to the real 'need' for GPU speed.
  • Reply 42 of 70
    mattyj Posts: 898, member
    The gaming industry is so large that it's the gamers who are driving graphics cards forward, not those who use them for work.
  • Reply 43 of 70
    dfiler Posts: 3,420, member
    Which is a good point.



    It's just important that we acknowledge why some users are clamoring for better GPUs: they want to game on their Mac. Once this is acknowledged, it turns into a discussion of whether Apple should attempt to compete in the gaming-rig market.



    In my opinion, Apple currently has no chance of competing in the gaming-rig market. And likewise, their choice of iMac GPUs has likely been a prudent business tradeoff.
  • Reply 44 of 70
    Quote:

    Originally posted by dfiler

    Which is a good point.



    It's just important that we acknowledge why some users are clamoring for better GPUs: they want to game on their Mac. Once this is acknowledged, it turns into a discussion of whether Apple should attempt to compete in the gaming-rig market.



    In my opinion, Apple currently has no chance of competing in the gaming-rig market. And likewise, their choice of iMac GPUs has likely been a prudent business tradeoff.




    As someone who has worked for a long, long time in the games business, I don't see Apple re-launching Macs as a gaming platform any time soon.



    Current games productions are very expensive and quite risky, with razor-thin margins. Most games lose money. Games are not a licence to print money, and the Mac has a non-existent market share of this competitive market.



    The market leader is the PS2, which is far out in front. The PC comes next (although most of that business is focussed on a very small number of titles). The Xbox and GameCube come further down, with the 'Cube having such a poor market share that many publishers are dropping it. Before you get to the Mac, you'd have to go past the handheld formats, the PS1 and mobile phones. I have never seen the Mac quoted in the sales figures.



    In other words, the Mac is a non-starter for games developers. There is simply no point in leading on the Mac. The presence of OpenGL makes porting to the Mac a little less expensive.



    If you are waiting for your Mac to become the ideal gaming platform, you will have a long wait. SLI or not.



    Carni
  • Reply 45 of 70
    pb Posts: 4,232, member
    Quote:

    Originally posted by dfiler

    It's just important that we acknowledge why some users are clamoring for better GPUs: they want to game on their Mac. Once this is acknowledged, it turns into a discussion of whether Apple should attempt to compete in the gaming-rig market.



    I don't think that this is the question. What I see is that the Power Mac has upgrade capabilities for the graphics, something none of the consumer desktops has. Apple needs either to address this, or to equip these "locked" machines at each update with much better graphics chips that will last longer, not ones that are already outdated on the day of the update (see the iMac G5 case).
  • Reply 46 of 70
    Quote:

    Originally posted by PB

    I don't think that this is the question. What I see is that the Power Mac has upgrade capabilities for the graphics, something none of the consumer desktops has. Apple needs either to address this, or to equip these "locked" machines at each update with much better graphics chips that will last longer, not ones that are already outdated on the day of the update (see the iMac G5 case).



    A graphics card is just a tool. It either does what you need it to do or it doesn't. My G4 has an NVidia GeForce 2 whatever in it. It's 5 years old. Is it outdated? It still works and does nearly everything I need it to. There are a few pieces of software that really make it show its age, but, you know what, they're all games. On average, I probably spend a couple of hours a week playing games on my Mac. If I buy a new iMac G5, why should I care what GPU it has, as long as it does what it needs to do? Is it going to stop doing it?
  • Reply 47 of 70
    onlooker Posts: 5,252, member
    Quote:

    Originally posted by Carniphage

    The idea of strapping two graphics cards into a PC seems an obvious way of improving graphics throughput.

    But the difficulty is sharing the load between the two cards. Old Voodoo style SLI did this by rendering the same scene on both cards.

    Both cards had to do identical transform and lighting calculations (no saving), both cards had to host a duplicate texture cache (no saving), and only when it came to rasterizing (filling) the screen was there a benefit to be had.



    One card rendered the odd scanlines, the other took the even. This is how it made things go faster. In fill-bound applications this saving would be significant, although rarely would it be double.



    As soon as fill performance improves, it's not long before your application is hogtied by vertex performance.

    New-style SLI is supposed to do some sharing of geometry processing between the two cards, although I am a bit skeptical about that.

    There are applications where SLI *can* clearly provide a benefit, most notably games with high poly counts, a high level of overdraw, and expensive shaders.

    But any application which needs the following will benefit:

    1) Large frame buffers (HD and above), where the number of pixels to fill is significant.

    2) Expensive pixel shaders. If a pixel shader is very expensive then any application can become fill-rate bound. SLI will help here too. The use of Normal map shaders in Doom 3 is a good example of "expensive".

    Typically, pro 3D apps use wireframe previews or simple shaded previews. And although the performance of some 3D apps is not great on the Mac, it isn't from lack of fill rate.

    3DS Max and Softimage now allow you to use pixel shaders in their 3D preview. So SLI might help here. However neither of these applications to my knowledge runs on the Mac.
    Carni


    1. Agreed

    2. This actually depends on which card (with SLI, which cards) you're using. (I'll try to elaborate later in #6.)

    3. Again, part of what you said depends on which card you're using, but the new SLI (as you call it) does support alternate-frame rendering: even frames on one card, odd frames on the other, for a theoretical 2x performance (and some gaming benches have shown it meeting that). And while it may be thought of as a gaming-only feature, it also applies to UI rendering under a Quadro in 3D apps, particularly Maya. (Again, I'll get to that later in #6.)

    4. The reverse applies for 3D apps. This is where Quadro cards should shine brightest in Maya, because of Maya's interface and features unique to the Quadro, but it would not pertain to games. (Again, later in #6.)

    5. Some of your points, like this one, need to be split up, because these features are not shared or handled the same way for gaming and for pro apps; they can't be treated as one contiguous question under one card for both gaming and 3D. (So it's not quite valid, but I'll try to cover it in a roundabout way.)

    6. Finally we get to the heart of the matter. The GeForce GPU, the Quadro GPU, and the drivers for each are quite different, as is gaming vs. the Maya UI. You mentioned yourself that Quadro cards have ultra-fast wireframe support, which is true and is at the heart of the Maya UI.

      In this respect the Maya UI is rendered like the UI of a CAD application: it is drawn in an OpenGL window that quickly gets loaded with overlay planes as you work. Overlay planes are another ultra-fast Quadro feature that does not pertain to gaming and is not supported on GeForce cards. Other overlay-plane cases are cursors/handles, pop-up menus, and so on (which actually "cause the contents of the window beneath to become damaged"), plus other open applications, which is always the case when I start working; I have at least six apps open at once, all with multiple windows and toolbars.

      Maya 6 UI rendering doesn't look particularly unique, but to get an accurate, reliable test under SLI you would first need Quadro SLI drivers (just released as beta, and by no means finalized) and a test that specifically shows the performance of the Maya UI under strenuous use.

      SPECapc for Maya 6 is a new SPEC test developed with the Maya interface workflow in mind by SPEC and Alias. It tests the Maya 6 working environment using four models in: wireframe, Gouraud-shaded, texture, texture highlighted with a wireframe mesh, and texture selected (texture with wireframe mesh and control points). This is how Maya handles its UI. The benchmark is unique in its ability to test performance for large texture sizes and multiple viewports. (I'm quoting some of this from SPEC, just so you know.) SPECapc for Maya 6 consists of 30 individual tests, 27 of which are run three times. The final scoring is based 70 percent on graphics, 20 percent on CPU performance, and 10 percent on I/O.

      None of these have yet been tested under SLI, let alone SLI with the beta Quadro SLI drivers. It's also important to know that Maya 6 is the first version of Maya whose OS X version is written as a Mach-O application.

    7. If 3DS Max and Softimage window their UIs in a different way, that's their business, but as you said, neither is Mac-bound in the future AFAIK, and Maya is the app under discussion, so let's continue with that.

    Quote:

    Does MAYA run faster on an SLI? Quadro boards you will notice have ultra fast wireframe support for precisely this reason.



    These two should be separate questions, because they do not fit together given any valid available information pertaining to Maya, Quadro, or SLI. But I think I covered that question already, so I'm calling it quits in this thread.



    A little speculation can be a good thing, so I'll give it a go before I do. Core Image for Tiger is supposedly going to offload to the GPU some of the image and rendering work that would normally be handled by the CPU, as I recall. So the UI rendering in 3D applications (or any applications) would benefit from SLI as I see it, because in Maya the CPU is often maxed out on my CPU meter, and that's when I see the spinning beach ball. Anything pertaining to the UI is usually OpenGL-based windowing. This, I believe, is image data that could be offloaded to the GPU if it (or they) were not already too busy. With two GPUs in an SLI configuration, that could add considerable speed if the Core Image libraries were written with cases like this in mind. But, as I said elsewhere, I'm not a seasoned developer; I only dabble. And I doubt Apple will go with SLI, but I think it's a mistake for them not to take advantage of it in what is supposed to be a pro machine.



    Ciao



    [EDIT] Oops... Here's a link to SPEC for Maya 6: SPEC
  • Reply 48 of 70
    pb Posts: 4,232, member
    Quote:

    Originally posted by dylanw23

    If I buy a new iMac G5, why should I care what GPU it has, as long as it does what it needs to do? Is it going to stop doing it?



    That depends on the meaning you give to "it does what it needs to do", and certainly that meaning varies among individuals.



    For my personal home use, the iMac G5 would do just fine, but then I am not a gamer, nor do I run graphics-intensive applications. Others here who play games or do work that relies on a strong GPU will tell you different tales. Even so, why not have a stronger GPU when buying a new system, at the very moment Apple is moving significant parts of the OS to the GPU for processing?
  • Reply 49 of 70
    onlooker Posts: 5,252, member
    Quote:

    Originally posted by PB

    That depends on the meaning you give to "it does what it needs to do", and certainly that meaning varies among individuals.



    For my personal home use, the iMac G5 would do just fine, but then I am not a gamer, nor do I run graphics-intensive applications. Others here who play games or do work that relies on a strong GPU will tell you different tales. Even so, why not have a stronger GPU when buying a new system, at the very moment Apple is moving significant parts of the OS to the GPU for processing?




    I agree. That's exactly why so many people's opinions vary on the subject.



    Not everyone has the same needs, and now that Apple is growing and potentially in a position to gain more users, they actually need more Macs with more options for more individuals. It seems they have low-end and gaming graphics covered for the most part with what they have now, with the iMac as the one exception. They need to branch into a higher level of graphics and add options for interchangeable graphics cards in the parts of the lineup that are missing them, particularly the iMac, IMO. My opinion is that the iMac is the pretty machine that many would like to be able to play a few games on in the future, and the Power Mac would be the other gaming machine.



    With the PS3 on a PPC-based Cell and the Xbox 2 also using PPC, porting games sounds like it would be easier. I've never ported anything that technical to OS X, but (correct me if I'm wrong, programmers) the lengthiest part of porting a game to OS X involved processor differences from x86 to PPC at the machine-code level. (Maybe I'm thinking of VPC on the G5?) Anyway, it seems like it would be easier to get a game going on OS X if the original code was written for PPC and just compiled with a different compiler for a different OS on a PPC processor.



    Other than some optimization and some OS-specific tweaks, wouldn't it be mostly a recompile of almost the same C/C++ code?



    Again.

    I'm no developer. I've only dabbled.
  • Reply 50 of 70
    The difficulty in porting games, as far as I know, is mainly in the processor differences. For instance, x86 chips have horrible floating-point units. (Actually, Intel's compiler compiles all floating-point math to the vector unit.) As such, lots of stuff is programmed using integer math, which is where x86 shines and generally outperforms PPC chips. These values are then converted to floating-point numbers later on. This conversion is cheap on x86 hardware, but somewhat expensive on the Mac.



    It's all small details like that, each adding the tiniest bit of overhead in most programs; but for games speed is critical, as some functions get called literally millions of times a second. Going through to change all of these would be as hard as or harder than recoding the entire game from scratch, so games will always run faster on their initial target platform.



    Another problem is that most Windows machines are single-processor, so Windows games aren't multithreaded. (Yet another problem is that OpenGL isn't thread-safe, so you have to keep all of your OpenGL calls in one thread to avoid crashes, but that's an issue with OpenGL itself, not a porting issue.)



    Finally, some really important code is either written in assembly or at least programmed to target the vector unit. Because Intel's vector unit is nothing like AltiVec, this all has to be entirely recoded. And of course, because these routines are called so often, they have to be fairly extensively tested, too.



    At least for the first and third issues, though, Cell is going to be a big help with games that are designed for the Xbox 2/PS3 first and later ported to PCs/Macs. If Cell uses the AltiVec instruction set, then the last bit will also be remedied in Mac ports, although personally I'm rooting for slightly richer instructions.
  • Reply 51 of 70
    wmf Posts: 1,164, member
    The biggest problem in porting games is DirectX vs. OpenGL. The processor is minor in comparison.
  • Reply 52 of 70
    onlooker Posts: 5,252, member
    Quote:

    Originally posted by wmf

    The biggest problem in porting games is DirectX vs. OpenGL. The processor is minor in comparison.



    I wonder how that will look in programming terms now that the Xbox 2 and PS3 are both on PPC? What is MS going to do about DirectX, and will it affect anything? (Or can it potentially affect anything?)





    We should start a new thread if we are going to stray this far from the topic.



    I'll do it. Go here to Continue Programming discussion
  • Reply 53 of 70
    dfiler Posts: 3,420, member
    Another option:



    SLI does not require two physical slots. Just as my dual CPU daughter card requires one 'slot', the same could be done for GPUs.



    Note that no manufacturer is shipping dual CPU-slot systems with one slot empty.



    I don't think dual-slot SLI systems will ever see much market penetration.
  • Reply 54 of 70
    relic Posts: 4,735, member
    Quote:

    Originally posted by dfiler

    I don't think dual-slot SLI systems will ever see much market penetration.



    I like it when you talk dirty.
  • Reply 55 of 70
    onlooker Posts: 5,252, member
    Quote:

    Originally posted by dfiler

    Another option: SLI does not require two physical slots. Just as my dual CPU daughter card requires one 'slot', the same could be done for GPUs.

    Note that no manufacturer is shipping dual CPU-slot systems with one slot empty.
    I don't think dual-slot SLI systems will ever see much market penetration.


    1. Not really. You won't be able to make use of dual x16 PCI Express lanes (in both directions), or 2x 512 MB of DDR3 (1 GB of DDR3). Just these two issues alone would make a single-slot SLI feature set second-rate for pro apps, and they're just the first two that come to mind.

      I'm guessing gaming wouldn't be hurt as much, but that's someone else's issue. Either way, it's still second-rate.



      These are early times for nVidia's SLI. Most things are done the old 3dfx way, but some are new and really cool. Drivers are going to get even more efficient and take advantage of more things with this technology in the near future. Quadro SLI drivers are still fairly early beta and not completed. When they are finally tested and in their first finalized version, I think it's possible we'll see where this could be going in the near future. But it's not the end; this is a beginning.

    2. You can order an SLI system from BOXX or Alienware with only one card on request. I guess anyone else would do the same, but I cannot confirm that.

  • Reply 56 of 70
    dfiler Posts: 3,420, member
    Two slots don't inherently offer more bandwidth than one. Producing a single slot with twice as many pins is actually cheaper than designing and manufacturing two slots with equal trace lengths.



    When designing an architecture for consumer computers, a single video card has historically seemed to provide the best price/performance ratio.



    In fact, this is true for just about any functional portion of the computer. Rarely is a functional unit split between two boards. It's cheaper and more efficient to minimize the number of boards.
  • Reply 57 of 70
    onlooker Posts: 5,252, member
    Quote:

    Originally posted by dfiler

    Two slots don't inherently offer more bandwidth than one. Producing a single slot with twice as many pins is actually cheaper than designing and manufacturing two slots with equal trace lengths.

    When designing an architecture for consumer computers, a single video card has historically seemed to provide the best price/performance ratio.

    In fact, this is true for just about any functional portion of the computer. Rarely is a functional unit split between two boards. It's cheaper and more efficient to minimize the number of boards.


    1. I don't think I said it would inherently offer more bandwidth, but that does not mean it doesn't, won't, or can't. Also, if, as you say, you can use twice as many pins in a single slot, you can also use twice as many pins in two slots. That's kind of the point of SLI. Like I said, it's not the end; it's a beginning.

    2. Could anything be more obvious? Was that even worth mentioning? Of course one card is cheaper than two. But the price/performance ratio for SLI has already proven itself for gaming, at usually almost twice the performance. Also, when buying dual cards (if you want dual cards) there is a reduced bundle price. The one I'm looking at is about a $250 break on the second pro card if you're buying pro cards, which would normally be a $1,200 card. Which is better than nothing.

    3. Pretty much the same thing again. You're thinking in the old ways; others are looking forward to ways of achieving more from new ones. I heard the same arguments when Apple went with all dual CPUs; everybody had arguments that looked just like yours do now. I don't know anyone with a dual Opteron or a dual G5 who needs and uses their power and really wants to go back to single CPUs.

      I've played Half-Life and Doom 3 on an SLI system, and most people I talk to are pretty excited to see what Quadros will do once their SLI drivers are finished and the tests are in.

      But if it's not something you think you can actually take advantage of, I don't see why it isn't a good thing for those who do and can take advantage of it.



      Keep your single-CPU and single-GPU machines if you want them, or if you just don't have any need for the power you can leverage from duals. That leaves more processor availability for those who do.

  • Reply 58 of 70
    dfiler Posts: 3,420, member
    Let's see if I can boil this down to one sentence...



    Dual GPU machines don't require two boards.



    [EDIT]

    And please stop with the enumerated rebuttals! It seems to turn a normal discussion into a fairly dry legal battle.
  • Reply 59 of 70
    onlooker Posts: 5,252, member
    Quote:

    Originally posted by dfiler

    Let's see if I can boil this down to one sentence...



    Dual GPU machines don't require two boards.



    [EDIT]

    And please stop with the enumerated rebuttals! It seems to turn a normal discussion into a fairly dry legal battle.




    I'll give you that, but a dual-GPU card and SLI are not the same thing, and SLI offers more than a dual-GPU card. That is my point, after all of it. I've already outlined some of it in the previous posts.
  • Reply 60 of 70
    Don't worry, the drivers are coming. But that will be in 2008, when the cards are 3 years old ;-)





    Quote:

    Originally posted by Relic

    Who cares about SLI? There aren't enough games on the Mac platform to warrant the cost of buying two (DirectX-optimized) graphics cards. What I would like to see are professional graphics cards like a Wildcat, Quadro or FireGL with optimized OS X drivers.


