Aqua help in NVidia GeForce 4


Comments

  • Reply 41 of 50
    onyx Posts: 10, member
    Both the ATI Radeon and the GeForce3 already have hardware Alpha transparency. It isn't new to the GF4 and is easy to see in Windows when you do things like drag lots of files (which all become transparent) or make entire windows transparent and move them around the screen. There is no slowdown when using a card with hardware acceleration compared to using a vid card without.
  • Reply 42 of 50
    mokimoki Posts: 551member
    [quote]Originally posted by Onyx:
    "Both the ATI Radeon and the GeForce3 already have hardware Alpha transparency. It isn't new to the GF4 and is easy to see in Windows when you do things like drag lots of files (which all become transparent) or make entire windows transparent and move them around the screen. There is no slowdown when using a card with hardware acceleration compared to using a vid card without."[/quote]



    The issue isn't whether transparency is supported at all on these video cards (as you note, it is), but rather how Quartz can be properly hardware accelerated. It isn't just a matter of making things transparent; the video card would have to work with Quartz's mixer-style compositing, not just simple bitmap slamming as is done on Windows.
  • Reply 43 of 50
    airsluf Posts: 1,861, member
  • Reply 44 of 50
    onyx Posts: 10, member
    [quote]It isn't just a matter of making things transparent; the video card would have to work with Quartz's mixer-style compositing, not just simple bitmap slamming as is done on Windows.[/quote]



    Well I understand that Quartz hasn't been coded to take advantage of it but I don't see why it couldn't be like in Windows.



    What do you mean by simple bitmap slamming for Windows? Icons, cursors, windows, etc. with transparency effects can be 2D accelerated in Windows, just like in Quartz. What's the big difference?
  • Reply 45 of 50
    [quote]Originally posted by Onyx:
    "There is no slowdown when using a card with hardware acceleration compared to using a vid card without."[/quote]



    I have a Radeon VE, which most likely doesn't support that feature, and yet the half-transparent mass-dragging you mentioned is pretty much real-time for me. I don't think dedicated hardware support is needed or even useful in this particular case.



    Bye,

    RazzFazz
  • Reply 46 of 50
    [quote]Originally posted by Onyx:
    "Well I understand that Quartz hasn't been coded to take advantage of it but I don't see why it couldn't be like in Windows."[/quote]



    Because the Windows display model is *much* simpler (and at the same time much more limited).





    [quote]"What do you mean by simple bitmap slamming for Windows? Icons, cursors, windows, etc. with transparency effects can be 2D accelerated in Windows, just like in Quartz. What's the big difference?"[/quote]



    In Windows, each pixel belongs to exactly *one* window, as far as I understand. In the case of translucent windows, it's up to the respective program to take care of drawing anything that's covered underneath.



    In Quartz, on the other hand, a pixel does not belong to a single window context, but rather is composited from all the windows that lie under it, according to their transparency.
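
    For what it's worth, the per-pixel math is just the standard "over" operator. A minimal sketch in C (my own illustration, not how Quartz is actually written):

    [code]
    /* Sketch: composite one pixel from a stack of overlapping windows
     * with the standard "over" operator (premultiplied alpha). */
    typedef struct { float r, g, b, a; } Pixel;

    /* windows[0] is the bottom-most window; composite back to front. */
    Pixel composite_stack(const Pixel *windows, int count)
    {
        Pixel dst = { 0.0f, 0.0f, 0.0f, 0.0f };   /* start from a clear background */
        for (int i = 0; i < count; i++) {
            Pixel src = windows[i];
            float inv = 1.0f - src.a;             /* how much of what's below shows through */
            dst.r = src.r + dst.r * inv;
            dst.g = src.g + dst.g * inv;
            dst.b = src.b + dst.b * inv;
            dst.a = src.a + dst.a * inv;
        }
        return dst;
    }
    [/code]

    Every window that touches the pixel contributes, which is exactly what a card doing "one window owns one pixel" never has to deal with.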



    Bye,

    RazzFazz
  • Reply 47 of 50
    Well, first, Apple is making an attempt to hardware-accelerate Java widgets with OpenGL.



    This has to be activated manually, but results are reportedly good if your video card has at least 16 MB of memory.



    If you look at Apple's sample code on the developer web site, you will see that there are more and more examples showing how to use OpenGL to do 2D things (such as writing text, displaying a QuickTime movie...).



    Could the whole screen be accelerated with OpenGL?

    Well, as moki said, this would require huge amounts of video memory.



    A 1280x1024 screen or window = 5 MB of memory.

    1024x768 = 3 MB

    800x600 = 1.83 MB

    128x128 = 0.06 MB :-)



    Let's suppose 5 MB for the screen itself, 5 MB for a buffer, 20 big windows, 20 smaller windows, and 250 very small ones (a lot of things can be windows...)

    That's about 121.6 MB. So a 128 MB video card could handle this very well, and I think that's a pretty bad case.
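
    In case anyone wants to play with the numbers, here is the same estimate as a tiny C program (taking "big" = 1024x768 and "smaller" = 800x600, which is an assumption on my part; the small difference from 121.6 is just rounding):

    [code]
    /* Back-of-the-envelope VRAM estimate, 32 bits per pixel. */
    #include <stdio.h>

    static double mb(int w, int h) { return (double)w * h * 4 / (1024 * 1024); }

    int main(void)
    {
        double total = mb(1280, 1024)        /* the screen itself    ~5 MB    */
                     + mb(1280, 1024)        /* one back buffer      ~5 MB    */
                     + 20  * mb(1024, 768)   /* 20 big windows       ~60 MB   */
                     + 20  * mb(800, 600)    /* 20 smaller windows   ~36.6 MB */
                     + 250 * mb(128, 128);   /* 250 tiny windows     ~15.6 MB */
        printf("roughly %.1f MB of video memory\n", total);   /* ~122 MB */
        return 0;
    }
    [/code]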



    But this doesn't rule out 64 MB video cards. The AGP bus can be used to access main memory at quite high speed. Granted, it is not as fast as on-card memory, but a lot of video cards use only main memory anyway, so...

    And 200 fps is not required for this task.



    This way, the video card could do all the dirty compositing work, which is currently done by the processor, but is trivial to do with OpenGL, including with transparency.

    (compositing is at a very low level in Quartz)
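
    For example, drawing one window's backing store as a translucent textured quad takes only a few calls in plain OpenGL. A rough sketch (nothing Quartz-specific, and it assumes the window contents have already been uploaded as the texture `tex`):

    [code]
    /* Sketch: let OpenGL composite one window as a translucent textured quad. */
    #include <OpenGL/gl.h>   /* <GL/gl.h> on other platforms */

    void draw_window(GLuint tex, float x, float y, float w, float h, float alpha)
    {
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, tex);

        glEnable(GL_BLEND);                                  /* blend against what's already drawn */
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

        glColor4f(1.0f, 1.0f, 1.0f, alpha);                  /* per-window translucency */
        glBegin(GL_QUADS);
            glTexCoord2f(0, 0); glVertex2f(x,     y);
            glTexCoord2f(1, 0); glVertex2f(x + w, y);
            glTexCoord2f(1, 1); glVertex2f(x + w, y + h);
            glTexCoord2f(0, 1); glVertex2f(x,     y + h);
        glEnd();
    }
    [/code]

    Draw the windows back to front like this and the card does all the blending; the CPU never touches the pixels.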



    I have heard that Apple has been investigating this for quite some time, but that it would require very major changes in Quartz, so it is unlikely to come soon, if it ever ships.



    Bruno
  • Reply 48 of 50
    onyx Posts: 10, member
    [quote]In Windows, each pixel belongs to exactly *one* window, as far as I understand. In the case of translucent windows, it's up to the respective program to take care of drawing anything that's covered underneath.[/quote]



    Not sure what you mean by each pixel belonging to one window. And the programs don't handle transparencies themselves. Alpha blending, antialiasing, and the like are handled by GDI+ in Windows.
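
    Even at the plain Win32 level there's an AlphaBlend() call (from msimg32) that drivers can accelerate; GDI+ builds on the same kind of thing. Just a sketch from memory, not production code:

    [code]
    /* Sketch: blend a 32-bit source bitmap onto a destination DC with Win32's AlphaBlend(). */
    #include <windows.h>

    void blend_bitmap(HDC dst, HDC src, int x, int y, int w, int h)
    {
        BLENDFUNCTION bf;
        bf.BlendOp             = AC_SRC_OVER;   /* standard "over" compositing                  */
        bf.BlendFlags          = 0;
        bf.SourceConstantAlpha = 192;           /* ~75% opaque overall                          */
        bf.AlphaFormat         = AC_SRC_ALPHA;  /* also honour per-pixel (premultiplied) alpha  */

        AlphaBlend(dst, x, y, w, h,             /* destination rectangle */
                   src, 0, 0, w, h,             /* source rectangle      */
                   bf);
    }
    [/code]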







    Here's a Windows XP desktop, for example. It has anti-aliasing (including the funky gear cursor), transparencies in icons and in multiple programs/windows, along with the MP3 player's blue see-through display. Everything moves as smoothly as if there were no transparencies at all, thanks to my GF3 Ti200.



    GDI+ was designed with hardware acceleration in mind, and that keeps it smooth and fast. I think Quartz could benefit from the same thing if it were programmed for it.



    But then I'm no expert.
  • Reply 49 of 50
    Without going into any detail, the "older" methods (Windows, OS 9...) are pixel based. The geometry is defined, as far as the graphics card is concerned, in terms of which pixels get painted. Quartz is a vector based method: the geometry is defined in terms of points, lines, scales, and so on, and the computer has to "render" it into a final image.



    It's like the difference between Photoshop (pixel based) and Illustrator (vector based), or between looking at a PDF document and looking at a TIFF fax-style document. You can zoom in on the PDF and it's always perfect at any zoom level, but if you zoom in on the fax TIFF image, you just see pixels.



    Sure, Windows does transparencies and some AA, but I doubt it does them on the level Quartz can. Windows may AA the edge of a line, whereas when Quartz renders the line it will be perfectly blended with whatever is under it. I don't know if Windows can do that.





    The MAJOR MAJOR advantage is its device independence. Quartz can bring WYSIWYG to a new level. It could also scale to higher-res monitors if or when they come out.



    Just read up on the net about the difference between pixel-based and vector-based methods and you'll pick up on it.
  • Reply 50 of 50
    programmer Posts: 3,467, member
    I haven't seen any mention of:



    - Both the Radeon and GeForce chipsets have the ability to read across the AGP bus. The existing window buffers could be sucked right out of main memory by the graphics chip without the CPU having to do it.



    - With most video cards having 16, 32, or 64 megs of fast VRAM these days, some things could certainly be put there... especially entities that aren't going to change (icons and widgets, for example).



    - The animation effects are apparently all Bezier-curve based, which doesn't accelerate terribly well, but they could be polygonally based and then drawn using the 3D acceleration hardware. Perhaps the architecture should generate polygon lists rather than doing all the rasterization on the CPU... these animations could be stored on the video card (at least on GeForce3 and better hardware) and played back when needed (see the sketch at the end of this post).



    All of these have to be better than having the CPU do the rasterizing and then sending the result across the slow bus. This is especially true if they are not mirroring the entire display in main memory and are doing blending operations across that bus. If they are doing the mirror, it's a huge waste of memory; if they are not, then it is massively slow. Blending requires a read-modify-write operation, and the read portion can be hundreds of times slower than the writes.
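
    On the third point, here is roughly what "record the geometry once, replay it from the card" could look like with plain OpenGL display lists. This is only my sketch of the idea, not anything Apple has announced:

    [code]
    /* Sketch: compile a window's geometry into a display list once, then
     * replay it each frame; the driver may keep the list in video memory. */
    #include <OpenGL/gl.h>   /* <GL/gl.h> on other platforms */

    GLuint record_window_shape(void)
    {
        GLuint list = glGenLists(1);
        glNewList(list, GL_COMPILE);        /* compile now, don't execute yet */
            glBegin(GL_QUADS);              /* stand-in for the real window outline */
                glVertex2f(0.0f, 0.0f);
                glVertex2f(1.0f, 0.0f);
                glVertex2f(1.0f, 1.0f);
                glVertex2f(0.0f, 1.0f);
            glEnd();
        glEndList();
        return list;
    }

    void play_animation_frame(GLuint list, float scale_x, float scale_y)
    {
        glPushMatrix();
        glScalef(scale_x, scale_y, 1.0f);   /* one step of the animation */
        glCallList(list);                   /* replayed without re-sending the geometry */
        glPopMatrix();
    }
    [/code]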