Intel unleashes Mac-bound "Woodcrest" server chip


Comments

  • Reply 201 of 565
    sunilramansunilraman Posts: 8,133member
    Quote:

    Originally posted by aegisdesign

    Perhaps we're at the tipping point then that GPU maths is good enough to rely on. Apple seem to think so.






    GPU maths in Core Image, as Chucker mentioned, is implemented via OpenGL. Is there no other way?



    That's the thing: Apple seems to think (based on the earlier quotes, such as "It allows for manipulation of deep bit images with incredible accuracy and color fidelity") that Core Image can deliver, let's say, high accuracy. If that goes via OpenGL, then Apple must have a lot of confidence in an OpenGL-based Core Image delivering high accuracy for 2D and 3D effects.



    Apple's claims are just a bit confusing. Remember that the ATI Radeon 9550 was compatible with GPU-accelerated Core Image, so at that level GPU maths was presumably already good enough?
  • Reply 202 of 565
    chuckerchucker Posts: 5,089member
    Core Image GPU acceleration is determined by whether the GPU implements ARB_fragment_program, not by how fast or precise it is.
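
    As an aside, here is a rough sketch (not Apple's actual detection code, and the context setup is simplified) of how an app could ask the renderer whether it advertises GL_ARB_fragment_program, which is the capability Core Image keys off:

    // Sketch only: create a throwaway accelerated GL context and look for
    // GL_ARB_fragment_program in the renderer's extension string.
    #import <Foundation/Foundation.h>
    #import <OpenGL/OpenGL.h>
    #import <OpenGL/gl.h>
    #import <string.h>

    int main(void)
    {
        CGLPixelFormatAttribute attrs[] = { kCGLPFAAccelerated, (CGLPixelFormatAttribute)0 };
        CGLPixelFormatObj pix = NULL;
        CGLContextObj ctx = NULL;
        GLint npix = 0;

        // A current context is needed before glGetString has a renderer to ask.
        CGLChoosePixelFormat(attrs, &pix, &npix);
        CGLCreateContext(pix, NULL, &ctx);
        CGLSetCurrentContext(ctx);

        const char *ext = (const char *)glGetString(GL_EXTENSIONS);
        BOOL hasFragmentProgram = (ext != NULL && strstr(ext, "GL_ARB_fragment_program") != NULL);
        NSLog(@"GL_ARB_fragment_program %@", hasFragmentProgram ? @"supported" : @"not supported");

        CGLSetCurrentContext(NULL);
        CGLDestroyContext(ctx);
        CGLDestroyPixelFormat(pix);
        return 0;
    }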
  • Reply 203 of 565
    onlookeronlooker Posts: 5,252member
    Quote:

    Originally posted by Chucker

    Core Image GPU acceleration is determined by whether the GPU implements ARB_fragment_program, not by how fast or precise it is.



    Looks like the first part is from the documentation, but the second part seems unrelated. What is ARB_fragment_program, a call?
  • Reply 204 of 565
    chuckerchucker Posts: 5,089member
    ARB is the OpenGL Architecture Review Board. Thus, functions prefixed ARB_ are "official" OpenGL extensions, whereas, for instance, functions prefixed NV_ were created by nVidia. (ATI_fragment_shader and NV_fragment_program interact with ARB_fragment_program, for example.)



    "Fragment program" is OpenGL's term for what is otherwise known as a pixel shader.
  • Reply 205 of 565
    wwworkwwwork Posts: 140member
    more on GPU(er)s...



    At Colleges, Women Are Leaving Men in the Dust



    Quote:

    [Jen] Smyers, also at American, said she recently ended a relationship with another student, in part out of frustration over his playing video games four hours a day...."That's my litmus test now: I won't date anyone who plays video games. It means they're choosing to do something that wastes their time and sucks the life out of them."



    ....In the Dickinson cafeteria on a spring afternoon, the byplay between two men and two women could provide a text on gender differences. The men...talked about playing "Madden," a football video game, six hours a day, about how they did not spend much time on homework.



    ....Some professors and administrators have begun to notice a similar withdrawal among men who arrive on campus with deficient social skills. Each year, there are several who mostly stay in their rooms, talk to no one, play video games into the wee hours and miss classes until they withdraw or flunk out.



    This spring, Rebecca Hammell, dean of freshman and sophomores, counseled one such young man to withdraw. "He was in academic trouble from the start," Ms. Hammell said. "He was playing games till 3, 4, 5 in the morning, in an almost compulsive way."



  • Reply 206 of 565
    onlookeronlooker Posts: 5,252member
    Quote:

    Originally posted by Chucker

    ARB is the OpenGL Architecture Review Board. Thus, functions prefixed ARB_ are "official" OpenGL extensions, whereas, for instance, functions prefixed NV_ were created by nVidia. (ATI_fragment_shader and NV_fragment_program interact with ARB_fragment_program, for example.)



    "Fragment program" is OpenGL's term for what is otherwise known as a pixel shader.




    So if your code makes a call to ARB_fragment_program in xxx.lib, your app will automatically use Core Image? To do what? Move processing off to the GPU?



    Oh, for chrissake. You're quoting Wikipedia. I thought you were serious.



    Apple's developer documentation is where you want to get answers about this stuff.



    Here:

    Quote:

    Quoted from Apple's developer documentation

    Up until now OpenGL, the industry standard for high performance 2D and 3D graphics, has been the primary gateway to the graphics processing unit (GPU). If you wanted to use the GPU for image processing, you needed to know OpenGL Shading Language. Core Image changes all that. With Core Image, you don't need to know the details of OpenGL to harness the power of the GPU for image processing. Core Image handles OpenGL buffers and state management for you automatically. If for some reason a GPU is not available, Core Image uses a CPU fallback to ensure that your application runs. Core Image operations are opaque to you; your software just works.



    Core Image hides the details of low-level graphics processing by providing an easy-to-use application programming interface (API) implemented in the Objective-C language. The Core Image API is part of the Quartz Core framework (QuartzCore.framework). You can use Core Image from the Cocoa and Carbon frameworks by linking to this framework.
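
    To show what that buys you in practice, here is a minimal sketch (the file path, filter choice and radius are made up for the example) of applying a built-in filter; note there is no OpenGL or shader code anywhere in it:

    // Minimal sketch of using Core Image: the app only talks to CIFilter/CIImage,
    // and Core Image decides whether the work runs on the GPU or the CPU fallback.
    // The input path and radius below are placeholders for the example.
    #import <Foundation/Foundation.h>
    #import <QuartzCore/QuartzCore.h>

    int main(void)
    {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

        CIImage *source = [CIImage imageWithContentsOfURL:
                              [NSURL fileURLWithPath:@"/tmp/input.jpg"]];

        // Ask for a built-in filter by name and set its parameters via key-value coding.
        CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
        [blur setDefaults];
        [blur setValue:source forKey:@"inputImage"];
        [blur setValue:[NSNumber numberWithFloat:4.0f] forKey:@"inputRadius"];

        CIImage *result = [blur valueForKey:@"outputImage"];
        CGRect extent = [result extent];
        NSLog(@"blurred image extent: %.0f x %.0f", extent.size.width, extent.size.height);

        [pool release];
        return 0;
    }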




  • Reply 207 of 565
    aegisdesignaegisdesign Posts: 2,914member
    Quote:

    Originally posted by wwwork

    more on GPU(er)s...



    At Colleges, Women Are Leaving Men in the Dust




    Heh!



    Then again, I can bore the knickers off the opposite sex with 'No, that's not black, it's black WITH cyan'.
  • Reply 208 of 565
    kim kap solkim kap sol Posts: 2,987member
    Quote:

    Originally posted by wwwork

    more on GPU(er)s...



    At Colleges, Women Are Leaving Men in the Dust




    "You play video games? Oh wait, wait...do you use a Mac or a PC?"



    "Mac."



    "Oh, whew, you're silly, you can't play games on a Mac...how's Saturday night sound to you?"



    "Uh...good."



    Mac:1 PC:0

  • Reply 209 of 565
    chuckerchucker Posts: 5,089member
    Quote:

    Originally posted by onlooker

    So if your code makes a call to ARB_fragment_program in xxx.lib, your app will automatically use Core Image? To do what? Move processing off to the GPU?



    No. I never claimed so.
  • Reply 210 of 565
    karnackarnac Posts: 7member
    Quote:

    Originally posted by a_greer

    I bet that they will just call the damn thing "sex". It may attract the Alienware buyers; they have never had "sex"!



    That is soooo funny. Truly funny.
  • Reply 211 of 565
    onlookeronlooker Posts: 5,252member
    Quote:

    Originally posted by Chucker

    No. I never claimed so.



    I wasn't saying you were. I was asking what you were saying, and as I did I looked at your quoted words, saw a link to an online, user-creatable, user-editable, user-writable, user-definable encyclopedia, and didn't bother taking it seriously after that, so I looked it up myself in an appropriately credible place of reference. No worries.
  • Reply 212 of 565
    placeboplacebo Posts: 5,767member
    Quote:

    Originally posted by onlooker

    So if your code makes a call to ARB_fragment_program in xxx.lib, your app will automatically use Core Image? To do what? Move processing off to the GPU?





    It can't merely be stated as a call to the GPU; your filters, effects, and whatever graphics work you want accelerated have to be written as pixel shaders.



    (Yes, there have been a few hacks to get the GPU to work as an extra floating-point processor for non-video tasks, but that isn't exactly orthodox or widely implemented.)
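
    To picture what "written as a pixel shader" means in Core Image terms, here is a rough sketch (the kernel name and maths are invented for the example): a filter's per-pixel work goes into a small kernel routine, which Core Image compiles down to a fragment program for the GPU, or to CPU code for the fallback path.

    // Illustration only: a tiny Core Image kernel and the code that loads it.
    #import <QuartzCore/QuartzCore.h>

    static NSString * const kBrightenKernelSource =
        @"kernel vec4 brighten(sampler src, float amount)\n"
         "{\n"
         "    vec4 pix = sample(src, samplerCoord(src));\n"   /* read the source pixel     */
         "    pix.rgb = pix.rgb + vec3(amount);\n"            /* lift every colour channel */
         "    return pix;\n"
         "}\n";

    CIKernel *BrightenKernel(void)
    {
        // kernelsWithString: can contain several kernels; this string holds just one.
        NSArray *kernels = [CIKernel kernelsWithString:kBrightenKernelSource];
        return [kernels objectAtIndex:0];
    }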
  • Reply 213 of 565
    sunilramansunilraman Posts: 8,133member
    I think chucker has more to say, but he's keeping it simple/quiet for now. Chucker, why didn't you just say earlier that ARB_fragment_program was pixel shaders?



    So we know we're down to pixel shaders being used for general-purpose computing tasks.



    Here is an interesting wikipedia article:

    http://en.wikipedia.org/wiki/GPGPU



    Here's an interesting passage about how accuracy depends on the GPU.



    "NVIDIA GPUs currently support 32 bit values through the entire pipeline. ATI cards currently support 24 bit values throughout the pipeline, although their new X1000 series supports 32 bits. The implementations of floating point on GPUs are generally not IEEE compliant, and generally do not match across vendors. This has implications for correctness which are considered important to some scientific applications.



    "General-Purpose Computing on Graphics Processing Units (GPGPU, also referred to as GPGP and to a lesser extent GP^2) is a recent trend in computer science that uses the Graphics Processing Unit to perform the computations rather than the CPU. The addition of programmable stages and higher precision arithmetic to the GPU rendering pipeline have allowed software developers to use the GPU for non graphics related applications"



    "The following are some of the non-graphics areas where GPUs have been used for general purpose computing:



    * Physically based simulation - Game of life, Cloth simulation, Incompressible fluid flow by solution of Navier-Stokes equations

    * Segmentation - 2D and 3D

    * Level-set methods

    * CT reconstruction

    * Fast Fourier Transform

    * Tone mapping

    * Sound Effects Processing

    * Image/Video Processing

    * Raytracing

    * Global Illumination - Photon Mapping, Radiosity, Subsurface Scattering

    * Geometric Computing - Constructive Solid Geometry (CSG), Distance Fields, Collision Detection, Transparency Computation, Shadow Generation

    * Neural Networks

    * Database operations

    * Lattice Boltzmann Method

    * Cryptography

    "
  • Reply 214 of 565
    sunilramansunilraman Posts: 8,133member
    Quote:

    Originally posted by Placebo

    It can't merely be stated as a call to the GPU; your filters, effects, and whatever graphics work you want accelerated have to be written as pixel shaders...(Yes, there have been a few hacks to get the GPU to work as an extra floating-point processor for non-video tasks, but that isn't exactly orthodox or widely implemented.)






    Excellent. We're getting down to things now. At the lower level, Core Image does its GPU-driven 2D manipulations as pixel shaders.



    The GPGPU wiki article I posted above doesn't specifically discuss pixel shaders; maybe it's talking more about using the GPU as an extra floating-point processor rather than as a pixel shader.
  • Reply 215 of 565
    chuckerchucker Posts: 5,089member
    Quote:

    Originally posted by sunilraman

    I think chucker has more to say, but he's keeping it simple/quiet for now. Chucker, why didn't you just say earlier that ARB_fragment_program was pixel shaders?



    I did.



    Quote:

    Originally posted by Chucker

    "Fragment program" is OpenGL's term for what is otherwise known as a pixel shader.



  • Reply 216 of 565
    sunilramansunilraman Posts: 8,133member
    Yeah, I meant as in why didn't you say that like, 1 or 2 pages back. Us GPU fanbois recognise "pixel shader" more than "ARB_Fragment_Program".
  • Reply 217 of 565
    sunilramansunilraman Posts: 8,133member
    Heh. For the record, yeah when I was in college and playing a lot of computer games (a few hours each day) sex was a mystery. However once I left college there have been more interesting encounters with the opposite (and a bit of the same) sex along with some drugs, rock and roll, and trance musik along the way...... The halcyon days of 2000-2004. 2005-this year more chilled/ quiet. I think I've earned back my "right" to playing games, mostly Half Life 2!! now with HDR!! OMFG!!!111oneone!!!11
  • Reply 218 of 565
    chuckerchucker Posts: 5,089member
    Quote:

    Originally posted by sunilraman

    Yeah, I meant as in why didn't you say that like, 1 or 2 pages back. Us GPU fanbois recognise "pixel shader" more than "ARB_Fragment_Program".



    I didn't deliberately want to mislead people, if that's what you're thinking.



    For Core Image to be GPU-accelerated, it relies on GPUs supporting pixel shaders, and making those pixel shaders accessible through OpenGL, using the ARB_Fragment_Program extension. That's all there is to it, really.
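
    One loosely related aside (the helper below is just a sketch, and the context plumbing is left out): Core Image also lets an app force the software path when it creates a CIContext, which is handy if you want to compare the GPU path against the CPU fallback.

    // Sketch: build a CIContext that deliberately uses the CPU fallback renderer.
    #import <QuartzCore/QuartzCore.h>

    CIContext *SoftwareOnlyContext(CGContextRef destination)
    {
        NSDictionary *options =
            [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
                                        forKey:kCIContextUseSoftwareRenderer];
        return [CIContext contextWithCGContext:destination options:options];
    }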
  • Reply 219 of 565
    sunilramansunilraman Posts: 8,133member
    Back to the topic (well, the off-topic topic) of pixel shaders: I'd like to see implementation examples of Core Image filters across different GPUs.



    I think that would be informative and help settle the matter. Take Core Image's Image Units and see "how accurate" the image filters can be*, using Photoshop's CPU-based image filters as the benchmark. We'd take the output of those Image Units and compare it not only against Photoshop but also across the different GPUs that ship with, or are offered for, the Mac.



    *I don't know much about coding for Core Image and Image Units, but in the case above we'd push for "as accurate as possible" filtering and output, e.g. a Gaussian blur on imagery at a defined resolution (say 1-6 megapixels).
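
    Something along these lines could be the harness for that comparison. It's only a sketch (image size, radius and memory handling are simplified): render the same CIGaussianBlur into a plain 8-bit RGBA bitmap so the bytes can be diffed across GPUs and against a Photoshop reference.

    // Sketch: run a Gaussian blur through Core Image and read the result back
    // as raw RGBA bytes for comparison. The caller is responsible for free()ing
    // the returned buffer.
    #import <ApplicationServices/ApplicationServices.h>
    #import <QuartzCore/QuartzCore.h>
    #import <stdlib.h>

    unsigned char *RenderBlurToBitmap(CIImage *source, float radius, size_t width, size_t height)
    {
        CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
        [blur setDefaults];
        [blur setValue:source forKey:@"inputImage"];
        [blur setValue:[NSNumber numberWithFloat:radius] forKey:@"inputRadius"];
        CIImage *output = [blur valueForKey:@"outputImage"];

        // Draw into an 8-bit RGBA bitmap context so the result is just bytes we can diff.
        unsigned char *pixels = calloc(width * height * 4, 1);
        CGColorSpaceRef space = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
        CGContextRef bitmap = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                                    space, kCGImageAlphaPremultipliedLast);
        CIContext *context = [CIContext contextWithCGContext:bitmap options:nil];

        [context drawImage:output atPoint:CGPointZero fromRect:CGRectMake(0, 0, width, height)];

        CGContextRelease(bitmap);
        CGColorSpaceRelease(space);
        return pixels;
    }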
  • Reply 220 of 565
    sunilramansunilraman Posts: 8,133member
    Quote:

    Originally posted by Chucker

    I didn't deliberately want to mislead people, if that's what you're thinking. For Core Image to be GPU-accelerated, it relies on GPUs supporting pixel shaders, and making those pixel shaders accessible through OpenGL, using the ARB_Fragment_Program extension. That's all there is to it, really.






    Yes, no worries. I think it's great that in that one paragraph above we've got a description of Core Image in a nice, compact form. It's now a good starting point for the "accuracy of output across GPUs and compared to Photoshop" discussion.