New OpenGL with LLVM: GPU Implications?
Quote:
Leopard's OpenGL stack is updated to version 2.1 of the specification, and has been paired with LLVM to allow Apple to rapidly support new video hardware in addition to being optimized for "a dramatic increase in OpenGL performance by offloading CPU-based processing onto another thread which can then run on a separate CPU core feeding the GPU."
Does this mean we'll at last see Apple get to new GPUs and a greater selection of them much, much sooner?
Discuss.
Lemon Bon Bon.
Comments
Quote:
Does this mean we'll at last see Apple get to new GPUs and a greater selection of them much, much sooner?
From most of the references I've seen, it can be used as something similar to 3DAnalyzer on Windows, only much better, when the GPU does not support features like hardware T&L. This should mean better software compatibility on chips as low-end as the integrated ones, which may mean OpenGL-accelerated software has no interface glitches. It seems to use the extra CPU core to stand in for missing hardware features in software.
I haven't read anything about better or faster driver development, but I guess if they can generate code for unsupported GPU features at runtime, they might not need it. Since we'll all be on quad-core machines in a year or so, they might as well put all those CPUs to good use.
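To picture what the "offloading onto another thread" part could look like, here's a rough C sketch of the general idea (my own illustration, nothing to do with Apple's actual implementation): check whether the driver exposes a feature, and if it doesn't, push the work to a helper thread on another core and feed the result back to the GPU.

    /* Illustration only: a hand-rolled fallback like an app might write itself.
       On Leopard the OpenGL framework reportedly does this transparently,
       with LLVM generating the CPU-side code. */
    #include <OpenGL/gl.h>   /* <GL/gl.h> outside Mac OS X */
    #include <pthread.h>
    #include <string.h>

    /* Hypothetical CPU emulation of some per-pixel work. */
    static void *software_shade(void *pixels)
    {
        /* ...do the math on whatever core the scheduler gives this thread... */
        return pixels;
    }

    static int have_extension(const char *name)
    {
        const char *ext = (const char *)glGetString(GL_EXTENSIONS);
        return ext && strstr(ext, name) != NULL;
    }

    void draw(void *pixels)
    {
        if (have_extension("GL_ARB_fragment_program")) {
            /* hardware path: let the GPU run the shader */
        } else {
            /* software path: emulate on a second core, then feed the GPU */
            pthread_t worker;
            pthread_create(&worker, NULL, software_shade, pixels);
            pthread_join(&worker, NULL);
            /* ...upload the result with glTexSubImage2D or similar... */
        }
    }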
http://en.wikipedia.org/wiki/LLVM
"Because of this, it is used in the OpenGL pipeline of Mac OS X 10.5 ("Leopard") to provide support for missing hardware features."
http://gcc.gnu.org/ml/gcc/2005-11/msg00888.html
There seemed to be a lot of mention about shader compilation.
"One observation can be made though: there will always be some programs
that you can't map onto the hardware. For example, if you don't have
branches, you can't do loops that execute for a variable number of
iterations.
As such, I'd structure the compiler as a typical code generator with an
early predication pass that flattens branches. If you get to the end of
the codegen and have some dynamic branches left, you detect the error
condition and reject the shader from the hardware path (so you have to
emulate it in software)."
http://lists.cs.uiuc.edu/pipermail/l...ay/009134.html
The best way to think of LLVM is right there in the name: it's a virtual machine, but one that models something quite low-level, more like a CPU than a traditional virtual machine that models an entire PC. ... Why model something so primitive? Who wants to write code that targets a virtual CPU? Well, compilers, for one. The idea is that you produce code in LLVM's platform-neutral intermediary representation (IR) and then LLVM will optimize it and then convert it to native code for the real CPU of your choice.
Think of it as a big funnel: every sort of code you can imagine goes in the top, all ending up as LLVM IR. Then LLVM optimizes the hell out of it, using every trick in the book. Finally, LLVM produces native code from its IR. The concentration of development effort is obvious: a single optimizer that deals with a single format (LLVM IR) and a single native code generator for each target CPU...
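To make the "funnel" concrete: here's a trivial C function and, in the comment, roughly the kind of LLVM IR a front end would hand to that single optimizer (paraphrased from memory, not actual compiler output):

    /* Source code goes in the top of the funnel... */
    int add_mul(int a, int b)
    {
        return (a + b) * 2;
    }

    /* ...and comes out as platform-neutral LLVM IR, something like:

           define i32 @add_mul(i32 %a, i32 %b) {
             %sum = add i32 %a, %b
             %res = mul i32 %sum, 2
             ret i32 %res
           }

       The optimizer only ever sees this one format (it might turn the
       multiply into a shift, for instance), and a separate backend then
       lowers the optimized IR to native PowerPC, x86, or ARM code. */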
Apple, however, is already on board. In Leopard, LLVM is used in what might strike you as an odd place: OpenGL.
When a video card does not support a particular feature in hardware (e.g., a particular pixel or vertex shader operation), a software fallback must be provided. Modern programmable GPUs provide a particular challenge. OpenGL applications no longer just call fixed functions, they can also pass entire miniature programs to the GPU for execution.
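For anyone who hasn't touched the programmable pipeline, this is what "passing entire miniature programs to the GPU" looks like through the OpenGL 2.x API. The key point is that the application only hands over source text; whether the compiled shader really runs on the GPU or gets emulated on the CPU (which is where LLVM comes in on Leopard) is the driver's business. A minimal sketch:

    #include <OpenGL/gl.h>   /* OpenGL 2.0+ entry points */

    /* A tiny fragment shader: one of those "miniature programs". */
    static const char *frag_src =
        "void main() {\n"
        "    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);\n"
        "}\n";

    GLuint build_fragment_shader(void)
    {
        GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
        GLint ok = GL_FALSE;

        glShaderSource(shader, 1, &frag_src, NULL);
        glCompileShader(shader);
        glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);

        /* If the hardware can't run this natively, the app never finds out
           here; the framework is free to fall back to a CPU path. */
        return ok ? shader : 0;
    }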
LLVM optimization won't just help OpenGL; it'll help any piece of hardware Apple has to keep up with that varies in features and functionality. Also think about how OS X as a platform is changing. There are now three strains of the OS running on different hardware: Macintosh, iPhone, and Apple TV. Apple is going to need to keep them all in sync on the Leopard foundations while still offering optimizations unique to each platform's hardware.
Don't be misled by its humble use in Leopard; Apple has grand plans for LLVM. How grand? How about swapping out the guts of the gcc compiler Mac OS X uses now and replacing them with the LLVM equivalents? That project is well underway. Not ambitious enough? How about ditching gcc entirely, replacing it with a completely new LLVM-based (but gcc-compatible) compiler system? That project is called Clang, and it's already yielded some impressive performance results. In particular, its ability to do fast incremental compilation and provide a much richer collection of metadata is a huge boon to GUI IDEs like Xcode.
It'll be interesting to see how fast Clang evolves. I could see Apple, by 10.6, starting to integrate Clang/LLVM in a way that bypasses GCC entirely for smaller apps or apps with a specific focus. By 10.7 you may have a situation where developers face the same kind of Carbon/Cocoa decision about which toolchain they want to use.
Clang/LLVM has many benefits, including smaller and faster code and better diagnostics, so I'd imagine that Xcode 4 (or whatever Apple calls it) would suddenly gain many new features and appear much smarter when developing with Clang/LLVM.
Time to go learn more about LLVM.