Imagination showcases 'ray tracing' graphics tech that could come in Apple's future iPhones, iPads
Semiconductor firm Imagination Technologies, whose PowerVR chipsets are at the heart of Apple's A-series processors, on Tuesday previewed the impressive capabilities of its new Wizard series of ray tracing GPUs that may one day bring hyper-realistic graphics to iOS games.
In a demonstration video, the ray tracing technology was shown working alongside more traditional rasterized graphics to power high-resolution shadows, realistic lighting reflections and refractions, and more believable translucency for materials like plastic and glass. Imagination said that real-world implementations will bring even larger performance improvements, since the GPUs can be integrated directly into system-on-a-chip designs.
Ray tracing is a method for creating a computer-generated image in which the paths of individual rays of light are calculated based on the materials they encounter in a scene. The technique has long been used in computer graphics, but traditionally requires significant processing power and only recently began being used for realtime applications such as games.
Because each ray of light is calculated separately, images generated with ray tracing can be extremely realistic. The effects are especially noticeable when a scene involves complex reflections, such as light bouncing off a highly polished translucent sphere.
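The core operation described above — following a ray through the scene and testing what it hits — can be sketched in a few lines. This is an illustrative, minimal ray-sphere intersection test (the scene values are made up for the example), not anything specific to Imagination's hardware:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest sphere hit, or None.

    origin/direction/center are (x, y, z) tuples; direction is unit-length.
    Solves |origin + t*direction - center|^2 = radius^2 for t.
    """
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # the quadratic's 'a' term is 1 for a unit direction
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# A ray fired down the z-axis toward a unit sphere centred 5 units away
# hits the near surface at distance 4:
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

A full tracer repeats this test against every object for every ray (and every bounce), which is why the technique is so computationally expensive.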
Originally introduced at this year's Game Developers Conference, Imagination's new Wizard GPUs are designed to lower the power and memory requirements for realtime ray tracing to make it suitable for mobile environments. The GR6500 -- the first in the Wizard series -- boasts 4 unified shading clusters and 128 ALU cores that can render up to 300 million rays per second.
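For a rough sense of what 300 million rays per second means in practice, here is a back-of-envelope conversion into rays per pixel at a few illustrative resolutions and frame rates (the display targets are assumptions for the example, not Imagination's figures):

```python
# Back-of-envelope: what 300 million rays/second buys per pixel.
rays_per_second = 300e6  # the GR6500's quoted peak

for width, height, fps in [(1136, 640, 30), (1920, 1080, 30), (1920, 1080, 60)]:
    pixels_per_second = width * height * fps
    rays_per_pixel = rays_per_second / pixels_per_second
    print(f"{width}x{height} @ {fps} fps: {rays_per_pixel:.1f} rays per pixel")
```

At 1080p and 30 fps that works out to roughly five rays per pixel — enough for selective effects like shadows and reflections alongside rasterization, which is consistent with the hybrid approach shown in the demo.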
Apple owns a minority stake in Imagination Technologies, and PowerVR chipsets have been in every iOS device since the iPhone 3GS. In February, the two companies announced an extension of their licensing pact that spans multiple years and "gives Apple access to Imagination's wide range of current and future PowerVR graphics and video IP cores."
Comments
Must be why people move on to PhDs.
Don’t hit!
Brings back memories of running my computer for days to render a ray-traced image...
Will be interesting to see if ray-traced images are really any better than what's being done with modern GPU shaders (per fragment lighting calculations and effects).
EDIT: I see, shadows can be made a lot more realistic and detailed. Most of the other things they show (reflections and whatnot) can already be implemented in other ways.
That would be a different area of use but both have their benefits. In 2D, rasterisation is like Photoshopping where you have a fixed resolution and if you zoom in, you get blocky output but it means you can do a lot more effects as they don't require calculations to draw them. Vectors done in the likes of Illustrator can be zoomed infinitely and the computer redraws them but you can't easily construct every image as a vector. Often in Photoshop, it will ask explicitly to rasterize a vector as it can't do a certain action on a vector object.
In 3D, rasterisation is what early rendering software had to do because computers weren't fast enough to calculate raytracing - Pixar only switched to raytracing with the movie Cars. Rasterisation also has the benefit that you have more creative freedom because you can tell it to draw anything. That's difficult to make realistic, though, because it's like trying to paint a photograph. Some people want the output to be as close as possible to how real-world lighting behaves, but not everyone wants that style. The most realistic form of raytracing is path tracing, which has been used to make photoreal graphics for film as it simulates how physical light behaves. There's a demo of this kind of algorithm running in real-time on dual Titan GPUs here:
[VIDEO]
Perhaps if they put the algorithm into hardware, a GPU or co-processor manufacturer could get it to real-time as long as it has a fast connection to RAM. That sort of thing would allow game graphics to be largely indistinguishable from film because it's using a physical simulation of light. You can still see the sampling grain with dual Titans. I'd say it takes about 2-3 seconds to reach good enough quality, and it needs to do that in real-time, so it would need to be about 90x faster than dual GTX Titans - and that doesn't allow for extra objects. That's actually not bad though; I didn't think we'd see this kind of raytracing at all before compute power increases slowed down. If they can make dedicated hardware, or just get the algorithms optimized and push GPUs up another 90x (a 2x increase in each of roughly seven GPU generations), maybe this will happen. Microsoft seems to be looking at some kind of raytracing for their console, most likely hybrid too:
http://www.gamespot.com/articles/xbox-one-could-get-photorealistic-rendering-system-for-amazing-visuals/1100-6418060/
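The generational arithmetic in the post above checks out; a quick sketch of it (the 90x target and 2x-per-generation rate are the commenter's estimates, not measured figures):

```python
import math

required_speedup = 90.0   # commenter's estimate vs. dual GTX Titans
per_generation = 2.0      # assumed doubling of GPU throughput per generation

# How many doublings are needed to reach the target?
generations = math.log(required_speedup, per_generation)
print(f"{generations:.1f} generations of 2x gains for a {required_speedup:.0f}x speedup")

# Seven doublings give 2**7 = 128x, comfortably past the 90x target.
print(2 ** 7)
```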
The hybrid approach could well suffice for games rather than full-blown path tracing. The racing games that don't have much dynamic lights manage it pretty well:
[VIDEO]
You dig up some neat stuff sometimes, Marvin.
Brings back memories of running my computer for days to render a ray-traced image...
Will be interesting to see if ray-traced images are really any better than what's being done with modern GPU shaders (per fragment lighting calculations and effects).
EDIT: I see, shadows can be made a lot more realistic and detailed. Most of the other things they show (reflections and whatnot) can already be implemented in other ways.
All rendering engines now have a series of pipeline stages, including ray tracing, to get those realistic looks.
Blender's open source Cycles renderer is just one example; RenderMan is another.
Here's hoping Apple can leverage some early-mover advantage, not unlike its ARM 64-bit instruction set implementation, with the iPhone 6. Thinking about it, what with a bigger screen, more resolution, a larger form factor, a faster processor and, not forgetting, a larger battery, it could be an iPhone 6Maxi.
All rendering engines now have a series of pipeline stages, including ray tracing, to get those realistic looks.
Blender's open source Cycles renderer is just one example; RenderMan is another.
Right. RenderMan had a programmable rendering pipeline before it could be done effectively in hardware (i.e. on modern GPUs). But it was designed for generating pre-rendered scenes (for movies) rather than rendering in real-time (for games).
The big deal here is the latter (real-time ray tracing). However, my point is that, with a modern GPU and shader techniques, it's now possible to do what ray tracing does in different ways (e.g. applying complex lighting calculations to every pixel on the screen, multiple rendering passes to create reflections, etc). So it'll be interesting to see whether being able to do real-time ray tracing gains us anything over current techniques. But anyways, it's good to have options -- and the advanced shadows do look great.
EDIT: Marvin's post above basically answers my question -- hybrid approach
That would be a different area of use but both have their benefits. In 2D, rasterisation is like Photoshopping where you have a fixed resolution and if you zoom in, you get blocky output but it means you can do a lot more effects as they don't require calculations to draw them. Vectors done in the likes of Illustrator can be zoomed infinitely and the computer redraws them but you can't easily construct every image as a vector. Often in Photoshop, it will ask explicitly to rasterize a vector as it can't do a certain action on a vector object.
Vector vs raster becomes somewhat more complex when it comes to rendering 3D scenes. All of your scene geometry (objects tessellated into triangles) are vector. The textures/images you apply to them are raster. The lighting effects, shadows, and reflections can be vector or raster depending on how you do them: maps (raster) vs real-time calculations (vector).
Ray tracing effectively makes everything vector aside from the textures. All of the scene effects (reflections, shadows, etc) become calculations instead of maps or similar, resolution-limited data caches. However, many of these effects can be achieved using different algorithms.
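The map-versus-calculation distinction in the post above can be made concrete. This is a deliberately toy 1D sketch (the scene and map size are made up): a baked shadow map can only answer at texel granularity, while a per-query calculation is exact at any point.

```python
# Toy scene: a single occluder edge at x = 0.47; everything with x < 0.47 is lit
# by the hypothetical light (value 1.0), everything beyond it is shadowed (0.0).
EDGE = 0.47
MAP_SIZE = 8  # deliberately tiny baked map, to show the quantisation

# Raster path: bake the answer for each texel centre into a fixed-resolution map.
shadow_map = [1.0 if (i + 0.5) / MAP_SIZE < EDGE else 0.0 for i in range(MAP_SIZE)]

def baked_shadow(x):
    """Look the answer up in the fixed-resolution map (resolution-limited)."""
    return shadow_map[min(int(x * MAP_SIZE), MAP_SIZE - 1)]

def traced_shadow(x):
    """Compute the answer per query, exact at any resolution."""
    return 1.0 if x < EDGE else 0.0

# Near the edge the two disagree: x = 0.48 is past the occluder, but it falls
# in a texel whose centre (0.4375) is lit, so the baked map answers "lit".
print(baked_shadow(0.48), traced_shadow(0.48))  # 1.0 0.0
```

Away from the edge both paths agree; the difference only shows up where the map's resolution can't resolve the detail — which is exactly the trade the post describes.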
My guess is that it is a little early for iPhone 6. I can see A8 being further optimized for power with modest performance increases.
As a side note, I have to wonder how much Apple IP is in these Imagination designs. After all, Apple bought Raycer years ago.
By the time the A8 is out, competitors still won't have anything as good as a year+ old A7.
The M7 is actually a discrete chip. This allows it to be used in a wearable, which won't have that comparatively large and power-hungry Apple A-series chip.
/The world will understand why Apple moved to 64-bit with the M7 built in when most of Apple's users own devices with such hardware and Apple announces "must have" apps that require such capability... On that date, all iDevice competitors will be two or three years behind Apple in where it will take the industry...
This is something so many people keep missing even though it's staring them right in the face (although DED did write on it briefly before). Apple has made it easy to port 64bit Mac OS X code over to iOS. Algoriddim is one example of a developer that took code from their Mac software and ported it very quickly and easily over to their iOS App, bringing some desktop features to iOS.
This is where Apple has a huge advantage. You won't ever get a full version of Photoshop on your iPhone/iPad, but you could get a "lite" version that has a limited number of Photoshop features, but where those features are the exact same level of quality/capability as their desktop versions.
Microsoft also has this advantage as they're gearing up to make it easy for developers to target desktop and mobile.
Android lacks this advantage as Google has no desktop OS with a library of high-end applications they can borrow features from. And no, Linux doesn't count as Android is stripped of so much of Linux (and the remainder is modified) that it's miles away from desktop Linux. Besides, Linux doesn't have any software to compete with Mac OS or Windows anyway, so even if you could port code over you're stuck with an OS that geeks have been trying to get to replace Windows forever now, with no success at all.
And I was recently told that anything to do with a raster is a poor design compared to vectors.
Technically raytracing does involve vectors. The facing of vertices and polygon faces is determined by a normal vector computed from their edge connections. Dot products of those vectors (which yield scalars) are frequently used to compute the direction in which a cast ray will bounce. The kinds of vectors you're referring to aren't really appropriate here, though.
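The bounce computation mentioned above can be sketched in a few lines: reflecting an incoming direction d about a unit surface normal n uses the standard identity r = d - 2(d·n)n, where the dot product d·n is a scalar. A minimal illustration:

```python
def reflect(d, n):
    """Reflect incoming direction d about unit surface normal n: r = d - 2(d.n)n."""
    dot = sum(a * b for a, b in zip(d, n))  # dot product: a scalar, not a vector
    return tuple(a - 2.0 * dot * b for a, b in zip(d, n))

# A ray heading down-and-right hitting a floor with an upward normal
# bounces up-and-right:
print(reflect((1, -1, 0), (0, 1, 0)))  # (1.0, 1.0, 0.0)
```

A ray tracer evaluates this (or a refraction variant) at every surface hit to decide where the ray travels next.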
Android lacks this advantage as Google has no desktop OS with a library of high-end applications they can borrow features from. And no, Linux doesn't count as Android is stripped of so much of Linux (and the remainder is modified) that it's miles away from desktop Linux. Besides, Linux doesn't have any software to compete with Mac OS or Windows anyway, so even if you could port code over you're stuck with an OS that geeks have been trying to get to replace Windows forever now, with no success at all.
You make Batman sad.
I didn't mean to suggest vectoring wasn't involved with ray tracing. I was making another point that I clearly didn't properly express.
I didn't mean to suggest vectoring wasn't involved with ray tracing. I was making another point that I clearly didn't properly express.
Was it sarcasm regarding comments on vector ui elements? I would have geeked about raytracing either way. I kind of wonder if this is more memory efficient in spite of being more computationally expensive. They don't have to load baked light or shadow maps this way.