
Imagination showcases 'ray tracing' graphics tech that could come in Apple's future iPhones, iPads

post #1 of 28
Thread Starter 
Semiconductor firm Imagination Technologies, whose PowerVR chipsets are at the heart of Apple's A-series processors, on Tuesday previewed the impressive capabilities of its new Wizard series of ray tracing GPUs that may one day bring hyper-realistic graphics to iOS games.



In a demonstration video, the ray tracing technology was shown working alongside more traditional rasterized graphics to power high-resolution shadows, realistic lighting reflections and refractions, and more believable translucency for materials like plastic and glass. Imagination said that real-world implementations will bring even larger performance improvements, since the GPUs can be integrated directly into system-on-a-chip designs.

Ray tracing is a method for creating a computer-generated image in which the paths of individual rays of light are calculated based on the materials they encounter in a scene. The technique has long been used in computer graphics, but it traditionally requires significant processing power and has only recently begun to appear in real-time applications such as games.




Because each ray of light is calculated separately, images generated using ray tracing can be extremely realistic. Its effects are especially noticeable when a scene involves complex reflections, such as light bouncing off a highly polished, translucent sphere.
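
(For readers curious what "calculating each ray separately" looks like in practice, here is a minimal, illustrative sketch of the two core operations: a ray-sphere intersection test and a single Lambert shading step. This is not Imagination's implementation; all names and numbers are invented for the example.)

import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def intersect_sphere(origin, direction, center, radius):
    # Nearest positive distance t where |origin + t*direction - center| == radius,
    # assuming `direction` is unit length (so the quadratic's 'a' term is 1).
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * dot(direction, oc)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                       # the ray misses this sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def shade(origin, direction, sphere, light_dir):
    # One ray, one object: a single Lambert term, no bounces, shadows or reflections.
    t = intersect_sphere(origin, direction, sphere["center"], sphere["radius"])
    if t is None:
        return 0.0                        # background
    hit = [o + t * d for o, d in zip(origin, direction)]
    normal = [(h - c) / sphere["radius"] for h, c in zip(hit, sphere["center"])]
    return max(0.0, dot(normal, light_dir))

# One ray fired straight down the z axis at a unit sphere lit from the camera's side.
sphere = {"center": [0.0, 0.0, 5.0], "radius": 1.0}
print(shade([0.0, 0.0, 0.0], [0.0, 0.0, 1.0], sphere, [0.0, 0.0, -1.0]))   # 1.0

A full ray tracer repeats this for every pixel, every light and every bounce, which is where the processing cost comes from.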

Originally introduced at this year's Game Developers Conference, Imagination's new Wizard GPUs are designed to lower the power and memory requirements for real-time ray tracing to make it suitable for mobile environments. The GR6500 -- the first in the Wizard series -- boasts four unified shading clusters and 128 ALU cores that can render up to 300 million rays per second.
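
(For rough context on that 300 million rays per second figure, a back-of-the-envelope estimate, assuming the whole budget were spent on a 1080p image at 30 frames per second, which a real workload would not do:)

rays_per_second = 300e6
pixels = 1920 * 1080                      # one 1080p frame
fps = 30
print(rays_per_second / (pixels * fps))   # roughly 4.8 rays per pixel per frame

That limited per-pixel budget helps explain why the demo pairs ray tracing with traditional rasterized graphics rather than tracing everything.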




Apple owns a minority stake in Imagination Technologies, and PowerVR chipsets have been in every iOS device since the iPhone 3GS. In February, the two companies announced an extension of their licensing pact that spans multiple years and "gives Apple access to Imagination's wide range of current and future PowerVR graphics and video IP cores."
post #2 of 28
And I was recently told that anything to do with a raster is a poor design compared to vectors.

"The real haunted empire?  It's the New York Times." ~SockRolid

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

post #3 of 28
Originally Posted by SolipsismX View Post
And I was recently told that anything to do with a raster is a poor design

 

Must be why people move on to PhDs.

Don’t hit!

“The only thing more insecure than Android is its userbase.” – Can’t Remember

post #4 of 28

Brings back memories of running my computer for days to render a ray-traced image...

 

Will be interesting to see if ray-traced images are really any better than what's being done with modern GPU shaders (per fragment lighting calculations and effects).

 

EDIT: I see, shadows can be made a lot more realistic and detailed.  Most of the other things they show (reflections and whatnot) can already be implemented in other ways.


Edited by auxio - 5/6/14 at 11:47am
 
post #5 of 28
Impressive!
post #6 of 28
I don't think there's any doubt that the iPhone 6 will be an impressive graphics processing device.

Proud AAPL stock owner.

 

GOA

post #7 of 28
Quote:
Originally Posted by SolipsismX View Post

I was recently told that anything to do with a raster is a poor design compared to vectors.

That would be a different area of use but both have their benefits. In 2D, rasterisation is like working in Photoshop, where you have a fixed resolution and if you zoom in, you get blocky output, but it means you can do a lot more effects as they don't require calculations to draw them. Vectors done in the likes of Illustrator can be zoomed infinitely and the computer redraws them, but you can't easily construct every image as a vector. Often in Photoshop, it will explicitly ask to rasterize a vector as it can't do a certain action on a vector object.

In 3D, rasterisation is what early rendering software had to do because computers weren't fast enough to calculate raytracing - Pixar switched to using raytracing with the movie Cars. Rasterisation also has the benefit that you have more creative freedom because you can tell it to draw anything. This is difficult to make realistic because it's like trying to paint a photograph. Some people want the output to be as close as possible to how real-world lighting behaves, but not everyone wants that style. The most realistic form of raytracing is path tracing, which has been used to make photoreal graphics for film as it simulates how physical light behaves. There's a demo of this kind of algorithm running in real-time on dual Titan GPUs here:
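
(For anyone wondering what path tracing means algorithmically, here is a heavily simplified, illustrative sketch of the idea: average many randomly bounced rays per pixel, weighting each bounce by the surface's reflectance. The toy scene, names and numbers are invented for the example; production path tracers are far more involved.)

import math, random

def sample_hemisphere(normal):
    # Pick a uniform random direction on the hemisphere around `normal`
    # (rejection sampling inside the unit sphere, then flip if needed).
    while True:
        d = [random.uniform(-1.0, 1.0) for _ in range(3)]
        length_sq = sum(x * x for x in d)
        if 0.0 < length_sq <= 1.0:
            if sum(a * b for a, b in zip(d, normal)) < 0.0:
                d = [-x for x in d]
            length = math.sqrt(length_sq)
            return [x / length for x in d]

def radiance(origin, direction, depth=0):
    # Toy scene: a grey diffuse floor at y == 0 under a uniform white "sky".
    if depth > 3 or direction[1] >= 0.0:
        return 1.0                                   # ray escapes upward and hits the sky
    t = -origin[1] / direction[1]                    # distance to the floor plane
    hit = [o + t * d for o, d in zip(origin, direction)]
    bounce = sample_hemisphere([0.0, 1.0, 0.0])
    cos_theta = bounce[1]                            # cosine between bounce and floor normal
    # Monte Carlo estimate: diffuse BRDF (albedo/pi) * incoming light * cos(theta) / pdf,
    # with pdf = 1/(2*pi) for uniform hemisphere sampling, so the pi terms reduce to 2.
    return 2.0 * 0.5 * cos_theta * radiance(hit, bounce, depth + 1)

# Average many random paths; the visible noise ("sampling grain") shrinks as samples grow.
samples = 1000
estimate = sum(radiance([0.0, 1.0, 0.0], [0.0, -1.0, 0.0]) for _ in range(samples)) / samples
print(estimate)                                      # converges toward 0.5, the floor's albedo

The "sampling grain" mentioned in the next paragraph is exactly this averaging noise, which only disappears with more samples or smarter sampling.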



Perhaps if they put the algorithm into hardware, a GPU or co-processor manufacturer could get it to real-time as long as it has a fast connection to RAM. That sort of thing would allow game graphics to be largely indistinguishable from film because it's using a physical simulation of light. You can still see the sampling grain with dual Titans. I'd say it takes about 2-3 seconds to reach good enough quality, and it needs to do that in real-time, so it would need to be about 90x faster than dual GTX Titans (roughly three seconds of work squeezed into a single 30fps frame), and that doesn't allow for extra objects. That's actually not bad though; I didn't think we'd see this kind of raytracing at all before compute power increases slowed down. If they can make dedicated hardware or just get the algorithms optimized and push GPUs up another 90x (a 2x increase over 7 GPU generations), maybe this will happen. Microsoft seems to be looking at some kind of raytracing for their console, most likely hybrid too:

http://www.gamespot.com/articles/xbox-one-could-get-photorealistic-rendering-system-for-amazing-visuals/1100-6418060/

The hybrid approach could well suffice for games rather than full-blown path tracing. The racing games that don't have many dynamic lights manage it pretty well:

post #8 of 28

You dig up some neat stuff sometimes, Marvin. ;)

Proud AAPL stock owner.

 

GOA

post #9 of 28
Quote:
Originally Posted by auxio View Post
 

Brings back memories of running my computer for days to render a ray-traced image...

 

Will be interesting to see if ray-traced images are really any better than what's being done with modern GPU shaders (per fragment lighting calculations and effects).

 

EDIT: I see, shadows can be made a lot more realistic and detailed.  Most of the other things they show (reflections and whatnot) can already be implemented in other ways.

 

All rendering engines now have a series of pipeline stages, including ray tracing, to get those realistic looks.

 

Blender's open source Cycles renderer is just one example; RenderMan is another.

post #10 of 28
Heads up?

Here's hoping Apple can leverage some early-mover advantage, not unlike its ARM 64-bit instruction set implementation, with the iPhone 6. Thinking about it, what with a bigger screen, more resolution, a larger form factor, a faster processor and, not forgetting, a larger battery, it could be an iPhone 6 Maxi.
post #11 of 28
Quote:
Originally Posted by mdriftmeyer View Post
 

All rendering engines now have a series of pipeline stages, including ray tracing, to get those realistic looks.

 

Blender's open source Cycles renderer is just one example; RenderMan is another.

 

Right.  RenderMan had a programmable rendering pipeline before it could be done effectively in hardware (i.e. modern GPUs).  But it was designed for generating pre-rendered scenes (for movies) rather than rendering in real time (for games).

 

The big deal here is the latter (real-time ray tracing).  However, my point is that, with a modern GPU and shader techniques, it's now possible to do what ray tracing does in different ways (e.g. applying complex lighting calculations to every pixel on the screen, multiple rendering passes to create reflections, etc).  So it'll be interesting to see whether being able to do real-time ray tracing gains us anything over current techniques.  But anyway, it's good to have options -- and the advanced shadows do look great.
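
(As a concrete, illustrative sketch of those "complex lighting calculations at every pixel": a per-fragment Blinn-Phong evaluation looks roughly like the code below. It uses only local surface and light information, which is why reflections and shadows need separate passes or maps. Names and values are made up for the example.)

import math

def per_fragment_lighting(normal, light_dir, view_dir, shininess=32.0):
    # Classic Blinn-Phong, evaluated independently at every fragment/pixel
    # using only local data (no knowledge of the rest of the scene).
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    def normalize(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]
    n, l, v = normalize(normal), normalize(light_dir), normalize(view_dir)
    h = normalize([a + b for a, b in zip(l, v)])          # half vector
    diffuse = max(0.0, dot(n, l))
    specular = max(0.0, dot(n, h)) ** shininess
    return diffuse + specular

print(per_fragment_lighting([0, 0, 1], [0, 0, 1], [0, 0, 1]))   # head-on light: 2.0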

 

EDIT: Marvin's post above basically answers my question -- hybrid approach


Edited by auxio - 5/6/14 at 1:28pm
 
post #12 of 28
Quote:
Originally Posted by Marvin View Post

That would be a different area of use but both have their benefits. In 2D, rasterisation is like working in Photoshop, where you have a fixed resolution and if you zoom in, you get blocky output, but it means you can do a lot more effects as they don't require calculations to draw them. Vectors done in the likes of Illustrator can be zoomed infinitely and the computer redraws them, but you can't easily construct every image as a vector. Often in Photoshop, it will explicitly ask to rasterize a vector as it can't do a certain action on a vector object.

 

Vector vs raster becomes somewhat more complex when it comes to rendering 3D scenes.  All of your scene geometry (objects tessellated into triangles) is vector.  The textures/images you apply to them are raster.  The lighting effects, shadows, and reflections can be vector or raster depending on how you do them: maps (raster) vs real-time calculations (vector).

 

Ray tracing effectively makes everything vector aside from the textures.  All of the scene effects (reflections, shadows, etc) become calculations instead of maps or similar resolution-limited data caches.  However, many of these effects can be achieved using different algorithms.

 
post #13 of 28
Quote:
Originally Posted by Frac View Post

Heads up?

Here's hoping Apple can leverage some early-mover advantage, not unlike its ARM 64-bit instruction set implementation, with the iPhone 6. Thinking about it, what with a bigger screen, more resolution, a larger form factor, a faster processor and, not forgetting, a larger battery, it could be an iPhone 6 Maxi.

My guess is that it is a little early for iPhone 6. I can see A8 being further optimized for power with modest performance increases.

As a side note, I have to wonder how much Apple IP is in these Imagination designs. After all, Apple bought Raycer years ago.
post #14 of 28
AnandTech reports today that ARM has released Geekbench 3 scores for its A57. It's not looking good: at the same clock speed, a single Cyclone core (A7) is still going to be about 25-30% faster. And A57-based SoCs won't ship from Qualcomm until the start of 2015, and nobody knows when Samsung will ship theirs.

By the time the A8 is out, competitors still won't have anything as good as a year-plus-old A7.

Author of The Fuel Injection Bible

post #15 of 28
The world will understand why Apple moved to 64-bit with the M7 built in once most of its users own devices with such hardware and Apple announces "must have" apps that require that capability... On that date, all iDevice competitors will be two or three years behind Apple in where it will take the industry...
Edited by Macky the Macky - 5/6/14 at 6:42pm
"That (the) world is moving so quickly that iOS is already amongst the older mobile operating systems in active development today." — The Verge
post #16 of 28
Quote:
Originally Posted by Macky the Macky View Post

The world will understand why Apple moved to 64-bit with the M7 built in once most of its users own devices with such hardware and Apple announces "must have" apps that require that capability... On that date, all iDevice competitors will be two or three years behind Apple in where it will take the industry...

The M7 is actually a discrete chip. That allows it to be used in a wearable, which won't have a comparatively large and power-hungry Apple A-series chip.

"The real haunted empire?  It's the New York Times." ~SockRolid

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

post #17 of 28
Quote:
Originally Posted by Macky the Macky View Post

The world will understand why Apple moved to 64-bit with the M7 built in once most of its users own devices with such hardware and Apple announces "must have" apps that require that capability... On that date, all iDevice competitors will be two or three years behind Apple in where it will take the industry...

 

This is something so many people keep missing even though it's staring them right in the face (although DED did write about it briefly before). Apple has made it easy to port 64-bit Mac OS X code over to iOS. Algoriddim is one example of a developer that took code from its Mac software and ported it very quickly and easily over to its iOS app, bringing some desktop features to iOS.

 

This is where Apple has a huge advantage. You won't ever get a full version of Photoshop on your iPhone/iPad, but you could get a "lite" version that has a limited number of Photoshop features, but where those features are the exact same level of quality/capability as their desktop versions.

 

Microsoft also has this advantage as they're gearing up to make it easy for developers to target desktop and mobile.

 

Android lacks this advantage as Google has no desktop OS with a library of high-end applications they can borrow features from. And no, Linux doesn't count as Android is stripped of so much of Linux (and the remainder is modified) that it's miles away from desktop Linux. Besides, Linux doesn't have any software to compete with Mac OS or Windows anyway, so even if you could port code over you're stuck with an OS that geeks have been trying to get to replace Windows forever now, with no success at all.

Author of The Fuel Injection Bible

post #18 of 28
Quote:
Originally Posted by SolipsismX View Post

And I was recently told that anything to do with a raster is a poor design compared to vectors.

 

Technically, raytracing does involve vectors. The facing of vertices and polygon faces is determined by a normal vector calculated from their adjacent edges. Dot products of vectors are frequently used to compute the direction in which a cast ray will bounce. The kinds of vectors you're referring to aren't really appropriate here though.
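
(As a small, illustrative example of that dot-product math: the mirror-reflection direction of a ray about a surface normal is r = d - 2(d.n)n. The function below is just a sketch with made-up values.)

def reflect(d, n):
    # Mirror-reflect incoming direction d about the unit surface normal n: r = d - 2(d.n)n
    k = 2.0 * sum(a * b for a, b in zip(d, n))
    return [a - k * b for a, b in zip(d, n)]

print(reflect([1.0, -1.0, 0.0], [0.0, 1.0, 0.0]))   # -> [1.0, 1.0, 0.0]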

post #19 of 28
Quote:
Originally Posted by EricTheHalfBee View Post
 

 

 

 

Android lacks this advantage as Google has no desktop OS with a library of high-end applications they can borrow features from. And no, Linux doesn't count as Android is stripped of so much of Linux (and the remainder is modified) that it's miles away from desktop Linux. Besides, Linux doesn't have any software to compete with Mac OS or Windows anyway, so even if you could port code over you're stuck with an OS that geeks have been trying to get to replace Windows forever now, with no success at all.

You make Batman sad.

post #20 of 28
Quote:
Originally Posted by hmm View Post

Technically, raytracing does involve vectors. The facing of vertices and polygon faces is determined by a normal vector calculated from their adjacent edges. Dot products of vectors are frequently used to compute the direction in which a cast ray will bounce. The kinds of vectors you're referring to aren't really appropriate here though.

I didn't mean to suggest vectors weren't involved in ray tracing. I was making another point that I clearly didn't express properly.

"The real haunted empire?  It's the New York Times." ~SockRolid

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

post #21 of 28
Quote:
Originally Posted by SolipsismX View Post


I didn't mean to suggest vectors weren't involved in ray tracing. I was making another point that I clearly didn't express properly.


Was it sarcasm regarding comments on vector UI elements? I would have geeked out about raytracing either way. I kind of wonder if this is more memory efficient in spite of being more computationally expensive. They don't have to load baked light or shadow maps this way.

post #22 of 28
Quote:
Originally Posted by EricTheHalfBee View Post

AnandTech reports today that ARM has released Geekbench 3 scores for its A57. It's not looking good: at the same clock speed, a single Cyclone core (A7) is still going to be about 25-30% faster. And A57-based SoCs won't ship from Qualcomm until the start of 2015, and nobody knows when Samsung will ship theirs.
Pretty pathetic, but don't count Qualcomm out! It may take them a while, but they have the chops to do an enhanced 64-bit core.
Quote:

By the time the A8 is out, competitors still won't have anything as good as a year-plus-old A7.

Yep! People still don't recognize just how far ahead Apple is with the A7. I've heard rumors, though, that much of the focus of the A8 will be on lowering power usage. I'm not sure a strong increase in performance per core is a given. We might, however, get more cores or a new GPU.

I can see such rumors being valid given the mobile nature of Apple's iOS products. Even a few milliwatts can make a huge difference for a battery-powered device. For a 64-bit device, it really looks like Apple's A7 and the coming A8 will continue to outclass the competition for some time to come with respect to overall performance.
post #23 of 28
Quote:
Originally Posted by hmm 
I kind of wonder if this is more memory efficient in spite of being more computationally expensive. They don't have to load baked light or shadow maps this way.

Could well be, but most likely they'd still want to cache the shadows in a final product rather than calculate them every frame, which allows them to fake soft shadows much faster as they can apply a blur based on distance. This still gives better clarity than rasterized shadows, which would be generated by placing a camera at the light source. The other thing to consider is volumetric and displacement effects. CryEngine (the engine behind Crysis) is one engine that tries to find the best way to do each effect, and there's a lot of work to get it to come together:
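
(The "blur based on distance" trick can be sketched with similar triangles, as in percentage-closer soft shadows: the blur radius grows with the gap between the shadowed point and whatever blocks the light. The numbers below are purely illustrative.)

def penumbra_width(light_size, d_receiver, d_blocker):
    # Similar-triangles estimate: the farther the receiving surface sits behind
    # its blocker (relative to the light), the softer and wider the shadow edge.
    return light_size * (d_receiver - d_blocker) / d_blocker

print(penumbra_width(0.5, 10.0, 2.0))   # far behind the blocker: wide blur (2.0)
print(penumbra_width(0.5, 2.5, 2.0))    # just behind the blocker: sharp edge (0.125)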



They still use rasterization completely:

http://www.crytek.com/download/fmx2013_c3_art_tech_donzallaz_sousa.pdf

For global illumination, they used Cascaded Light Propagation Volumes, there's a Powerpoint here:

http://www.crytek.com/cryengine/presentations/real-time-diffuse-global-illumination-in-cryengine-3

They have stats on how fast it performs on the last-gen consoles. They managed to get it to calculate in 1ms per frame.

The problem with the hacks to tackle each effect with rasterization is the artifacts - the physically-based solutions are just simpler. The only catch is how slow they are. Maybe it's time for Pixar to get back into the hardware business. Why not make a co-processor that puts their best physically-based algorithms into silicon and runs them orders of magnitude faster than you can on generic hardware? They'd have to allow for programmable shaders, but they'd have an API for that, and there can be multiple iterations of the co-processor.

Right now, hardware manufacturers and software developers are working separately: GPU manufacturers are trying to make hardware that runs code as generic as possible, while rendering engine developers are trying to get as close to photoreal as they can, which is as specific an end goal as it gets - we know what that looks like. With a co-processor, you'd still have the ability to make stylized graphics with the other hardware, but the co-processor would give you the definitive, singular solution faster than any generic hardware can manage.

This has already been happening with Bitcoin, where special hardware is being used in place of GPUs:

http://www.coindesk.com/faster-bitcoin-mining-rigs-leave-gpus-in-the-dust/

One of the special rigs has an efficiency 50x higher than a generic GPU. AMD GPUs seem to be better for it, and someone here tried out the new Mac Pro on it but didn't want to risk damaging the machine:

http://www.asteroidapp.com/forums/discussion/226/litecoin-ltc-on-new-mac-pro-w-d300s-stopped-mining-before-i-ever-really-got-started/p1
post #24 of 28

Nice stuff Marvin.

 

In regard to specialized hardware, that's very much like stand-up arcade games: custom hardware for just that game and a few others like it.  It's tricky to find the balance between hardware which is "so niche that the development costs could never be recouped" and that which is "too general purpose to solve the problem well".

 

I don't know if such hardware would be feasible for desktop/laptop computers.  However, I could definitely see console manufacturers being interested in funding specialized efforts.

 
post #25 of 28
This is truly impressive, considering a photorealistic, ray-traced 1080p image can take hours to generate when rendering animations.
post #26 of 28
Quote:
Originally Posted by SolipsismX View Post

And I was recently told that anything to do with a raster is a poor design compared to vectors.

 

Which is true in most cases. Just thinking of something else entirely: iOS apps nowadays carry four sets of images with them (iPhone, iPhone Retina, iPad and iPad Retina) when a single vector graphic using less space than the smallest of the pixel graphics would suffice. Or retina websites, carrying a retina and a non-retina version of a vector graphic rendered into pixels, just because SVG sounds scary to some. The list could go on…

post #27 of 28
It occurs to me that the computations involved in ray tracing may be similar to those required for:
  • fingerprint scanning
  • iris scanning
  • even location fingerprint scanning à la WiFiSLAM
  • even pCell location determination via cell signal noise

Thoughts?
"Swift generally gets you to the right way much quicker." - auxio -

"The perfect [birth]day -- A little playtime, a good poop, and a long nap." - Tomato Greeting Cards -
post #28 of 28
Quote:
Originally Posted by Dick Applebaum View Post

It occurs to me that the computations involved in ray tracing may be similar to those required for:
  • fingerprint scanning
  • iris scanning
  • even location fingerprint scanning à la WiFiSLAM
  • even pCell location determination via cell signal noise

Thoughts?

The aim is simulating how light behaves. Light is another form of EM radiation, so there's some commonality with how other radiation behaves, but light also behaves like a particle:

http://en.wikipedia.org/wiki/File:EM_Spectrum_Properties_edit.svg

There's not really any interference that needs to be accounted for either; there are some geometry intersection calculations, but there's a big difference in the amount of computation required, and the computer doesn't need to do triangulation. With waves outside of the visible spectrum, you can't see where they are. It is interesting how we evolved with photoreceptors in our eyes only capable of seeing a limited spectrum of light:

http://en.wikipedia.org/wiki/Photoreceptor_cell

and of course a limited number of them, which gives rise to both the limits of display resolution and digital camera design.

If you do some kind of signal triangulation, all you want to find out is a single point from a handful of signal generators; that's like a single pixel in a computer image. To get one frame of a computer image, they have to compute ray intersections from every light source for every one of the more than 2 million pixels in a 1080p frame, usually with multiple samples per pixel. The ideal would be to replicate what happens in the real world and follow the same number of photons, but you can see how many photons are flying around:

http://www.eg.bucknell.edu/physics/astronomy/astr101/prob_sets/ps6_soln.html

"in order to emit 60 Joules per second, a 60W lightbulb must emit 1.8 x 10^20 photons per second. (that's 180,000,000,000,000,000,000 photons per second!)"

The sensitivity range of the human eye is quite impressive when you consider that it can detect under 10 photons all the way up to the insane amount you'd see in a fully lit room. But it's clear that computers simply can't simulate this kind of problem with this level of brute force.

On top of geometry, they have to calculate color, so every light ray has to keep track of intensity decay, interaction with volumetric effects, scattering through objects set up with certain materials, occlusion and so on, and the reflectivity of some surfaces affects what shows up in different parts of the image. John Carmack has a talk going through real-time graphics where he goes from rasterization to raytracing methods:



Raytracing isn't one method; there's a whole variety of ways to use it. There's real-time raytracing hardware here:



but you can see it doesn't look realistic as it's only doing limited sampling. Rays need to be traced in a way that simulates how real-world lights and materials behave to get realistic output. Simulation will only go so far, and the rest will have to be controllable by artists to produce what they want to show.