Imagination showcases 'ray tracing' graphics tech that could come in Apple's future iPhones, iPads


Comments

  • Reply 21 of 27
wizard69 Posts: 13,377 member
Anandtech reports today that ARM has released Geekbench 3 scores for its A57, and it's not looking good: at the same clock speed, a single Cyclone core (A7) will still be about 25-30% faster. And A57-based SoCs won't ship from Qualcomm until the start of 2015, and nobody knows when Samsung will ship theirs.
Pretty pathetic, but don't count Qualcomm out! It may take them a while, but they have the chops to do an enhanced 64-bit core.

By the time the A8 is out, competitors still won't have anything as good as a year-plus-old A7.

Yep! People still don't recognize just how far ahead Apple is with the A7. I've heard rumors, though, that much of the focus of the A8 will be on lowering power usage. I'm not sure a strong per-core performance increase is a given. We might, however, get more cores or a new GPU.

I can see such rumors being valid given the nature of Apple's iOS products. Even a few milliwatts can make a huge difference for a battery-powered device. For 64-bit devices, it really looks like Apple's A7 and the coming A8 will continue to outclass the competition in overall performance for some time to come.
  • Reply 22 of 27
Marvin Posts: 15,585 moderator
    hmm wrote:
    I kind of wonder if this is more memory efficient in spite of being more computationally expensive. They don't have to load baked light or shadow maps this way.

Could well be, but most likely they'd still want to cache the shadows in a final product rather than calculate them every frame, which lets them fake soft shadows much faster by applying a blur based on distance. This still gives better clarity than rasterized shadows, which are generated by placing a camera at the light source. The other thing to consider is volumetric and displacement effects. CryEngine (used in Crysis) is one engine that tries to find the best way to do each effect, and there's a lot of work to get it all to come together:


    [VIDEO]
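The cache-then-blur trick described above can be sketched in a few lines. This is a toy 1-D version under assumed names, not any engine's actual code:

```python
# Toy 1-D sketch of distance-based shadow softening: a cached hard
# shadow mask is blurred with a radius proportional to the occluder's
# distance, giving a wider, softer penumbra for far-away occluders.
# Illustrative only; not how any particular engine implements it.

def box_blur(mask, radius):
    """Blur a 1-D shadow mask (0 = lit, 1 = shadowed) with a box filter."""
    if radius <= 0:
        return mask[:]
    out = []
    for i in range(len(mask)):
        lo, hi = max(0, i - radius), min(len(mask), i + radius + 1)
        out.append(sum(mask[lo:hi]) / (hi - lo))
    return out

def soften_shadow(mask, occluder_distance, softness=2.0):
    """Choose a blur radius from the occluder distance, then blur."""
    return box_blur(mask, int(occluder_distance * softness))

hard = [0, 0, 0, 1, 1, 1, 0, 0, 0]                 # cached hard shadow
near = soften_shadow(hard, occluder_distance=0.5)  # crisp edges
far = soften_shadow(hard, occluder_distance=2.0)   # wide, soft penumbra
```

The point is that only the blur radius varies per frame; the cached mask itself isn't recomputed, which is far cheaper than tracing extra shadow rays.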


They still rely entirely on rasterization:

    http://www.crytek.com/download/fmx2013_c3_art_tech_donzallaz_sousa.pdf

For global illumination, they used Cascaded Light Propagation Volumes; there's a PowerPoint here:

    http://www.crytek.com/cryengine/presentations/real-time-diffuse-global-illumination-in-cryengine-3

    They have stats on how fast it performs on the last-gen consoles. They managed to get it to calculate in 1ms per frame.

The problem with the hacks used to tackle each effect with rasterization is the artifacts; the physically based solutions are just simpler. The only problem is how slow they are. Maybe it's time for Pixar to get back into the hardware business. Why not make a co-processor that puts their best physically based algorithms into silicon and runs them orders of magnitude faster than generic hardware can? They'd have to allow for programmable shaders, but they'd have an API for that, and there could be multiple iterations of the co-processor.

Right now, hardware manufacturers and software developers are working separately: GPU manufacturers are trying to make hardware that runs code as generic as possible, while rendering-engine developers are trying to get as close to photoreal as they can, which is as specific an end goal as it gets; we know what that looks like. With a co-processor, you'd still have the ability to make stylized graphics on the other hardware, but the co-processor would give you the definitive, singular solution faster than any generic hardware can manage.

This has been happening with Bitcoin, where specialized hardware is being used in place of GPUs:

    http://www.coindesk.com/faster-bitcoin-mining-rigs-leave-gpus-in-the-dust/

One of the specialized rigs is 50x more efficient than a generic GPU. AMD GPUs seem to be better suited for it, and someone here tried out the new Mac Pro on it but didn't want to risk damaging the machine:

    http://www.asteroidapp.com/forums/discussion/226/litecoin-ltc-on-new-mac-pro-w-d300s-stopped-mining-before-i-ever-really-got-started/p1
  • Reply 23 of 27
auxio Posts: 2,796 member

    Nice stuff Marvin.


    In regard to specialized hardware, that's very much like stand-up arcade games: custom hardware for just that game and a few others like it.  It's tricky to find the balance between hardware which is "so niche that the development costs could never be recouped" and that which is "too general purpose to solve the problem well".


    I don't know if such hardware would be feasible for desktop/laptop computers.  However, I could definitely see console manufacturers being interested in funding specialized efforts.

  • Reply 24 of 27
unknwntrr Posts: 69 member
This is truly impressive, considering that a photorealistic ray-traced image at 1080p can take hours to generate when rendering animations.
  • Reply 25 of 27
unknwntrr Posts: 69 member
    Quote:
    Originally Posted by SolipsismX View Post



And I was recently told that anything to do with a raster is a poor design compared to vectors.


Which is true in most cases. Just thinking of something else entirely: iOS apps nowadays carry four sets of images with them (iPhone, iPhone Retina, iPad, and iPad Retina), when a single vector graphic using less space than the smallest of the pixel graphics would suffice. Or Retina websites, carrying a Retina and a non-Retina version of a vector graphic rendered into pixels, just because SVG sounds scary to some. The list could go on…

  • Reply 26 of 27
dick applebaum Posts: 12,527 member
    It occurs to me that the computations involved in ray tracing may be similar to those required for:
• fingerprint scanning
• iris scanning
• even location fingerprint scanning ala WiFiSLAM
• even pCell location determination via cell signal noise

    Thoughts?
  • Reply 27 of 27
Marvin Posts: 15,585 moderator
dick applebaum wrote:
It occurs to me that the computations involved in ray tracing may be similar to those required for:
    • fingerprint scanning
    • iris scanning
    • even location fingerprint scanning ala WiFiSLAM
    • even pCell location determination via cell signal noise

    Thoughts?

The aim is simulating how light behaves. Light is a form of EM radiation, so there's some commonality with how other radiation behaves, but light also behaves like a particle:

    http://en.wikipedia.org/wiki/File:EM_Spectrum_Properties_edit.svg

There's not really any interference to account for either. There are some geometry intersection calculations, but there's a big difference in the amount of computation required, and the computer doesn't need to do triangulation. With waves outside the visible spectrum, you can't see where they are. It is interesting how we evolved with photoreceptors in our eyes capable of seeing only a limited spectrum of light:

    http://en.wikipedia.org/wiki/Photoreceptor_cell

and of course a limited number of them, which gives rise to both the limits of display resolution and digital camera design.

If you do some kind of signal triangulation, all you want to find is a single point from a handful of signal generators; that's like a single pixel in a computer image. To get one frame of a computer image, you have to compute ray intersections from every light source for every one of the more than 2 million pixels in a 1080p frame, usually with multiple light sources and multiple samples per pixel. The ideal would be to replicate what happens in the real world and follow the same number of photons, but you can see how many photons are flying around:

    http://www.eg.bucknell.edu/physics/astronomy/astr101/prob_sets/ps6_soln.html

    "in order to emit 60 Joules per second, a 60W lightbulb must emit 1.8 x 10^20 photons per second. (that's 180,000,000,000,000,000,000 photons per second!)"
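That figure is easy to verify with Planck's relation E = hc/λ, assuming all 60 W is emitted as ~600 nm visible light (an idealization, since real bulbs emit mostly infrared):

```python
# Back-of-envelope check of the quoted photon count: photons per second
# from a 60 W bulb, assuming every joule is emitted as ~600 nm light.
h = 6.626e-34          # Planck constant, J*s
c = 3.0e8              # speed of light, m/s
wavelength = 600e-9    # metres (~orange visible light, an assumption)

energy_per_photon = h * c / wavelength   # ~3.3e-19 J per photon
photons_per_second = 60 / energy_per_photon
print(f"{photons_per_second:.2e}")       # ~1.8e+20, matching the quote
```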

The sensitivity range of the human eye is quite impressive when you consider that it can detect fewer than 10 photons all the way up to the insane amount you'd see in a fully lit room. But it's clear that computers simply can't simulate this kind of problem with that level of brute force.
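To get a feel for the core operation each of those per-pixel rays performs, here's a minimal ray-sphere intersection test in Python (illustrative only; a unit-length ray direction is assumed):

```python
import math

def ray_sphere_intersect(origin, direction, center, radius):
    """Return the distance along the ray to the nearest hit, or None.
    Solves |origin + t*direction - center|^2 = radius^2 for t,
    assuming direction is normalized (so the quadratic's a == 1)."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c_term = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c_term
    if disc < 0:
        return None                      # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0     # nearer of the two roots
    return t if t > 0 else None

# A ray from the origin along +z hits a unit sphere centred at z=5 at t=4.
hit = ray_sphere_intersect((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
```

A renderer runs tests like this billions of times per frame, which is why the geometry side alone is so expensive.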

On top of geometry, they have to calculate color, so every light ray has to keep track of intensity decay, interaction with volumetric effects, scattering through objects set up with certain materials, occlusion and so on, and the reflectivity of some surfaces affects what shows up in different parts of the image. John Carmack has a talk on real-time graphics where he goes from rasterization to raytracing methods:


    [VIDEO]


Raytracing isn't one method; there's a whole variety of ways to use it. There's real-time raytracing hardware here:


    [VIDEO]


but you can see it doesn't look realistic, as it's only doing limited sampling. Rays need to be traced in a way that simulates how real-world lights and materials behave to get realistic output. Simulation will only go so far; the rest will have to be controllable by artists to produce what they want to show.
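The limited-sampling problem is easy to demonstrate: a Monte Carlo estimate's noise shrinks only as 1/√N, so halving the noise costs four times the rays. A toy sketch, with random "brightness samples" standing in for traced rays:

```python
import random

def estimate_pixel(num_samples, seed=0):
    """Average num_samples random 'light samples' whose true mean is 0.5.
    More samples -> less noise; error shrinks roughly as 1/sqrt(N)."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(num_samples)) / num_samples

# Error across 100 trial pixels at two sampling rates:
coarse = [abs(estimate_pixel(16, seed=s) - 0.5) for s in range(100)]
fine = [abs(estimate_pixel(1024, seed=s) - 0.5) for s in range(100)]
# The 1024-sample estimates sit much closer to the true value of 0.5.
```

This is why the hardware in the video looks noisy: it simply can't afford enough samples per pixel in real time.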