New GI Preview

Using the new GI with the GPU enabled, I rendered the image below in 6 minutes on a notebook PC with a GTX 765M. The leftover noise is from the portal lights. (A higher-powered machine with a K6000 and more CPU cores renders this frame in less than 3 minutes.)

The new GI is a brute-force technique with many improvements over regular brute-force rendering, including better filtering control. The image below does not use the filter and instead renders as-is.

This feature is not finished, so I used “diffuse” paths, as that mode is the most complete and the fastest. You can also render using the same method on the CPU. If you’re curious why it presamples the scene: custom and CPU shaders cannot be run automatically on a GPU. Presampling lets you render legacy scenes without changes, as long as the scene contains supported effects.

Since the technique is not complete, some features are not yet supported. More discussion and examples will follow. Consider this experimental for now. To begin testing it alongside other users, look here.

Improved GI (using GPU)

About David

I am a VFX artist who specializes in lighting and rendering. I spend a fair amount of my time supplying clients with artistic as well as technological solutions. With a background in fine art and technical animation training, I strive to bridge the divide between the artist and the technologist.

Posted on May 5, 2014, in Example. Bookmark the permalink. 51 Comments.

  1. thank you for sharing it! I’m very interested to see where this is going!

  2. awesome! can’t wait to play with it.

  3. That is nice. I assume a fast GPU could render the scene in 2 minutes max. I’m not an official Maya user, not yet anyhow 🙂 I love progressive rendering; it took this long for the software and Mental Ray to catch up to fPrime (Lightwave), which was ahead of its time.

    • I will run this scene on a K6000 later for a camera animation to post. Progressive rendering has been available in mental ray itself since 2009 (with version 3.7). It was one of the first renderers to have progressive rendering with levels of pixel-detail feedback.

      Autodesk didn’t see fit to integrate it until now. Work is being done to integrate progressive rendering, using the correct API, directly into Viewport 2.0. The current Render View cannot update correctly as-is.

  4. First I must say: you have a K6000? Update with your results!

  5. You are right about progressive rendering since 2009; I remember reading a white paper on it around that time frame.

    David, you didn’t say how long it took to render the scene with a K6000? 🙂

  6. I mean render the scene with the GPU!

  7. That is beautiful speed for GI rendering.

    I’m considering buying an additional render engine, choosing between Octane, vRay, and RedShift. I do like vRay, especially since it now has GPU RT rendering. I’ve been told that if I’m going to buy vRay, why not buy Arnold instead. Your opinion would help me make my decision 🙂

    • Octane’s GPU-only rendering seems very innovative and future-oriented to me!

    • Arnold will be more expensive than Vray, and Solid Angle currently has no interest in GPU rendering. But Arnold does some things well, like handling lots of geometry, and it has really simple controls. If you want a full GPU renderer I’d go with iray, because I can use LPEs, and once MDL is in mental ray 3.13 I can render across the platforms: realtime > interactive > mental ray > iray photoreal. Once this is fully integrated you will be able to render whatever you want in whatever mode you desire.

  8. Isn’t iRay slower than vRay?

    • All GPU rendering that isn’t done on a cluster for full BDPT is “slower” than non-brute-force rendering. This means mental ray, Vray, etc., will all outperform it for standard rendering, but they also have a lot more controls than GPU renderers, which are designed as “push-button” rendering.

      NVIDIA’s vision is being able to render with the features you require, in whichever way you need, based on your image goals and technical limitations.

      • Also, keep in mind that features slow down a renderer. The more you add (even if you’re not using them), the slower it is. This is something NVIDIA learned a while back that a lot of GPU renderers are just now figuring out. But there are ways to improve that: iray itself will get an enhancement later this year that will improve performance across different GPU generations.

        And something a lot of people lose sight of is that how long it takes to *converge* is the better speed measurement. I hear a lot of “it’s faster but noisier.” Well, if it’s noisier, it’s not actually faster...

  9. Your suggestion is using Mental Ray alongside iRay? Mental Ray as your CPU renderer and iRay as your GPU renderer.

    • iRay was given to Max users; would it be a false move to buy iRay when it could possibly be added to Maya in the near future?

      • Some time ago Autodesk mentioned that after they improved the integration of mental ray they would be interested in integrating iray. The mental ray improvement is in full swing now, but I have no idea if their sentiment is the same. mental ray and iray use different APIs, which is a bit of a roadblock, but correct integration would mean the difference wouldn’t be as noticeable.

        I’m looking much further down the road, 2-3 years. Some of these things will begin to happen in the next year like MDL in mental ray and better Maya integration. Eventually the separation might become more blurry.

  10. What is your opinion on the slew of render engines on the market? It’s almost like shopping for toothpaste. Are there any render engines, besides Arnold (which already has a strong future), that are a good alternative to it?

    • I’m not convinced Arnold has a strong future. I see the future of rendering as GPU-based. As GPUs continue to improve, you’ll begin to see the possibility of real-time rendering for VFX work. Even with 60 CPU cores that’s not going to happen any time soon without GPUs. And as I mentioned, Arnold has no plans for GPU rendering so far, which puts them behind.

      • Can I ask what you think of Octane? I’m thinking of learning it because it’s available for almost every package and it’s affordable...

      • I’ve not used Octane for professional work. Quite a few of my colleagues in LA liked it for the price point and features. Keep in mind there are some limitations: you have client-only rendering right now, so you can’t send it to a render farm. (It failed to render on the VCAs at GTC as well.)

      • Thank you!
        Well, as a freelancer I have just one machine available, so I’m not really interested in a render farm 😉
        P.S. It seems that network rendering is coming in 2.0.

  11. Are vRay / Octane not long-term choices of renderers?

    • I’m less clear on Octane’s future, but Vray is working with NVIDIA (they are a customer of the same people working on mental ray/iray). NVIDIA licenses its technology to other companies interested in using it, but you can imagine the first target of new technology is mental ray and iray. Chaosgroup is agile and has good integration right now, but Vray RT is a little behind, having wandered through a phase with OpenCL. They do have studios pushing them for some useful (and a few not-so-useful) features.

  12. VCAs at GTC?
    What about vRay, considering it now has RT? Or RedShift?

    • The iray VCA will run GPU-based renderers (it comes with a license of iray, though). iray and Vray RT were the only production-capable renderers running on the VCAs at GTC.

      Remember, Vray RT was originally considered a preview system for Vray Production. It looks like this might be changing, at least a little bit.

      iray is a complete and separate set of renderers.

      I’ve not used Redshift. I hear some nice things about its traditional (biased) workflow, but with all the choices available I’m confused why you’d stick with biased rendering.

      The reason there are all of these renderers coming out is *not* because they are using a lot of new techniques. In fact it’s the opposite. I cringe when I hear “modern lighting like (insert new renderer here).” Path tracing is one of the oldest forms of raytracing, conceived in 1986. The “modern” techniques are all of the tricks and options people now want to avoid.

      Hardware is good enough now that developers can forgo all of the modern techniques in rendering and go back to basics: pounding the pixel. This means smaller tasks to get a working renderer to market.

    • Also, keep in mind that rendering with Vray RT will not match the materials in Vray Production right now.

      This is something that MDL solves for mental ray and iray: your materials will match.

  13. If I understand correctly, the materials you create in Mental Ray will work in iRay (simplifying your statement), correct?
    I’ll exclude vRay from my options; you have convinced me that iRay is a better choice. iRay / Octane I must choose between!

    • I might actually choose Vray over Octane because of the CPU options when necessary.

      Once MDL is integrated into mental ray in 3.13 (before the end of the year for mental ray), you will be able to use the same materials in iray. The layering library will “become” MDL, with the same workflow in Maya.

  14. As you said, vRay RT materials won’t match those of vRay, and matching is something I would look forward to; that’s why I chose to exclude it.

    • Other than MDL, I am not aware of a production renderer that can match materials across rendering techniques. You can port the BRDFs to an extent, but that doesn’t necessarily mean it will look the same in practice (at least from demos I’ve seen). Vray RT didn’t really care to match materials; it was just a preview. But to some studios it’s important to have better correlation between renderers.

      Back to the post topic: this is why the GPU and CPU versions of the new GI are designed to match. It’s not very useful if you lookdev on a local machine with GPUs, send the scene to a CPU farm, and get a different result. Having predictable and useful results regardless of mode is important to NVIDIA.

    • How about things like displacement, motion blur, etc.; does iRay excel at those? Octane is strictly a GPU-based renderer, unlike vRay, which gives you the option of using the CPU or GPU; I can see the benefit of that, but if I can’t use materials I create in vRay in vRay RT, then what benefit is it? Speed, photorealistic and quasi-realistic rendering are important, including SSS, displacement, and motion blur.

      Once again, speed is the most important. Let’s not forget: if I want to go under the hood, those options should be available to enhance the render.

      This is a gripe I have with some software: they expect you to go under the hood from the start. I should be able to get quality work without it being too easy, but be able to go under the hood when I want; a nice balance is crucial.

      • Displacement is memory-intensive. GPU renderers might be fine at it, but it will eat your GPU resources, forcing you into a CPU fallback, and there goes your speed benefit.

        GPU renderers also have a hard time with things like volumes, fur, etc. while still maintaining speed. A hybrid approach for now might give you more speed while maintaining flexibility. This is why GI on the GPU exists, in a way.

        Speed as the single most important factor is something few people actually end up wanting. As in my previous explanation: new renderers are interested in simplicity, relying on the hardware to do the work. This means if you want faster, you need better hardware. This balance is sort of a personal thing.

  15. Thanks for answering my questions 🙂

  16. Hi David,

    This looks very promising! I can’t really see how clean and consistent it is from just a single frame, but 6 min is still very, very short.
    Will it be for any GPU, or just NVIDIA’s?
    And do you know if this feature will eventually be included in Maya 2015 (through an update, I guess)?
    And finally, will the community be involved, as with the MILA shaders?

    Thanks again for keeping us posted !

    • This is only for NVIDIA GPUs, like most of their integration technologies. CUDA, OptiX, etc. run on GTX, Tesla, and Quadro.

      There is a CPU fallback you can use, with a speed penalty.

      I rendered this from inside Maya 2015, but integration from Autodesk won’t happen until the feature is ready/complete. And since it’s a core feature, the only way to provide more feedback is to test in the betas for mental ray or to post feedback on the ARC forums.

      Its limited features make it questionable for animation right now.

  17. I’ve been trying it using the string options. Pretty powerful. One catch I’ve found: if you’re using the built-in IBL for lighting, the GPU GI sees it twice and produces a brighter result, once via final gather rays and once via environment rays. To fix this I made a sphere that encases the scene, with all flags off except primary visibility, and miFinalGatherHide on. I textured it with black, and it stops the environment rays from reaching the GPU GI.

    • This was caught a while ago. If you use “on”, the first version uses FG rays for the specular interaction, which accidentally doubled the light in some cases. It was especially bad with FG force. It won’t always be the case that this method works through FG.
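      For reference, the blocker-sphere workaround described in this thread might be set up with MEL along these lines. This is only a sketch: the node and shader names are illustrative, and it assumes the mental ray plug-in is loaded so the miFinalGatherHide extension attribute exists on the shape.

      ```mel
      // Sketch of the environment-blocker workaround (hypothetical names).
      string $nodes[] = `polySphere -radius 10000 -name "envBlocker"`;
      string $shape[] = `listRelatives -shapes $nodes[0]`;
      // Turn off all render flags except primary visibility, per the comment above.
      setAttr ($shape[0] + ".castsShadows") 0;
      setAttr ($shape[0] + ".receiveShadows") 0;
      setAttr ($shape[0] + ".motionBlur") 0;
      setAttr ($shape[0] + ".visibleInReflections") 0;
      setAttr ($shape[0] + ".visibleInRefractions") 0;
      setAttr ($shape[0] + ".primaryVisibility") 1;  // stays on
      setAttr ($shape[0] + ".miFinalGatherHide") 1;  // mental ray extension attribute
      // Assign a constant black surface shader so environment rays return black.
      string $ss = `shadingNode -asShader surfaceShader`;
      setAttr ($ss + ".outColor") -type double3 0 0 0;
      select -r $nodes[0];
      hyperShade -assign $ss;
      ```

      The sphere radius just needs to be large enough to enclose the whole scene; adjust as needed.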

  18. David, how does RenderMan compare to Mental Ray and the other renderers on the market? Is it mainly a renderer for Pixar and other large studios rather than small shops or freelancers?

    • That would be a long comparison. RenderMan is typically a powerful but complicated product. However, RenderMan for Maya simplifies a lot of that, and the new BDPT engine is a lot easier to use.

      • I’ve considered RenderMan, since it has a good history, as does Mental Ray, but all these “other” render engines have detoured me 🙂

  19. Hi,

    When will the script be available, so we can use this in Maya without messing with the strings manually?

    • The script would manipulate the strings automatically; that’s part of its purpose, to make this easier. Some groundwork to make it work without disturbing settings in legacy scenes isn’t completed yet.
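      Until such a script ships, string options are typically appended by hand to the miDefaultOptions node. A hedged sketch follows; the option name “gi gpu” and its boolean value are assumptions based on the prototype discussion here, so check the mental ray release notes for the exact spelling.

      ```mel
      // Hypothetical: append a string option to miDefaultOptions
      // (requires the mental ray plug-in to be loaded).
      int $i = `getAttr -size miDefaultOptions.stringOptions`;
      setAttr -type "string" ("miDefaultOptions.stringOptions[" + $i + "].name")  "gi gpu";
      setAttr -type "string" ("miDefaultOptions.stringOptions[" + $i + "].value") "on";
      setAttr -type "string" ("miDefaultOptions.stringOptions[" + $i + "].type")  "boolean";
      ```

      Appending at the current array size avoids clobbering string options the scene already has, which is part of what the legacy-scene groundwork mentioned above has to handle.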

  20. Hi David, this is really interesting. Is there a way to use it in a batch render? I used the string option, but the batch render doesn’t work with it (it renders, but doesn’t use GI GPU). Any advice is appreciated. Thank you in advance!

    • I’ve not had an issue using batch rendering. Are you using this locally on the machine with the physical GPU?

      Keep in mind there may be artifacts in animation right now, since the feature was not completed for animation by the time 3.12 was included in Maya. (OptiX Prime has limitations in the version included with Maya.)

  21. I’m using it on the local machine; I tried one frame before starting the entire render. With the render window it takes 43 s, and the GPU usage goes up and down a few times, then stays idle; with the batch render it takes a minute and a half and GPU usage is at 100% almost all the time, which is strange. Let me say that I’m not an expert, so there may be some behavior in animation rendering that I don’t know about.

    • Thank you for your help, David. I figured it out: GI GPU was used in all the batch renders; the problem was the “threads” option of the batch. I switched it from “auto” to 8 and all went fine!

  1. Pingback: New GI in mental ray directions | elemental ray

  2. Pingback: GI GPU Prototype Testing | elemental ray
