Unified Sampling Redux

As a simplified look at using Unified Sampling as the more “brute force” method outlined here, the example below walks through the differences in time and sampling on a visually trivial scene. This should make some things very easy to understand and quick to read before moving on to lights. 😉

Glossy Test Scene

In a glossy scene, originally rendered at HD 1080 with mia_material_x shaders throughout, the first frame was rendered with the following settings:

Quality 8
Samples Min 1.0
Samples Max 800
Reflection Bounces 2
Shadow Bounces 2

Resulting Time: 48 minutes

In a second test I added one more setting:

Error Cutoff 0.04

Resulting Time: 35 minutes
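For reference, here is a minimal sketch of applying these settings through mental ray string options in Maya Python. This is a sketch under assumptions: the string option names are mental ray's, but the indices 0-4 are placeholders, so point them at unused entries of your miDefaultOptions.stringOptions array (some of these options also accept per-channel colors instead of scalars).

import maya.cmds as cmds

def set_string_option(idx, name, opt_type, value):
    # hypothetical helper: fills one entry of miDefaultOptions.stringOptions
    base = "miDefaultOptions.stringOptions[%d]" % idx
    cmds.setAttr(base + ".name", name, type="string")
    cmds.setAttr(base + ".type", opt_type, type="string")
    cmds.setAttr(base + ".value", value, type="string")

set_string_option(0, "unified sampling", "boolean", "on")
set_string_option(1, "samples quality", "scalar", "8.0")
set_string_option(2, "samples min", "scalar", "1.0")
set_string_option(3, "samples max", "scalar", "800")
set_string_option(4, "samples error cutoff", "scalar", "0.04")  # the setting added in the second test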

The images appeared identical to the eye. I ran imf_diff to measure the actual pixel differences, with this result:

differing pixels: 0.379% (7869 of 2073600)
average difference: 1.265%
maximum difference: 4.632%
Summary: Some pixels differ slightly.
== "glossyA.exr" and "glossyB.exr" are similar

So I am pretty happy with that: a time savings of 13 minutes with no observable difference.

Below is an explainer graphic of the glossy ray count set for each sphere.

Reflection Samples from Shader

Below is the Samples Diagnostic framebuffer (tonemapped to work on the internet). You can see that the more “brute force” the reflection ray settings, the harder Unified Sampling had to work.

Samples per pixel (brighter is more)

Below is the time buffer, where the longer a pixel takes to render, the brighter the resulting pixel in the buffer.

Time per pixel (brighter is longer)

This should also give you a better understanding of how Unified will perform consistently across a scene with a single Quality parameter when given a wide range between minimum and maximum samples. (These spheres resemble one another despite large changes in reflection gloss rays.)

Despite these results you might still notice a little grain on the pure brute force sphere. Add a texture map and you’ll hardly notice it, but is there a reasonable balance for a more complex scene?

If you need a completely smooth scene with few textures and more of a “pure” shader effect, then small increases in local samples seem to work well without sacrificing much extra time; 2-4 samples work well in those special cases (see the sketch below). But we find that animation and VFX work do not need this level of detail. This would be for something like print work and large resolutions.
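If you do take that route, the change lives on the shader itself. A minimal Maya Python sketch, assuming a mia_material_x node named mia_material_x1 (the node name is hypothetical; the attribute follows the shader’s refl_gloss_samples parameter):

import maya.cmds as cmds

# a small local bump for smooth "pure shader" glossy reflections; 2-4 is the range discussed above
cmds.setAttr("mia_material_x1.refl_gloss_samples", 4)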

Brute Force Only: 22 minutes at HD 1080


Next we might take a look at lights and how to use them in similar circumstances.

About David

I am a VFX artist who specializes in Lighting and Rendering. I spend a fair amount of my time supplying clients with artistic solutions as well as technology solutions. With a background in fine art and technical animation training, I strive to bridge the divide between artist and technologist.

Posted on January 29, 2012, in maya, Optimization, unified sampling. 40 Comments.

  1. Love these Unified Sampling posts, really useful info!

  2. I’m sorry, :), I still don’t understand.
    So, brute force means you’ve set the reflection rays to 1 and revert to the sampler to handle the glossy effect?

    • Correct. I’m using Unified Sampling Quality to control the reflections instead of the shader settings like glossy (reflection) rays.

      This is more like the future of rendering where shaders will take advantage of Unified Sampling as opposed to relying on internal controls.

      • This is interesting, thanks for the reply. It begins to make sense now.

      • I have to say though, that this is probably one of your most accessible posts on unified sampling. 🙂

      • Thanks, although we do want people to understand the “whys” from the original post. I am currently preparing older pipelines to move to Unified Sampling, and it’s important to know why certain benefits happen, because sometimes there are caveats and you have to find the right balance between pure brute force and lots of secondary ray generation.

        This balancing act happens most with lights, which may need more rays, and with secondary passes like ambient occlusion as an additional color pass, which may need more care. (Unified Sampling does seem to obey “contrast all buffers”, so keep that in mind if you find render time has increased significantly with the addition of custom passes.)

  3. Holger aka kzin

    hi david, thx again for this one,
    but there is one thing in your renderings that looks a bit strange. the floor reflections of the spheres are way softer if the sphere has only 1 glossy sample.
    you can see from the sample diagnostic image that unified renders glossy way more smoothly compared to high local glossy samples. it looks like the problem i ran into where local samples react differently than unified “glossy rays”. so for me the rendering shows some inconsistency, and it looks like you should never use such high local glossy samples together with unified? because local glossy can’t resolve the details that unified is capable of rendering, and that results in different looks.

    • I would have to see what you mean in one of your renderings. In my case I purposefully attenuated the reflection with a falloff and color for the ground plane. I wanted it to be there but not compete with the reflection of the spheres.

  4. Holger aka kzin

    its the blurriness of the spheres’ reflection on the ground plane. the reflection is sharper from left to right. it looks like it’s getting sharper the more local samples you take.
    the reflection of the sphere with one local reflection sample is more diffuse than the ones with 64 local samples.

    • I think this is just a result of angle of view for the highlight.

      The final image in the post is all brute force and the same effect is visible. So that doesn’t seem to be a result of changing the samples. The only difference is the spec change reported by Lee. We know about that already, and it is generally not enough for us to change the method.

  5. Nice blog with some indepth info…

    I want to highlight something in this comparison: if you switch between the (so-called) Brute Force approach and the standard approach (of increasing samples in the shader), you will clearly see that the more rays the shader gets, the brighter the spheres become.
    That is happening because there are so few rays being cast from the sphere into the environment that it completely misses illuminated spots altogether… That said, unified sampling does a fine job at resolving the image anyway, even though it’s not CONVERGING to the correct result (as an unbiased renderer would).
    I guess it is a trade-off, and probably a good one. But I would be careful to set at least a few samples on glossier reflections (and especially for GI, where you can see this effect at its best)…

    Cheers and congrats on the nice blog 🙂

    • Think of it like this: in 8 eye samples the shader could send more rays at once, 8×5 maybe. With brute force you may get just as many reflection rays, but one at a time, over maybe 40 eye samples. The end result would still have the potential to carry just as much information as before and be determined by a similar sampling algorithm. (Each single sample still runs the shader’s sampling algorithm based on QMC and reflection importance.)
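      As rough arithmetic, a tiny illustration using the numbers from above (illustrative only):

      # shader-driven: few eye samples, several glossy rays per shader call
      eye_samples, local_glossy_rays = 8, 5
      print(eye_samples * local_glossy_rays)  # 40 reflection rays for the pixel

      # brute force: one glossy ray per eye sample, more eye samples
      eye_samples, local_glossy_rays = 40, 1
      print(eye_samples * local_glossy_rays)  # 40 reflection rays for the pixel

      Either way the pixel collects roughly the same 40 reflection rays; they are just gathered in different batches.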

      For example, iRay shoots samples into your scene, picking away at the shaders one sub-frame at a time, and it is considered unbiased. This is a similar technique, minus the additional brute force nature of path tracing and BSDFs.

      So on the surface that explanation seems to make sense, and I thought about it as well. But overall the amount of information gained for the pixel is the same, just collected differently. Still, it’s a possible explanation without knowing the specifics of how the shader collects the data.

      • Kind of…
        It’s not just about the total number of rays…

        Think of it this way: your eye ray meets the object and shoots out 1 reflection ray. The reflection ray hits a dark wall. The pixels surrounding this one are all dark as well. There is not enough contrast, so no further rays are shot…
        Now if we try again and the shader shoots out 40 reflection rays, 1 of them may go through a window and hit the sun… Now, the sun is very bright, and the contrast between that pixel and its surrounding pixels will be massive. To resolve that situation, more eye rays will have to be shot on surrounding pixels. Believe me, I have also stress tested USampling, and increasing quality will help, but it will not always recover all the lost info.

        Now iRay is considered unbiased, because it IS unbiased. That means it does not ADAPT to contrast like USampling does. If there is a lot of contrast it just doesn’t care; the number of eye rays and reflection rays shot in every pixel is EXACTLY THE SAME. Therefore, as it iterates it will always converge to the correct result. Again, MR won’t, because it will only iterate on areas where it THINKS there is a need to do so, which may be right or wrong…

        I’m not saying this is good or bad, just that it should be known.

        I will cook some images to send you by e-mail to prove my point.

        PS: Also a relevant point is that reflection rays are cheaper than eye rays, since they don’t bother to calculate the shading network, shadow rays, refraction rays, etc., etc…

      • “Now iRay is considered unbiased, because it IS unbiased. That means it does not ADAPT to contrast like USampling does”

        Actually iRay is not fixed-sample; iRay is adaptive. In fact, Unified is an iRay technology used in mental ray. So that’s why I say the difference in how they sample the image is negligible. So if that’s the basis for the comparison, I don’t understand it.

        If you run Unified in Progressive mode, you are basically image sampling like iRay.

        Sidenote: This rQMC system doesn’t sample based solely on contrast. You can actually get a good idea of how Unified and iRay are sampling the scene by looking at QMC Image Synthesis by Alexander Keller.

  6. gustavo, i did not understand what you mean or the problems you have. perhaps an example image with your aa settings could help us find a solution to your problem.
    from my experience, unified solves high contrast differences really well, way better than adaptive.

    as an addition to the 1 sample thing, i should describe why i used only 1 local sample for my shaders. it was simply because i rendered with dof, and because of the dof i used a lot of max samples together with a high quality value. so there was no need for higher local samples than 1. it was enough to get nice smooth glossies with nice smooth dof.
    for an actual animation i use more than 1 local sample, mostly 4 or 8 for glossy surfaces, because i use only 80 samples for max and a quality of 8. it gives nice results and renders smoothly with motion blur, and all the details are captured well.
    so it really depends on the scene, and you should always test what you really need.
    unified creates really good results for simple shading, and even with motion blur you don’t need that many samples. so in those cases it’s better to use more local samples instead of high max values, which would raise the rendertime unnecessarily.

    my starting point for sampling with unified is 1 min, 32 max, quality of 5. that’s more than enough for simple shaders and also for small details. for smooth motion blur you can raise max to 80 or 100 and quality to 8 or 10 (david might say that’s too much for vfx, because you don’t need such clean motion blur renderings). if you have glossy surfaces with wide glossy effects and you do brute force, then use 4 to 8 local glossy samples. but it’s possible that you have large planar areas with wide gloss for which you need more local samples. if you have a lot of such geometry, it would make sense to raise the max sample value to 150 and lower the quality from 10 to 6. but this is shot dependent.

    • We’re usually a bit more aggressive as well. We may end up with samples in the 100s. With a pure brute force approach for, say, the Hot Wheels commercial, we found Quality in the 4-10 range useful and samples max 500+.

      The nice thing about that approach is that we could very quickly pick away at the scene, even on pixels with hundreds of samples, calculated in less than a second.

      And yes, we can live with grain on a lot of things in motion, but we actually tuned Hot Wheels in our tests to perfectly smooth motion blur. More like something you would see from the Rasterizer but with raytracing, which is great for cars and shiny objects.

      • these sampling values sound a bit more realistic in terms of smooth (really smooth) motion blur together with smooth glossies, from my tests. that is also one thing i really like: you can render really, really smooth mb very fast, no comparison to the old adaptive method. also, dof can be rendered without the cost of additional time with these values. if only dof control in post weren’t a problem. 😉

  7. Monte%20Carlo%20SIGGRAPH%20Course.pdf

    Chapter 9 on Biased vs Unbiased Raytracing

    “An unbiased Monte Carlo technique does not have any systematic error. It can be
    stopped after any number of samples and the expected value of the estimator will
    be the correct value. This does not mean that all biased methods give the wrong
    result. A method can converge to the correct result as more samples are used and
    still be biased, such methods are consistent.”
    […]
    [chapt 9.3 on Adaptive Sampling] “Instead of using a fixed number of
    samples per pixel each pixel is sampled until the variance is below a given threshold.”

    So if the sampling number is not fixed (if it has a Min, a Max and a Threshold) it is adaptive… And therefore a biased technique…
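    In symbols, a small sketch of the quoted definitions, writing $F_N$ for an estimator of the true value $I$ built from $N$ samples:

    \[
    \text{unbiased:}\quad \mathbb{E}[F_N] = I \ \text{for every } N,
    \qquad
    \text{consistent:}\quad \lim_{N\to\infty} F_N = I .
    \]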

    About iRay, I always understood it as being an Unbiased (Fixed Sample) technique… and it is classified as such in many places over the internet:
    http://raytracey.blogspot.com/2010/12/gpu-accelerated-biased-and-unbiased.html
    http://en.wikipedia.org/wiki/Unbiased_rendering

    That said, I can’t find any official Nvidia info calling iRay unbiased (they only call it Photorealistic)… So you may be right… But as far as I know it works on fixed sampling.

    I’m still cooking those images 🙂

    • I think if you look around, you’ll find there is argument on what exactly “unbiased” means. So different sources will label different methods as “unbiased”. And as the above states, more samples can indeed converge to the “correct” result, be it adaptive or not. So using a biased technique can still result in a “correct value”. Does that mean it’s unbiased now? Hence the confusion.

      Take a look here for a small discussion on similar techniques: http://igad2.nhtv.nl/ompf2/viewtopic.php?t=59&p=337

      Also, I can promise iRay is indeed adaptive. In fact, you can see the result yourself as the iRay render progresses, each (sub)frame becomes faster and faster. This is partially because the error threshold can be used to control where iRay will continue to sample. I do not believe 3ds Max has error threshold exposed. That may be the source of the confusion.

      • Hey David, I’ve cooked those images and it seems I proved myself wrong 🙂

        Awkward, but very good at the same time to know that this technique is actually very consistent with unbiased results…
        At the same time it is interesting how results really change (in an inconsistent way) when you increase local sampling, which still bugs me a bit.

        Anyway, I can’t find your e-mail anywhere in this blog; if you don’t mind, write me at gustavoeb@gmail.com and I’ll answer you with the results of my tests…

        Cheers

  8. Hi Gustavo, I’ve found a similar problem; it happens with soft shadows too. Though I think I know what’s causing it. Are you using a lens shader to correct the gamma? That is what’s making the results inconsistent.

    Without the gamma correct node, the results are the same whether the samples are local or global. With the gamma node, the local samples version is correct, but the global samples version is not.

  9. Hi David.

    You did not really explain it in the initial post, so I was wondering: what made you increase the error cutoff?

    I am doing tests with unified and just wondering what info leads you to make the decision to increase the error cutoff? What results were you seeing in the diagnostics to push you in that direction?

    Many thanks,

    Richard

  10. Very eloquent and useful article indeed!

    Thanks for posting it David!

    Ta

    J

  11. hi and thanks

    why, when i set the filtering mode to anything except box, does the render time double or more?

  12. hey Dave.. I got a quick question.. can you explain the difference between the sorted and segmented shadow methods in the render globals.. having trouble understanding why i would use sorted over simple.. been looking all over and really can’t seem to find a good example.. also this note from the autodesk docs confused me as well (When you use Simple shadows, you are limited to only one shadow ray per light source. Therefore, if you want to create soft shadows, which require more than one shadow ray to be cast, you should use Segments shadows instead.) Like if i use an area light i can get soft shadows.. i wonder if it’s referring to a directional light .. not sure.. any help would be appreciated bro

    • Simple is usually good enough, but it casts a single ray towards the light; something either is or is not in shadow at that point. Simple mode ignores the order of occluding (overlapping) objects. It doesn’t care if more than one object may be occluding the point being shaded.

      Sorted may call shadow shaders in the order the ray strikes objects; this may be important for transparency color, etc.

      Segments calls everything in the correct order of occluding objects and is used for more accurate volumes and color falloff in solid mediums (colored glass). Sometimes this isn’t necessary visually, and a shader may print a warning if it is trying to use this mode and it’s not enabled.

      This might change in future versions.
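      If you want to flip between modes to compare, here is a minimal Maya Python sketch (the shadowMethod attribute name and its enum order are assumptions based on the Render Settings dropdown):

      import maya.cmds as cmds

      # switch mental ray's shadow mode; assumed enum order: 0=Off, 1=Simple, 2=Sorted, 3=Segments
      cmds.setAttr("miDefaultOptions.shadowMethod", 3)  # Segments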

      • also btw i found out why my shadows turned black in segment mode.. in the render output window i got this when using the max distance and color mode in the advanced refraction section..

        If distance-dependent falloff is desired, the ‘Segment’ mode must be used.
        mia_material(_x): Refractive falloff is used, but the shadow mode is not set to ‘Segment’.
        Due to this, the falloff will be fixed, and not be influenced by distance.
        This can look as good, but is not as physically accurate.

        So i just increased the distance and the shadows turned to the correct color.. so i was able to get that warning you spoke about.. Still wasn’t able to see how sorted benefits transparency color.. didn’t see any visual change from simple to sorted..

  13. thx again for responding David.. would it be possible for you to give some direction on creating an example scene so i can see this in action.. i created a few spheres with transparency turned up and colored the spheres by changing the transparency.. and also tried another way by using the max distance and use color option.. i arranged the spheres so the shadows could overlap and i didn’t see any change from simple to sorted mode.. and once i changed it to segments the shadows turned black.. i’ve got one area light in the scene using a physical light shader

  14. David – what method are you specifically referring to in your latest reply?

  15. Sidenote: Vray (Chaosgroup) did not make this method. It has been around for some time, and in fact it is patented by mental images (now Nvidia).

  1. Pingback: اثرهای هنری فارسیان | GFX Persia » mentalRay Unified sampling

  2. Pingback: “My render times are high? How do I fix that?” « elemental ray

  3. Pingback: Render Tests, Combined Lighting | elemental ray

  4. Pingback: New GI Prototype Quick Start | elemental ray

  5. Pingback: Maya 2016, new features and integration | elemental ray
