Unified Sampling – Visually for the Artist

Unified Sampling, explained visually, with practical workflows

As a primer for using Unified Sampling, look here: Unified Sampling

Unified Sampling is QMC (Quasi-Monte Carlo) image synthesis.

Basically, it takes samples based on a QMC pattern and a decision-making process: the renderer “picks away” at the scene sample by sample, using what it has learned so far to decide where to place the next one.

Unified Sampling samples inside a pixel (red border)

How can this benefit your work and how can you take advantage of it?

Comparing stochastic (random) and QMC sampling patterns, you can see the benefit: QMC avoids clumping and spreads samples across the image to catch details. (image) You can also control how these samples are scattered inside the algorithm (stratification).

The rendering equation, applied to a complex scene, poses a multi-dimensional problem that Unified Sampling helps resolve through a single control called “Quality”. It adapts not only in the spatial dimension (raster space) but also in time (motion blur).

So how do you use Quality? What changes occur when you increase Quality?

Increasing Quality causes the sampler to concentrate more samples where it perceives the most error. So here I will introduce you to the Diagnostic Framebuffers. For this first explanation we will pay attention to two layers in the diagnostic: Error and Samples. You can invoke this feature the way you used to: check the “Diagnostic” box in the Render Settings. (image) Except now mental ray will generate a separate EXR file, found on Windows at: [User Directory]\projects\project_name\renderData\mentalray\diagnostic.exr
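
If you prefer to toggle this from a script, here is a minimal sketch. It assumes the Diagnostic checkbox maps to a diagnoseSamples attribute on miDefaultOptions, which may differ between Maya versions:

```python
# Hedged sketch: enable the diagnostic framebuffer from Python.
# Assumption: the Render Settings "Diagnostic" checkbox is backed by
# miDefaultOptions.diagnoseSamples -- verify on your build with listAttr.
import maya.cmds as cmds

if cmds.objExists("miDefaultOptions"):
    cmds.setAttr("miDefaultOptions.diagnoseSamples", 1)  # 1 = on, 0 = off
```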

Open the .exr in imf_disp.exe. Under the Layer menu, select the mr_diagnostic_buffer_error layer (Alt-2).

Several things to note:

  • Error is stored per channel (RGB)
  • Higher pixel values mean more error
  • Mousing over a pixel shows its error value (the bottom left of the imf_disp window displays values)

You will notice the perceived error centers on noise and contrast changes, as well as areas where geometry meets in raster space (on the screen).

Now what would happen if you increased your Quality? (image)

A further increase? (image)

Notice that the areas with the most perceived (or calculated) error are eroded first. This makes sense: you want to resolve those areas without wasting time on areas that are relatively noiseless. The buffer also gets progressively darker as error values decrease.

Now look at the mr_diagnostic_buffer_samples layer (Alt-3).

It’s white/blank!?

This is an EXR, and sample counts are stored as integers (whole numbers). If your samples min is 1.0, then values for the buffer channel ‘S’ (samples) begin at 1.0, which displays as white. So lower the exposure in the top right-hand corner. (image)

I find that -6.0 is a good place to start. Now you should be able to see a grayscale representation of your image. Mouse over these pixels for a value of ‘S’. You can hold “alt” (Windows) and drag the mouse pointer to change zoom levels on the fly in imf_disp. (The versions on the blog are .png or .jpeg for obvious reasons; these values don’t exist in the web examples.)
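
If you want to pull these values out with a script rather than mousing over pixels, a small sketch using the OpenEXR Python bindings is below. The channel names (mr_diagnostic_buffer_error.R, mr_diagnostic_buffer_samples.S) are assumptions based on the layer names imf_disp shows; print the header to confirm them on your files:

```python
# Hedged sketch: read the diagnostic EXR and print sample/error statistics.
import array
import OpenEXR
import Imath

exr = OpenEXR.InputFile("renderData/mentalray/diagnostic.exr")
dw = exr.header()["dataWindow"]
width, height = dw.max.x - dw.min.x + 1, dw.max.y - dw.min.y + 1

# Confirm the actual channel names with: print(exr.header()["channels"])
pt = Imath.PixelType(Imath.PixelType.FLOAT)
samples = array.array("f", exr.channel("mr_diagnostic_buffer_samples.S", pt))
error_r = array.array("f", exr.channel("mr_diagnostic_buffer_error.R", pt))

print("resolution: %dx%d" % (width, height))
print("samples min/max/mean: %d / %d / %.1f"
      % (min(samples), max(samples), sum(samples) / len(samples)))
print("error (R) max: %f" % max(error_r))
```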

Notice these things:

  • Your samples should increase in the areas where the error buffer eroded the most error in the frame.
  • With samples max at 100 you might not see any pixel reach 100 samples if your Quality did not dictate it (Quality may not have found it necessary at the current level).
  • Your sample values are linear. Unlike other implementations of QMC sampling, they are not exponential (4, 16, 64, 256). This means more efficiency: instead of jumping from 16 samples to 64, a pixel that only needs 23 samples gets 23. You avoid over-sampling in large jumps. (A toy sketch of this adaptive loop follows below.)
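
To make the idea concrete, here is a toy sketch of an adaptive, one-sample-at-a-time loop in that spirit. It is not mental ray's actual algorithm: the error metric, the threshold formula, and the random jitter (a real QMC sampler would use a low-discrepancy sequence) are all stand-in assumptions:

```python
import random

def render_adaptive(shade, width, height, smin=1, smax=100, quality=1.0):
    """Toy adaptive sampler: one sample at a time to the worst pixel."""
    sums   = [[0.0] * width for _ in range(height)]
    counts = [[0]   * width for _ in range(height)]
    errors = {}  # (x, y) -> current per-pixel error estimate

    def take_sample(x, y):
        # jittered sub-pixel position; a real QMC sampler would use a
        # low-discrepancy sequence here instead of random numbers
        v = shade(x + random.random(), y + random.random())
        sums[y][x] += v
        counts[y][x] += 1
        n = counts[y][x]
        if n < 2:
            errors[(x, y)] = float("inf")  # force at least a second look
        else:
            # crude stand-in estimate: newest sample's deviation from the
            # running mean, damped as the pixel accumulates samples
            errors[(x, y)] = abs(v - sums[y][x] / n) / n ** 0.5

    for y in range(height):                # baseline pass: smin everywhere
        for x in range(width):
            for _ in range(smin):
                take_sample(x, y)

    threshold = 0.1 / quality              # higher Quality -> less error allowed
    while errors:
        (x, y), err = max(errors.items(), key=lambda kv: kv[1])
        if err < threshold:
            break                          # even the worst pixel is clean enough
        take_sample(x, y)
        if counts[y][x] >= smax:
            del errors[(x, y)]             # hit the ceiling; retire this pixel

    return [[sums[y][x] / counts[y][x] for x in range(width)]
            for y in range(height)]
```

Running it on a hard edge, e.g. render_adaptive(lambda x, y: 0.0 if x < 4.5 else 1.0, 8, 8, quality=2.0), piles its extra samples onto the pixel column straddling x = 4.5 and leaves the flat regions near the minimum, which is exactly the behavior the samples buffer shows.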

What does this mean for tuning a scene?

This means that all you really need to tune an image is the Quality control. With a wide sample range you can push Quality around without sacrificing efficiency.

This has an added bonus: since your sampling is not locked to a rigid sample count, you can be assured that frame-to-frame changes in an animation will have a consistent level of quality. Even if a new complex object enters the frame, Unified Sampling will compensate accordingly without you needing to change sample rates.

You now have a consistent level of image quality for shading and anti-aliasing. Once you have chosen your desired Quality you can render unattended and return to correct results. (Less tweaking for quality issues and more time sleeping, my favorite hobby.)

So why do I need Samples at all?

Great question, and honestly there may come a day when you won’t see a samples control at all.

Samples gives you the opportunity to fine-tune a particularly difficult scene or object. You can indeed control individual objects with a sample override (a scripting sketch follows below). Keep in mind that these values are now literal and linear, not an exponential power function like before. These overrides can also fall outside the sample limits of your scene settings for extra flexibility. (image)
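
For example, a hedged sketch of setting such an override from Python. The miMinSamples/miMaxSamples attribute names and the shape name are assumptions; on some Maya builds these attributes must first be added in the shape's mental ray section:

```python
# Hedged sketch: per-object sample override. Attribute names are an
# assumption -- check the shape's mental ray section in your Maya version.
import maya.cmds as cmds

shape = "noisyObjectShape"  # hypothetical shape node
for attr, value in (("miMinSamples", 4), ("miMaxSamples", 400)):
    if cmds.attributeQuery(attr, node=shape, exists=True):
        cmds.setAttr("%s.%s" % (shape, attr), value)  # literal, linear counts
```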

For scenes with a complex or noisy effect, this can give you some added control.

How will this help me with motion blur or depth of field (DOF)?

Motion blur and DOF are really just noise problems. Unified Sampling puts samples where it finds they are needed most. What does this mean? Well, with motion blur or DOF there may be areas that are extremely blurry. (image)

Where blur removes detail, fewer samples are needed. Looking at a diagnostic you’ll see that very blurry areas may in fact receive very few samples. So the efficiency now extends to all types of problems and dimensions.

So now you understand how Unified Sampling will resolve a lot of problems more easily in your images using a simple, single control.

Using standard workflows you can generally begin with samples min 1.0 and samples max 100.0. These are scalar values because samples min below 1.0 will undersample an image: samples min 0.1, for example, guarantees only one sample per 10 pixels. Quality 1.0 to 1.5 is generally a decent range for higher-quality renders.
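
In Maya versions where Unified Sampling is not yet in the Render Settings UI, these settings go in as string options on miDefaultOptions (the same name/value/type slots discussed in the comments below). A sketch, with the value types and free slot indices as assumptions:

```python
# Hedged sketch: Unified Sampling string options on miDefaultOptions.
# Option names follow mental ray 3.9; the types ("boolean"/"scalar")
# and the slot indices are assumptions -- adjust to your scene.
import maya.cmds as cmds

def set_string_option(index, name, value, opt_type):
    base = "miDefaultOptions.stringOptions[%d]" % index
    cmds.setAttr(base + ".name",  name,     type="string")
    cmds.setAttr(base + ".value", value,    type="string")
    cmds.setAttr(base + ".type",  opt_type, type="string")

set_string_option(0, "unified sampling", "on",    "boolean")
set_string_option(1, "samples quality",  "1.0",   "scalar")
set_string_option(2, "samples min",      "1.0",   "scalar")
set_string_option(3, "samples max",      "100.0", "scalar")
```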

What about a non-standard workflow? Is there a way to take better advantage of Unified Sampling in how my scene is setup?

Yes! In fact, this may be the best way to use Unified Sampling for complex scenes:

Unified Sampling will “pick away” at your scene: samples are measured against one another and more are generated where necessary, one at a time. You can make your shader settings do the same thing.

Last Example Scene (image)

Note the glossiness: the foil on the walls, the leather, and the glossy floor. Usually for glossiness we help the image sampler by giving the shader additional rays to shoot for each sample called by an eye ray (from the camera). The same applies to area lights and other effects where local control can be set inside the shader. So imagine an eye ray striking an object and sending 64 rays for a glossy reflection: in a pixel with 16 samples you can expect up to 1024 reflection rays. These rays might strike yet another object and run shaders. . . 1024 times. If your ray depths are sufficiently high, you can expect a ray explosion.

Let’s take a look at another diagnostic buffer: time per pixel for this image. It is labeled mr_diagnostic_buffer_time. (image)

Where shaders force more work from the sampler, pixels take longer to generate, multiplied by the number of samples taken inside that pixel. In the old scheme, where sample counts jumped by large amounts, your time per pixel could grow in leaps and bounds. Each ‘S’ value for a pixel in this buffer is render time in seconds.

What if we let Unified Sampling control all of the sampling? As an overall control for a frame, Unified Sampling can be used in a more “brute force” way: lower the local samples on the shaders to 1. In this scenario a pixel might be struck maybe 400 times, but then only 400 rays are sent. That’s fewer than the 1024 we might have seen before with just 16 samples! (This includes lights/shadows. For instance, I used 9 portal lights in a scene with their samples left at ‘1’, and the resulting frame still rendered in under an hour at 720 HD.)
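
Worked out as a quick back-of-the-envelope comparison (the numbers are the ones from this example):

```python
# Local-sampling scenario: glossy rays multiply against eye samples.
eye_samples       = 16
local_glossy_rays = 64
print(eye_samples * local_glossy_rays)  # 1024 first-bounce glossy rays

# Brute-force scenario: 1 local ray per unified sample, growth is linear.
unified_samples = 400
print(unified_samples * 1)              # 400 rays -- fewer than 1024
```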

Crazy!

Here’s the result. (image)

Something here is wrong.

Originally we were shooting many rays per eye sample, and in some cases that was overkill. But now our image doesn’t look so great despite being faster (3 minutes is pretty fast). Think about it: if my reflection ray count was 64, then a pixel with 5 samples could spawn 320 rays. My samples max of 100 is certainly lower than the 320 rays I had before (remember, I’m now shooting one ray at a time).

How do I fix this?

You can first increase your Quality: 2.0, 3.0, and beyond. Keep an eye on your image as well as your samples diagnostic. We have found a Samples Quality of 6.0 to 10.0 works in most cases. (This has been greatly reduced in mental ray 3.10; look here: Unified Sampling in 3.10)

This is also where you will need to increase your samples max. Just like the scenario above where we might need 320+ rays, raise the limit so Unified Sampling can make that decision.

But now you may notice something else. Areas without much contrast might gain samples for no visible reason. (Look at the black areas.) How do you fix that?

There is a rarely used control called Error Cutoff.

This control tells Unified Sampling to stop taking additional samples once the error calculation reaches a certain limit; anything beneath it is no longer considered for additional samples. You may recognize this type of control from iray, which has a similar Error Threshold.

This control is very sensitive, and I find that most tweaking happens in the hundredths. So I begin with 0.01. In this example 0.03 was a good stopping point: 0.03 is triple 0.01, but overall still a tiny change in the control. Be careful when tuning this or you may erode Unified Sampling’s ability to sample areas that need it. In many cases it is an optional extra, not a requirement, but it is important in difficult scenes.
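
Putting the brute-force tuning together as string options (again a hedged sketch; the option names follow mental ray 3.9, and the types, indices, and values are the ones arrived at in this example, not universal numbers):

```python
# Hedged sketch: brute-force tuning via the same hypothetical helper as
# in the earlier string-options sketch.
import maya.cmds as cmds

def set_string_option(index, name, value, opt_type):
    base = "miDefaultOptions.stringOptions[%d]" % index
    for attr, val in (("name", name), ("value", value), ("type", opt_type)):
        cmds.setAttr("%s.%s" % (base, attr), val, type="string")

set_string_option(1, "samples quality",      "8.0",   "scalar")
set_string_option(3, "samples max",          "600.0", "scalar")
set_string_option(4, "samples error cutoff", "0.03",  "scalar")
```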

Will this benefit motion blur and depth of field?

Yes, a lot in most cases.

Now you might be sampling hundreds of times per pixel, so adding motion blur and/or depth of field becomes much less expensive. Unified Sampling jitters these samples in time and space for these effects automatically.

Why is it less expensive?

The samples you’re already taking also cover the temporal sampling of motion blur and the ray-direction change (the circle of confusion again) for depth of field, so achieving these effects adds much less overhead: you’re already sending lots of rays, all while maintaining QMC efficiency. Where a sample strikes a shader in a blurred area, it also generates a single sample for each type of effect, lowering the cost of that sample on the edge of blurry detail.
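
As a toy illustration of that point (not mental ray's implementation), each camera sample can simply carry the extra dimensions along: a shutter time for motion blur and a lens offset for the circle of confusion. Random jitter again stands in for the QMC sequence:

```python
import random

def camera_sample(px, py, shutter_open=0.0, shutter_close=1.0):
    # sub-pixel position (raster-space dimensions)
    x = px + random.random()
    y = py + random.random()
    # motion-blur dimension: where in the shutter interval this sample lands
    t = random.uniform(shutter_open, shutter_close)
    # DOF dimensions: a point on the lens disk (circle of confusion),
    # found by rejection-sampling the unit square
    while True:
        u, v = random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)
        if u * u + v * v <= 1.0:
            return x, y, t, u, v
```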

So now you have an idea of how to use Unified Sampling based on visual examples. You should hopefully find that keeping your samples settings wide and using Quality will simplify your tuning and scene rendering as well as making it faster.

The image below uses motion blur and depth of field: Samples Quality 8.0 with samples max 600. Render time: 44 minutes at 1280×720.

Additional Notes:

  • Using Progressive “on” with the Unified controls may help you resolve your settings faster, but for now I find I need to increase Quality more with Progressive “on” than “off” when using ‘1’ local shader sample (this has been reported). For look dev, though, you can generate a single pass very quickly to check lighting, materials, etc. all at once, and your Progressive refinements will be freakishly fast: the above image would refine a pass every 9 seconds, so in about 18 seconds I could tell if something was wrong and change it.
  • Using ‘1’ local shader sample behaves more like a BSDF shader, which samples the scene in a single operation. Current shader models try to collect everything from their environment, so taking one sample at a time is possible but not as good as a true BSDF.
  • Combining features that sample your scene smartly will increase the speed of your render and result in higher-quality images. Unified Sampling is the basis, and it can be improved through BSDF shaders, the Builtin IBL, Importons, and other modern techniques that work together, both present and future.
  • What about lighting? Take a look here for some ideas on area lights and Brute Force: Area Lights 101
  • Unified Sampling performance is logarithmic, like many brute-force techniques: increases in Quality result in smaller and smaller render-time increases as you reach higher numbers. Brute-force rendering tests have shown speed gains of about 10–15% for similar quality; we encourage more tests with this workflow. Others are testing this too, including studio projects where motion blur is key.
  • Consider using your render output and mr_diagnostic_buffer_time to spot areas in your image that might benefit from changes for a faster render (visually insignificant areas that take too long due to expensive effects, lots of shadow rays, etc.). I find the biggest offender for render time is shadow rays in most cases.

Brute-force wide gloss reflection, 2 bounces (glossy), with the Grace Cathedral HDR. 11 minutes a frame at Quality 12.

Below: brute-force portal lights and ambient occlusion. The frosted windows are the most difficult to resolve. 57 minutes for this frame (15 of which is the indirect illumination calculation). The model can be found here: ronenbekerman Render Challenge One. You can find recommendations on area lights for brute force here: Area Lights 101

Samples: actually still low, averaging less than 100 per pixel; many pixels are in the teens.


  1. Hi David,

    I was just wondering what happened to the wall’s shading between this post and https://elementalray.wordpress.com/2011/11/23/linear-color-workflows-in-maya-part-2-the-preferred-method/?

    The lighting on the wall from the window has gone to complete mush. Just curious whether Unified Sampling has anything to do with it, or if you fiddled with the scene’s settings between this post and the other one.

    • I removed the wall and dropped in an HDR just to make it look different. The HDR is the Ennis EXR. It wasn’t blurred. Used as-is. I want to avoid the exact same thing over and over and over. . . 🙂 As assets build things will change around some.

  2. Just read through the whole post. Very well explained. Way more in depth than previous posts on the various forums. Glad you explained the brute force method here, so when I forget something (which will most likely happen 🙂 ) I can come back here and reference it. I’ll be doing some more tests comparing adaptive / more local glossy samples to the brute force method with Unified explained here. Will be interesting to see how much benefit I can get.

    • Brute force seems to work well in many cases and simplifies things. It’s something that wasn’t possible previously. However, the standard approach still applies with low local samples and decent Quality as a place to start. It’s still a balance for more expensive effects where trickery is also a good compromise. 🙂 We’re hoping more people will expose and use Unified Sampling so other scenarios will give us better guidelines than just VFX work.

  3. Holger aka kzin

    Nice explanation!
    I will extend my tests to the use of the diagnostic buffer; it looks like a good way to see what’s going on. I tried the cutoff and indeed it can help reduce the render time a bit with lots of sampling (1/10th without sacrificing quality in my sample scene).

    David, can you say what machine you did your tests on? It would help me judge my render times. 😉

    • Some of the test times mentioned are generally from dual quad-core machines with anywhere between 8 and 12 GB of RAM. My machine is an older Harpertown Xeon; Nehalem can see 15 to 20% speed increases depending on the scene. (I also tend to lower the rendering process priority so I can listen to music and surf the internet at times.) 🙂

  4. Just testing this and noticed that having my area light samples at 1,1,1 kills the specular contribution on my surfaces. Raising it back to around 8 brings the spec back to the correct levels. Anyone else noticing this?

    • Interesting. We tend to use light cards and something more like a reflection (BSDF-like, without a spec; there’s a love/hate relationship with the mia spec vs. reflection mismatch). Really wide glosses are darker if you use brute force but don’t suffer the “disconnect” in glossy samples Kzin mentions. Increasing samples on the surface shader tends to make it brighter at the cost of noise for more difficult reflections. I’ll look at this and see if there’s a balance between them when an area light is involved. For delta lights like a spot light, really low shadow samples still converge but naturally don’t affect the light source.

    • I looked at this a bit more. More samples = more information. Brute force seems to keep this consistent, where before a shader with 16 samples might collect more information than a shader with 32 at otherwise identical settings. I notice with brute force my secondary reflections seem more consistent than if I have a couple of different shaders with different ray counts, all else being equal. (It still goes back to eyeballing a shot aesthetically, I suppose.) Making the area light visible also makes it consistent between samples 1 and 8 because it’s dealing with an actual reflection and not a fake highlight. If you need that spec result you can probably bump it up but reduce Quality a bit.

    • Holger aka kzin

      If I need specs, I go with another shader like Blinn or a custom one that gives me more control (puppet, for example). But in my normal workflow I try to avoid specs and use reflections. An alternative to specs, if render time is a problem, can be env reflections with a ray depth of only 1: that renders fast, even with diffuse reflections.

      The current BSDF shader also has some problems with specs; I think it misses some sampling optimizations here (I read something in connection with Arnold tech about specs needing optimization, but I’m not sure where I read it).

      • Thanks for the info.

        We do a lot of work with both the mia and car paint shaders here, using non-visible area lights with the physical light shader and reflector cards with maps for custom reflection fall-off.
        Whilst the light cards drive the main reflection of the light source on our products, we tend to balance this with some subtle spec to enhance the feeling of the material being multi-layered, especially on car paints, where the subtle use of colour in both spec 1 and spec 2 can create interesting colour variation across the surface. It also gives that subtle glow around the edge of the light card reflector that helps enhance the realism, even though it’s a complete fake.

      • In your case I would just stick to the normal workflow but keep the area light samples low.

        The usual problem is that people work the exact same way with Unified Sampling and become unhappy with the quality of the image produced or the time it takes to get it. The more you allow Unified Sampling to make the decisions, the better the result; this is a bit more like iray, where you set it and forget it.

        BSDFs in mental ray right now assume a physical workflow, so there’s no spec control on them. Arnold relies on BSDFs entirely to gain speed from a brute-force technique. This simplifies things but also locks you into a workflow, since it’s your only choice.

      • Sidenote: the mia_x material (not passes) has an extra reflection lobe (base reflection) under Extra Attributes you can use to fake some of those things. It maintains physical correctness as well. I wish it was available in the passes shader as an output.

      • Great tip, David. For some reason I never noticed that and I haven’t used it before. That could come in handy. Wonder why they left it off the passes shader? Perhaps they didn’t feel like programming yet another output pass in there? 🙂

  5. When using a brute-force approach, what settings would you recommend for lighting with the mr native HDRI?

    • Lighting takes some more care and for now may need more than 1 sample for some lighting, like area lights. The Native IBL is an area light, and I find values of about 0.3 are generally pretty good, though this will vary based on the range and detail in the HDR. The Native IBL loses a little detail when resized (resolution string option).

  6. Hi David,

    First of all, thanks for this nice explanation of Unified Sampling. I tried it on a project I’m working on and ran into an unexpected issue: I usually split my render into the classic render passes, and I noticed that my passes were clamped, whereas they are not when using the classic sampling method. Do you know where this might come from?

    • I haven’t experienced clamping.

      However, 3.9 has an error where some user passes marked as filtered are output incorrectly (artifacts in the pass). That has been fixed for 3.10, so passes weren’t always very useful in 3.9 for this reason.

  7. Hey David, very nice article. You wrote something about Progressive being on in the notes section, and I’m having trouble understanding that. I’m not sure where that Progressive setting is; can you help me out?

    • Progressive rendering uses the string option “progressive” (boolean, “on”) and follows the Unified Sampling controls plus a time-limit parameter. For now the ‘0’ (infinite) limit is broken, so I use 99999.

      You can find it as well as the other options in the UI thread. It’s best with Maya 2013 SP1+

  8. Hey David. As always, thanks for providing much-needed info. I wanted to follow up with you on the progressive rendering string option. I tried adding the string option myself in miDefaultOptions; I see where I can put the name, value, and type, but I’m not sure where to add that time-limit parameter. Any help would be much appreciated.

    • Hi Jason,

      Give the Maya UI on the Google Code Site a try. It has those options included. There’s a link to it on the blog. It’s designed for 2013.

      • Hi David, and thanks for all the info.
        So I’m kinda late here, as I’m on Maya 2014 SAP and am finally getting a chance to use Unified Sampling. I noticed that in earlier versions of Maya you used ‘string options’? Do I need those now, or are they built in?
        thanks

      • Maya 2014 integrates Unified Sampling; you should see it as the default algorithm. There should be a Quality slider at the top of the Render Quality settings.
