Unified Sampling – Visually for the Artist
Unified Sampling Visually Explained, and Practices for Using It
As a primer for using Unified Sampling, look here: Unified Sampling
Unified Sampling is QMC (Quasi-Monte Carlo) image synthesis.
Basically: taking samples based on a QMC pattern and a decision-making process. The renderer “picks away” at the scene sample by sample to determine where to place the next sample.
How can this benefit your work and how can you take advantage of it?
Comparing stochastic (random) and QMC sampling patterns, you can see the benefit in how QMC avoids clumping and spreads samples across the image to catch detail. (image) You can also control how these samples are scattered inside the algorithm (stratification).
The rendering equation and the complications of a complex scene pose a multi-dimensional sampling problem that Unified Sampling helps resolve through a single control called “Quality”. This process operates not only in the spatial dimension (raster space) but also in time (motion blur).
So how do you use Quality? What changes occur when you increase Quality?
Increasing Quality causes the sampler to concentrate more samples where it perceives the most error. So here I will introduce you to the Diagnostic Framebuffers. For this first explanation we will pay attention to two layers in the diagnostic: Error and Samples. You can invoke this feature the way you used to: check the “Diagnostic” box in the Render Settings. (image) Except now mental ray will generate a separate EXR file in the following location on Windows: [User Directory]\projects\project_name\renderData\mentalray\diagnostic.exr
Open the .exr in imf_disp.exe. Under the Layer menu, select the mr_diagnostic_buffer_error layer (Alt-2).
Several things to note:
- Error is per channel (RGB)
- More error means a higher pixel value
- Mousing over a pixel will provide you with the error value for the pixel (bottom left of the imf_disp window shows values)
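If you want to inspect these values outside imf_disp, here is a minimal Python sketch using the OpenEXR bindings; the layer/channel naming (mr_diagnostic_buffer_error.R) is an assumption on my part, so list the header’s channels first to confirm it.

```python
import array
import Imath
import OpenEXR

exr = OpenEXR.InputFile('renderData/mentalray/diagnostic.exr')
header = exr.header()
print(sorted(header['channels'].keys()))  # confirm the diagnostic layer names

dw = header['dataWindow']
width = dw.max.x - dw.min.x + 1

# Pull the red channel of the error layer as 32-bit floats.
raw = exr.channel('mr_diagnostic_buffer_error.R',
                  Imath.PixelType(Imath.PixelType.FLOAT))
error_r = array.array('f', raw)

# Per-pixel error at (x, y), mirroring the mouse-over readout in imf_disp.
x, y = 320, 240
print('error R at (%d, %d): %f' % (x, y, error_r[y * width + x]))
```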
You will notice the perceived error centers around noise and contrast changes as well as areas where geometry might meet in raster space (on the screen).
Now what would happen if you increased your Quality? (image)
A further increase? (image)
Notice that the areas with the most perceived (or calculated) error are eroded first. This makes sense: you want to resolve those areas without wasting time on areas that are relatively noiseless. The buffer also gets progressively darker as error values decrease.
Now look at the mr_diagnostic_buffer_samples layer. (Alt-3)
It’s white/blank!?
This is an EXR, and the values for samples are integers (whole numbers). If your samples min is 1.0, then your values will begin at 1.0 (white) for the buffer value ‘S’, which is the sample count. So you can lower the exposure in the top right-hand corner. (image)
I find that -6.0 is a good place to start. Now you should be able to see some sort of grayscale representation of your image. Mouse over these pixels to read the value of ‘S’. You can hold “alt” (Windows) and drag the mouse pointer to change zoom levels on the fly in imf_disp. (The versions on the blog are .png or .jpeg for obvious reasons; these values don’t exist in the web examples.)
Notice these things:
- Your samples should increase around the areas where the error buffer eroded the most error in the frame.
- With samples max at 100 you might not have any pixel with a sample count of 100 if your Quality did not dictate it. (Quality might not have found it necessary at the current level.)
- Your sample values are linear. Unlike older sampling implementations, the progression is not exponential (4, 16, 64, 256). This means more efficiency: instead of jumping from 16 samples to 64, maybe the pixel only needs 23 samples. You avoid over-sampling in large jumps.
What does this mean for tuning a scene?
This means that all you really need to tune an image is the Quality control. With a wide sample range you can push Quality around without sacrificing efficiency.
This has an added bonus: since your sampling is not a rigid function of samples, you can be assured that frame to frame changes in an animation will have a consistent level of quality to them. Even if a new complex object enters the frame, Unified Sampling will compensate accordingly without you needing to change sample rates.
You now have a consistent level of image quality for shading and anti-aliasing. Once you have chosen your desired Quality you can render unattended and return to correct results. (Less tweaking for quality issues and more time sleeping, my favorite hobby.)
So why do I need Samples at all?
Great question, and honestly there may come a day you won’t see a samples control at all.
Samples gives you the opportunity to fine-tune a particularly difficult scene or object. You can indeed control individual objects with a sample override. Keep in mind that these values are now literal and linear, not an exponential power function like before. These overrides can also lie outside the sample limits of your scene settings for extra flexibility. (image)
For scenes with a complex or noisy effect, this can give you some added control.
How will this help me with motion blur or depth of field (DOF)?
Motion blur and DOF are really just noise problems. Unified Sampling will place samples in the areas where it finds they are needed most. What does this mean? Well, in motion blur or DOF there may be areas that are extremely blurry. (image)
This means that a loss of detail would result in needing fewer samples. Looking at a diagnostic you’ll see that very blurry areas may in fact receive very few samples. So the efficiency now extends to all types of problems and dimensions.
So now you understand how Unified Sampling will resolve a lot of problems more easily in your images using a simple, single control.
Using standard workflows you can generally begin with samples min 1.0 and samples max 100.0. These are scalar values: a samples min below 1.0 will undersample the image (samples min 0.1, for example, will sample at minimum once every 10 pixels). Quality 1.0 to 1.5 is generally a decent range for higher-quality renders.
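If you prefer to set this up with script rather than the UI, here is a minimal Maya Python sketch, assuming the mental ray string-options interface on the miDefaultOptions node (name/value/type compounds, as in Maya 2012 / mental ray 3.9). The indices used below are arbitrary, so pick slots that are free in your scene.

```python
# Requires the mental ray plugin to be loaded so miDefaultOptions exists.
import maya.cmds as cmds

def set_string_option(index, name, value, opt_type):
    """Write one mental ray string option into an (assumed free) array slot."""
    base = 'miDefaultOptions.stringOptions[%d]' % index
    cmds.setAttr(base + '.name', name, type='string')
    cmds.setAttr(base + '.value', value, type='string')
    cmds.setAttr(base + '.type', opt_type, type='string')

set_string_option(10, 'unified sampling', 'true', 'boolean')
set_string_option(11, 'samples min', '1.0', 'scalar')
set_string_option(12, 'samples max', '100.0', 'scalar')
set_string_option(13, 'samples quality', '1.0', 'scalar')
```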
What about a non-standard workflow? Is there a way to take better advantage of Unified Sampling in how my scene is setup?
Yes! In fact, this may be the best way to use Unified Sampling for complex scenes:
Unified Sampling will “pick away” at your scene. Samples are measured against one another and more are generated when necessary, one at a time. You can make your shader settings do the same thing.
Last Example Scene (image)
Note the glossiness: the foil on the walls, the leather, and the glossy floor. Usually for glossiness we would help the image sampler by giving the shader additional rays to shoot for each sample called by an eye ray (from the camera). The same is true for area lights and other effects where local control can be done inside the shader. So imagine an eye ray striking an object and sending 64 rays for a glossy reflection. In a pixel with 16 samples you can expect up to 1024 reflection rays. These rays might strike yet another object and run shaders. . . 1024 times. If your ray depths are sufficiently high, you can expect a ray explosion.
Let’s take a look at another Diagnostic Buffer for Time per pixel in this image. It is labeled mr_diagnostic_buffer_time (image)
Where shaders force more work from the sampler, pixels take longer to generate, and this is multiplied by the number of samples taken inside them. In the old scheme, where sample counts jumped in large increments, your time per pixel could become expensive in leaps and bounds. In this buffer, the value ‘S’ for a pixel is its render time in seconds.
What if we decided to let Unified Sampling control the sampling? As an overall control for a frame, Unified Sampling can be used in a more “brute force” way: lower the local samples on the shaders to 1. In this scenario you might strike a pixel maybe 400 times! But in that case only 400 rays are sent. That’s less than the 1024 we might have seen before with just 16 samples! (This includes lights/shadows. For instance, I used 9 portal lights in a scene where I left their samples at ‘1’, and the resulting frame was still under an hour at 720 HD.)
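As a practical note, here is a hedged Maya Python sketch of that change for mia_material shaders; the attribute names (refl_gloss_samples, refr_gloss_samples) are the mia_material parameters as I recall them, so verify them on your nodes before running this.

```python
# Drop the local glossy sample counts to 1 on every mia_material_x shader so
# Unified Sampling drives the sampling instead of the shaders themselves.
import maya.cmds as cmds

for mat in cmds.ls(type='mia_material_x') or []:
    for attr in ('refl_gloss_samples', 'refr_gloss_samples'):
        if cmds.attributeQuery(attr, node=mat, exists=True):
            cmds.setAttr(mat + '.' + attr, 1)
```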
Crazy!
Here’s the result. (image)
Something here is wrong.
Originally we were shooting more samples per eye ray. In some cases this may have been overkill, but now our image doesn’t look so great despite being faster (3 minutes is pretty fast). Think about it: if my reflection ray count was 64, then a pixel with 5 samples could spawn 320 rays. My samples max of 100 is certainly lower than the 320 rays I was getting before (remember, I’m now shooting one at a time).
How do I fix this?
You can first increase your Quality: 2.0, 3.0, more, etc. Keep an eye on your image as well as your Samples diagnostic. We have found that a samples quality of 6.0 to 10.0 works in most cases. (This has been greatly reduced in mental ray 3.10, look here: Unified Sampling in 3.10 )
This is also where you will need to increase your samples max. Just like the scenario above where we might need 320+ rays, we need to raise the limit so Unified Sampling can make that decision.
But now you may notice something else. Areas without much contrast might gain samples for no visible reason. (Look at the black areas.) How do you fix that?
There is a rarely used control called Error Cutoff.
This control tells Unified Sampling to stop taking additional samples once the error calculation reaches a certain limit. Anything beneath this is no longer considered for additional samples. You may recognize this type of control from iray, which has an Error Threshold.
This control is very sensitive, and I find that most tweaking happens in the hundredths. So I begin with 0.01. In this example 0.03 was a good stopping point; 0.03 is triple 0.01 but still only a tiny change in the control. Be careful when tuning this or you may erode Unified Sampling’s ability to sample areas that need it. In many cases it is an optional control rather than a requirement, but it is important in difficult scenes.
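Continuing the string-option sketch from earlier, the cutoff can be set like this; as I recall the option takes a per-channel color value, but verify the name and type against your mental ray version (the index is again arbitrary).

```python
# Reuse the set_string_option helper defined earlier in this post.
set_string_option(14, 'samples error cutoff', '0.03 0.03 0.03', 'color')
```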
Will this benefit motion blur and depth of field?
Yes, a lot in most cases.
Now you might be sampling hundreds of times per pixel. Add motion blur and/or depth of field and these effects are now much less expensive. Unified Sampling jitters the samples in time and space for these effects automatically.
Why is it less expensive?
The extra samples you’re already taking will account for the temporal sampling of motion blur and the ray direction change (circle of confusion) for depth of field. So achieving these effects adds much less overhead here: you’re already sending lots of rays, all while maintaining QMC efficiency. In blurry areas of motion blur or DOF, a sample that strikes a shader will also generate just a single sample for each type of effect, lowering the cost of that sample on the edge of blurry detail.
So now you have an idea of how to use Unified Sampling based on visual examples. You should hopefully find that keeping your samples settings wide and using Quality will simplify your tuning and scene rendering as well as making it faster.
The image below uses motion blur and depth of field: samples quality 8.0 with samples max 600. Render time: 44 minutes at 1280 by 720.
Additional Notes:
- Using Progressive “on” with the Unified controls may help you resolve your settings faster, but for now I find that I need to increase my Quality more than with Progressive “off” when using ‘1’ local shader sample (this discrepancy has been reported). But for look dev you can generate a single pass very quickly to check lighting, materials, etc. all at once, and in the meantime your Progressive refinements will be freakishly fast! The above image would refine a pass every 9 seconds, so in about 18 seconds I could tell if something was wrong and change it.
- Using ‘1’ local shader sample behaves more like a BSDF shader, which samples the scene in a single operation. Current shader models try to collect everything from their environment at once, so taking one sample at a time is possible but not as good as a true BSDF.
- Combining features that sample your scene smartly will increase the speed of your render and result in higher-quality renders. Unified Sampling is the basis, and it can be improved through BSDF shaders, the built-in IBL, importons, and other modern techniques that work together, both present and future.
- What about lighting? Take a look here for some ideas on area lights and Brute Force: Area Lights 101
- Unified Sampling performance is logarithmic, like many brute-force techniques. This means increases in Quality result in smaller and smaller render-time increases as you get to higher numbers. Brute-force rendering tests have shown a speed gain of about 10-15% for similar quality; we encourage more tests with this workflow. Others are testing this as well, including studio projects where motion blur is key.
- Consider using your render output and mr_diagnostic_buffer_time to see areas in your image that might benefit from changes to get a faster render (visually insignificant areas that take too long due to expensive effects, lots of shadow rays, etc.). I find the biggest offender for render time is shadow rays in most cases.
Brute force wide gloss reflection, 2 bounces (glossy) with the Grace Cathedral HDR. 11 minutes a frame at Quality 12.
Below: Brute force portal lights and ambient occlusion. The frosted windows are the most difficult to resolve. 57 minutes for this frame (15 of that is indirect illumination calculation). The model can be found here: ronenbekerman Render Challenge One. You can find recommendations on area lights for brute force here: Area Lights 101
Samples: actually still low, averaging less than 100 per pixel; many pixels are in the teens.
The Unified Rasterizer
If you read my post on unified sampling you might have heard me talking about using the unified sampling settings to control the rasterizer. While it is important to note that there is no difference between how the regular rasterizer and unified rasterizer work, it is convenient to be able to control all of mental ray’s primary rendering modes using the same set of controls. Thus I will explore the rasterizer from a unified point of view.
Why Use the Rasterizer?
With unified sampling, its clever error estimation, and its fast ray traced motion blur, you may be wondering why you should care about the rasterizer. You should care because ray traced motion blur will inherently have a certain amount of grain, and it will never be super-duper fast. For those projects with heavy motion blur, render time limitations, and perfectly smooth expectations, I welcome you to the world of scanline-based rasterization.
Technically speaking, the rasterizer achieves motion blur performance improvements over adaptive ray tracing by separating the shading component of sampling from the antialiasing component of sampling. So, within a given frame, tessellated objects can be shaded, sampled, moved, and sampled again without the need for additional shading. The faster render time comes from the decreased number of shading calls, and the smoother motion blur comes from the tessellation essentially being slid across the pixel/tile. Despite these advantages, the result is not as physically correct as ray tracing, which shades at every sample point.
You can read more about this in the mental ray documentation that comes with Maya.
It is also important to note that the scanline-based rasterization approach to rendering is not as efficient at tracing rays as a ray traced approach is. For this reason the rasterizer will suffer with reflections and refractions while unified sampling will shine.
How to Enable the Unified Rasterizer
To enable the unified rasterizer you need to turn on unified sampling (currently implemented with string options) and switch the Primary renderer to “Rasterizer” (Render Settings > Features > Primary Renderer).
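For scripting this, a hedged sketch follows: enabling unified sampling reuses the string-option helper from earlier, and the primary renderer switch assumes the Primary Renderer dropdown maps to miDefaultOptions.scanline with a value of 3 meaning Rapid (the rasterizer); confirm that enum value in your Maya version before relying on it.

```python
import maya.cmds as cmds

# Unified sampling on, via the string options (helper defined earlier).
set_string_option(10, 'unified sampling', 'true', 'boolean')

# Assumed mapping: miDefaultOptions.scanline = 3 selects the rasterizer (Rapid).
cmds.setAttr('miDefaultOptions.scanline', 3)
```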
Antialiasing Rasterizer Controls
When using the unified rasterizer, you only need to consider one setting to control antialiasing sample quality: “samples max”
- “samples max” replaces “visibility samples” aka “samples collect”.
- While “samples max” acts as a limit in regular unified, here it controls the absolute number of antialiasing samples taken per pixel.
- The value is truncated to the nearest square number of lesser or equal value, e.g. 16.0 → 16, 32.5 → 25, 99.9 → 81, 100.0 → 100, etc. (see the sketch after this list)
- “production quality” lies somewhere around 25.0 or 36.0. Use lower values for faster previews or higher values for difficult renders.
- scalar, defaults to 100.0
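To make the truncation rule concrete, here is a small Python illustration of it (the function name is just for this example):

```python
import math

def rasterizer_aa_samples(samples_max):
    """Largest perfect square less than or equal to samples max."""
    root = int(math.floor(math.sqrt(samples_max)))
    return root * root

for value in (16.0, 32.5, 99.9, 100.0):
    print(value, '->', rasterizer_aa_samples(value))  # 16, 25, 81, 100
```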
Shading Rasterizer Controls
The main setting that you need to consider for shading sample quality is “samples quality”.
- “samples quality” replaces “shading quality” aka “shading samples”.
- While “samples quality” controls error allowance between samples and pixels in regular unified, here it approximates the number of shading triangles per pixel.
- A value of 1.0 corresponds to about 1 shader call per pixel per time sample, 2.0 corresponds to 2 shader calls per pixel per time sample, 0.5 corresponds to 1 shader call per 2 pixels per time sample, etc.
- scalar, defaults to 1.0
Additional Shading Optimization
- “time samples”/”time contrast” is also known as “samples motion”.
- While “time samples” is ignored in regular unified, here it controls the number of times the tessellated triangles will be shaded over the motion blur shutter interval.
- While it is beneficial to leave this setting at 1.0, sometimes it is necessary to raise it to avoid animation artifacts caused by dragging the shaded tessellations to create blur.
- This is integrated in the Maya UI under Render Settings > Quality > Motion Blur > Time Samples. scalar, defaults to 1.0
- Note: this setting has a different meaning with the rasterizer than it does with AA sampling. Similar to unified, each sample is QMC jittered in time when using the rasterizer.
“rast motion factor”
- When enabled, “rast motion factor” allows you to raise or lower the amount of shading samples performed for fast moving geometry.
- This allows you to limit shading samples where the detail would otherwise be lost in the blur.
- Shading samples performed scale linearly with “rast motion factor” and the speed of the moving geometry.
- scalar, defaults to 1.0. 0.0 is disabled.
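If you are driving everything through string options, “rast motion factor” can be set the same way as the earlier examples (again, the index is arbitrary):

```python
# Reuse the set_string_option helper defined earlier in this post.
set_string_option(15, 'rast motion factor', '2.0', 'scalar')
```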
Linear Color Workflow(s) in Maya – Part 2: The Preferred Method
Continued from Part 1: Maya Tools
Now we come to the approach that is the most robust and effective, but also more complicated.
The preferred method (and one you may find at a visual effects studio with a color pipeline):
Photoshop -> Linearize in Nuke based on original color space -> Render (view through LUT) -> Composite (view with correct LUT) -> Output to correct Color space
After painting your textures in Photoshop, take them into Nuke. Nuke has a Colorspace node where you can specify the output color space and even the white point. Write the file out in linear format, preferably to an EXR that can later be cached and mipmapped (https://elementalray.wordpress.com/2012/04/17/texture-publishing-to-mental-ray/). To clarify: when writing to a non-floating-point format, the Write node in Nuke will choose an appropriate colorspace, generally sRGB for 8-bit images. If saving to EXR, the Write node understands that a floating-point format is linear, and the Colorspace node can be omitted. You can also change the colorspace to be written in the Write node itself, but not the white point.
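Here is a minimal Nuke Python sketch of that linearize-and-write step for the simple EXR case, assuming an sRGB-painted source texture; the file paths are placeholders, and the knob names are as I recall them from Nuke, so verify them in your version.

```python
import nuke

# The Read node linearizes on input according to its colorspace knob.
read = nuke.nodes.Read(file='sourceimages/wall_diffuse.tif')
read['colorspace'].setValue('sRGB')  # the space the texture was painted in

# A floating-point EXR is written out as linear data, so no Colorspace node
# is needed for this simple case.
write = nuke.nodes.Write(file='sourceimages/wall_diffuse_lin.exr')
write['file_type'].setValue('exr')
write['datatype'].setValue('16 bit half')  # half float, good for caching/mipmapping
write.setInput(0, read)

nuke.execute(write, 1, 1)  # write out frame 1
```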
If you are on a project with a specific LUT/color space, you will have to take care to strip the original color space out (linearize it). This way, when it is viewed through the LUT, it will look as expected based on what you painted. You will notice that the Maya selections mention linearization based on “primaries” as well as just a gamma curve; LUTs may alter the primaries for the correct source, such as DCI-P3. Your digital supervisor will generate one of these for use. How to make one is beyond the scope of this tutorial since it delves too far into Nuke.
Once linearized, load these into your texture nodes inside Maya.
What about the color picker? That’s a sticky problem: Maya colors are stuck in sRGB unless you reverse-engineer the color you need or simply drop in a gamma node, use its color picker, and then correct it (gamma 0.4545 for RGB). That is generally “good enough”. Then use the Render View Color Management to load the LUT for viewing as you render.
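A hedged Maya Python sketch of that gamma-node trick follows; the shader name and attribute (mia_material_x1.diffuse) are placeholders for wherever your picked color is used.

```python
import maya.cmds as cmds

# A gammaCorrect node set to 0.4545 pulls an sRGB-picked swatch back to linear.
g = cmds.shadingNode('gammaCorrect', asUtility=True)
cmds.setAttr(g + '.gamma', 0.4545, 0.4545, 0.4545, type='double3')
cmds.setAttr(g + '.value', 0.2, 0.35, 0.6, type='double3')  # pick your color here
cmds.connectAttr(g + '.outValue', 'mia_material_x1.diffuse', force=True)
```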
Take care that your output color space is what your LUT is designed to use, be it linear or logarithmic (Cineon log format). Your input will be the best linear approximation as created by Nuke. For example, use logarithmic output when your LUT expects Cineon formats.
Render away, and your passes will automatically be output in linear form (EXR 16-bit half, please!). Load these into your compositing package and view the compositing process through the correct color space. Nuke has several processes for this, but the input_process is preferred. (image)
You now have a color correct pipeline where you are rendering with the correct linear textures and viewing them like they will appear in your final project.
This means color correct decisions can be made during all phases. This reduces artist “guessing” and surprises. Your images will operate correctly inside the renderer and with some care in choosing your materials, it will be physically plausible and achieve photo realism more quickly. You also alleviate extra trouble where compositors were relighting scenes as opposed to integrating elements. It should look like the below flow but feel like Heaven. . .maybe.
Paint (sRGB) -> Linearize based on original colorspace -> Render (linear) -> Composite (linear) -> Output (sRGB or Specific Colorspace)
Some Simple Examples:
The original workflow was simple: paint your textures and render. The problem here is that the image is dark and the lighting is blown out in comparison. When compounded with physically inaccurate shaders, the result was a look that could take hours to “eyeball” into a plausible solution.
Corrected to sRGB from sRGB-painted textures: quick fix, right? No, now everything is “double” corrected, 2+2=5 so to speak. Everything washes out while your black and white areas are unchanged. This also means your lighting will be much too strong and wash out entire areas of your scene.
Rendered with the correct linearized textures but viewed incorrectly: now it’s certainly too dark, but your overall contrast and lighting information are correct. As a 16-bit half floating-point image, you can easily correct and view this result.
Linear to sRGB rendered and viewed correctly. You have a wider range of values and contrast without anything being washed out.
Additional notes:
- You do not need to render through a lens shader for sampling reasons. mental ray samples internally in perceptual space automatically. In combination with Unified Sampling, correctly rendered images should be free of artifacts. However, if you are rendering directly for beauty to an 8-bit image format then it would benefit you to render with your color space baked in (in the render). Post operations to correct a lower bit depth image will introduce artifacts and banding.
- I haven’t really mentioned using a lens shader for anything else aesthetically. When rendering for beauty, the mia_exposure_photographic lens shader is very nice, but a 2D package like Nuke or Lightroom has much more powerful tools to give you the look and color grading you desire.
- There is a framebuffer gamma setting in the Render Settings. Ignore it. Using this control will apply a gamma correction to your inputs overall and will cause undesirable artifacts.
- Output passes (diffuse, specular, etc.) are not affected by the lens shader. Correct. These passes are meant for compositing operations. As mentioned previously, these operations should be done in linear color space so that your results are correct. Then output to the desired color space at the end. Ideally these operations should be additive.
- The color picker being sRGB is a bit of an issue that complicates things; it might be nice to log a suggestion for Autodesk to further refine the linear workflow controls and actions. Under the hood these colors are most likely floating point.
- The easiest way to understand what should be corrected: colors/textures that will be seen as some sort of color in the final rendered effect need correction. Bumps, displacement, cutout maps, etc. are not seen as a color in the final render and can be left alone.
- Normalized colors (range 0-1) are best for diffuse textures. In Nuke you can normalize a texture as well as change the color space. Emissive textures (like an HDR environment) should not be normalized; that would defeat the purpose of having that lighting information and flatten out your lighting. The same is true of geometry light sources where you apply a texture as a light. But these textures should still be in linear color space.
- If you have a plate you are rendering with (or environment), it needs to be linearized correctly if you are going to use it in the renderer and later corrected. Otherwise you will accidentally double correct it. Maya is primarily a VFX package so it assumes you will composite your results later. It’s a best practice to hide the primary visibility of these elements so they can be added again in the compositing stage and not inside Maya.
- If you always paint a texture in sRGB space and linearize it, then output to a different color space, there will be some difference in what you painted versus the final output. The solution there is to work and view your texture in the final color space as you paint. This isn’t always easy to do in something like Photoshop unless you have a correct working color space and calibrated monitor.