
Unified Sampling Redux

As a simplified look at using Unified Sampling as the more “brute force” method outlined here, the example below shows the differences in time and sampling on a visually trivial scene. This should make some things easy to understand and quick to read before moving on to lights. 😉

Glossy Test Scene

In a glossy scene originally rendered at HD 1080 using all mia_material_x shaders, the first frame was rendered with the following settings:

Quality 8
Samples Min 1.0
Samples Max 800
Reflection Bounces 2
Shadow Bounces 2

Resulting Time: 48 minutes

In a second test I added this setting:

Error Cutoff 0.04

Resulting Time: 35 minutes
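
For reference, here is a rough sketch of how you might apply these settings in a Maya 2012-era workflow, where Unified Sampling is driven through mental ray string options on the miDefaultOptions node. The option names and types below are my assumptions from builds of that era, so verify them against your mental ray version before relying on this.

import maya.cmds as cmds

def set_string_option(name, value, opt_type):
    """Set (or append) a mental ray string option on miDefaultOptions."""
    base = "miDefaultOptions.stringOptions"
    indices = cmds.getAttr(base, multiIndices=True) or []
    # Reuse the entry if this option already exists.
    for i in indices:
        if cmds.getAttr("%s[%d].name" % (base, i)) == name:
            cmds.setAttr("%s[%d].value" % (base, i), value, type="string")
            return
    i = indices[-1] + 1 if indices else 0
    cmds.setAttr("%s[%d].name" % (base, i), name, type="string")
    cmds.setAttr("%s[%d].value" % (base, i), value, type="string")
    cmds.setAttr("%s[%d].type" % (base, i), opt_type, type="string")

# Settings from the second test above.
set_string_option("unified sampling", "on", "boolean")
set_string_option("samples quality", "8.0", "scalar")
set_string_option("samples min", "1.0", "scalar")
set_string_option("samples max", "800", "scalar")
set_string_option("samples error cutoff", "0.04", "scalar")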

The images appeared identical to the eye. I ran imf_diff (imf_diff glossyA.exr glossyB.exr) to analyze the actual pixel differences, with this result:

differing pixels: 0.379% (7869 of 2073600)
average difference: 1.265%
maximum difference: 4.632%
Summary: Some pixels differ slightly.
== "glossyA.exr" and "glossyB.exr" are similar

So I am quite happy that the 13-minute time savings (about 27%) resulted in no observable difference.

Below is an explainer graphic of the glossy rays count set for each sphere.

Reflection Samples from Shader
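
If you want to set up this kind of comparison yourself, the glossy ray count lives on each mia_material_x shader. A minimal sketch (the shader names here are hypothetical; refl_gloss_samples is the glossy-reflection sample count from the mia_material declaration):

import maya.cmds as cmds

# Hypothetical shader names for the four spheres. At 1 sample the
# shader is fully "brute force" and Unified Sampling does all of
# the refinement work with eye rays.
for shader, gloss_samples in [("sphere1_mia", 1), ("sphere2_mia", 2),
                              ("sphere3_mia", 4), ("sphere4_mia", 8)]:
    cmds.setAttr(shader + ".refl_gloss_samples", gloss_samples)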

Below is the Samples Diagnostic framebuffer (tonemapped to work on the internet). You can see that the more “brute force” the reflection ray settings (i.e., the lower the per-shader sample counts), the harder Unified Sampling had to work.

Samples per pixel (brighter is more)

Below is the time buffer: the longer a pixel takes to render, the brighter it appears.

Time per pixel (brighter is longer)

This should also give you a better sense of how Unified performs consistently across a scene with a single Quality parameter when given a wide range between minimum and maximum samples. (These spheres resemble one another despite large changes in reflection gloss rays.)

Despite these results, you might still notice a little grain on the pure brute force sphere. Add a texture map and you’ll hardly notice it, but is there a reasonable balance in a more complex scene?

If you need a completely smooth result in a scene with few textures and more of a “pure” shader effect, then small increases in shader samples seem to work well without sacrificing much extra time; 2-4 samples works well in those special cases. But we find that animation and VFX work do not need this level of detail; it is more relevant to print work and large resolutions.

Brute Force Only: 22 minutes at HD 1080

 

Next we might take a look at lights and how to use them in similar circumstances.

Developing Shaders for Unified Sampling

For those of you more interested in the developer side of things, Barton Gawboy has started a thread on optimizing and creating shaders for mental ray that take advantage of how Unified Sampling works.

Take a look here: Developing Shaders for Unified Sampling

Even if you aren’t a developer, you may find the added information on how an optimization works to be useful when tuning a scene.

Hot Wheels – Immersion

This is an example of Unified Sampling on a commercial production.

Hot Wheels "Immersion"

These images were rendered with motion blur at HD 1080. Tests using more brute force settings helped lead us to the results discussed in Unified Sampling – Visually for the Artist. We found we could render smooth motion blur (no grain) with a 10-15% time savings per frame.

This piece is also nominated for a Visual Effects Society (VES) award for Best Virtual Cinematography. Congratulations to the team members involved at The Mill!

Take a look! Hot Wheels – Immersion

All images © Mattel

fgshooter UI for Maya

mip_fgshooter used to achieve flicker-free final gather

mip_fgshooter is a mental ray production shader that allows you to shoot final gather points from multiple cameras instead of just the render camera. These virtual FG cameras can greatly reduce flickering by stabilizing final gather points between frames. Increased stability reduces the need for overly aggressive final gather settings in difficult-to-light situations and can lead to faster render times as well as improved image quality. This offers advantages similar to baking FG points (see Flicker-free Final Gather in dynamic animations) but with a significantly simpler workflow. I have also put together a python script (complete with a user interface!) that makes using the fgshooter easy.

Thanks to The Mill for letting me post this script.

Final Gather Flicker

Generally, flicker is a result of indirect lighting contribution computations changing between frames. The indirect contribution is computed from the perceived indirect lighting at each of the FG points. Because the location and number of FG points are camera- and geometry-dependent, and cameras and geometry move between frames in animation, subtle differences in the locations of the FG points cause flicker.

For instance, if part of the scene geometry is visible to the camera in one frame and not in another, you might get flickering if the indirect contribution around this geometry is important. Additional FG cameras that either do not move, or that can view geometry not visible to the render camera in every frame, let you stabilize the indirect lighting contribution computations.

For the HTC advertisement above, the green laser lights that write on the buildings were causing FG flicker because their intensity was so great. When the camera moved slightly, additional FG points inside the buildings would significantly change the indirect lighting contribution. Even brute force indirect lighting flickered, because the addition or loss of a few primary/eye rays changed the deterministic QMC sampling so much! We used stationary fgshooter cameras to anchor the FG points geometrically and kill the flicker at minimal cost to render time (much faster, actually, if you consider the original unnecessarily high FG settings).

Using the fgshooter UI

First off, you need to expose the mental ray production shaders if you have not already.  To do that, run this simple MEL command and then restart Maya:

optionVar -intValue "MIP_SHD_EXPOSE" 1;

Because focal distance and aspect ratio information is passed to the mip_fgshooter shader via the scale components of the camera transform matrix, the shader can be somewhat difficult to use inside Maya.
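
To give a sense of what is involved, here is a minimal sketch of the wiring, assuming a trans transform-array parameter on mip_fgshooter and a standard lens shader connection; the node, camera, and attribute names are my assumptions, and the part that makes manual setup hard (baking focal distance and aspect ratio into the matrix scale) is deliberately omitted.

import maya.cmds as cmds

# Minimal sketch (names are assumptions): one virtual FG camera
# feeding a mip_fgshooter lens shader on the render camera.
shooter = cmds.createNode("mip_fgshooter", name="fgshooterLens")
# Feed a virtual camera's transform into the shooter's transform array.
# A real setup must also encode focal distance/aspect into the scale.
cmds.connectAttr("fgCamera1.worldMatrix[0]", shooter + ".trans[0]")
# Attach the shooter as the render camera's lens shader.
cmds.connectAttr(shooter + ".message", "renderCamShape.miLensShader")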

I have provided a script that makes it fairly easy to set up fgshooter cameras. To install the script, download the compressed python file from the bottom of this post and place the unzipped python file in one of your Maya script paths. Then create a custom fgshooter button from the shelf editor. You should only need to add these two lines of code (make sure you select Python, not MEL!):

import fgshooter
fgshooter.ui()

fgshooter UI

When you click the fgshooter button you just created on the shelf, an fgshooter window should pop up. This window gives you three options for creating virtual FG cameras:

  1. You can create a virtual camera at the same location as the render camera (Include Render Camera).
  2. You can create virtual cameras that are fixed at a certain frame along the path of the render camera (Stationary Cameras).
  3. You can create virtual cameras that are offset in time by a few frames from the render camera (Offset Cameras).

Virtual fgshooter cameras

The default settings will create 4 virtual FG cameras: 1 at the position of the render camera and 3 stationary cameras at frames 0.0, 12.0, and 24.0. Specific settings will vary heavily from scene to scene. If you wish to change this default virtual camera setup, raise or lower the number of stationary or offset cameras and then click “Update”. The UI will then display the corresponding number of slots for each type of virtual camera. When you are ready to create the actual virtual cameras and the mip_fgshooter node network, click “Apply / Refresh”. Because the script is not cumulative, the entire fgshooter setup is rebuilt every time you click this button; this way your scene won’t accumulate virtual final gather cameras. You may also remove all virtual cameras and mip_fgshooter node networks by clicking “Remove All”.

Note: This script will only create cameras when a non-default camera is set as the render camera under Render Settings.

Offset vs Stationary

In general, the more stable the final gather points, the more stable the final gather, so it is best to use the stationary cameras in combination with the render camera.  This will be particularly useful for pans where flicker is being caused by small changes in the render camera’s position and orientation.  For fly-throughs where the render camera’s position changes greatly, offset cameras may be more useful than stationary cameras.  These offset cameras will help smooth out flicker by providing information from a few frames ahead and a few frames behind the render camera.  You should always include the render camera.

fgshooter.py.zip

version 1.0 – posted 1/4/12
version 1.0.1 – posted 1/7/12
 
The animation below was rendered with no lights (FG only), with an animated camera and 3 fgshooter cameras (4 cameras total). The image was tuned to the desired quality and then sent to render; no tuning for flickering was performed.

Simple fgshooter example with moving objects and camera

Another faster example with higher contrast

 

Example file (Maya 2013; you may need to cut and paste), courtesy of Narann on the ARC forums: FGshooter File