When looking at features in mental ray, there are many possible combinations, some more helpful than others depending on scene needs. And some results might be counterintuitive.
In rendering you have two usual options for indirect lighting.
One involves brute force rendering (also known as “unbiased”), where every sample in the scene sends rays back into the scene to sample other objects and the environment. This is typically easy to tune (just crank it up until acceptable), requires less technical skill, and its frame-to-frame variance is fairly predictable without testing. However, it can increase render times dramatically in complex scenes.
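To make the brute force idea concrete, here is a minimal, standalone sketch of the general Monte Carlo approach (this is not mental ray code; the `sky` function and all numbers are made up for illustration): indirect light at a point is estimated by averaging many random hemisphere samples, and the only real knob is the ray count.

```python
import math
import random

def sky(phi, cos_theta):
    # Toy environment: a bright patch in one direction, dim light elsewhere.
    # Values and the function itself are made up for illustration.
    return 8.0 if (phi < 0.5 and cos_theta > 0.9) else 0.2

def brute_force_irradiance(n_rays, seed=1):
    """Estimate indirect light at one point by averaging many
    cosine-weighted hemisphere samples of the environment."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_rays):
        cos_theta = math.sqrt(rng.random())  # cosine-weighted polar angle
        phi = 2.0 * math.pi * rng.random()   # uniform azimuth
        total += sky(phi, cos_theta)
    return total / n_rays

# The only tuning knob: more rays means less noise but linearly more work.
noisy = brute_force_irradiance(16)
clean = brute_force_irradiance(4096)
```

This is why the approach is predictable but expensive: halving the noise roughly quadruples the ray count, at every shading point in the scene.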
Option two involves interpolated (or “biased”) schemes, where sparse samples are taken across the image and merged through an interpolation algorithm. This typically renders much faster but requires more artist tuning and may cause surprises in later frames. For Final Gather we have the option of using the fg_shooter to help mitigate this problem.
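A minimal sketch of the interpolated idea, with made-up numbers and no relation to mental ray's actual Final Gather interpolation: only a few pixels in a row are actually sampled, and everything in between is blended from its neighbors. That is where the speedup comes from, and also where the surprises come from, since any real detail between samples is smoothed away.

```python
def interpolate_row(sparse, width):
    """Rebuild a full row of indirect lighting from sparse samples.
    `sparse` maps a pixel index to a measured value; every other pixel
    is linearly blended from the nearest samples on either side."""
    xs = sorted(sparse)
    row = []
    for x in range(width):
        left = max((s for s in xs if s <= x), default=xs[0])
        right = min((s for s in xs if s >= x), default=xs[-1])
        if left == right:
            row.append(sparse[left])
        else:
            t = (x - left) / (right - left)
            row.append((1 - t) * sparse[left] + t * sparse[right])
    return row

# Only 3 of 9 pixels are actually sampled; the rest are interpolated,
# so any real detail between the samples is smoothed over.
row = interpolate_row({0: 1.0, 4: 0.2, 8: 1.0}, 9)
```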
In the examples below I am combining brute force (non-caching, to be specific) Final Gather with Irradiance Particles. This means Final Gather rays collect detailed information in the primary bounce (typically the most important), while Irradiance Particles collect the secondary (and higher) bounces. There are no portal lights being used. I might also call this “The Joys of Rendering On My Ancient Workstation”, which can still get the job done.
Irradiance Particles have a few interesting features:
- Each additional bounce sends fewer and fewer rays, so each collection pass is faster (Final Gather sends only one ray per additional bounce)
- It only collects diffuse information (Final Gather collects specular as well as diffuse)
- The Importon optimization phase is sadly not multi-threaded. But this matters less here, since we are using Irradiance Particles only for secondary bounces and emit fewer importons
- The importance based scheme means it’s smart about probing the brighter areas of your scene for information instead of wasting rays in dark spots. This effectively “aims” indirect illumination rays towards the more important areas of the scene.
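mental ray does not document the exact ray counts per bounce, but the diminishing-rays idea from the first bullet above can be sketched like this; the halving `falloff` is purely an assumption for illustration, not mental ray's actual ratio:

```python
def rays_per_pass(first_bounce_rays, bounces, falloff=0.5):
    """Hypothetical schedule: each additional bounce sends only a
    fraction of the previous pass's rays. The 0.5 falloff is an
    assumption for illustration, not mental ray's real behavior."""
    counts = [first_bounce_rays]
    for _ in range(bounces - 1):
        counts.append(max(1, int(counts[-1] * falloff)))
    return counts

passes = rays_per_pass(64, 3)  # [64, 32, 16]: later passes get cheaper
```

Under this assumption three bounces cost well under three times the first bounce, which is why extra Irradiance Particle bounces are comparatively cheap.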
Notice in the image captions how long each of these took to render…
Unified Sampling takes less time with each increase in diffuse bounces (I use a higher-than-normal setting in this case to force Unified to catch small illumination details). Adding bounces for a faster render might seem counterintuitive, but there is a reason for it: the decreased lighting contrast needs fewer samples per pixel. There are more things to think about in a realistic scene, though.
In most scenes, texturing will create more contrast in the image, which may drive more samples through Unified Sampling. In that case you may be able to reduce the number of rays sent by brute force Final Gather, since Unified is taking more samples anyway; this is a balance you might need to tune. I would still recommend portal lights, which provide more direct lighting detail and reduce the need for indirect illumination rays. Using this type of combination may also require less time to tune an image for the right lighting/detail balance than either algorithm would need on its own.
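The contrast/samples relationship described above can be sketched abstractly. This toy function is not Unified Sampling's actual error metric; it just shows why flatly lit pixels get few samples while contrasty, textured pixels hit the sampling ceiling:

```python
def samples_for_pixel(neighborhood, quality, min_s=1, max_s=64):
    """Toy contrast-driven sampler: higher local contrast earns more
    samples, clamped between a floor and a ceiling. Numbers and the
    metric itself are illustrative, not Unified Sampling's internals."""
    contrast = max(neighborhood) - min(neighborhood)
    wanted = int(min_s + contrast * quality)
    return max(min_s, min(max_s, wanted))

flat = samples_for_pixel([0.50, 0.51, 0.50], quality=100)  # evenly lit
edge = samples_for_pixel([0.10, 0.90, 0.15], quality=100)  # textured/contrasty
```

With these made-up numbers the flat pixel stays near the sampling floor while the contrasty one is driven to the ceiling, which is the effect texturing has on render time in a real scene.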
My settings for this scene are:
- Final Gather Force – accuracy 48, filter 1 (the filter option has no effect if rays/accuracy are less than 8)
- Importon Density – 0.5 (this can be adjusted based on what you need, but I recommend keeping it as low as possible; too few importons introduces artifacts in crevices and corners), bounces 3
- Irradiance Particles – rays 64 (the Maya UI limits this to 512 rays, but there is no actual limit; you can change it in MEL or use our UI)
Diffuse bounces are controlled through the Final Gather trace settings, not the Irradiance Particles.
A quick primer for Irradiance Particles:
This lighting technique is usually best suited for interiors and as a secondary bounce. For exteriors I highly recommend the Environment Lighting Mode. The main reason: Irradiance Particles only collect diffuse information, so something like glass will block them from gathering information. Also, the Importon optimization phase is single-threaded and cannot be cached across multiple frames.
The easiest way to increase the quality of Irradiance Particles is simple: increase the number of Importons emitted. This means:
- Better detail in geometry and cracks
- More scene information from better sampling and more rays
- Slower renders from the emission and optimization phases
However, you can achieve better performance by balancing rays, interpolation, and Importon density if you have the time and/or desire.
mip_fgshooter used to achieve flicker-free final gather
mip_fgshooter is a mental ray production shader that lets you shoot final gather points from multiple cameras instead of just the render camera. These virtual FG cameras can greatly reduce flickering by providing stability to final gather points between frames. Increased stability reduces the need for overly aggressive final gather settings in difficult-to-light situations and can lead to faster render times as well as improved image quality. This offers similar advantages to baking FG points (see Flicker-free Final Gather in dynamic animations) but with a significantly simpler workflow. I have also put together a Python script (complete with a user interface!) that makes using the fgshooter easy.
Thanks to The Mill for letting me post this script.
Final Gather Flicker
Generally, flicker is the result of indirect lighting contributions changing between frames. This contribution is computed from the perceived indirect lighting at each of the FG points. Because the location and number of FG points depend on the camera and geometry, and cameras and geometry move between frames in an animation, subtle differences in the locations of the FG points cause flicker.
For instance, if part of the scene geometry is visible to the camera in one frame but not in another, you may get flickering if the indirect contribution around that geometry is important. Additional FG cameras that do not move, or that can view geometry the render camera cannot see in every frame, let you stabilize the indirect lighting computation.
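A 1-D toy analogy (not mental ray's actual FG interpolation) of why camera-dependent FG points flicker, and why a stationary anchor point helps; all positions and values here are invented:

```python
def shade(x, fg_points):
    # Toy FG lookup: a pixel takes the value of its nearest FG point.
    return min(fg_points, key=lambda p: abs(p[0] - x))[1]

# Frame to frame, a camera-dependent FG point near x=2 appears and
# disappears, so the shaded value there jumps: that jump is the flicker.
frame_a = [(0.0, 0.3), (2.1, 0.9), (4.0, 0.3)]
frame_b = [(0.0, 0.3), (4.0, 0.3)]
flicker = abs(shade(2.0, frame_a) - shade(2.0, frame_b))

# A stationary "shooter" point exists in both frames and anchors the lookup.
anchor = (2.0, 0.9)
stable = abs(shade(2.0, frame_a + [anchor]) - shade(2.0, frame_b + [anchor]))
```

With the anchor present in both frames, the frame-to-frame difference at that pixel drops to zero, which is the stabilizing effect the extra FG cameras provide.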
For the HTC advertisement above, the green laser lights that write on the buildings were causing FG flicker because their intensity was so great. When the camera moved slightly, additional FG points inside the buildings would significantly change the indirect lighting contribution. Even brute force indirect lighting flickered, because the addition/loss of a few primary/eye rays changed the QMC determinism so much! We used stationary FG shooter cameras to anchor the FG points geometrically and kill the flicker at minimal cost to render time (much faster, actually, considering the original unnecessarily high FG settings).
Using the fgshooter UI
First off, you need to expose the mental ray production shaders if you have not already. To do that, run this simple MEL command and then restart Maya:
optionVar -intValue "MIP_SHD_EXPOSE" 1;
Because focal distance and aspect ratio information is passed to the mip_fgshooter shader via the scale attributes of the camera transform matrix, it can be somewhat difficult to use inside of Maya.
I have provided a script that makes it fairly easy to set up fgshooter cameras. To install the script, download the compressed python file from the bottom of this post and place the unzipped python file inside one of your script paths. Now, create a custom fgshooter button from the shelf editor. You should only need to add these two lines of code (make sure you select Python, not MEL!):
import fgshooter
fgshooter.ui()
When you click the fgshooter button that you have just created on the shelf editor, an fgshooter window should pop up. This window gives you three options of how to create virtual FG cameras:
- You can create a virtual camera at the same location as the render camera (Include Render Camera).
- You can create virtual cameras that are fixed at a certain frame along the path of the render camera (Stationary Cameras).
- You can create virtual cameras that are offset in time by a few frames from the render camera (Offset Cameras).
The default settings will create 4 virtual FG cameras: 1 at the position of the render camera and 3 stationary cameras at frames 0.0, 12.0, and 24.0. Specific settings will vary heavily from scene to scene. If you wish to change this default virtual camera setup, raise or lower the number of stationary or offset cameras and then click “Update”. The UI will then display the corresponding number of slots for each type of virtual camera. When you are ready to create the actual virtual cameras and mip_fgshooter node network, click “Apply / Refresh”. This script is not cumulative: the entire fgshooter setup is rebuilt every time you click the button, so your scene won’t accumulate virtual final gather cameras. You may also remove all virtual cameras/mip_fgshooter node networks by clicking “Remove All”.
Note: This script only creates cameras when a non-default camera is set as the render camera under Render Settings.
Offset vs Stationary
In general, the more stable the final gather points, the more stable the final gather, so it is best to use the stationary cameras in combination with the render camera. This will be particularly useful for pans where flicker is being caused by small changes in the render camera’s position and orientation. For fly-throughs where the render camera’s position changes greatly, offset cameras may be more useful than stationary cameras. These offset cameras will help smooth out flicker by providing information from a few frames ahead and a few frames behind the render camera. You should always include the render camera.
Example file (may need to cut and paste, Maya 2013), file courtesy of Narann on the ARC forums: FGshooter File