Blog Archives

“My render times are high. How do I fix that?”

If you don’t have access to an infinite render farm, chances are you might be concerned about render times.

With so much flexibility and so many exposed controls, you may be tempted to try lots of different things, or even combine techniques seen on the internet. Sometimes this is useful; sometimes the combination doesn’t work well.

For example:

If you use an inexpensive Final Gather solution, you may add Ambient Occlusion (or increase its quality) to recover detail. If you then find that Final Gathering has splotches or hotspots caused by some other effect, your first instinct may be to increase the quality of Final Gather. At that point you may be able to reduce or eliminate the Ambient Occlusion, but in some cases we forget to do so and suddenly the render takes much longer. This is both the benefit and the downfall of flexibility: keeping track of your decisions.

Where’s a good place to see what might be eating your render time?

The Output Window and the Time Diagnostic Buffer with Unified Sampling.

The Maya Output Window

What effects cost you the most time?

Well, that depends on what you are rendering. Hair can be difficult, as can scenes that reach your memory limit. Layering shader on shader can also increase or even double some ray counts (this will change with the introduction of the layering library in mental ray 3.11). Even texture input/output (I/O) can slow rendering down. I will try to touch on some of the more common cases and solutions.

Let’s look at some output from a render. How can you find it? Well, you can increase the verbosity of the output in the Maya Rendering Menu > Render > Render Current Frame (options box)

Render Current Frame Options Box

I usually choose “Progress Messages”. The option below that is “Detailed Messages” and gives you more information, but also tells you every time mental ray blinks and isn’t usually necessary. Also, the more messages it prints, the more it might impact render time as a debug process.

So, I have rendered a decently complex scene from a project at 1280 by 720. I have quite a few lights in the scene, most of which are area lights (about 46 of them, most small). I have wide glossy reflections and I am using the Native IBL for the environment lighting.

I haven’t included the image here because we’re going to look at the numbers. (I know, really boring.)
RC 0.9 1072 MB info : rendering statistics
RC 0.9 1072 MB info : type                      number     per eye ray
RC 0.9 1072 MB info : eye rays                 6613564            1.00
RC 0.9 1072 MB info : reflection rays         65049860            9.84
RC 0.9 1072 MB info : refraction rays          3693155            0.56
RC 0.9 1072 MB info : shadow rays            501916475           75.89
RC 0.9 1072 MB info : environment rays        69498575           10.51
RC 0.9 1072 MB info : probe rays              33284793            5.03
RC 0.9 1072 MB info : fg points interpolated  31840843            4.81
RC 0.9 1072 MB info : on average 34.21 finalgather points used per interpolation
RC 0.2 844 MB progr: writing frame buffer mayaColor to image file D:/untitled_project.exr (frame 12)
RC 0.2 844 MB progr: rendering finished
RC 0.2 844 MB info : wallclock 0:31:52.00 for rendering
RC 0.2 844 MB info : current mem usage 844 MB, max mem usage 1091 MB
GAPM 0.2 844 MB info : triangle count (including retessellation) : 5240633
IMG 0.2 844 MB info : total for cached textures and framebuffers:
IMG 0.2 844 MB info :                 4656815552 pixel block accesses
IMG 0.2 844 MB info :                     535270 pages loaded/saved, 0.0114943% image cache failures
IMG 0.2 844 MB info : maximal texture cache size: 2700 pages, 298.781 MBytes
IMG 0.2 844 MB info : uncompressed cached texture I/O: 16650.313 MB
PHEN 0.2 726 MB info : Reflection rays skipped by threshold: 17691563
PHEN 0.2 726 MB info : Refraction rays skipped by threshold: 2272489
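Since the statistics block is plain text, you can pull these ratios out of a saved log with a few lines of Python. This is a minimal sketch; the `parse_ray_stats` helper and its regex are my own, not part of mental ray or Maya:

```python
import re

def parse_ray_stats(log_text):
    """Pull ray counts out of mental ray 'rendering statistics' lines."""
    stats = {}
    for line in log_text.splitlines():
        # lines look like: "RC 0.9 1072 MB info : shadow rays    501916475    75.89"
        m = re.search(r"info : ([a-z ]+?)\s{2,}(\d+)\s+[\d.]+\s*$", line)
        if m:
            stats[m.group(1).strip()] = int(m.group(2))
    return stats

log = """\
RC 0.9 1072 MB info : eye rays                 6613564            1.00
RC 0.9 1072 MB info : shadow rays            501916475           75.89
"""
stats = parse_ray_stats(log)
per_eye = stats["shadow rays"] / stats["eye rays"]  # matches the reported ratio
```

Comparing these per-eye-ray ratios frame to frame is a quick way to spot which change to a scene made a ray count explode.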

What can you gain from this?

In a raytracer, you are shooting quite a few rays around in your scene. These strike other objects and more rays are sent, etc. This can grow geometrically in a scene where you are using some expensive effects.
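To see how fast this grows, here is a back-of-the-envelope worst case. This is only a sketch of the geometric growth, not a measurement of mental ray's adaptive behavior (which culls and skips many of these rays):

```python
def secondary_rays(samples_per_hit, trace_depth):
    """Worst-case secondary rays spawned by one eye ray, if every hit
    spawns `samples_per_hit` new rays, down to `trace_depth` bounces."""
    return sum(samples_per_hit ** d for d in range(1, trace_depth + 1))

shallow = secondary_rays(4, 2)  # 4 + 16 = 20 rays per eye ray
deep = secondary_rays(4, 4)     # 340 rays: depth multiplies cost quickly
```

This is why the trace-depth and per-shader sample tips below pay off far more than they first appear to.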

Eye Rays:

RC 0.9 1072 MB info : type                      number     per eye ray
RC 0.9 1072 MB info : eye rays                 6613564            1.00

These are the rays shot from the camera to sample the scene. They strike objects and cause other rays to be cast, which is part of why they are listed first. Their count may be higher or lower depending on a few things:

1. Motion blur will cast more of these to smooth the blur. Each ray is jittered temporally within the shutter interval to catch changes in a pixel as objects pass through it spatially (objects crossing the frame in motion).

2. Depth of Field will cast more of these to resolve the blur.

3. Scenes with high detail or contrast will need more to improve anti-aliasing.
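Point 1's temporal jitter can be sketched as stratified sampling of the shutter interval. This is my own illustration of the idea, not mental ray's sampler:

```python
import random

def jitter_shutter_times(n_rays, shutter_open=0.0, shutter_close=1.0, seed=1):
    """Stratified temporal jitter: each eye ray samples its own slice of
    the shutter interval, so motion is sampled evenly across the frame."""
    rng = random.Random(seed)
    width = (shutter_close - shutter_open) / n_rays
    return [shutter_open + (i + rng.random()) * width for i in range(n_rays)]

times = jitter_shutter_times(4)  # four rays spread across the shutter
```

More eye rays means more shutter slices, which is exactly why heavy motion blur drives this count up.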

Tuning: Reducing these is only an option if you can live with less quality (more grain or aliasing). Reducing these in Unified Sampling is done through decreasing the Quality parameter or by artificially capping the maximum through Max Samples. You can try using a Render Region on a noisy area of the most complex/blurry frame.

-Gradually lower the Quality until you reach a limit of what you’d accept.

-Introduce small amounts of “Error Cutoff”

-Lastly, alter per-object samples as needed for difficult objects.

Reflection Rays:

RC 0.9 1072 MB info : type                      number     per eye ray
RC 0.9 1072 MB info : reflection rays         65049860            9.84

These are rays cast into a scene from an object/shader to collect indirect reflection (light from other objects seen as a reflection). When these strike another object they run that object’s shader to get color information.

1. You may have more of these when you increase the shader ‘Samples’ parameter for glossy reflections

2. You may have quite a few if you have a high trace depth set by either the shader or render settings to allow more than one ‘bounce’ of the ray. This is necessary for things like a reflection in a reflection (imagine a hall of mirrors).

Tuning: You can reduce these in a few different ways. You can:

-reduce the samples for glossy rays to an acceptable level of quality (the appearance of grain). For objects and scenes that are textured, or where there is motion blur and/or depth of field, we recommend a brute force approach (using Unified Sampling) with a setting of ‘1’ sample. You may need more for very wide glossy lobes on perfect, untextured surfaces that are very reflective or blurred.

mia_material Glossy Samples

-reduce the trace depth where it makes little or no visual difference (you may not need a reflection of a reflection of a reflection if it is blurry or dim). Or use a falloff distance with either a color or environment attached.

Trace Depth Options: Reflection

Reflection Falloff Distance and Trace Depth overrides

-use the mia_envblur node to send only single samples to measure an environment texture that is pre-blurred. This is supported in the mia_material and the car_paint phenomenon. An example can be seen here on Zap’s blog: More Hidden Gems: mia_envblur

Single Sample from Environment (mia_envblur node as environment)

Refraction Rays:

RC 0.9 1072 MB info : type                      number     per eye ray
RC 0.9 1072 MB info : refraction rays          3693155            0.56

These are rays that pass through objects like glass or windows and are bent (refracted). Note that this isn’t the same as transparency, where a ray passes through an object without bending. Transparency is handled differently in a scene but may still be expensive with large amounts of semi-transparent objects; there are no transparent rays in this scene or they would be listed. Glossy refraction (specular transmission) is an effect like frosted glass and can be one of the most expensive effects. It is not as simple to resolve as a more diffuse effect like glossy (specular) reflection.

1. Frosted glass or blurry effects will increase these.

2. A high refraction trace depth may also increase these.


Tuning:

-Reduce the samples on the refraction (similar to the control seen in the mia_material reflection samples) until the grain is acceptable.

-Add a small amount of translucency instead

-Reduce the refraction trace depth in the shader or the Render Settings. Or use a falloff distance with either a color or environment attached. A good guide for the global trace depth is how many surfaces you must pass through before stopping. For instance: a correctly modeled (volumetric) empty bottle will have 4 surfaces to strike before passing through the other side.

Shadow Rays:

RC 0.9 1072 MB info : type                      number     per eye ray
RC 0.9 1072 MB info : shadow rays            501916475           75.89

Shadow rays are rays sent from surfaces back to light sources. They may also sample an area light for direct (specular) reflection when the source is invisible. They are often the most prolific rays in a scene and can be expensive: the more lights casting shadows (especially soft shadows), the more you will have, and the larger the area light, the more you will need to reduce grain.

1. Large area lights may be sampled more to reduce shadow grain or direct reflection noise on shiny objects. This happens when the light is invisible or the shader is selected to use “highlight only” for reflections even if the area light is visible.

2. Slightly softening the shadow and increasing the rays on delta lights (lights without area, like point and spot lights) generates more shadow rays

3. A high shadow trace depth lets a shadow be seen in a reflection or refraction, for example

4. High “Quality” settings on the Native IBL or high samples on the user_ibl shader.
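A rough mental model for why this count dominates: shadow rays scale with shading work times lights times light samples. This is an upper-bound sketch of my own; real counts are lower because lights are sampled adaptively and many rays are culled:

```python
def shadow_ray_estimate(shading_samples, light_samples):
    """Rough upper bound: every shading sample fires shadow rays at every
    shadow-casting light; `light_samples` lists the samples per light."""
    return shading_samples * sum(light_samples)

# ~6.6M eye-ray hits and 46 area lights at 8 samples each adds up fast:
estimate = shadow_ray_estimate(6_613_564, [8] * 46)
```

The multiplication is the point: halving either the number of shadow-casting lights or their samples halves the whole term.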

A quick way to read about optimizing area lights can be seen in the Area Lights 101 post.


Tuning:

-Follow the guidelines in the Area Lights 101 post for tuning area lights using both High/Low Samples and a helpful shader like the Physical Light

-Reduce the trace depth as needed. You may not need the reflection or refraction of a shadow in a blurry surface; especially if Final Gather is already darkening it.

-Old trick: use a depth map shadow or preferably a Detail Shadowmap that can be baked and reused on many frames assuming the objects or lights casting those shadows do not move or are not animated. You can do this selectively per light.

Environment Rays:

RC 0.9 1072 MB info : type                      number     per eye ray
RC 0.9 1072 MB info : environment rays        69498575           10.51

Environment rays are rays that leave the scene and call the environment. These are usually fast. They exist in this scene more often because I am also using the mia_envblur to speed up the environment lookups for glossy reflections as described above in Zap’s blog.


Tuning:

-None typically.

Probe Rays:

RC 0.9 1072 MB info : type                      number     per eye ray
RC 0.9 1072 MB info : probe rays              33284793            5.03

These rays are usually the result of Ambient Occlusion rays being sent into the scene. (Ambient Occlusion + Colorbleed isn’t the same thing in this case.) They are caused by including occlusion in a material like the mia_material or in a separate occlusion framebuffer pass. In the mia_material with Unified Sampling we usually recommend keeping the sample count to 4-6, since it acts like a lighting effect. If used in a pass, a few things affect the quality: the distance the rays travel, the distribution of objects in the scene, and of course the sample count for the buffer. (Note: the Native IBL set to “approximate” mode will also generate probe rays, since it lights from the environment through occlusion. Not usually recommended, but ok for tests.) If no distance cutoff is used, the rays can strike anything in the scene, increasing your raytracing overhead.

Ambient Occlusion has become a staple effect for most CG work, but the need for it is smaller than before. It was generally used in the past as a fake for global illumination; now that global illumination is faster and more detailed, it isn’t always necessary unless the global illumination solution is purposefully reduced or heavily interpolated and loses detail.

Creating an AO pass by default for compositing can enhance or create details that aren’t there (occlusion where there is direct lighting is not realistic, but it is an artistic consideration). Using this pass as a multiplication in post is also mathematically incorrect if you are trying to reproduce a beauty render. Production is starting to move away from Ambient Occlusion as a pass or effect in modern raytracers; path tracers like iRay automatically include such an effect in their light transport, so adding it on top is redundant.


Tuning:

1. Avoid large sample counts in a shader. These rays may be sent whenever the shader is struck by some other ray, like a reflection or refraction, so high trace depths will call more and more of these as rays bounce around.

Ambient Occlusion in the mia_material

2. Use a small distance to avoid lots of strikes on other objects that are unnecessary. For example: Buildings to scale 3 city blocks apart do not need to occlude one another.

3. Detailed indirect illumination may decrease the need to have this feature on at all.
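Point 2's distance cutoff is easy to see in a toy Monte Carlo estimator. This is my own illustration of the idea, not mental ray's probe-ray code:

```python
import math

def ambient_occlusion(hit_distances, max_distance):
    """Monte Carlo AO from a list of probe-ray hit distances (math.inf
    for a miss): only hits closer than `max_distance` count as occlusion."""
    occluded = sum(1 for d in hit_distances if d < max_distance)
    return 1.0 - occluded / len(hit_distances)

# With a cutoff, the building 3 blocks away (distance 2.0) stops contributing:
ao = ambient_occlusion([0.5, 2.0, math.inf, math.inf], max_distance=1.0)
```

Besides the shading result, the cutoff matters for speed: rays that can terminate at `max_distance` never have to traverse the distant geometry at all.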

Final Gather Points Interpolated:

RC 0.9 1072 MB info : type                      number     per eye ray
RC 0.9 1072 MB info : fg points interpolated  31840843            4.81

Final Gather is a topic in and of itself, so I will just hit the highlights here. These are points generated in the prepass and interpolated when an eye ray strikes on or near them. Points are generated based on geometric complexity (automatically adaptive) and by the “Point Density” parameter in the Render Settings. (They are also affected by the old radius settings, which have since been deprecated and should be avoided for easier setup and rendering.)

Final Gather prepass time is greatly influenced by “Accuracy”, the number of rays sent to measure the scene, and “Point Density”, used to place points projected from the camera onto geometry. During the render phase, “Point Interpolation” can increase render time at higher settings because the renderer is doing more math per hit.
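As a rough illustration of that render-phase cost, interpolation is essentially a weighted blend of stored values from nearby points. This sketch is my own and is not mental ray's actual formula; it only shows the shape of the work, where more points per interpolation means more math per eye ray:

```python
def interpolate_fg_point(nearby_points):
    """Blend stored irradiance from nearby finalgather points, weighted
    by inverse distance. nearby_points = [(irradiance, distance), ...]"""
    weights = [1.0 / max(d, 1e-6) for _, d in nearby_points]
    return sum(v * w for (v, _), w in zip(nearby_points, weights)) / sum(weights)

value = interpolate_fg_point([(1.0, 1.0), (3.0, 1.0)])  # equal weights: 2.0
```

The log line "on average 34.21 finalgather points used per interpolation" is telling you how long that list is for the average hit.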

Since we are talking about the render phase and not the prepass phase I will just mention those solutions here. Final Gather settings and prepass may be covered later.


Tuning:

-Avoid large interpolation values. If your scene has complex lighting, increase “Accuracy”. If the scene has complex geometry, increase “Point Density”.

Final Gather Settings

-Use more direct lighting to stabilize the solution (such as the Native IBL or user_ibl)

-Use the fgshooter shader/script to avoid flickering in animations

Triangle Count:

GAPM 0.2 844 MB info : triangle count (including retessellation) : 5240633

This may sound silly, but most modern raytracers do not have a Scanline option. We have reached a point where complex scenes with lots of triangles are common, and scanline rendering slows down with many triangles. Instead, turn off Scanline and select “Raytracing” as the renderer; this is the default in Maya 2013. (Rasterization also counts as a scanline algorithm, although a more modern one.)

This is often why comparisons with other renderers may show a slower result: users with lots of objects or displacements fail to turn off Scanline.


Tuning:

-Stop using the Scanline algorithm!

-Do not use overly aggressive displacements

-Use proxies or assemblies: these are pre-translated and more memory efficient since they are on-demand geometry

Texture I/O:

IMG 0.2 844 MB info : total for cached textures and framebuffers:
IMG 0.2 844 MB info :                 4656815552 pixel block accesses
IMG 0.2 844 MB info :                     535270 pages loaded/saved, 0.0114943% image cache failures
IMG 0.2 844 MB info : maximal texture cache size: 2700 pages, 298.781 MBytes
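The failure percentage reported above can be reproduced from the other two numbers, which suggests it is simply pages loaded/saved per pixel-block access. That is my inference from this log, not documented behavior:

```python
def cache_failure_pct(pages_loaded, block_accesses):
    """Image cache failure rate: the fraction of pixel-block accesses
    that missed the cache and forced a page load/save, as a percentage."""
    return 100.0 * pages_loaded / block_accesses

# Reproduces the 0.0114943% figure from the log above:
failure_pct = cache_failure_pct(535270, 4656815552)
```

Watching this one number across renders is a cheap way to tell whether a texture-heavy scene is actually rendering or mostly thrashing the cache.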

Texture usage can not only increase memory usage, it can slow down a render by quite a lot! Reasons for this can be:

1. Large textures pulled across a network

2. Un-mipmapped or un-cached textures: these force mental ray to load the entire full-resolution texture from the source even if all of it is not seen.

3. Insufficient memory means a lot of flushing instead of rendering (related to point 2)

4. Poorly filtered textures may also call more eye rays to resolve aliasing. This runs the shader and may increase all of the other ray counts.


Tuning:

-The easiest catch-all is to read the post on Texture Publishing

-For the image cache failures, keep this number as low as possible, preferably below 0.01%. This can be done by altering a few things manually, such as:

*The tile size of the cached texture with imf_copy

*The cached memory limit with the registry option to force more efficient handling, either an increase or decrease. This and the option above work together and are scene dependent; it is not always worth a lot of tweaking unless your scene is exceptionally texture heavy. I can now render scenes with hundreds of 4k textures with only 8GB of RAM locally.


    • I didn’t cover interpolated reflections or refractions because in animation they are prone to artifacts. With Unified Sampling you may not even need those features, and future shading models (BSDF) will omit them.
    • I didn’t cover the Ambient Occlusion Cache. While it may be faster to use during a render, tuning it can be difficult, and it is less necessary with Unified Sampling.
    • Try to avoid layering shaders for some effects; a lot can be accomplished through selective layering of textures instead. 3.11 will introduce the layering library, which will help remove this source of added ray counts.
    • I assume usage of Unified Sampling
    • Using ray cutoff values: these can be useful and exist in the mia_materials as a way to tell the shader not to cast a ray if its contribution is unimportant. It’s a little tricky to use, but heavily traced scenes may see some speed-up if this parameter is increased. Do so slowly and test frames; set too high, it will erode raytraced effects with little benefit.
    • Use the Time Buffer Diagnostics as seen in the Unified Sampling for the Artist post to identify where your scene is taking longest to render. Then look at those shader settings or possibly change per-object sample settings.

Time per pixel measured in ‘S’ or seconds. Brighter is longer.

  • Dimmer reflections/refractions need fewer samples
  • The dated “Maya Glow Buffer” is very slow on large-resolution frames, even if the effect isn’t used. Turn off “Export Post Effects” in the Render Settings > Options and do the effect in post.
  • Scenes rendered in motion with motion blur do not need to be perfectly smooth when viewed in motion.
  • Do not marry an image. Some tweaks may alter the look. Even client notes alter the look. Go with the best balance of what’s achievable and is possible in the time you have. Otherwise you’ll constantly be unhappy.
  • To resolve artifacts, simply “cranking up the settings” is a horrible idea. Use the progress messages and the time buffer to make faster, smarter decisions.
  • Form habits of rendering with correctly prepared textures and default settings, tweaking only where necessary by recognizing the cause of the artifact, be it an FG splotch or an aliasing crawl.
  • Begin to wean off of using Ambient Occlusion as a default effect or pass. The original reason to use it (AO multiplied against an ambient pass) no longer exists, and it is really an artistic consideration now.
  • Always remember you are going for a good image, an improvement on reality so to speak. Avoid using mental ray as a physical simulation to render an image; use iRay or similar for that type of workflow. Flexibility and choice are key to getting what you need quickly for animation and visual effects.
  • I did not include Irradiance Particles or Photons. These aren’t used as often (or at all) in VFX or animation work. They are also (like Final Gathering) topics in their own right.
  • If none of the above applies, change your BSP2 to BSP and try again. If your geometry has bad bounding boxes or other problems, ray traversal can be painfully slow. This is a geometry problem; remake the geometry if necessary.

Your car was built in mental ray

Chevrolet Corvette © General Motors

If you’ve been shopping for a car lately and coveting the new 2013 Chevrolet Corvette, then you might be surprised to know the cars in Build Your Own are rendered with mental ray.

Produced by RTT in Detroit, the cars are rendered in a complex pipeline that allows an entire car to be created in pieces, lit realistically by the user_ibl_env with captured set data. The pieces are then taken into Nuke for assembly and color grading to match them up. Once delivered, you can browse different builds or trims of the car, each created from separate files and merged to view photo-realistically.

With multiple passes, these cars, using modern techniques and Unified Sampling, take anywhere from 20 to 30 minutes a frame at 3000 x 1688 resolution (when rendered complete). Smaller HD resolutions can take as little as 10 minutes.

Keep an eye on the Chevrolet site to see more and more of these renders show up in not just the Build Your Own (BYO) but other places as well.

Start building your own Corvette here: Chevy Corvette – Build Your Own

Chevrolet Corvette © General Motors

Chevrolet Corvette © General Motors

Unified Sampling in 3.10 (and other changes)

Autodesk released their 2013 products this last week. This is the first public release of mental ray 3.10.

You will find this release focuses mostly on bug fixes and enhancements to existing features. The “What’s New” section leaves out the details. You can find the details in the Release Notes section. This is the best place to find fixes and enhancements you may need to know about.

The majority of your performance increase will be seen with Unified Sampling. Things to note for Unified Sampling in 3.10:

  • Scenes will generate fewer eye rays at their previous Quality settings, since many of those rays were unnecessary
  • Unified Sampling will sample dark areas of an image less
  • Unified Sampling produces smoother grain in areas with insufficient Quality
  • Framebuffers no longer have artifacts
  • Edges and thin objects (like hair) have improved sharpness
  • Motion Blur is smoother than before

Many of the previous caveats that may have kept you from using Unified Sampling have now been fixed in Maya, and more improvements are to come. We’ve been told by Autodesk that Hotfixes will come more often and hopefully provide more opportunity to upgrade and replace mental ray, since it is now a separate plug-in.

For same-scene renders you can expect complex scenes to render 10-15% faster than before.

Below are some examples. I was kindly given a scene from a current intern at Full Sail University in Orlando, Florida. I cannot experiment or release my current work on the blog so it’s great when I have some nice projects to play with and share!

You can find more of Jiayu’s work on: Jayuliu

Previously this scene took multiple hours a frame using traditional techniques. I have since updated the sampling to use Unified Sampling and area lights. A 720HD render now takes about an hour for the night scenes you will see shortly, and 10 minutes a frame for the close-up Bedroom scene. More tweaking could be done using advanced lighting (covered later), but one thing at a time. 😉

Daylight Bedroom:

Jiayu's Bedroom Model Render

These scenes are rendered following the guides in the Unified Sampling for Artists Post and the Area Light Post.

In mental ray 3.9 I rendered the above scene with these settings:

  • samples min 1
  • samples max 500
  • samples quality 4.
  • error cutoff 0.04
  • Gaussian filter 2. 2.

For mental ray 3.10 I used the settings that gave me the nearest result without going overboard:

  • samples min 1
  • samples max 500
  • samples quality 2.5
  • error cutoff 0.02
  • Gaussian filter 2. 2.

The time saved for a quick render is minimal for a trivial scene:

3.9:       0:12:11.04

3.10:     0:11:12.86

What is of note is how Unified Sampling sees the scene. Below are the Sampling Diagnostic Framebuffers, 3.9 first, 3.10 next. Brighter areas are more samples.

3.9 Samples Diagnostic

3.10 Samples Diagnostic

Notice how dark areas sample much less than before (in the painting above the bed and the foot of the bed for example). Below is the visual difference from imf_diff utility, somewhat exaggerated to see better.

Image Difference from 3.9 to 3.10

Things to notice here are:

  • Edges are cleaner than in 3.9.1 (this change was introduced in 3.9.2 and improved in 3.10)
  • Area Light grain is less noticeable/clumpy than before (overall sampling pattern is smoother)
Pertinent diagnostics are below:
mental ray 3.9

JOB 0.2 progr: 100.0% rendered on SIAB.2
RC 0.10 info : rendering statistics
RC 0.10 info : type                             number     per eye ray
RC 0.10 info : eye rays                       12597442     1.00
RC 0.10 info : transparent rays                 278449     0.02
RC 0.10 info : reflection rays                 8661007     0.69
RC 0.10 info : refraction rays                  643795     0.05
RC 0.10 info : shadow rays                   126100094    10.01
RC 0.10 info : environment rays                 146106     0.01
RC 0.10 info : probe rays                     34008648     2.70
RC 0.10 info : fg points interpolated         20323641     1.61
RC 0.10 info : on average 86.51 finalgather points used per interpolation
RC 0.10 progr: rendering finished
RC 0.10 info : wallclock 0:12:11.04 for rendering
RC 0.10 info : allocated 353 MB, max resident 413 MB
GAPM 0.10 info : triangle count (including retessellation) : 994312
PHEN 0.10 info : Reflection rays skipped by threshold: 4405980
PHEN 0.10 info : Refraction rays skipped by threshold: 22742

mental ray 3.10

JOB 0.7 398 MB progr: 100.0% rendered on SIAB.7
RC 0.3 398 MB info : rendering statistics
RC 0.3 398 MB info : type                        number    per eye ray
RC 0.3 398 MB info : eye rays                   9192639    1.00
RC 0.3 398 MB info : transparent rays            275289    0.03
RC 0.3 398 MB info : reflection rays            6560278    0.71
RC 0.3 398 MB info : refraction rays             553186    0.06
RC 0.3 398 MB info : shadow rays               95549199   10.39
RC 0.3 398 MB info : environment rays            142848    0.02
RC 0.3 398 MB info : probe rays                26833006    2.92
RC 0.3 398 MB info : fg points interpolated    14966173    1.63
RC 0.3 398 MB info : on average 87.47 finalgather points used per interpolation
RC 0.3 352 MB info : wallclock 0:11:12.86 for rendering
RC 0.3 352 MB info : current mem usage 352 MB, max mem usage 411 MB
GAPM 0.3 352 MB info : triangle count (including retessellation) : 994312
PHEN 0.3 352 MB info : Reflection rays skipped by threshold: 3560987
PHEN 0.3 352 MB info : Refraction rays skipped by threshold: 19657

Below is the night image. 12 area lights of different sizes/types. As above I am using the ambient occlusion in the mia_material set to 4 samples. The wall behind the bed has the color bleed option turned on to improve the light from the glowing mushroom night light.

Night image, time for render: 1:09:13

The more important diagnostics can be seen here from 3.9 to 3.10. This scene did not have a change in any settings from 3.9 to 3.10. Exactly the same settings were used.

mental ray 3.9

RC 0.10 info : rendering statistics
RC 0.10 info : type                           number     per eye ray
RC 0.10 info : eye rays                     18624542     1.00
RC 0.10 info : wallclock 1:34:57.79 for rendering
RC 0.10 info : allocated 1132 MB, max resident 1302 MB
GAPM 0.10 info : triangle count (including retessellation) : 1882529

mental ray 3.10

RC 0.3 764 MB info : rendering statistics
RC 0.3 764 MB info : type                     number     per eye ray
RC 0.3 764 MB info : eye rays               11500119     1.00
RC 0.3 679 MB info : wallclock 1:09:09.39 for rendering
RC 0.3 679 MB info : current mem usage 679 MB, max mem usage 850 MB
GAPM 0.3 679 MB info : triangle count (including retessellation) : 1882529

The time saved is about 27%! A reduction of 7 million eye rays and nearly 500 MB of memory consumption for the same image.
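The savings can be recomputed straight from the two wallclock stamps. A small helper of my own, assuming the `H:MM:SS.ss` format the logs above use:

```python
def wallclock_seconds(stamp):
    """Convert a mental ray wallclock stamp like '1:34:57.79' to seconds."""
    h, m, s = stamp.split(":")
    return int(h) * 3600 + int(m) * 60 + float(s)

before = wallclock_seconds("1:34:57.79")  # mental ray 3.9
after = wallclock_seconds("1:09:09.39")   # mental ray 3.10
saved_pct = 100.0 * (before - after) / before  # percent saved, same image
```

Tracking this per shot, rather than eyeballing the timestamps, makes version comparisons much less error-prone.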

Below is a day scene based on the original scene I was given that can now be rendered in a reasonable amount of time as an animation:

Jiayu's Original Day scene, no changes from file other than technical.

Additional Notes:

  • Complex scenes will now render faster than before with no changes and consume less memory.
  • Greater complexity sees more benefit.
  • You should be able to reduce your Quality for Unified Sampling and still see a better quality image than 3.9 with a shorter render time.
  • 3.10 now allows you to mix Final Gathering with Irradiance Particles without interpolation artifacts. It also exposes new ways to combine FG + Importons and/or IP; see the Release Notes for more information.
  • Texture caching is improved and will allow you to render more textures at once with much less memory. The mechanism is also faster. In a future post we will show you how to use this with your Maya installation.
  • user_ibl shaders increase how quickly and easily you can light using mapped textures. user_ibl_env allows you to light in the same way as you would with the Native IBL, but has simpler controls and preserves texture details better. Both follow the same guidelines for samples as the Area Light post (4-8 for samples in the area light) and must match the samples on the light shader itself for best results. More samples will be necessary for more complex HDR images.
  • Raytracing speed was improved, with an emphasis on hair and fur. Unified Sampling and better raytrace speed should increase the speed of scenes with hair and fur without needing rasterization.
  • New shaders for hair and fur can be written to make brute force sampling for hair and fur a reality with Final Gathering.
  • Framebuffers with Unified Sampling no longer produce artifacts.

Unified Sampling Redux

As a simplified look at using Unified Sampling as a more “brute force” method, as outlined here, the example below walks through the differences in time and sampling on a visually trivial scene. This should make some things easy to understand and quick to read before moving on to lights. 😉

Glossy Test Scene

In a glossy scene originally rendered at HD 1080, the first frame was rendered with the following settings using all mia_material_x shaders.

Quality 8
Samples Min 1.0
Samples Max 800
Reflection Bounces 2
Shadow Bounces 2

Resulting Time: 48 minutes

In a second test I added these settings:

Error Cutoff 0.04

Resulting Time: 35 minutes

The images appeared to be identical to the eye. I ran imf_diff to analyze actual pixel differences with this result:

differing pixels: 0.379% (7869 of 2073600)
average difference: 1.265%
maximum difference: 4.632%
Summary: Some pixels differ slightly.
== "glossyA.exr" and "glossyB.exr" are similar
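The per-pixel summary imf_diff prints is easy to approximate for intuition. This is a toy stand-in of my own; the real utility works on image files and all channels:

```python
def diff_stats(img_a, img_b):
    """imf_diff-style summary for two images given as flat lists of
    values in [0, 1]: percent of differing pixels, their average
    difference, and the maximum difference, all as percentages."""
    diffs = [abs(a - b) for a, b in zip(img_a, img_b)]
    differing = [d for d in diffs if d > 0.0]
    return {
        "differing_pct": 100.0 * len(differing) / len(diffs),
        "average_pct": 100.0 * sum(differing) / len(differing) if differing else 0.0,
        "maximum_pct": 100.0 * max(diffs),
    }

result = diff_stats([0.0, 0.5, 1.0], [0.0, 0.6, 1.0])  # one pixel differs
```

Numbers like these are a far better basis for an optimization decision than squinting at two renders side by side.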

So I am pretty happy with the fact that the time savings of 13 minutes resulted in no observable difference.

Below is an explainer graphic of the glossy rays count set for each sphere.

Reflection Samples from Shader

Below is the Samples Diagnostic framebuffer (tonemapped to work on the internet). You can see that the more “brute force” the reflection rays settings, the harder Unified Sampling had to work.

Samples per pixel (brighter is more)

Below is the time buffer where the longer it takes to render a pixel, the brighter the resulting pixel in the time buffer.

Time per pixel (brighter is longer)

You may also have a better understanding of how Unified will perform consistently across a scene with a single Quality parameter when given a wide range between minimum and maximum samples. (These spheres resemble one another despite large changes in reflection gloss rays.)

Despite these results, you might still notice a little grain on the pure brute force sphere. Add a texture map and you’ll hardly notice it, but is there a reasonable balance in a more complex scene?

If you need a completely smooth scene with few textures and more of a “pure” shader effect, then small increases work well without sacrificing extra time; 2-4 samples works well in those special cases. But we find that animation and VFX work do not need this level of detail. This would be for something like print work and large resolutions.

Brute Force Only: 22 minutes at HD 1080


Next we might take a look at lights and how to use them in similar circumstances.