Blog Archives

“My render times are high. How do I fix that?”

If you don’t have access to an infinite render farm, chances are you’re concerned about render times.

With so much flexibility and so many exposed controls, you may be tempted to try lots of different things or even combine techniques seen on the internet. In some cases this is useful; in others the combination doesn’t work well.

For example:

If you use an inexpensive Final Gather solution, you may add Ambient Occlusion, or increase its quality, to recover details. If you then find that Final Gathering has splotches or hotspots caused by some other effect, your first instinct may be to increase the quality of Final Gather. But now you may be able to reduce or eliminate the Ambient Occlusion. In some cases we forget to do that, and suddenly the render takes much longer. This is both the benefit and the downfall of flexibility: keeping track of your decisions.

Where’s a good place to see what might be eating your render time?

The Output Window and the Time Diagnostic Buffer with Unified Sampling.

The Maya Output Window

What effects cost you the most time?

Well, that depends on what you are rendering. Hair can be difficult. Or scenes that reach your memory limit. Layering shader on shader can also increase or double some ray counts (this will change with the introduction of the layering library in mental ray 3.11). Even texture input/output (I/O) can make rendering slower. I will try to touch on some of the more common cases and solutions.

Let’s look at some output from a render. How can you find it? Well, you can increase the verbosity of the output in the Maya Rendering Menu > Render > Render Current Frame (options box)

Render Current Frame Options Box

I usually choose “Progress Messages”. The option below that, “Detailed Messages”, gives you more information, but it also tells you every time mental ray blinks and isn’t usually necessary. Also, the more messages it prints, the more the logging itself can impact render time.

So, I have rendered a decently complex scene from a project at 1280 by 720. I have quite a few lights in the scene, most of which are area lights (about 46 of them, most of them small). I have wide glossy reflections, and I am using the Native IBL to render the environment lighting.

I haven’t included the image here because we’re going to look at the numbers. (I know, really boring.)
RC 0.9 1072 MB info : rendering statistics
RC 0.9 1072 MB info : type                      number     per eye ray
RC 0.9 1072 MB info : eye rays                 6613564            1.00
RC 0.9 1072 MB info : reflection rays         65049860            9.84
RC 0.9 1072 MB info : refraction rays          3693155            0.56
RC 0.9 1072 MB info : shadow rays            501916475           75.89
RC 0.9 1072 MB info : environment rays        69498575           10.51
RC 0.9 1072 MB info : probe rays              33284793            5.03
RC 0.9 1072 MB info : fg points interpolated  31840843            4.81
RC 0.9 1072 MB info : on average 34.21 finalgather points used per interpolation
RC 0.2 844 MB progr: writing frame buffer mayaColor to image file D:/untitled_project.exr (frame 12)
RC 0.2 844 MB progr: rendering finished
RC 0.2 844 MB info : wallclock 0:31:52.00 for rendering
RC 0.2 844 MB info : current mem usage 844 MB, max mem usage 1091 MB
GAPM 0.2 844 MB info : triangle count (including retessellation) : 5240633
IMG 0.2 844 MB info : total for cached textures and framebuffers:
IMG 0.2 844 MB info :                 4656815552 pixel block accesses
IMG 0.2 844 MB info :                     535270 pages loaded/saved, 0.0114943% image cache failures
IMG 0.2 844 MB info : maximal texture cache size: 2700 pages, 298.781 MBytes
IMG 0.2 844 MB info : uncompressed cached texture I/O: 16650.313 MB
PHEN 0.2 726 MB info : Reflection rays skipped by threshold: 17691563
PHEN 0.2 726 MB info : Refraction rays skipped by threshold: 2272489

What can you gain from this?

In a raytracer, you are shooting quite a few rays around your scene. These strike other objects, which send more rays, and so on. This can grow geometrically in a scene that uses expensive effects.

Eye Rays:

RC 0.9 1072 MB info : type                      number     per eye ray
RC 0.9 1072 MB info : eye rays                 6613564            1.00

These are the rays shot from the camera to sample the scene. They strike objects and cause other rays to be cast, which is part of why they are listed first. You may have more or fewer of these depending on a few things:

1. Motion blur will call for more of these to smooth the blur. Each ray is jittered temporally within the frame (the shutter interval) to catch changes in a pixel as objects pass through it spatially (objects crossing the frame in movement).

2. Depth of Field will call for more of these to resolve the blur.

3. Scenes with high detail or contrast will need more to improve anti-aliasing.

Tuning: Reducing these is only an option if you can live with less quality (more grain or aliasing). In Unified Sampling you reduce them by decreasing the Quality parameter or by artificially capping the maximum through Max Samples. Try using a Render Region on a noisy area of the most complex/blurry frame (a scripting sketch of these controls follows after this list):

-Gradually lower the Quality until you reach a limit of what you’d accept.

-Introduce small amounts of “Error Cutoff”

-Lastly, alter per-object samples as needed for difficult objects.
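
If you prefer to set these controls with a script rather than the UI, here is a minimal sketch using mental ray string options on the miDefaultOptions node. The option names (“samples quality”, “samples max”, “samples error cutoff”) and the stringOptions attribute layout are assumptions for the 3.10/Maya 2013 era, and the values are examples only; verify both against your installation.

# Sketch only: drive Unified Sampling through mental ray string options (assumed names/layout).
import maya.cmds as cmds

def set_string_option(name, opt_type, value, max_slots=64):
    """Reuse a slot whose name matches, or claim the first empty one (assumed layout)."""
    for i in range(max_slots):
        plug = 'miDefaultOptions.stringOptions[%d]' % i
        current = cmds.getAttr(plug + '.name')
        if current in (name, '', None):
            cmds.setAttr(plug + '.name', name, type='string')
            cmds.setAttr(plug + '.type', opt_type, type='string')
            cmds.setAttr(plug + '.value', value, type='string')
            return plug

set_string_option('unified sampling', 'boolean', 'on')
set_string_option('samples quality', 'scalar', '0.5')        # example value; lower gradually until artifacts appear
set_string_option('samples max', 'scalar', '100.0')          # example cap on Max Samples
set_string_option('samples error cutoff', 'scalar', '0.02')  # a small Error Cutoff stops refinement early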

Reflection Rays:

RC 0.9 1072 MB info : type                      number     per eye ray
RC 0.9 1072 MB info : reflection rays         65049860            9.84

These are rays cast into a scene from an object/shader to collect indirect reflection (light from other objects seen as a reflection). When these strike another object they run that object’s shader to get color information.

1. You may have more of these when you increase the shader ‘Samples’ parameter for glossy reflections

2. You may have quite a few if you have a high trace depth set by either the shader or render settings to allow more than one ‘bounce’ of the ray. This is necessary for things like a reflection in a reflection (imagine a hall of mirrors).

Tuning: You can reduce these in a few different ways (a scripting sketch follows after this list). You can:

-reduce the samples for glossy rays based on acceptable quality (the appearance of grain). For textured objects, or scenes with motion blur and/or depth of field, we recommend a brute-force approach (using Unified Sampling) with a setting of ‘1’ sample. You may need more for very wide glossy lobes on clean, untextured surfaces that are very reflective or heavily blurred.

mia_material Glossy Samples

-reduce the trace depth where it makes little or no visual difference (you may not need a reflection of a reflection of a reflection if it is blurry or dim), or use a falloff distance with either a color or an environment attached.

Trace Depth Options: Reflection

Reflection Falloff Distance and Trace Depth overrides

-use the mia_envblur node to sample a pre-blurred environment texture with only a single sample. This is supported in the mia_material and the car_paint phenomenon. An example can be seen on Zap’s blog: More Hidden Gems: mia_envblur

Single Sample from Environment (mia_envblur node as environment)
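
For reference, here is a hedged sketch of those shader-side tweaks in Python (maya.cmds). The attribute names follow the mia_material .mi parameter names (refl_gloss_samples, refl_depth, refl_falloff_on, refl_falloff_dist), and the node name is hypothetical; confirm both in the Attribute Editor before using it.

# Sketch only: tune glossy reflection cost on a mia_material_x node.
import maya.cmds as cmds

mat = 'mia_material_x1'                        # hypothetical shader name
cmds.setAttr(mat + '.refl_gloss_samples', 1)   # brute force: let Unified Sampling clean up the grain
cmds.setAttr(mat + '.refl_depth', 2)           # fewer reflection bounces
cmds.setAttr(mat + '.refl_falloff_on', 1)      # stop reflection rays beyond a distance...
cmds.setAttr(mat + '.refl_falloff_dist', 50.0) # ...50 scene units here (scene dependent)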

Refractions:

RC 0.9 1072 MB info : type                      number     per eye ray
RC 0.9 1072 MB info : refraction rays          3693155            0.56

These are rays that pass through objects like glass or windows and are bent (refracted) along the way. Note that this isn’t the same as transparency, where a ray passes through an object without being bent. Transparency is handled differently in a scene but may still be expensive with large amounts of semi-transparent objects. There are no such rays in this scene or they would be listed. Glossy refraction (specular transmission) is an effect like frosted glass and can be one of the most expensive effects; it is not as simple as a more diffuse effect like glossy (specular) reflection.

1. Frosted glass or blurry effects will increase these.

2. A high refraction trace depth may also increase these.

Tuning:

-Reduce the samples on the refraction (similar to the mia_material reflection samples control) until you reach an acceptable amount of grain.

-Add a small amount of translucency instead

-Reduce the refraction trace depth in the shader or the Render Settings, or use a falloff distance with either a color or an environment attached. A good guide for the global trace depth is how many surfaces a ray must pass through before stopping. For instance, a correctly modeled (volumetric) empty bottle has 4 surfaces to strike before the ray exits the other side. (A scripting sketch of these shader settings follows below.)
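
A similar hedged sketch for the refraction side, again assuming the mia_material .mi parameter names (refr_gloss_samples, refr_depth, refr_falloff_on, refr_falloff_dist) and a hypothetical node name:

# Sketch only: tune refraction cost on a mia_material_x node.
import maya.cmds as cmds

mat = 'mia_material_x1'                        # hypothetical shader name
cmds.setAttr(mat + '.refr_gloss_samples', 1)   # keep glossy refraction samples low; Unified Sampling resolves the grain
cmds.setAttr(mat + '.refr_depth', 4)           # e.g. an empty bottle: 4 surfaces before the ray exits
cmds.setAttr(mat + '.refr_falloff_on', 1)      # optional: fade distant refraction rays...
cmds.setAttr(mat + '.refr_falloff_dist', 50.0) # ...to a color or environment (scene dependent)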

Shadow Rays:

RC 0.9 1072 MB info : type                      number     per eye ray
RC 0.9 1072 MB info : shadow rays            501916475           75.89

Shadow rays are rays sent from surfaces back to light sources. They are also what sample an area light (including its direct “specular” reflection when the source is invisible). These can be expensive and are usually the most prolific rays in a scene: the more lights casting shadows, especially soft shadows, the more you will have, and the larger the area light, the more you will need to reduce grain.

1. Large area lights may be sampled more to reduce shadow grain or direct-reflection noise on shiny objects. This happens when the light is invisible, or when the shader is set to use “highlight only” for reflections even if the area light is visible.

2. Slightly softening the shadows and increasing the rays on delta lights (lights without area, like point and spot lights) will generate more shadow rays.

3. A high shadow trace depth, so you can see a shadow in a reflection or refraction, for example.

4. High ‘Quality’ settings on the Native IBL or high samples on the user_ibl shader.

A quick way to read about optimizing area lights can be seen in the Area Lights 101 post.

Tuning:

-Follow the guidelines in the Area Lights 101 post for tuning area lights using both High/Low Samples and a helpful shader like the Physical Light (a hedged scripting sketch follows after this list).

-Reduce the trace depth as needed. You may not need the reflection or refraction of a shadow on a blurry surface, especially if Final Gather is already darkening it.

-Old trick: use a depth map shadow, or preferably a Detail Shadowmap, which can be baked and reused on many frames as long as the objects and lights casting those shadows do not move and are not animated. You can do this selectively per light.
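
For completeness, a loosely hedged sketch of the High/Low Samples idea from Area Lights 101. The attribute names here (areaHiSamples, areaHiSampleLimit, areaLoSamples) are assumptions based on the UI labels of the mental ray “Use Light Shape” controls and may differ in your Maya version; check the light shape’s attributes before relying on them.

# Sketch only: assumed attribute names -- verify on your light shape first.
import maya.cmds as cmds

light = 'areaLightShape1'                      # hypothetical light shape name
cmds.setAttr(light + '.areaHiSamples', 24)     # samples for eye rays (primary visibility)
cmds.setAttr(light + '.areaHiSampleLimit', 1)  # after this trace depth...
cmds.setAttr(light + '.areaLoSamples', 4)      # ...fall back to the cheaper low sample count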

Environment Rays:

RC 0.9 1072 MB info : type                      number     per eye ray
RC 0.9 1072 MB info : environment rays        69498575           10.51

Environment rays are rays that leave the scene and call the environment. These are usually fast. There are more of them in this scene because I am also using the mia_envblur to speed up the environment lookups for glossy reflections, as described in Zap’s blog above.

Tuning:

-None typically.

Probe Rays:

RC 0.9 1072 MB info : type                      number     per eye ray
RC 0.9 1072 MB info : probe rays              33284793            5.03

These rays are usually the result of Ambient Occlusion rays being sent into the scene. (Ambient Occlusion + Colorbleed isn’t the same thing in this case.) They are caused by including occlusion in a material like the mia_material or in a separate occlusion framebuffer pass. In the mia_material with Unified Sampling, we usually recommend keeping the sample count to 4–6, since it behaves like a lighting effect. If used in a pass, a few things affect the quality: the distance the rays travel, the distribution of objects in the scene, and of course the sample count for the buffer. (Note: the Native IBL set to “approximate” mode will also generate probe rays, since it lights from the environment through occlusion. Not usually recommended, but fine for tests.) If no distance cutoff is used, the rays increase the raytracing overhead for your scene by striking anything they can reach.

Ambient Occlusion has become a staple effect for most CG work, but it is needed less than before. It was generally used in the past as a fake for global illumination; now that global illumination is faster and more detailed, adding it isn’t always necessary unless the global illumination solution is purposefully reduced or heavily interpolated and loses detail.

Creating an AO pass by default for compositing can be used to enhance or create details that aren’t there (occlusion where there is direct lighting is not realistic, but it can be an artistic consideration). Using this pass as a multiplication in post is also mathematically incorrect if you are trying to reproduce a beauty render. Production is starting to move away from using Ambient Occlusion as a pass or effect in modern raytracers; path tracers like iRay automatically include such an effect in their light transport, so adding it on top is redundant.

Tuning:

1. Avoid large sample counts in a shader. These rays may be sent when the shader is struck by some other ray, like a reflection or refraction; high trace depths will call more and more of these as the ray bounces around. (A sketch of the shader settings follows after this list.)

Ambient Occlusion in the mia_material

2. Use a small distance to avoid lots of unnecessary strikes on other objects. For example: buildings built to scale, three city blocks apart, do not need to occlude one another.

3. Detailed indirect illumination may decrease the need to have this feature on at all.
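
As a reference, a minimal sketch of those occlusion controls on a mia_material_x node (ao_on, ao_samples, ao_distance are the .mi parameter names; the node name is hypothetical):

# Sketch only: keep the built-in AO cheap, or turn it off entirely.
import maya.cmds as cmds

mat = 'mia_material_x1'                   # hypothetical shader name
cmds.setAttr(mat + '.ao_on', 1)           # set to 0 if detailed indirect illumination makes AO unnecessary
cmds.setAttr(mat + '.ao_samples', 4)      # 4-6 is usually enough with Unified Sampling
cmds.setAttr(mat + '.ao_distance', 10.0)  # small radius so distant objects are not probed (scene units)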

Final Gather Points Interpolated:

RC 0.9 1072 MB info : type                      number     per eye ray
RC 0.9 1072 MB info : fg points interpolated  31840843            4.81

Final Gather is a topic in and of itself, so I will only hit the highlights here. These are the points that are generated in the prepass and interpolated when an eye ray strikes them (or lands nearby). Points are generated based on geometric complexity (automatically adaptive) and by altering the “Point Density” parameter in the Render Settings. (They are also affected by the old radius settings, which have since been deprecated and should be avoided for easier setup and rendering.)

Final Gather prepass time is greatly influenced by “Accuracy” (the number of rays sent to measure the scene) and “Point Density” (used to place points projected from the camera onto geometry). During the render phase, “Point Interpolation” can increase render time at higher settings because the renderer is doing more work mathematically.

Since we are talking about the render phase and not the prepass, I will just mention the render-phase solutions here. Final Gather settings and the prepass may be covered later.

Tuning:

-Avoid large interpolation values. If your scene has complex lighting, increase the “Accuracy”; if the scene has complex geometry, increase “Point Density”. (A scripting sketch follows after this list.)

Final Gather Settings

-Use more direct lighting to stabilize the solution (such as the Native IBL or user_ibl)

-Use the fgshooter shader/script to avoid flickering in animations
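
A hedged sketch of that balance, assuming the Maya attribute names on miDefaultOptions (finalGatherRays for Accuracy, finalGatherPresampleDensity for Point Density, finalGatherPoints for Point Interpolation); confirm the mapping in your version and treat the numbers as starting points only.

# Sketch only: Accuracy for lighting, Point Density for geometry, modest Point Interpolation.
import maya.cmds as cmds

opts = 'miDefaultOptions'
cmds.setAttr(opts + '.finalGather', 1)
cmds.setAttr(opts + '.finalGatherRays', 100)              # Accuracy: raise for complex lighting
cmds.setAttr(opts + '.finalGatherPresampleDensity', 1.0)  # Point Density: raise for complex geometry
cmds.setAttr(opts + '.finalGatherPoints', 20)             # Point Interpolation: avoid very large values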

Triangle Count:

GAPM 0.2 844 MB info : triangle count (including retessellation) : 5240633

This may sound silly, but most modern raytracers do not have a Scanline option. We have reached a point where complex scenes with lots of triangles are common, and scanline rendering can actually slow down the process when there are many triangles. Instead, you should turn off Scanline and select “Raytracing” as the renderer (this is the default in Maya 2013). Rasterization also counts as a Scanline algorithm, although a more modern one.

This is often why comparisons with other renderers may show a slower result: users with lots of objects or displacements fail to turn off Scanline.

Tuning:

-Stop using the Scanline algorithm!! (A one-line sketch follows after this list.)

-Do not use overly aggressive displacements

-Use proxies or assemblies: these are pre-translated and more memory efficient since they are on-demand geometry
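
In script form this is a one-liner, assuming miDefaultOptions.scanline is the enum behind the primary renderer control (0 = off, i.e. raytrace only); check the enum values in your Maya version.

# Sketch only: render primary (eye) rays with raytracing instead of Scanline.
import maya.cmds as cmds

cmds.setAttr('miDefaultOptions.scanline', 0)  # 0 assumed to mean "off" (pure raytracing)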

Texture I/O:

IMG 0.2 844 MB info : total for cached textures and framebuffers:
IMG 0.2 844 MB info :                 4656815552 pixel block accesses
IMG 0.2 844 MB info :                     535270 pages loaded/saved, 0.0114943% image cache failures
IMG 0.2 844 MB info : maximal texture cache size: 2700 pages, 298.781 MBytes

Textures can not only increase memory usage, they can slow down a render by quite a lot! Reasons for this include:

1. Large textures pulled across a network

2. Un-mipmapped or un-cached textures: these force mental ray to load the entire full-resolution texture from the source even if not all of it is seen.

3. Insufficient memory means a lot of flushing instead of rendering (related to point 2)

4. Poorly filtered textures may also call for more eye rays to resolve aliasing. This runs the shaders again and may increase all of the other ray counts.

Tuning:

-The easiest catch-all is to read the post on Texture Publishing (a small mipmapping sketch follows after this list)

-Keep the image cache failures as low as possible, preferably below 0.01%. This can be done by manually altering a few things, such as:

*The tile size of the cached texture with imf_copy

*The cached memory limit, with the registry option, to force more efficient handling (either an increase or a decrease). This and the option above work together and are scene dependent; it is not always worth a lot of tweaking unless your scene is exceptionally texture heavy. I can now render scenes with hundreds of 4k textures with only 8GB of RAM locally.
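
Here is a hedged batch-conversion sketch that uses imf_copy’s -p (pyramid) flag to produce mipmapped, memory-mappable .map textures. The folder path is a placeholder and imf_copy must be on your PATH (it ships with mental ray/Maya); see the Texture Publishing post for the recommended pipeline.

# Sketch only: convert source textures to pyramid (mipmapped) .map files with imf_copy.
import os
import subprocess

SRC_DIR = 'D:/untitled_project/sourceimages'   # placeholder path
for name in os.listdir(SRC_DIR):
    base, ext = os.path.splitext(name)
    if ext.lower() in ('.tif', '.tga', '.jpg', '.exr', '.hdr'):
        src = os.path.join(SRC_DIR, name)
        dst = os.path.join(SRC_DIR, base + '.map')
        # imf_copy -p <in> <out> map : builds the filtering pyramid and writes a .map texture
        subprocess.check_call(['imf_copy', '-p', src, dst, 'map'])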

Notes:

    • I didn’t cover interpolated reflections or refractions. This is because in animation it is difficult to keep them from producing artifacts, and with Unified Sampling you may not even need those features. Future shading models (BSDF) will also omit them.
    • I didn’t cover the Ambient Occlusion Cache. While it may be faster during a render, tuning it can be difficult, and it is less necessary with Unified Sampling.
    • Try to avoid layering shaders for some effects. A lot can be accomplished through selective layering of textures instead. mental ray 3.11 will introduce the layering library, which will help remove these added ray counts.
    • I assume usage of Unified Sampling
    • Using ray cutoff values: these can be useful and exist in the mia_materials as a way to tell the shader not to cast a ray if its contribution is not important. It’s a little tricky to use, but heavily traced scenes may see some speed-up if this parameter is increased. Do so slowly and test frames; if set too high it will erode raytraced effects with little benefit.
    • Use the Time Buffer Diagnostics as seen in the Unified Sampling for the Artist post to identify where your scene is taking longest to render. Then look at those shader settings or possibly change per-object sample settings.

Time per pixel measured in ‘S’ or seconds. Brighter is longer.

  • Dimmer reflections/refractions need fewer samples
  • The decadent “Maya Glow Buffer” is very slow on large resolution rendered frames, even if the effect isn’t used. Turn off “Export Post Effects” in the Render Settings > Options and do the effect in post.
  • Scenes rendered in motion with motion blur do not need to be perfectly smooth when viewed in motion.
  • Do not marry an image. Some tweaks may alter the look, and even client notes alter the look. Go with the best balance of what’s achievable in the time you have; otherwise you’ll constantly be unhappy.
  • To resolve artifacts, simply “cranking up the settings” is a horrible idea. Use the progress messages and the time buffer to make faster/smart decisions.
  • Form the habit of rendering with correctly prepared textures and default settings, tweaking only where necessary once you recognize the cause of the artifact, be it an FG splotch or aliasing crawl.
  • Begin to wean off of using Ambient Occlusion as a default effect or pass. The original reason to use it (AO multiplied against an ambient pass) no longer exists, and it is really an artistic consideration now.
  • Always remember you are going for a good image, an improvement on reality so to speak. Avoid using mental ray as a physical simulation to render an image; use iRay or similar for that type of workflow. Flexibility and choice are key to getting what you need quickly for animation and visual effects.
  • I did not include Irradiance Particles or Photons. These aren’t used as often (or at all) in VFX or animation work. They are also (like Final Gathering) topics in their own right.
  • If none of the above applies, change your BSP2 to BSP and try again. If your geometry has bad bounding boxes or other problems, ray traversal can be painfully slow. This is a geometry problem; remake it if necessary.

New Maya Rendering UI Testing!

After some careful thought and a lot of tedious work by developers Brenton and Corey (mental core), you can now find a Maya Render Settings UI for testing, with Barton Gawboy managing the project. The purpose of this UI is to provide a more official/intended workflow for mental ray. This also means makers of UIs for other packages can use it as a template for features of modern mental ray (3.10 and future).

This UI is in an early phase and should not be used for production. Currently the Quality settings have been re-worked to provide a simpler interface and a modern workflow for using mental ray in Maya. Over time we will be improving on this (light and passes tabs to be re-worked) and adding documentation. We recommend Maya 2013 SP1.

You can find the scripts on the Google Code page: Maya Render Settings UI

A further discussion can be found on the ARC forum (must be a member): Maya Options UI

Navigating mental ray in Maya: ideas and experience

Since the integration of mental ray in Maya may obscure modern workflows, many of the posts here explain how to make the correct choices. But typically we assume some prior knowledge of Maya and mental ray.

We will begin a series soon on how to make modern choices and avoid pitfalls in the UI (like the default absence of shadows, no passes with materials unless using the *_passes materials, etc).

If you have had difficulty tracking down a problem only to discover it was a checkbox or some other snafu, leave it in the comments with a short description. We can use this information to make new users more efficient and help refresh the rest of us too. We will try to include it in sections explaining basic scene setup: our cheat sheet for rendering quickly.

If you are a developer, it might be helpful for you to see where some defaults could be changed and alter the corresponding .mel in your installation to make a new default.

Please keep the comments factual and tidy. We reserve the right to edit them down to the essence of the problem. 😉

For example:

My passes kept rendering black using the mia_material_x. I discovered I should upgrade them to the mia_material_x_passes shader.

The user_ibl shaders: Part 1

Introduced with mental ray 3.10 are new shaders called the user_ibl shaders.

Inside this shader package are two new shaders with different usage scenarios.

  • user_ibl_env
    • A simpler alternative to the Native IBL (mental ray), the user_ibl_env is a scene entity used to light a scene globally from an environment.
  • user_ibl_rect
    • A shader used to generate light cards or “billboards” that replace otherwise complex geometry and lights in a scene. An example workflow is discussed with Speed Racer and Digital Domain: How to Paint a Digital Car

Part 1: The user_ibl_env

Why would you use this instead of the Native IBL with string options?

The user_ibl_env can be used as a shader in the scene. This means it can be operated and manipulated like any other scene shader, attached to an area light. It also improves on the importance sampling used by the Native IBL.

The shader uses a connection to a texture. This means it has direct access to all of the detail found in an HDR image. The Native IBL will bake anything (including procedurals) attached as an environment. The user_ibl_env requires a texture to work correctly. If you are using a procedural like a ramp, it should be baked to a 32-bit image format like an HDR or EXR for rendering.

A model car rendered with user_ibl_env and HDR

First, let’s look at the shader settings:

user_ibl_env Shader Settings

Texture: This is where you attach a lat-long formatted HDR or similar high dynamic range texture to be used to light the scene.

Samples: The maximum number of samples used for lighting. More complex images, or images with wide ranges of values, may need more samples. As the importance of a sample becomes lower (maybe it is a few reflections or refractions deep), the shader may use fewer than this number.

Color: A scaling factor (RGB) for the colors in the texture. You can use this to manipulate the texture colors from the shader.

Intensity: A standard multiplier for the lighting effect. 2 = twice as bright, 3 = three times as bright, etc.

Shadow Mode: 0 = no shadow, 1 = opaque, 2 = transparent. The default is transparent shadows (good for images with windows or colored glass). The UI can be changed to a dropdown menu instead of an integer field by adding an enum attribute in the .mi file. Change the line in the .mi file regarding the shadows to this:

integer "shadow_mode" default 2,            #: enum "no shadow:opaque:transparent"

Added “enum” attribute to the .mi file.

Rotate: This rotates the lat-long texture for placement. This is measured in degrees.

As Reflection: Was the image being used for lighting shot as a reflection (mirror ball)? This will reverse the image so objects integrate correctly.

Primary: Controls whether the environment is visible to primary (eye) rays.

How do you use this shader correctly?

There are some steps to correctly use this shader inside Maya. We’ll look at them here step by step.

1. Create an area light in Maya. Its position and size do not matter. Under the mental ray rollout select Use Light Shape.

2. Under the mental ray -> Area Light rollout, set the area light to ‘Visible’.

3. Set the Type to Custom.

4. Under the mental ray -> Custom Shaders rollout, attach a user_ibl_env shader to the Light Shader connection.

5. To keep things easy, use the Maya Connection Editor to connect the user_ibl_env Samples to the High and Low Samples of the Area Light (a hedged scripting sketch of these steps follows below).

Connect the user_ibl_env samples to the area light samples

A final area light:

A connected area light set up for the user_ibl_env
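
Those five steps can also be scripted. The sketch below is hedged: the area light attribute names (areaLight, areaVisible, areaType, areaHiSamples, areaLoSamples), the enum index for the Custom type, and the miLightShader connection are all assumptions based on the UI labels, and the user_ibl_env node type only exists once the shader package is installed. Verify each name in the Attribute Editor/Connection Editor before relying on it.

# Hedged sketch of steps 1-5; attribute names and enum values are assumptions -- verify them first.
import maya.cmds as cmds

light_node = cmds.shadingNode('areaLight', asLight=True)        # 1. create an area light
shapes = cmds.listRelatives(light_node, shapes=True) or [light_node]
light = shapes[0]
cmds.setAttr(light + '.areaLight', 1)                           #    mental ray: Use Light Shape
cmds.setAttr(light + '.areaVisible', 1)                         # 2. make the area light visible
cmds.setAttr(light + '.areaType', 4)                            # 3. Type = Custom (enum index assumed)

ibl = cmds.createNode('user_ibl_env')                           # 4. the custom light shader
cmds.connectAttr(ibl + '.message', light + '.miLightShader')

cmds.connectAttr(ibl + '.samples', light + '.areaHiSamples')    # 5. let the shader drive High Samples
cmds.connectAttr(ibl + '.samples', light + '.areaLoSamples')    #    ...and Low Samples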

If you try to render your scene now, it won’t light as an environment. You still need to attach this shader to the Environment connection on the camera under the mental ray rollout.

Attach the user_ibl_env to the camera Environment

Additional Notes on this connection: You can still use ray switches and the mia_envblur shader here and attach the user_ibl_env to them. Keep in mind you may need to use a large resolution setting in the mia_envblur shader to preserve detail in reflections.

Lastly, and very importantly: you must use the Light Relative Scale string option for this shader to correctly scale the light for non-BSDF or older (legacy) shaders. The value is 1/pi. It is added on the miDefaultOptions node with the following values (a scripting sketch follows after them):

Name: light relative scale

Type: scalar

Value: 0.318
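
The same string-option mechanism sketched earlier for Unified Sampling works here. This snippet is self-contained; as before, the stringOptions attribute layout is an assumption to verify against your installation.

# Sketch only: add the "light relative scale" string option to miDefaultOptions.
import maya.cmds as cmds

def set_string_option(name, opt_type, value, max_slots=64):
    """Reuse a slot whose name matches, or claim the first empty one (assumed layout)."""
    for i in range(max_slots):
        plug = 'miDefaultOptions.stringOptions[%d]' % i
        current = cmds.getAttr(plug + '.name')
        if current in (name, '', None):
            cmds.setAttr(plug + '.name', name, type='string')
            cmds.setAttr(plug + '.type', opt_type, type='string')
            cmds.setAttr(plug + '.value', value, type='string')
            return plug

set_string_option('light relative scale', 'scalar', '0.318')  # 1/pi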

Now render with Unified Sampling. The following image was rendered using a backplate and HDR from: HDR Labs.

user_ibl_env example render


Additional Notes:

  • In these images I did not use indirect lighting. If you do use it in such a scene, you can decrease its quality by quite a lot: maybe 16 or 32 rays (or thereabouts) for Final Gather.
  • Insufficient samples for the user_ibl will show up as grain in the image. In motion, slight grain won’t be noticed; don’t over-tune your scene.
  • Avoid using any shader where it forces a specular (direct reflection) calculation from these lights. It will cause noise and increase render time.
  • Do not defeat the importance sampling mechanism by making the High and Low Samples of the area light different from the user_ibl shader’s Samples. Let it do the work for you.