
Using Mayatomr: Normal Displacement

Normal displacement is displacement relative to the surface normal. Here's a quick tutorial for using normal displacement with Mayatomr.

Reference mesh, divisions 5

Start with

  • Base mesh (created by down-res’ing a high-res reference mesh)
  • Normal displacement map (exported with values that match the base mesh to the reference mesh)

Step-by-step

Open Maya, import the base mesh, and apply a material shader. Typically your base mesh will have a low poly count:

Base mesh, divisions 1
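If you prefer to script this step, here is a minimal maya.cmds sketch; the file path, the 'base_mesh' transform name, and the material names are placeholders rather than anything from the actual scene.

import maya.cmds as cmds

# Import the base mesh (placeholder path).
cmds.file('/path/to/base_mesh.obj', i=True)

# Create a material plus its shading group and assign the imported mesh to it.
shader = cmds.shadingNode('lambert', asShader=True, name='baseMtl')
sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True, name='baseMtlSG')
cmds.connectAttr(shader + '.outColor', sg + '.surfaceShader', force=True)
cmds.sets('base_mesh', edit=True, forceElement=sg)  # 'base_mesh' is the imported transform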

From the material’s shadingEngine, attach a displacement shader.

shadingEngine > Shading Group Attributes > Displacement mat. [map button]

Select the “File” node from the prompt and load the correct displacement map. Note: Maya automatically creates a displacementShader between the shading group and the file node. Because normal displacement only requires a scalar (greyscale) value, the file node connects to the displacementShader via its alpha channel. “Alpha Is Luminance” has also been enabled so that the alpha is derived from the RGB channels. Set the file node’s Filter Type to Off.

file > File Attributes > Filter Type = Off
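For reference, here is a rough maya.cmds sketch of the same wiring. The node names ('dispMap', 'baseMtlSG') and the texture path are assumptions, and Maya normally builds the displacementShader connection for you when you use the map button.

import maya.cmds as cmds

# File texture holding the scalar displacement map (placeholder path).
fileNode = cmds.shadingNode('file', asTexture=True, name='dispMap')
cmds.setAttr(fileNode + '.fileTextureName', '/path/to/normal_disp.tif', type='string')
cmds.setAttr(fileNode + '.alphaIsLuminance', 1)   # Alpha Is Luminance on
cmds.setAttr(fileNode + '.filterType', 0)         # Filter Type = Off

# Displacement shader wired into the shading group through the alpha channel.
dispShader = cmds.shadingNode('displacementShader', asShader=True, name='dispShader1')
cmds.connectAttr(fileNode + '.outAlpha', dispShader + '.displacement', force=True)
cmds.connectAttr(dispShader + '.displacement', 'baseMtlSG.displacementShader', force=True)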

Turning filtering off is important because you do not want Maya trying to interpolate displacement samples. If you render now, you’ll get an interesting-looking result.

cool, but not useful

The reason the image looks like this is that the displacement map is only 8-bit. Mental ray expects negative values to be inward displacement, positive values to be outward displacement, and zero to be neutral displacement. Because 8-bit images only map to positive values (0.0 to 1.0), the sculpting application has exported a displacement map where 0.5 corresponds to neutral. Offsetting the alpha by that amount (subtracting 0.5) will adjust for the difference.

file > Color Balance > Alpha Offset = -0.5

You will most likely want to leave Alpha Offset at 0.0 for 32-bit/floating-point displacement maps (depending on how it was exported).
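Scripted, the same adjustment is a single setAttr call; this assumes the hypothetical 'dispMap' file node from the earlier sketch.

import maya.cmds as cmds

cmds.setAttr('dispMap.alphaOffset', -0.5)  # 8-bit map: 0.5 is neutral, so shift it down to 0.0
# For a floating-point map exported around a true zero, leave the offset alone:
# cmds.setAttr('dispMap.alphaOffset', 0.0)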

Better

The mesh still does not match the reference. Here’s why:

  1. The base mesh still has hard normals causing the displacement to open up seams as it pushes the surface outward.
  2. As soon as you attach a displacement shader, Mayatomr “helps” users by applying a length-distance-angle approximation to the polygon surface.  This is not ideal.

We can fix both these problems by turning the mesh into a Catmull-Clark subdivision surface (ccmesh).  Using the approximation editor, apply a parametric subdivision to the surface.  By default, applying this approximation triggers Mayatomr to export the object as a ccmesh.

Window > Rendering Editors > mental ray > Approximation Editor
mental ray Approximation Editor > Subdivisions (Polygons and Subd. Surfaces) > Create
mentalraySubdivApprox > Subdivision Surface Quality > Approx Method = Parametric
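A rough scripted equivalent follows; the node type is what the Approximation Editor creates, but the enum index for Parametric and the mesh-side miSubdivApprox attribute name are assumptions worth double-checking in the Attribute Editor before relying on them.

import maya.cmds as cmds

approx = cmds.createNode('mentalraySubdivApprox', name='subdivApprox1')
cmds.setAttr(approx + '.approxMethod', 3)   # assumed enum index for Parametric (verify)
cmds.setAttr(approx + '.nSubdivisions', 2)

# Attach the approximation to the base mesh shape ('base_meshShape' is a placeholder).
cmds.connectAttr(approx + '.message', 'base_meshShape.miSubdivApprox', force=True)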

N Subdivisions = 2

The seams have closed up; however, the mesh has lost all detail. The shadows look particularly bad because of the low triangle count. Increasing the number of subdivisions will bring back detail. To match the reference mesh, set N Subdivisions = N(reference) – N(base), in this case 5 – 1 = 4.
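In script form, reusing the hypothetical 'subdivApprox1' node from above:

import maya.cmds as cmds

ref_divisions, base_divisions = 5, 1
cmds.setAttr('subdivApprox1.nSubdivisions', ref_divisions - base_divisions)  # 5 - 1 = 4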

N Subdivisions = 4

Lower values of N Subdivisions result in a loss of detail.

N Subdivisions = 3

Larger values of N Subdivisions may cause visual artifacts as you exceed the resolution of the displacement map (individual displacement-map pixels become visible).

N Subdivisions = 6

Unfortunately, normal displacement is inherently limited in the geometry it can describe. For modern workflows (and my next blog post!), use vector displacement.

Render Tests, Combined Lighting

Looking at the features in mental ray, there are a lot of combinations to be had. Some are more helpful than others depending on scene needs.

And some results might be counterintuitive.

HD Kitchen Render

In rendering, you usually have two options for indirect lighting.

One involves brute-force rendering (also known as “unbiased”), where every sample in the scene sends rays back into the scene to sample other objects and the environment. This is typically easy to tune (just crank it up until acceptable), requires less technical skill, and its frame-to-frame variance is fairly predictable without testing. However, it can increase render times by a large amount in complex scenes.

Option two involves interpolated (or “biased”) schemes, where sparse samples are taken across the image and merged together through an interpolation algorithm. This is typically much faster to render but requires more artist tuning and may cause surprises in later frames. For Final Gather we have the option of using the fg_shooter to help mitigate this problem.

In the examples below I am combining brute-force (non-caching, to be specific) Final Gather with Irradiance Particles. This means Final Gather rays are used to collect detailed information in the primary bounce (typically the most important). Irradiance Particles are used to collect secondary (and higher) bounces. There are no portal lights being used. I might also call this “The Joys of Rendering On My Ancient Workstation”, which can still get the job done.

Irradiance Particles have a few interesting features:

  • Additional bounces each send fewer and fewer rays, so each pass of collection is faster (Final Gather sends only one ray per additional bounce)
  • They only collect diffuse information (Final Gather collects specular as well as diffuse)
  • The Importon optimization phase is sadly not multi-threaded, but this matters less here since we are only using Irradiance Particles for secondary bounces and emitting fewer Importons
  • The importance-based scheme means it’s smart about probing the brighter areas of your scene for information instead of wasting rays in dark spots. This effectively “aims” indirect illumination rays towards the more important areas of the scene.

2 diffuse bounces, 40 minutes

3 diffuse bounces, 29 minutes

5 diffuse bounces, 24 minutes

10 diffuse bounces, 18 minutes

Notice a few things here about how long these took to render, as shown in the image captions…

Unified Sampling takes less time with each increase in diffuse bounces (I do use a higher-than-normal setting in this case to force Unified to catch small illumination details). Adding bounces to get a faster render might seem counterintuitive, but there’s a reason for it: the decreased lighting contrast needs fewer samples per pixel. There are still things to think about in a more realistic scene, though.

In most scenes, texturing will create more contrast in the image. This may drive more samples through Unified Sampling. In that case you may be able to reduce the number of rays sent by brute-force Final Gather, since Unified is taking more samples anyway. This is a balance you might need to tune. I would still recommend portal lights, which give you more direct-lighting detail and reduce the need for indirect illumination rays. Also, using this type of combination might take less time to tune for the right lighting/detail balance than either algorithm would on its own.

My settings for this scene are:

  • Final Gather Force – 48 accuracy, filter 1 (the filter option does nothing if rays/accuracy are less than 8)
  • Importon Density – 0.5, bounces 3 (this can be adjusted based on what you need, but I recommend keeping it as low as possible; too few Importons introduces artifacts in crevices and corners)
  • Irradiance Particles – rays 64 (the Maya UI limits this to 512 rays, but there is no actual limit; you can change it in MEL or use our UI)

Diffuse bounces are controlled through the Final Gather trace settings, not the Irradiance Particles.
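For those who prefer scripting, here is a sketch of these settings on miDefaultOptions. The Final Gather attribute names are the common ones, but the Irradiance Particles and Importon attribute names vary between releases, so the snippet only probes for them rather than guessing.

import maya.cmds as cmds

opts = 'miDefaultOptions'
cmds.setAttr(opts + '.finalGather', 1)
cmds.setAttr(opts + '.finalGatherRays', 48)         # "Accuracy" in the UI
cmds.setAttr(opts + '.finalGatherFilter', 1)
cmds.setAttr(opts + '.finalGatherTraceDiffuse', 3)  # diffuse bounces live on Final Gather

# List the Irradiance Particles / Importon attributes your version actually exposes
# (for example, the rays attribute that bypasses the 512 slider limit) before setting them.
print(cmds.listAttr(opts, string='*rradiance*'))
print(cmds.listAttr(opts, string='*mporton*'))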

A quick primer for Irradiance Particles:

This lighting technique is usually best suited to interiors and to use as a secondary bounce. For exteriors I highly recommend the Environment Lighting Mode. The main reason for this choice is that Irradiance Particles only collect diffuse information, which means something like glass will block them from gathering information. Also, the optimization phase for Importons is single-threaded and cannot be cached across multiple frames.

If you are looking for the easiest way to increase the quality of Irradiance Particles, the simple answer is to increase the number of Importons emitted. This means:

  • Better detail in geometry and cracks
  • More scene information from better sampling and more rays
  • Slower renders from the emission and optimization phases

However, you can achieve better performance by balancing rays, interpolation, and Importon density if you have the time and/or desire.

iray hardware benchmarks

For those of you looking into using iray inside Maya or through future solutions like [0x1] integration, the friendly developers at migenius have put together a useful comparison for hardware (GPU) setups.

Take a look at their comparison here, using RealityServer: Iray Benchmarks

iray benchmark scene from Evermotion