Blog Archives

Using Mayatomr: Normal Displacement

Normal displacement is displacement relative to the surface normal. Here's a quick tutorial for using normal displacement with Mayatomr.

Reference mesh, divisions 5

Start with

  • Base mesh (created by down-res’ing a high-res reference mesh)
  • Normal displacement map (exported with values that match the base mesh to the reference mesh)

Step-by-step

Open Maya, import the base mesh, and apply a material shader. Typically your base mesh will have a low poly count:
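
If you prefer scripting this step, here is a minimal MEL sketch; the file path, mesh name, and the choice of mia_material_x are all placeholders (any material will do):

// placeholder path and names; substitute your own
file -import -type "OBJ" "C:/work/base_mesh.obj";
string $mat = `shadingNode -asShader mia_material_x`;  // any material shader works
select -r base_mesh;       // the imported transform; your node name will differ
hyperShade -assign $mat;   // assign the material to the selection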

Base mesh, divisions 1

From the material’s shadingEngine, attach a displacement shader.

shadingEngine > Shading Group Attributes > Displacement mat. [map button]

Select the “File” node from the prompt and load the correct displacement map. Note: Maya automatically creates a displacementShader node between the shading group and the file node. Because normal displacement only requires a scalar (greyscale) value, the file node connects to the displacementShader via its alpha channel. “Alpha Is Luminance” has also been enabled so that the alpha is the average of the RGB channels. Set the file node’s Filter Type to Off.

file > File Attributes > Filter Type = Off
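
For reference, the same hookup as a MEL sketch; the shading group name (myMaterialSG) and the texture path are placeholders from my scene:

string $file = `shadingNode -asTexture file`;
setAttr ($file + ".fileTextureName") -type "string" "C:/work/disp_map.tif";
setAttr ($file + ".alphaIsLuminance") 1;  // alpha = average of RGB
setAttr ($file + ".filterType") 0;        // 0 = Off
string $disp = `shadingNode -asShader displacementShader`;
connectAttr -f ($file + ".outAlpha") ($disp + ".displacement");
connectAttr -f ($disp + ".displacement") "myMaterialSG.displacementShader";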

Turning filtering off is important because you do not want Maya trying to interpolate displacement samples. If you render now, you’ll get an interesting-looking result.

cool, but not useful

The image looks like this because the displacement map is only 8-bit. Mental ray expects negative values to be inward displacement, positive values to be outward displacement, and zero to be neutral displacement. Because 8-bit images can only store values from 0.0 to 1.0, the sculpting application has exported a displacement map where 0.5 corresponds to neutral. Offsetting the alpha by that amount (subtracting 0.5, so a stored 0.5 becomes 0.0 neutral, 1.0 becomes +0.5 outward, and 0.0 becomes -0.5 inward) adjusts for the difference.

file > Color Balance > Alpha Offset = -0.5

You will most likely want to leave Alpha Offset at 0.0 for 32-bit/floating-point displacement maps (depending on how it was exported).
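
The MEL equivalent, assuming the default-named file node from the sketch above:

setAttr "file1.alphaOffset" -0.5;  // shift the 8-bit midpoint (0.5) down to neutral (0.0)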

Better

The mesh still does not match.  Here’s why:

  1. The base mesh still has hard normals causing the displacement to open up seams as it pushes the surface outward.
  2. As soon as you attach a displacement shader, Mayatomr “helps” users by applying a length-distance-angle approximation to the polygon surface.  This is not ideal.

We can fix both these problems by turning the mesh into a Catmull-Clark subdivision surface (ccmesh).  Using the approximation editor, apply a parametric subdivision to the surface.  By default, applying this approximation triggers Mayatomr to export the object as a ccmesh.

Window > Rendering Editors > mental ray > Approximation Editor
mental ray Approximation Editor > Subdivisions (Polygons and Subd. Surfaces) > Create
mentalraySubdivApprox > Subdivision Surface Quality > Approx Method = Parametric
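
Here is a rough MEL equivalent of what the Approximation Editor creates behind the scenes; baseMeshShape is a placeholder, and the miSubdivApprox attribute/connection is how it appears in my Maya 2013 scenes (verify in the Connection Editor if your Mayatomr version differs):

string $approx = `createNode mentalraySubdivApprox`;
// Mayatomr adds the miSubdivApprox attribute to mesh shapes when loaded
connectAttr -f ($approx + ".message") "baseMeshShape.miSubdivApprox";
// set Approx Method = Parametric in the Attribute Editor, then:
setAttr ($approx + ".nSubdivisions") 2;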

N Subdivisions = 2

The seams have closed up; however, the mesh has lost all detail. The shadows look particularly bad because of the low triangle count. Increasing the number of subdivisions will bring back detail. To match the reference mesh, set N Subdivisions = N(reference) - N(base), in this case 5 - 1 = 4.
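
Or set it on the approximation node directly (default node name assumed):

setAttr "mentalraySubdivApprox1.nSubdivisions" 4;  // 5 (reference) - 1 (base)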

N Subdivisions = 4

Lower N Subdivisions values result in a loss of detail.

N Subdivisions = 3

Larger N Subdivisions values may cause visual artifacts as you exceed the resolution of the displacement map (individual displacement-map pixels become visible).

N Subdivisions = 6

Unfortunately, normal displacement is inherently limited in the geometry it can describe. For modern workflows (and my next blog post!) use vector displacement.

2013 Toyota Avalon: Electric

Recently completed at The Mill is a commercial for the Toyota Avalon called “Electric”. You may recognize this set from a previous piece called “Formula”.

This was rendered in mental ray with a complete CG car as before, but we added the lightning as object area lights to get the right feel of light from the contact points.

Special thanks to Brenton for making this light shader for us to use.

You can see the commercial here: Toyota Avalon: Electric


Rollin’ Safari

You’re probably familiar with these hilarious shorts from students at Filmakademie Baden-Wuerttemberg, but we thought we’d share them again to make sure no one missed their work. Too often we shoot for overly complex visuals (explosions, smoke, etc.) when something clean and simple, with a good story and texture work, is more effective and even more enjoyable. The animation, color, and shape here work well together and show good aesthetic design.

Hoping to see more from this group of artists in the future!

Rendered with mental ray through Mental Core, these shorts show another side of rendering with mental ray, one that’s more cartoon- and humor-driven.

Take a look here: Rollin’ Safari

And read more about it here: CGSociety: Rollin’ Safari


Directors: Kyra Buschor, Anna Habermehl, Constantin Paeplow.
Producers: Valentina Brüning, Anna Habermehl, Philipp Wolf.
Animation: Kyra Buschor, Anna Habermehl, Constantin Paeplow.
Camera/DoP: Chris McKissick.
Character Design: Kyra Buschor.
Technical Directors: Thomas Hartmann, Sascha Langer, Markus Kranzler, Christoph Westphal, David Kirchner.
Effects: Thomas Hartmann, David Kirchner, Markus Kranzler.
Music: Stephan Schelens.
Sound: Nami Strack.
Voice Actors: Ferdinand Engländer, Gottfried Mentor.
Compositing: Johannes Peter, Constantin Paeplow, Christoph Westphal.
Editing: Anna Habermehl, Kyra Buschor, Constantin Paeplow.

Using Framebuffers with the Layering Library (MILA)

You can take a brief look at the main structure of the MILA shaders inside Maya in the first post explaining their usage: The Layering Library (MILA)

One of the most important things to remember about MILA is how the framebuffer passes work.

The built-in framebuffers use the modern framebuffer mechanism in mental ray, where each framebuffer has a name and a type.

Your main framebuffers are additive; this means that in compositing you simply add (plus) the passes together to recreate your beauty, as the worked example after the pass list shows. This avoids operations like multiplication that cause problems: multiplying in compositing breaks edges, makes it impossible to exactly recreate the beauty render, and complicates compositing objects onto one another or onto plates.

Your main passes are listed below (given first as a Light Path Expression (LPE), then as the MILA pass name):

  • L<RD>E  or direct_diffuse
  • L.+<RD>E or indirect_diffuse
  • L<RG>E or direct_glossy
  • L.+<RG>E or indirect_glossy
  • L<RS>E or direct_specular
  • L.+<RS>E or indirect_specular
  • L.+<TD>E or diffuse_transmission
  • L.+<TG>E or glossy_transmission
  • L.+<TS>E or specular_transmission
  • LTVTE and/or front_scatter and back_scatter
  • emission (in LPE, emission is actually a light)
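
Because the buffers are additive, rebuilding the beauty in comp is just a chain of plus/add operations over these passes:

beauty = direct_diffuse + indirect_diffuse
       + direct_glossy + indirect_glossy
       + direct_specular + indirect_specular
       + diffuse_transmission + glossy_transmission + specular_transmission
       + front_scatter + back_scatter
       + emission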

Direct effects are usually the result of the light source.

Indirect effects are usually the result of light from other objects.

Why include LPE? LPE makes specifying passes the same for all rendering solutions. This idea unifies the conventions used for getting the same data regardless of renderer used.

You also have the option to add custom shaders on top of this in the material root node. Keep in mind that what is added here may increase render time, since these shaders run separately from the material; we typically reserve them for inexpensive utility passes like noise, fresnel, and ID mattes.

The root node, the mila_material. This includes the ability to create and attach custom framebuffers for output.

Getting these framebuffer passes from Maya requires a bit of a workaround using a legacy user-pass system rediscovered by Brenton. I find it easier than using Custom Color, with the exception that you have to keep track of your pass names and spell them consistently so they match up. MILA also makes it a universal solution, since all of its shaders automatically write to these buffers without extra work. This is part of the idea behind LPE: the light path stored is always the same for the LPE chosen, regardless of renderer. Making this automatic is an easy decision in this case.

For the passes built into MILA you simply need to have the framebuffers ready with the correct names, and MILA will write to them automatically. Keep in mind that Maya’s current system overwrites data written to its default passes like “diffuse”, so you cannot reuse those names if those passes are used in the scene.

First, select the miDefaultOptions node:

select miDefaultOptions;

Second, create a framebuffer:

AEmrUserBuffersAppend miDefaultOptions.frameBufferList;

The above command creates a user framebuffer seen as default below.

A new user framebuffer

You have two selections above: Data Type and whether or not to interpolate (filter) the result.

You typically want to interpolate results for color framebuffers like direct diffuse, ID mattes, fresnel passes, etc. You do NOT want to interpolate data buffers like z-depth, normals, world points, etc.

You can see the typical data types that should not be interpolated at the bottom of the list. mental ray does not interpolate these data types because doing so is mathematically incorrect for compositing. They also require high precision, so you will notice they default to 32-bit floating-point data.

Framebuffer Data Types – Data Passes
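
One scripting note: as far as I can tell with this legacy system, the framebuffer takes its name from the mentalrayUserBuffer node itself, so rename each new node to match its MILA pass exactly:

rename mentalrayUserBuffer1 "direct_diffuse";  // node name = pass name MILA writes to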

I have not used the LPE as the name for direct diffuse because Maya does not currently allow angle brackets and other symbols in name text fields. After creating and naming your passes, the last step is to add them to the camera Output Shaders so they are rendered.

mental ray tab, add an output pass

When you create an entry, you will see an Output Pass that looks like the default one below.

Default Output Shader

Since we have already created passes, select the “Use User Buffer” option, then pick the pass you want from the “User Buffer” dropdown menu. Below is the direct_diffuse example:

Direct Diffuse Output Shader

I then select the following options:

  • File Mode: I want to write to the rendered file
  • Image Format: OpenEXR. I have already specified 16-bit half float and EXR as my rendering format in the main Render Settings editor.
  • File Name Postfix: I leave this blank. This way all of the passes are written to the same EXR and packed together as layers.

You can follow this same method when adding user passes to the mila_material root. Be sure to name them the same as the passes you create and then reference in the camera Output Shader.

Added Color and Vector buffers

Keep in mind that the usual Maya default passes for data will still work with MILA, and you can select those instead of adding them here. Adding them here is useful if you need different or additional data per shader; ID mattes are very useful in this case. In fact, this shader can detect and use the user_data shaders, letting you assign ID groups and other data to objects. This means you can render complex scenes with fewer shaders and still organize the passes logically. This will be a future explanation, since it introduces a new workflow/pipeline for getting information from Maya while avoiding the Render Layers system when possible.

In the example file below you’ll see I am driving some parameters of the shader with attached object data. This has a few benefits: the data follows the object rather than the shader, and you can change the result of the shader by manipulating the object’s user_data. I also have a single sphere that is in one ID matte group but also included in another group of ID mattes, giving me different ways to handle the object in post.

You can find an example workflow in this Maya File [removed since MILA updates broke it, need to make a new one]. The scene has the default framebuffers and a couple of ID mattes set up. You can play with the materials and quality settings to get other buffers to show a result (for example, emission is empty because I am not using that effect). I also have single-layer materials; try mixing and matching and examining the resulting framebuffers. Be sure to attach your own HDRI to the Maya IBL image.

(Maya 2013 SP2)

Additional layering/mixing is left to user experimentation.