You can take a brief look at the main structure of the MILA shaders inside Maya in the first post explaining their usage: The Layering Library (MILA)
One of the most important things to remember about MILA is how the framebuffer passes work.
The built-in framebuffers use the modern framebuffer mechanism in mental ray, which identifies each framebuffer by name and data type.
Your main framebuffers are additive: in compositing you simply add (plus) the passes together to recreate the beauty. This avoids operations like multiplication that cause problems at edges and make it impossible to recreate the beauty render exactly. Multiplied passes also complicate compositing objects onto one another or onto plates.
Your main passes are (First given as a Light Path Expression (LPE)):
- L<RD>E or direct_diffuse
- L.+<RD>E or indirect_diffuse
- L<RG>E or direct_glossy
- L.+<RG>E or indirect_glossy
- L<RS>E or direct_specular
- L.+<RS>E or indirect_specular
- L.+<TD>E or diffuse_transmission
- L.+<TG>E or glossy_transmission
- L.+<TS>E or specular_transmission
- LTVTE and/or front_scatter and back_scatter
- emission (in LPE, emission is actually a light)
Direct effects are usually the result of the light source.
Indirect effects are usually the result of light from other objects.
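Because these passes are additive, recombining them into the beauty is just a per-pixel sum. Here is a toy sketch in plain Python (the pass names follow the list above, but the pixel values are made up; a real comp would of course do this in Nuke or with numpy):

```python
# Rebuild the beauty by summing additive passes, per pixel and per channel.
# Each pass is a list of per-pixel RGB tuples (toy data, not real renders).

def add_images(a, b):
    """Per-pixel, per-channel addition of two equal-sized images."""
    return [tuple(x + y for x, y in zip(pa, pb)) for pa, pb in zip(a, b)]

def rebuild_beauty(passes):
    """'Plus' every pass together, as you would in compositing."""
    beauty = None
    for img in passes.values():
        beauty = img if beauty is None else add_images(beauty, img)
    return beauty

# Two pixels, three of the passes named above (made-up values):
passes = {
    "direct_diffuse":   [(0.25, 0.125, 0.0),   (0.5, 0.25, 0.25)],
    "indirect_diffuse": [(0.125, 0.125, 0.125), (0.0, 0.125, 0.0)],
    "direct_specular":  [(0.5, 0.5, 0.5),       (0.0, 0.0, 0.25)],
}
beauty = rebuild_beauty(passes)
print(beauty)  # [(0.875, 0.75, 0.625), (0.5, 0.375, 0.5)]
```

Note there is no multiply anywhere: dropping a pass simply removes that light contribution, and the remaining passes still composite cleanly over a plate.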
Why include LPE? LPE makes specifying passes the same for all rendering solutions. This idea unifies the conventions used for getting the same data regardless of renderer used.
You also have the option to add custom shaders on top of this in the material root node. Keep in mind that anything added here may increase render time, since these shaders run separately from the material, so we typically reserve them for inexpensive utility passes like noise, fresnel, and ID mattes.
Getting these framebuffer passes out of Maya requires a bit of a workaround using a legacy user pass system rediscovered by Brenton. I find it easier than using Custom Color, with the one catch that you have to keep track of your pass names and spell them correctly so they match. MILA also makes this a universal solution, since all of its shaders automatically write to these buffers with no extra work. This is part of the idea behind LPE: the light path stored is always the same for a given LPE regardless of renderer, so making it automatic is an easy decision.
For the passes built into MILA you simply need to have the framebuffers ready with the correct name and MILA will write to them automatically. Keep in mind that Maya’s current system overwrites data written to their default passes like “diffuse” so you cannot use those or the same names if they are used in the scene.
First: select the miDefaultOptions node.
Second: create a framebuffer:
The above command creates a user framebuffer, shown with its defaults below.
You have two selections above: Data Type and whether or not to interpolate (filter) the result.
You typically want to interpolate results for color framebuffers like direct diffuse, ID mattes, fresnel passes, etc. You do NOT want to interpolate data buffers like z-depth, normals, world points, etc.
You can see the typical data types that should not be interpolated at the bottom of the list. These data types are not interpolated by mental ray because it is mathematically incorrect for compositing. They also require high precision so you will notice they default to Floating Point 32-bit data.
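Why filtering data buffers is mathematically wrong can be shown with a toy z-depth example: averaging the depths of a foreground and a background sample across an object edge produces a depth that belongs to no surface in the scene. A sketch in plain Python (depth values are made up):

```python
# Toy illustration: why z-depth must not be filtered (interpolated).
# A foreground surface sits at depth 2.0, the background at depth 100.0.
fg_depth, bg_depth = 2.0, 100.0

# A filtered edge pixel averages the two samples:
filtered = (fg_depth + bg_depth) / 2.0  # 51.0

# 51.0 is not the depth of ANY surface in the scene, so depth-based
# fog, DOF, or holdout operations would act at a phantom distance.
assert filtered != fg_depth and filtered != bg_depth

# An unfiltered data buffer instead keeps one real sample per pixel,
# e.g. the closest sample:
unfiltered = fg_depth
assert unfiltered in (fg_depth, bg_depth)
```

The same argument applies to normals and world points: a blend of two valid values is generally not itself a valid value, which is why mental ray leaves these buffers unfiltered and at 32-bit float precision.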
I have not used the LPE for direct diffuse because Maya does not currently allow angle brackets and other symbols in name text fields. After creating and naming your passes, the last step is to add them to the camera Output Shaders so they render.
When you create an entry you will see an Output Pass that looks like the default one below.
Since we have already created passes, you can select the “Use User Buffer” option, then select the pass you want in the “User Buffer” dropdown menu. Below is the direct_diffuse example:
I then select the following options:
- File Mode: I want to write to the rendered file
- Image Format: OpenEXR; I have already specified 16-bit half float and EXR as my rendering format in the main Render Settings editor.
- File Name Postfix: I leave this blank. This way all of the passes are written to the same EXR and packed together as layers.
You can follow this same method when adding user passes to the mila_material root. Be sure to name them the same as the pass you create and then reference in the camera Output Shader.
Keep in mind that the usual Maya default passes for data will still work with MILA, so you can select those instead of adding them here. Adding passes here is useful when you need different or additional data per shader; ID mattes are very useful in this case. In fact, this shader can detect and use the user_data shaders, letting you assign ID groups and other data to objects. This means you can render complex scenes with fewer shaders and still organize the passes logically. I will explain this in the future, since it introduces a new workflow/pipeline for getting information out of Maya while avoiding the Render Layers system when possible.
In the example file below you’ll see I am driving some parameters of the shader with attached object data. This has a few benefits. One is that the data follows the object rather than the shader, so you can change the shader’s result by manipulating the object’s user_data. I also have a single sphere that belongs to one ID matte group but is also included in another group of ID mattes, giving me different ways to handle the object in post.
You can find an example workflow in this Maya File [removed since MILA updates broke it, need to make a new one]. The scene has the default framebuffers and a couple of ID mattes set up. You can play with the materials and quality to get other buffers to show a result (for example, emission is empty because I am not using that effect). I also have single-layer materials; try mixing and matching and seeing the resulting framebuffers. Be sure to attach your own HDRI to the Maya IBL image.
(Maya 2013 SP2)
Additional layering/mixing is left to user experimentation.
Inside mental ray there is a native (built-in) image-based lighting scheme called environment lighting mode. It was integrated into mental ray back in version 3.7, so it has been around for about four years or more.
It’s unexposed in OEM packages, but using our new rendering UI we have exposed it for use in Maya 2013. mentalCore also makes use of the Environment Light. We refer to the feature in the UI by its string option name: Environment Lighting.
Edit: this is properly exposed in Maya 2015 under Environment Lighting as “Light Emission”.
Keep in mind you should be using the UI provided, or at the least add the “light relative scale” string option, to correctly light non-BSDF (legacy) materials; otherwise they will blow out. This is also true of the user_ibl_env shader. The MEL script below will add it for you. (This is unnecessary in Maya 2015, as Autodesk already adds it.)
select miDefaultOptions;
int $idx = `getAttr -size miDefaultOptions.stringOptions`;
setAttr -type "string" miDefaultOptions.stringOptions[$idx].name "light relative scale";
setAttr -type "string" miDefaultOptions.stringOptions[$idx].value "0.318";
setAttr -type "string" miDefaultOptions.stringOptions[$idx].type "scalar";
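As far as I can tell, the 0.318 value in the script above is simply 1/π rounded to three decimals, the standard Lambertian normalization factor that keeps legacy (non-BSDF) materials from blowing out. A quick check in plain Python:

```python
import math

# The "light relative scale" value 0.318 appears to be 1/pi,
# the Lambertian diffuse normalization, rounded to three decimals.
light_relative_scale = 1.0 / math.pi
print(round(light_relative_scale, 3))  # 0.318
```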
Why do I want to use this?
Simply put: It’s fast and automatic direct lighting of scenes where your primary light source is a high dynamic range image (HDRI).
You may have heard of this method in other packages as a Dome Light; in Maya 2013, the user_ibl_env shader was introduced with similar functionality.
Why would I use this instead of the user_ibl_env?
- The Environment Light re-bakes data by point-sampling an environment attached to your camera. This means it also accepts procedural textures, unlike the user_ibl_env shader.
- In the baking process you can re-bake to a lower resolution, which automatically blurs texture detail, typically meaning faster glossy sampling and less variance.
- The Environment Light combines with BSDF shaders to use Multiple Importance Sampling automatically for quick renders.
- The Environment Light lets you continue to use the Maya IBL mechanism, which is visible and manipulable from the viewport.
- It uses a simple “Quality” scheme that will be familiar from Unified Sampling, and the two work together accordingly.
Can’t I just light with Final Gather?
Sure. But Final Gathering (FG) distributes its rays somewhat randomly, which leads to a few obvious problems, illustrated below:
- Splotches. Being an interpolated scheme means you need great accuracy to resolve fine lighting details.
- Soft shadows and sort-of occlusion-like shadowing. You need a LOT of accuracy to get the shadows looking correct.
- Complex or high contrast HDRIs need a lot of FG tuning.
An easy way to illustrate this is to look at the indirect diffuse lighting pass.
Now try the Environment Light alone!
MUCH better with pretty much default settings.
Did it take longer to render? Yes; it’s a brute-force-like technique. But getting the same quality from FG would take much, MUCH longer, so in that comparison the Environment Light is actually much faster.
What does FG + Environment Light look like?
In this situation I was able to capture direct and indirect lighting, but now indirect lighting is only captured for object-to-object light reflection. Take a look at the indirect diffuse pass now (it’s very hard to see in this case; open it in a new window):
This means you can greatly reduce your FG “accuracy” setting. In some cases as low as 8 or 16 rays. Keep in mind you might still need an FG filter setting of 1 if highlights on nearby objects are especially hot and generate speckles like the first indirect diffuse picture.
Another important note when using the mia_material: by default it does not generate a specular (direct reflection) highlight from a visible area light. This means your specular pass may be empty for the mia_material, because it uses indirect reflection (reflection rays) to sample the environment. Other shaders that directly sample the Environment Light (like the car paint shader) may show noise or grain because that is less efficient. To reduce this grain you have to increase the “Quality” of the Environment Light (or add more Unified Sampling Quality, at less efficiency).
How can I set this up?
You can use the regular Maya procedure for adding an HDRI or a Texture to light the scene including the flags. You can also attach any environment to the camera such as an environment shader, environment switch or Sun & Sky.
What are the controls?
First: I want you to realize this is NOT the Maya IBL “emit light” option.
Here are the important exposed controls (please excuse their location in the Indirect Lighting tab; as illustrated above, this is a form of direct lighting):
- On: this enables “Automatic” mode. Until the correct Progressive API is used in Maya, there is little point in using the Approximate lighting mode. (Approximate generates probe rays and then fills in ambient lighting from the HDR for fast but inaccurate lighting; the other modes sample the light as a true light source.)
- Quality: scalar slider for quality/grain control. Many scenes with sufficient texturing/complexity can get away with as little as 0.3. Increasing this value decreases grain/noise at the cost of speed, since more shadow rays are traced. (Too much noise means insufficient Quality.)
- Scale: multiplier for the light, used to control tint/shade and brightness (value). The visible environment is not changed, only the light contribution; environments are typically assumed to be comped in later.
- Shadow: an optimization; solid shadows are faster but treat all objects as opaque. Off means no shadows are cast.
- Cache: on/off; this creates an acceleration structure for faster lookups with complex lighting, meaning the lighting lookup is done through a non-texture mechanism. Reflections may show artifacts at insufficient baking resolution (described below), but lighting should be faster. Off means the environment is baked to a texture, with the usual implications/mechanisms of texture lookups.
- Resolution: the lower the resolution, the faster the baking, the lower the memory usage, and the quicker the render, at the cost of detail in reflections and lighting. (Works with iray as well.) Below is an example where the resolution is so low the shadows are muddy and the lighting is very dim, because the baking process missed some of the light sources. Below that is a correct version with higher resolution.
Resolution with Cache “On”: notice how the same resolution in cache “on” mode did a better job by default on the lighting and shadows.
- Shader Samples: when baking the IBL, this takes more samples before baking to avoid missing details or little hotspots. Useful with complex HDRIs or small Resolution settings like the above examples. (Works with iray.) This typically isn’t important with low-contrast or low-detail maps/procedurals; a ramp, for example, will not benefit from more samples. Use this if you find you are missing some small light sources or your shadow detail suffers. You can typically find a happy medium of lower resolution plus a few extra shader samples.
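The variance-reduction effect of baking to a lower resolution can be illustrated with a toy sketch in plain Python: averaging neighboring texels prefilters (blurs) the detail, so glossy lookups sample a smoother, lower-variance signal. (The 1-D "environment" values below are made up; real baking is of course 2-D.)

```python
# Toy sketch: downsampling an environment prefilters hotspots,
# which is why lower bake Resolution typically means less glossy noise.

def downsample(row, factor=2):
    """Average each block of `factor` texels into one texel."""
    return [sum(row[i:i + factor]) / factor
            for i in range(0, len(row), factor)]

def variance(vals):
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

env = [0.0, 8.0, 0.0, 8.0, 0.0, 8.0, 0.0, 8.0]  # tiny bright hotspots
baked = downsample(env)                          # [4.0, 4.0, 4.0, 4.0]

# The baked version is smoother, so random lookups into it vary less:
assert variance(baked) < variance(env)
```

The trade-off described above also falls out of this sketch: average too aggressively and small light sources vanish entirely, which is what the extra Shader Samples are there to prevent.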
Things to note:
- HDRIs with a low range of values will not produce crisp or deep shadows. You want a happy medium of range in a tonemapped HDR so you don’t get hot spikes in reflections or poor shadowing.
- HDRIs with multiple light sources will automatically cast multiple shadows.
- Don’t forget to use “light relative scale”!! (MEL at the top of the post)
- Concave objects or interior scenes not lit directly by the Builtin IBL will need the usual higher quality FG and Portal Lights.
- Set-acquired HDRIs work well for this with some additional bounce lights to “sweeten” the look.
- The mip_matteshadow shader can be used to capture shadows for compositing onto live action plates.
- Remember to use HDRI data. JPEGs and other common formats do not have enough range to produce good lighting.
- Remember to reduce your Final Gather accuracy to speed up renders considerably. You do not need high settings since the Environment Light handles the bulk of the work.
Here’s another spot rendered with mental ray.
The environment and car were rendered and composited with live action ink and effects from Houdini.
Originally envisioned as a filmed piece, the spot’s car and environment were later replaced with purely rendered versions. The artists matched the shot car so closely it was impossible to tell the difference, which allowed the freedom to change how the commercial was shot while maintaining the original look and feel the director desired.
Take a look here: 2013 Toyota Avalon: Formula
Thought I’d post another spot rendered with mental ray.
From a spot for Procter & Gamble highlighting the moms who raise Olympic athletes: The Mill created CG crowds and stadiums using Maya and Massive.
This spot also garnered an Emmy win for The Mill, L.A. Congratulations to the team!
You can watch the ad here: The Mill – Best Job