One of the features artists have asked for is the ability to do dispersion inside mental ray without needing a custom shader.
Not exposed in the main UI is an option for “weight tint”, a weight you can control inside a mila_mix or mila_layer node to get the effect. Slightly offsetting the IOR of each transmission component is what controls the spread of the dispersion effect.
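The idea can be sketched numerically: take the material’s base IOR and nudge it down for the red band and up for the blue band, so each band refracts at a slightly different angle. A minimal Python sketch, where the function name and the `spread` value are illustrative and not MILA parameters:

```python
def band_iors(base_ior, spread=0.02):
    """Return slightly offset IORs for three color bands (R, G, B).

    Red refracts least and blue most; `spread` controls how wide the
    dispersion fans out. The values here are illustrative only.
    """
    return {
        "red":   base_ior - spread,
        "green": base_ior,
        "blue":  base_ior + spread,
    }

# Three transmission components, one per band, each weighted by a
# pure red, green, or blue "weight tint" and given its own IOR:
iors = band_iors(1.5, spread=0.02)
```

In the material, each band becomes its own transmission component with a pure-color weight tint and the matching IOR; a larger spread fans the colors out further.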
I will update this post when I have a useful phenomenon to provide that makes this simpler, but in the meantime you might come up with your own experiments. Updated below.
You can find the three band (color) phenomenon here. (Copy and paste if your browser doesn’t download the file. Place in your “include” folder for mental ray shaders)
The phenomenon exposes the following controls to the user as a transmission-only component. You can layer this with other components manually by connecting it through the Hypershade; the current incarnation of MILA in Maya isn’t set up to use custom phenomena flexibly just yet. By altering the phenomenon you can include other controls. For example, you can add a tint control instead of having only colorless transmission, or, for nicer-looking dispersion, you can create six bands instead of the three in this version (think ROYGBIV).
You can see the power here of Phenomenon and MILA. You can create and store constructs and expose the controls you want for yourself or others. Using the conventions outlined in the MILA documents you can continue to build complex materials with all the benefits of importance sampling and light sharing.
*hint: if you find the result is a little “green” (assuming you have connected this as RGB), you can reduce the ray cutoff to solve it. It’s not always necessary and depends on the model. The link to the string options for MILA is HERE. Eventually such controls will be exposed natively in the UI.
It’s that time of year again to look at what’s changed in mental ray. While many things are available in mental ray itself, some need a GUI to access them. GTC showed a possible UI solution for using advanced preview features.
But first, you might want to welcome mental ray’s official blog, launched just previously, to the blogging world:
“Inside mental ray”
NVIDIA has been hard at work on the plug-in with Autodesk support. You’ll likely notice plenty of changes in this version of Maya, and this should be nothing compared to the development that’s coming. These UI examples, along with a demonstration, were shown publicly at GTC.
Things to note in 2015:
- The Layering Library is now included with Maya 2015. These shaders should become your daily driver for all your tasks, and they will continue to evolve. Tutorials will follow here as well, since the shaders have been updated quite a bit since they were last discussed. These shaders are based on the Material Definition Language.
- Ambient Occlusion on the GPU is integrated into the Maya framebuffer system.
- Global Illumination on the GPU is included but not exposed. This feature is in progress. Look for a tutorial coming soon!
- Improved “brute force” final gather that converges more quickly, especially when using fewer rays and Unified Sampling
- Ptex is now handled natively in the Maya file node. Simply open or attach a Ptex file. (Bump not supported since it relies on UVs. Use normal maps instead.)
- Tiled Texture workflow (known as udim or tiled UVs) is now supported natively in the Maya file nodes with options for Mudbox, Zbrush, and Mari conventions. Make sure to use texture formats that support Texture Caching for the best performance!
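For reference, the Mari-style udim convention is simple arithmetic over the tile number, with tiles running ten across in u; a quick sketch of that convention (not plug-in code):

```python
def udim_to_uv_offset(udim):
    """Convert a Mari-style UDIM tile number (1001, 1002, ...) into
    integer (u, v) tile offsets. Tile 1001 is (0, 0); tiles run ten
    across in u before v increments."""
    index = udim - 1001
    return index % 10, index // 10

def uv_offset_to_udim(u, v):
    """Inverse mapping: (u, v) tile offsets back to the UDIM number."""
    return 1001 + u + 10 * v
```

For example, `udim_to_uv_offset(1012)` gives tile `(1, 1)`, i.e. the second column of the second row.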
- The render settings UI gets another small facelift by defocusing rarely used controls. Work here is ongoing.
- The Environment Lighting mode (Native IBL) is now integrated into the Maya IBL system
- Improvements to Progressive Rendering and stability in Maya. Note that this feature is more complete in 3ds Max 2015 than Maya right now.
- mental ray now uses OpenSubdiv from Pixar.
- Improvements and fixes to built-in object lights.
- Vastly improved Light Importance Sampling (Light IS) that now handles all light types, from point sources to object/mesh lights, textured or not (not yet exposed in the UI). Scenes with many lights now render much faster (in some cases in less than a third of the original time) and with less noise. A single “Quality” slider provides overall control.
- The render viewport has updated controls for color space and viewing.
- Shader ball updates are improved and more interactive.
- Fast frame preview for non-progressive renders
mental ray also supports OCIO. This should make it easier for artists to use linear color workflow in the future.
What does the further future hold? As Maya catches up to existing features in mental ray, you can expect development to move forward more quickly with features reaching Maya at the same time as mental ray.
The Layering Library (MILA) shaders are currently in beta and are not designed for production use. However, here at The Mill in LA we took some extra care (and time) in the look-dev process to use the MILA shaders in a commercial spot for Norfolk Southern.
You’ll see many trains rendered using the MILA library for mental ray.
Feedback was very positive. We were able to layer effects easily on objects with different material types and labels, all while keeping render times under control (some complex 1080 HD frames with motion blur took about two-plus hours a frame with brute-force settings and the Environment Light). Artists also found the glossy reflection much easier to handle and faster to use than the mia_materials.
One artist said of their experience with the MILA material, “I will never use the mia_material again.”
Take a look at the spot below.
As the MILA shaders evolve, features will be updated here when possible. These shaders are inspired by the Material Definition Language, and you can find more on that from the GTC conference here: NVIDIA Material Definition Language for Coordinating Materials. (Thanks to Saycon for the link in the comments.)
This latest release includes some more user-inspired changes like:
- A non-physical global clamp for reflection to eliminate hotspots (fireflies) from lights and interreflections
- Independent direct and indirect contribution sliders on components to create a non-physical but art-driven look
- Further moving of controls to global Quality String Options both for all MILA Quality as well as specific controls like “mila glossy quality” etc.
- Continued work on the adaptability of the Quality controls, meaning fewer tweaks: just set the shader look and hit render
- Creating a Diffuse Reflection Detail Quality to replace Ambient Occlusion controls
- Moving controls off the shader components to make them clean and easy, a move toward greater simplicity for mental ray (to be continued)
- The elimination of unused or infrequently used controls
- *experimental* propagation of additional buffers in secondary rays
The clamp option can be toggled on or off (default is off) to control overbright highlights and reflections. It is a non-physical effect: it purposefully changes the energy in the render to avoid artifacts from insufficient sampling. These artifacts are usually caused by a very hot HDRI used to light a scene, or by a bright area light and the resulting indirect reflections.
This clamp option may also reduce render times, because fewer samples are taken to resolve hotspots that cause variance in the image. The image below used an HDRI with a sun value of over 7000 and was rendered at Quality 0.20 for Unified Sampling to amplify the effect of poor sampling. The clamp also affects the resulting framebuffer contribution. Notice that areas which are not overly hot are maintained.
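The effect of the clamp can be illustrated outside the renderer: a single extremely hot sample dominates a pixel’s estimate, and clamping trades a small amount of energy for a far more stable mean. A toy Monte Carlo sketch (the clamp value of 4.0 is arbitrary, not a shader default):

```python
import random

def pixel_estimate(samples, clamp=None):
    """Average a list of radiance samples, optionally clamping each
    sample first (the non-physical part: clamped energy is lost)."""
    if clamp is not None:
        samples = [min(s, clamp) for s in samples]
    return sum(samples) / len(samples)

random.seed(0)
# Mostly dim samples, plus one rare "firefly" from a very hot source
# (such as a 7000+ value HDRI sun caught by an indirect reflection).
samples = [random.uniform(0.0, 1.0) for _ in range(63)] + [7000.0]

unclamped = pixel_estimate(samples)            # blown out by the firefly
clamped = pixel_estimate(samples, clamp=4.0)   # stable, slightly darker
```

At low sampling rates, neighboring pixels randomly do or do not catch the firefly, which is exactly the speckling the clamp suppresses.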
Direct and Indirect Contribution
A direct slider and an indirect slider can be used to change the look of the material non-physically. Recall that direct reflection is the result of light arriving straight from the light sources, while indirect reflection is the result of light bouncing off other objects in your scene. The easiest way to think of this: direct diffuse is historically your diffuse pass, and indirect diffuse is the indirect pass, the “color bleed” from nearby objects reflecting light, as measured by something like Final Gathering.
Below is an example with the effect of direct at 0.00 contribution and then indirect at 0.00 contribution on glossy reflection. You can adjust these independently to achieve a non-physical but pleasing artistic look in a material. This will also affect the resulting framebuffer contribution. These can be texture mapped as well for special effects.
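Conceptually, the two sliders simply scale each contribution before they are summed; a minimal sketch (the names here are illustrative, not actual shader attributes):

```python
def shade(direct, indirect, direct_weight=1.0, indirect_weight=1.0):
    """Combine direct and indirect reflection, each scaled by its own
    non-physical contribution slider (1.0 on both = physical result)."""
    return direct * direct_weight + indirect * indirect_weight

full = shade(0.3, 0.2)                               # physical: 0.5
no_indirect = shade(0.3, 0.2, indirect_weight=0.0)   # direct only: 0.3
```

Texture-mapping a weight simply makes it vary per shading point instead of being a constant.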
Diffuse Indirect Detail
Diffuse Indirect Detail replaces the Ambient Occlusion controls with an On and Off switch globally, a distance parameter, and quality. Below is an example of on and off at default values.
Framebuffer Indirect Contribution Writing
*experimental* Additional buffers can now be rendered as seen in a reflection or refraction (indirect). This means you can get the matte color of an object written to its own buffer even if it is only seen in a reflected or transmitted ray. Below is an example where the turn blinker (indicator) writes its color matte to the framebuffer despite being behind the lens cover of the light. It also contains information about where it was reflected. This is useful for isolating and altering elements after rendering.
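Once the matte survives into its own buffer, isolating and regrading the element in comp is just per-pixel arithmetic; a toy single-channel sketch (the function name and grading formula are illustrative, not a compositing package API):

```python
def grade_by_matte(beauty, matte, gain):
    """Scale each beauty pixel by `gain`, but only where the matte
    buffer says the element is present (matte values in 0..1)."""
    return [b * (1.0 + m * (gain - 1.0)) for b, m in zip(beauty, matte)]

# One channel of three pixels: element absent, fully present, and
# half-covered (e.g. the blinker seen through the lens cover).
beauty = [0.5, 0.5, 0.5]
matte  = [0.0, 1.0, 0.5]
graded = grade_by_matte(beauty, matte, gain=2.0)  # [0.5, 1.0, 0.75]
```

Because the matte was carried through the transmitted ray, the half-covered pixel is brightened proportionally while untouched pixels keep their beauty value.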
You can also pass the matte through another node like a rayswitch to further define the mattes generated through transmission (blue) or reflection (red) as seen below.
Now apply Glint!
Keep in mind that this object would be “baked” into the transmission framebuffer derived from the beauty, so changes after rendering would still be limited. This may also increase render time, since an extra color framebuffer is considered for anti-aliasing. If you wish to avoid that, you can turn off “contrast all buffers” in the Quality Tab -> Framebuffer Rollout of your Render Settings.
Overall the goals for MILA are continued flexibility and simplicity based on MDL. Redundant and unused controls are moved and clarity in settings is being improved.
Why all of the “Quality” Controls?
Also notice the increase in controls for “Quality”. This is important since the underlying method or algorithm can be hidden under a simple control. This is already true of Unified Sampling and the Native IBL.
Why is this useful?
By hiding the method, developers can later change or improve it without introducing new controls or altering the old ones. This was true of the refinements made in Unified Sampling in 3.10. Artists can continue to work as the renderer improves without learning new techniques.
This is part of how mental ray will continue to simplify the workflow for users without sacrificing flexibility or speed. This will also make the integration of new features much easier in OEM products like Autodesk Maya since documentation and UI changes will be unnecessary with added improvements to existing features.
Unified sampling in the Autodesk 2014 products represents a first step toward mental ray use simplification. The layering shaders, etc., next
— mental ray (@mentalray) April 23, 2013