Maya 2016 brings a number of changes to rendering with mental ray, the largest set of changes in several releases, and may be disorienting for experienced users. This begins the long-awaited move to modern rendering workflows in Maya. You will also notice that mental ray is now a truly separate plugin for Maya and installs separately, which may allow updates out of sync with Maya releases. You can find the plugin download here: mental ray for maya
The emphasis is on simplified rendering using abstracted quality controls and global controls that are easy to find, changing how users interact with mental ray and reducing the number of decisions and options you need to consider.
To take full advantage of the new techniques, it’s recommended you use the MILA or MDL materials and area lights (including the newly integrated object lights) when possible. Newer techniques increasingly rely on the core itself to optimize the scene (Light IS, MDL, GI Next, MIS, etc.) and, when possible, on the GPU to speed up rendering. Note that MDL will be covered later as materials become available.
Improvements to the mental ray 3.13 core include:
- Improved Light Importance Sampling — scenes with many lights of any type can benefit, especially with high-contrast lighting and HDRI maps applied to area lights. Lights must be physically plausible, i.e. use quadratic (inverse-square) falloff
- Improved convergence of Progressive Rendering. This mode now better matches the result with progressive turned off.
- Improved texture caching performance of tiled and mipmapped file formats — .tif, .iff, and .exr
- Useful Multiple Importance Sampling (MIS) when using MDL materials with the MIS option turned on (a string option for now)
- The ability to render scenes in Iray for Maya or mental ray using the same MDL materials (partial support for some effects in 3.13)
- Improved hair shading and rendering
- Deep Data for exr file rendering
- GPU accelerated AO and GI faster than before using updated OptiX Prime libraries
- The separation of the raylib component from the integration, making it possible to update mental ray without changing the plugin (dynamic raylib for integrations)
- Improved memory management for large scenes
- GI “Next”, an improvement on the ideas first shown in GI on the GPU. GI Next allows all mental ray features, such as motion blur, lens shaders, and additional light paths, to be rendered using a new brute-force algorithm. This prototype is currently CPU-only and requires only a single Quality control; GPU acceleration is planned. More information will be available later. Here is the official explanation of GI Next:
New Global Illumination “Next” (Prototype)
This version of mental ray offers a new global illumination engine to compute indirect lighting efficiently, and without the restrictions of the previous experimental solution GI GPU. In particular, it supports motion blur and works well with custom shading effects, like lens distortion and depth-of-field, volume effects, and subsurface scattering. It covers all the indirect lighting features of the traditional Final Gathering approach, like “color bleed” diffuse-diffuse bounces even passing through transparent surfaces or seen in reflections. Caustics are currently not handled. In contrast to the traditional GI implementations, the new engine is utilizing a brute-force algorithm without any caching, to guarantee uncompromising and consistent quality in static images and in animations. This also makes it extremely easy to use.
The new global illumination engine is considered a prototype in this version because it is under continuous development. It currently runs only on CPU, but is planned to take advantage of the GPU going forward. It should work reliably with typical scenes and setups, and with most third-party shaders. It can be enabled and controlled with string options and on the command line of standalone mental ray.
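The brute-force, cache-free approach described above can be illustrated with a minimal Monte Carlo irradiance estimator. This is a hedged sketch in plain Python, not mental ray's actual implementation: every estimate is computed independently from fresh samples rather than from a shared cache, which is why such an approach stays consistent between static images and animation frames.

```python
import math
import random

def sample_hemisphere_cosine(rng):
    # Cosine-weighted hemisphere direction (pdf = cos(theta) / pi),
    # expressed in a local frame where z is the surface normal.
    u1, u2 = rng.random(), rng.random()
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1))

def estimate_irradiance(radiance_fn, n_samples, seed=0):
    # Brute-force Monte Carlo: average incoming radiance over
    # cosine-weighted directions. No caching, no interpolation;
    # each call starts from scratch.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += radiance_fn(sample_hemisphere_cosine(rng))
    # Estimator of the integral of L * cos(theta) over the hemisphere.
    return math.pi * total / n_samples

# Uniform unit environment: the exact irradiance is pi.
est = estimate_irradiance(lambda d: 1.0, 10000)
```

With a constant environment the cosine-weighted pdf cancels the cosine term exactly, so the estimate has zero variance; with a real environment map the error shrinks as more samples are added, which is the behavior a single brute-force Quality control can drive.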
The Render Settings are the area most affected by the integration changes. Other areas will continue to progress over time, but this section has the greatest effect on users rendering images.
The settings are now separated into fewer, better-defined tabs:
- Common – this remains nearly the same, except for the addition of a control for the much-improved color management
- Quality – the most common controls for image quality are here, in a few main categories:
- Overall Quality – this controls Unified Sampling
- Lighting Quality – this applies to Light Importance Sampling used for the lights in the scene
- Environment Lighting Quality – this applies to the IBL lighting
- Indirect Diffuse Quality – your indirect lighting; GI Next, GI GPU, and Final Gathering are controlled here
- Material Quality – an overall quality control for all MILA effects (glossy, scatter, etc.); MDL does not use this control, as it is handled entirely by the core
- Trace depth for your ray depth, including diffuse bounces
- Geometry displacement quality for motion-blurred objects
- Scene – scene-wide controls are here: settings that affect the entire scene or its output but are not related to quality. Some controls may be greyed out or pending revision. This section includes:
- camera settings
- framebuffers – important – notice that passes are now framebuffers with correct naming. This may take getting used to, but it is much more flexible and powerful than before. More on this later.
- Framebuffers are named based on correct light interaction
- Allows custom naming for pipeline purposes
- Improves automatic matte creation for MILA materials as well as secondary-ray matte creation
- Includes the most-used utilities (caveat: until this is corrected in a later release, please change the framebuffer type for “label” to EXR for correct output)
- environment shading
- motion blur
- scene lights
- scene materials
- scene-wide scatter scale (MILA)
- Clamp output (MILA) – useful for handling fireflies and hotspots in a render
- scene textures
- user data
- Configuration – controls that affect scene translation and interactivity are here
- Diagnostics – controls for turning features on and off, as well as image diagnostics and overrides, are here
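To illustrate what an abstracted Overall Quality control can drive under the hood, here is a hedged sketch of an adaptive sampling loop in the spirit of Unified Sampling. This is plain Python, not mental ray's actual algorithm, and the quality-to-threshold mapping is invented for illustration: the loop keeps adding samples to a pixel until the estimated error falls below a threshold derived from a single quality value.

```python
import random
import statistics

def adaptive_sample(shade, quality, max_samples=64, seed=0):
    # Adaptive loop: sample a pixel until the standard error of the
    # mean drops below a threshold, or the sample budget runs out.
    rng = random.Random(seed)
    samples = [shade(rng), shade(rng)]   # need at least 2 for a variance estimate
    threshold = 1.0 / quality            # invented mapping, for illustration only
    while len(samples) < max_samples:
        err = statistics.stdev(samples) / len(samples) ** 0.5
        if err <= threshold:
            break
        samples.append(shade(rng))
    return sum(samples) / len(samples), len(samples)

# A noisy shading function: higher quality demands more samples.
noisy = lambda rng: rng.random()
_, n_low = adaptive_sample(noisy, quality=2.0)
_, n_high = adaptive_sample(noisy, quality=100.0)
```

Smooth regions converge after a handful of samples while noisy regions consume more of the budget, which is why one slider can replace a page of per-feature sampling controls.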
You will also notice an “Advanced” checkbox that exposes more controls; these will be covered later. Typically this provides more granular control or exposes a useful legacy feature for older scenes.
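The Clamp output control listed under the Scene tab works on a simple principle: very bright outlier samples (fireflies) are limited before they can dominate a pixel average. A minimal, hedged sketch of that idea follows; the function name and the limit value are illustrative, not mental ray's API.

```python
def clamp_color(rgb, limit=1.0):
    # Limit each channel so a rare, extremely bright sample (a "firefly")
    # cannot dominate the pixel average. This trades some highlight
    # energy for a cleaner image.
    return tuple(min(channel, limit) for channel in rgb)

# A firefly sample is tamed; ordinary values pass through unchanged.
print(clamp_color((0.25, 0.5, 37.0)))   # (0.25, 0.5, 1.0)
```

Because clamping discards energy above the limit, it is best reserved for taming hotspots rather than applied aggressively to every render.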
The creation menu has also added convenient ways to create common mental ray lights, including object lights: selected objects are converted into an object light in your scene. Area lights are also easily textured and sampled for rendering using a new light shader. Note that the controls for the Physical Area Light are connected to the area light’s main controls. This is done so that users begin to use the most obvious controls instead of those hidden away in a light or material shader. New users will find this easiest, while legacy users will need to adjust and simplify their workflow.
A while in the making, Andy Kopra, Jan Jordan, et al. have written a handbook for the Material Definition Language.
For those of you not familiar with MDL, you can find the main page describing it here.
Any software, such as mental ray 3.13 and Iray, will be able to exchange materials using MDL and render them identically. Software integrating mental ray, Iray, or MDL itself will also be able to use and render these materials; Allegorithmic is a noted partner.
Take a look at the MDL Handbook page for more information on using, creating, and integrating MDL.
You can now find the presentation online here: MDL Presentation
Facilities with raylib integrations of mental ray have long had access to developer examples of progressive rendering and new features as they are released. Unfortunately, this hasn’t been the case with OEM integrations, and most users have had to wait for these updates. In addition, Maya doesn’t have all the pieces necessary to make true interactive rendering easy to expose.
The official mental ray Blog, “Inside mental ray”, has just posted an example of ambient occlusion (AO) rendered progressively with GPU acceleration in mental ray. This is a great example of the ongoing improvements, and of scene examples that use the correct API for features like progressive rendering.
This is also a good way to see further development in GPU acceleration and where it would be useful for scene rendering and look development.
The video is embedded below, but be sure to visit the mental ray Blog for a great explanation by Rajko.
mental ray – In The Lab
Part of building a better user experience for mental ray in Maya is providing information on how to use features in Autodesk Maya 2015 like Xgen hair.
Sandra and Julia at NVIDIA ARC have written a quick tutorial on using Xgen hair with custom shaders and expressions to control hair rendering in mental ray.
Take a look at their post here. There’s also a comments section if you have a question about the tutorial.