Part of building a better user experience for mental ray in Maya is providing information on how to use Autodesk Maya 2015 features like XGen hair.
Sandra and Julia at NVIDIA ARC have written a quick tutorial on using XGen hair with custom shaders and expressions to control hair rendering in mental ray.
Take a look at their post here. There’s also a comments section if you have a question on the tutorial.
As previously posted, a prototype of the new GI scheme for mental ray was included with Maya 2015.
To clarify what a prototype is, this excerpt from Wikipedia best describes this phase of the feature:
Prototype software is often referred to as alpha grade, meaning it is the first version to run. Often only a few functions are implemented, the primary focus of the alpha is to have a functional base code on to which features may be added. Once alpha grade software has most of the required features integrated into it, it becomes beta software for testing of the entire software and to adjust the program to respond correctly during situations unforeseen during development.
Often the end users may not be able to provide a complete set of application objectives, detailed input, processing, or output requirements in the initial stage. After the user evaluation, another prototype will be built based on feedback from users, and again the cycle returns to customer evaluation. The cycle starts by listening to the user, followed by building or revising a mock-up, and letting the user test the mock-up, then back.
As such there are some limitations to know about in this prototype:
- Motion blur not supported
- Lens shaders not supported
- Hair is tessellated
- Visibility (cutout opacity) not supported
- Specular interaction currently handled by Final Gather
Since the completed feature is expected to become the replacement solution, I expect these limitations to be removed over time.
Below is an example image rendered with the new GI. The main interior is taken from the Architecture Classroom found at
IDST [site appears to be gone]. Other modern mental ray features being used are:
- Layering Library Shaders
- Object Lights (Maya 2015 Service Pack 2 or later), 8 in total
- Light Importance Sampling (improving the result of the lighting and simplifying tuning of the scene to a single slider/value)
Also, I am not using portal lights or the Environment Light (Native IBL). Instead, I let the new GI sample the environment directly, which avoids casting extra shadow rays and speeds up the render. This also means a simplified setup for interiors, where you can use the new technique (and a GPU) to power through a scene and render quickly. The brute-force nature of the technique provides crisp indirect shadows.
The image below renders in 18 minutes at 1080HD using a K6000. (All of the following images were rendered at 1080HD and then resized to 720HD to better fit the webpage.)
The controls are pretty self-explanatory and only expose the main controls for quality. The controls may change in the future:
- Enable – use the new GI instead of Final Gather. (Be sure other indirect features, like Irradiance Particles (IP) or Photons, are off, or they may run unnecessarily.)
- Use GPU – uses the GPU to accelerate the process. The result is identical regardless of mode, but the CPU is much slower; currently CPU-only usage stays below 100%. Note that CPU usage may also be below 100% when using the GPU with a slow card or high settings, because the CPU is waiting on tiles from the GPU. This requires recent graphics drivers and an NVIDIA graphics card (GTX or Quadro).
- Diffuse only – only calculate diffuse rays. In a lot of scenes this is all you're interested in, but scenes with glass walls and similar effects may need more. In the prototype, Final Gather rays are used for those effects. This will change as the feature gains abilities and takes over from older techniques.
- Override FG (Final Gather) Globals – the new GI can take its settings from the legacy FG settings, including ray depth (bounces), which is still set in the Final Gather trace depth.
- Samples Per Pixel – the number of anti-aliasing samples taken per pixel. This also acts as a multiplier on Rays. The value should be a perfect square: 1, 4, 9, 16, 25, 36, etc.
- Rays – your primary control for quality. This is a brute-force technique (at filter 0): this many rays are shot per pixel, multiplied by the Samples Per Pixel value above.
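Since Samples Per Pixel acts as a multiplier on Rays, the effective per-pixel ray budget is easy to compute. A minimal sketch of that arithmetic (the function names are mine for illustration; they are not part of mental ray):

```python
import math

def effective_rays_per_pixel(samples_per_pixel: int, rays: int) -> int:
    """Total GI rays traced per pixel: the Samples Per Pixel setting
    acts as a multiplier on the Rays setting."""
    return samples_per_pixel * rays

def is_perfect_square(n: int) -> bool:
    """Check the recommendation that Samples Per Pixel be a
    perfect square (1, 4, 9, 16, 25, 36, ...)."""
    root = math.isqrt(n)
    return root * root == n

print(effective_rays_per_pixel(4, 256))  # 1024 rays per pixel
print(is_perfect_square(16), is_perfect_square(10))  # True False
```

So doubling Samples Per Pixel without lowering Rays doubles the GI work per pixel, which is worth keeping in mind when tuning.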
Below are some examples of altering the settings.
Increasing the samples per pixel, keeping everything else the same (filter set to 0):
Increasing the rays parameter, all else is the same (filter is set to 0):
The filter parameter is improved from previous ways of filtering/interpolating GI results. As a convenience, it is measured in pixels. You should find it smooths noise more quickly and easily than other methods, and it has a negligible impact on render time, unlike increasing interpolation with Final Gather. In fact, less variance should improve render times, since Unified Sampling works less to solve the image. Keep in mind that increasing the value will blur indirect shadowing details.
Open this in a new window to see the changes as they are subtle, blending away the noise. Please pardon the gif compression.
Another example of the improved smoothing filter:
Expect this feature to evolve dramatically over the next year as well as improve performance of both the GPU and CPU modes.
Just announced: NVIDIA is now selling mental ray Standalone directly to users. Previously you would buy Standalone, along with support, from an integration partner such as Autodesk. Support is now available with as few as 10 licenses.
Important things to note about this:
- Support is provided through NVIDIA directly
- Access to private support forum
- Enables DCC application updates
- Types of support based on customer and need
- Current versions of mental ray available, with fixes delivered sooner
As a side effect this moves mental ray into the realm of a separate product from DCC applications and makes NVIDIA the source of information for mental ray in the future. Feedback from customers now reaches developers at ARC without filtering through an integration partner.
Take a look at their new page here: mental ray Standalone
Official Blog announcement here.
Also a new post with illustrative animation on exposed settings can be found on this blog here.
See Lee Anderson’s original version of the above image at his site here: http://www.leeandersonart.com/
**NOTE: this is a PROTOTYPE of an as-yet-incomplete feature (alpha stage). Testing is being made possible through cooperation between Autodesk and NVIDIA ARC. Your thoughts on the controls and on the most important features to support are very valuable.**
The prototype is a limited-feature version initially released for simple scenes and testing. It does not yet support everything you're used to, like motion blur or visibility cutouts. However, it operates as a brute-force solution on either the GPU or the CPU, giving you the flexibility to render on the hardware you have or to take advantage of the speed offered by GPUs. Note that you need a newer-generation NVIDIA GPU (one able to run OptiX Prime) to take advantage of the feature. This GI feature is very fast on the GPU and is under active development at NVIDIA ARC. A thread has been started in the 3ds Max and Maya ARC forums, along with a simple UI to make the feature easier to use. There you can learn from the developers how to use the feature and what to expect when it is complete. Find the Maya thread here. Main controls to note:
- Rays: Primary quality control
- Anti-Aliasing Passes: the number of passes per pixel (GPU implementation). Note that this multiplies the ray count for each pixel, so if you increase it, consider lowering Rays. A minimum value of 4 is recommended, and the value should be a perfect square, e.g. 4, 9, 16, 25, etc.
- Filter: this does not operate the same way as the FG interpolation filter and typically provides better results. It is measured in pixels. Higher numbers can destroy shadow details, so I recommend keeping it low (5 or less); 0 is off. Hopefully more will be explained about this technique at SIGGRAPH…
- Mode: Diffuse only is the most complete mode. It ignores specular interaction but is typically enough for VFX-type scenes, and it is also the fastest. Other modes may not be finished and/or may rely on older techniques initially.
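Because Anti-Aliasing Passes multiplies the ray count per pixel, raising it while holding Rays fixed raises the total work. A small sketch of the rebalancing the list above suggests (keeping a fixed total budget; the helper name is mine, not a mental ray control):

```python
def rebalance_rays(total_budget: int, aa_passes: int) -> int:
    """Given a fixed total per-pixel ray budget, return the Rays value
    that keeps total work roughly constant when AA passes change.
    Assumes aa_passes is a perfect square >= 4, per the recommendation."""
    return max(1, total_budget // aa_passes)

# Keeping a 4096-ray-per-pixel budget while raising AA passes:
print(rebalance_rays(4096, 4))   # 1024
print(rebalance_rays(4096, 16))  # 256
```

In other words, going from 4 to 16 AA passes with the same quality target means cutting Rays to a quarter.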
To use the GPU effectively, your scene's geometry must fit entirely in GPU memory. Hair is tessellated right now. Presampling provides the data for shaders, so texture data isn't loaded at the same time; this lets you use legacy and/or custom shaders with the new technique without penalty. Since motion blur isn't currently supported, I wouldn't use this for animation unless you apply blur in post. If your scene is particularly dense, you can use the CPU mode instead.
Improved GI (using GPU)
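To get a feel for whether a dense scene will fit on the card, a back-of-the-envelope check like the one below can help. The per-triangle byte count here is a made-up ballpark assumption (vertex data plus acceleration-structure overhead), not a figure published for mental ray, so treat the result only as a rough sanity check:

```python
def fits_on_gpu(triangle_count: int, free_vram_mb: float,
                bytes_per_triangle: int = 100) -> tuple:
    """Rough feasibility check: does the tessellated geometry fit in
    GPU memory?  bytes_per_triangle is a ballpark assumption
    (positions, normals, BVH overhead), NOT a published figure."""
    est_mb = triangle_count * bytes_per_triangle / (1024 ** 2)
    return est_mb <= free_vram_mb, est_mb

fits, est = fits_on_gpu(10_000_000, free_vram_mb=2048)
print(fits, round(est))  # True 954
```

If the estimate comes out well above your card's free memory, that's a sign to fall back to CPU mode or reduce tessellation density.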