Facilities with their own raylib integrations of mental ray have long had access to developer examples of progressive rendering and to new features as they are released. Unfortunately this hasn’t been the case with OEM integrations, and most users have had to wait for these updates. In addition, Maya doesn’t have all the necessary pieces to make true interactive rendering easy to expose.
The Official mental ray Blog “Inside mental ray” has just posted an example of Ambient Occlusion (AO) rendered progressively using GPU acceleration in mental ray. This is a great example of ongoing improvements and scene examples using the correct API for features like progressive rendering.
This is also a good way to see further development in GPU acceleration and where it would be useful for scene rendering and look development.
The video is embedded below, but be sure to visit the mental ray Blog to see a great explanation by Rajko.
mental ray – In The Lab
Part of building a better user experience for mental ray in Maya is providing information on how to use features in Autodesk Maya 2015 like Xgen hair.
Sandra and Julia at NVIDIA ARC have written a quick tutorial on using Xgen hair with custom shaders and expressions to control hair rendering in mental ray.
Take a look at their post here. There’s also a comments section if you have a question on the tutorial.
As previously posted, a prototype of the new GI scheme for mental ray was included with Maya 2015.
To clarify what a prototype is, this excerpt from Wikipedia describes the phase well:
Prototype software is often referred to as alpha grade, meaning it is the first version to run. Often only a few functions are implemented; the primary focus of the alpha is to have a functional base code onto which features may be added. Once alpha grade software has most of the required features integrated into it, it becomes beta software for testing of the entire software and to adjust the program to respond correctly during situations unforeseen during development.
Often the end users may not be able to provide a complete set of application objectives, detailed input, processing, or output requirements in the initial stage. After the user evaluation, another prototype is built based on user feedback, and the cycle returns to customer evaluation. The cycle starts by listening to the user, followed by building or revising a mock-up and letting the user test it, and then it repeats.
As such there are some limitations to know about in this prototype:
- Motion blur not supported
- Lens shaders not supported
- Hair is tessellated
- Visibility (cutout opacity) not supported
- Specular interaction currently handled by Final Gather
Since the completed feature is expected to become the replacement solution I suspect these limitations will be removed over time.
Below is an example image rendered with the new GI. The main interior is taken from the Architecture Classroom found at
IDST [site appears to be gone]. Other modern mental ray features being used are:
- Layering Library Shaders
- Object Lights (Maya 2015 Service Pack 2 or later), 8 in total
- Light Importance Sampling (improving the result of the lighting and simplifying tuning of the scene to a single slider/value)
Also, I am not using portal lights or the Environment Light (Native IBL). Instead I am allowing the new GI to sample the environment without casting more shadow rays and speeding up the render. This also means simplified setup for interiors where you can use the new technique (and a GPU) to power through a scene and render quickly. The brute force nature of the technique will provide you with crisp indirect shadows.
The image below renders in 18 minutes at 1080HD using a K6000. (All of the following renders and times were rendered at 1080HD before resizing to 720HD to better fit the webpage.)
The controls are pretty self-explanatory and only expose the main controls for quality. The controls may change in the future:
- Enable – override Final Gather and use the new GI instead. (Be sure other features like Irradiance Particles or Photons are off, or they may run unnecessarily.)
- Use GPU – uses the GPU to accelerate the process. The result is identical regardless of mode, but the CPU is much slower. Note that CPU usage may be less than 100% when using the GPU, especially with a slow card or high settings, because the CPU is waiting on tiles from the GPU. This mode requires recent graphics drivers and an NVIDIA graphics card (GTX or Quadro).
- Diffuse only – only calculate diffuse rays. In many applications this is all you’re interested in, but scenes with glass walls and similar effects may need more. In the prototype, Final Gather rays are used for those effects. This will change as the feature gains abilities and takes over from older techniques.
- Override FG (Final Gather) Globals – the new GI can take its settings from legacy FG settings, including ray depth (bounces), which is still set in the Final Gather trace depth.
- Samples Per Pixel – the number of anti-aliasing samples taken per pixel. This also acts as a multiplier of Rays. This number should be a perfect square: 1, 4, 9, 16, 25, 36, etc.
- Rays – your primary control for quality. This is a brute force technique (at filter 0): this many rays are shot per pixel, multiplied by the Samples Per Pixel value above.
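The multiplier relationship between the two quality controls above can be sketched in a few lines of Python. This is just illustrative arithmetic, not part of mental ray or Maya; the function name is mine:

```python
import math

def total_rays_per_pixel(samples_per_pixel: int, rays: int) -> int:
    """Total brute-force GI rays traced per pixel: the Rays control is
    multiplied by Samples Per Pixel (which should be a perfect square)."""
    root = math.isqrt(samples_per_pixel)
    if root * root != samples_per_pixel:
        raise ValueError("Samples Per Pixel should be a perfect square: 1, 4, 9, 16, ...")
    return samples_per_pixel * rays

# 9 samples per pixel, 64 rays each: 576 GI rays per pixel
print(total_rays_per_pixel(9, 64))  # 576
```

This is why raising either control increases both quality and render time: the cost grows with the product of the two values, not either one alone.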
Below are some examples of altering the settings.
Increasing the samples per pixel, keeping everything else the same (filter set to 0):
Increasing the rays parameter, all else is the same (filter is set to 0):
The filter parameter is improved over previous ways of filtering/interpolating GI results. As a convenience, the measurement is in pixels. You should find it quicker and easier to smooth results than with other methods, and it has a negligible impact on render time, unlike increasing interpolation with Final Gather. In fact, less variance should improve render times since Unified Sampling works less to resolve the image. Keep in mind that increasing the value will blur indirect shadowing details.
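To illustrate why a pixel-radius filter trades noise for smoothness, here is a toy box filter over a noisy buffer. This is a stand-in sketch, not mental ray’s actual filter implementation; it only demonstrates that averaging over a pixel radius reduces variance (noise) while blurring fine detail:

```python
import random

def box_filter(img, radius):
    """Average each value with its neighbors within `radius` pixels --
    a toy stand-in for pixel-based GI smoothing."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - radius), min(h, y + radius + 1))
                    for nx in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def variance(img):
    flat = [v for row in img for v in row]
    mean = sum(flat) / len(flat)
    return sum((v - mean) ** 2 for v in flat) / len(flat)

random.seed(1)
# A flat 0.5 signal with uniform noise, like a noisy indirect lighting pass
noisy = [[0.5 + random.uniform(-0.2, 0.2) for _ in range(32)] for _ in range(32)]
smoothed = box_filter(noisy, 2)
print(variance(smoothed) < variance(noisy))  # filtering reduces noise
```

The same trade-off applies in the renderer: a larger filter radius blends away more noise but also softens indirect shadow detail, which is why the images below differ only subtly.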
Open this in a new window to see the changes as they are subtle, blending away the noise. Please pardon the gif compression.
Another example of the improved smoothing filter:
Expect this feature to evolve dramatically over the next year as well as improve performance of both the GPU and CPU modes.
Just announced: NVIDIA is now selling mental ray Standalone directly to users. Previously you would buy Standalone, along with support, from an integration partner like Autodesk. Support is available with purchases of as few as 10 licenses.
Important things to note about this:
- Support is provided through NVIDIA directly
- Access to private support forum
- Enables DCC application updates
- Types of support based on customer and need
- Current versions of mental ray available as well as fixes sooner
As a side effect this moves mental ray into the realm of a separate product from DCC applications and makes NVIDIA the source of information for mental ray in the future. Feedback from customers now reaches developers at ARC without filtering through an integration partner.
Take a look at their new page here: mental ray Standalone
Official Blog announcement here.