NVIDIA has just announced that it is now selling mental ray Standalone directly to users. Previously you would buy Standalone, along with support, from an integration partner such as Autodesk.
Important things to note about this:
- Support is provided directly by NVIDIA
- Access to a private support forum
- Enables updates independent of DCC application releases
- Support tiers based on customer type and need
- Access to current versions of mental ray, with fixes delivered sooner
As a side effect, this moves mental ray into the realm of a separate product from DCC applications and makes NVIDIA the primary source of information for mental ray going forward. Customer feedback now reaches the developers at ARC without being filtered through an integration partner.
Take a look at their new page here: mental ray Standalone
Official Blog announcement here.
See Lee Anderson’s original version of the above image at his site here: http://www.leeandersonart.com/
**NOTE: this is a PROTOTYPE of an as-yet-incomplete feature. Testing is being made possible by cooperation between Autodesk and NVIDIA ARC. Your feedback on the controls and on the most important features to support is very valuable.**
The prototype is a limited-feature version initially released for simple scenes and testing. It does not yet support everything you're used to, such as motion blur or visibility cutouts. However, it operates as a brute-force solution on the GPU or CPU, giving you the flexibility to render on whatever hardware you have or to take advantage of the speed offered by GPUs. Note that you need a newer-generation NVIDIA GPU (one able to run OptiX Prime) to use the GPU path. This GI feature is very fast on the GPU and is under active development at NVIDIA ARC. Threads have been started in the 3ds Max and Maya ARC forums, along with a simple UI to make the feature easier to use. There you can learn from the developers how to use the feature and what to expect when it is complete. Find the Maya thread here. Main controls to note:
- Rays: Primary quality control
- Anti-Aliasing Passes: The number of passes per pixel (GPU implementation). Note that this multiplies the ray count for each pixel, so if you increase it, consider lowering the ray count. A minimum value of 4 is recommended, and the value should be kept a perfect square, i.e. 4, 9, 16, 25, etc.
- Filter: This does not operate in the same way as the FG interpolation filter and typically provides better results. It is measured in pixels. Higher numbers can destroy shadow detail, so I recommend keeping it low (5 or less); 0 is off. Hopefully more will be explained about this technique at SIGGRAPH…
- Mode: Diffuse-only is the most complete mode. It ignores specular interaction but is typically enough for VFX-type scenes, and it's also the fastest. Other modes may not be finished and/or may initially rely on older techniques.
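The interaction between Rays and Anti-Aliasing Passes above can be sketched with a little arithmetic. This is a minimal illustration, not mental ray's actual sampling code: it assumes, per the description above, that each AA pass traces the full ray count again, so total rays per pixel multiply. The function names and the per-pixel budget are hypothetical.

```python
import math

def effective_rays(rays_per_pass, aa_passes):
    """Assumed relationship from the text: each AA pass traces the
    full ray count, so per-pixel rays multiply with pass count."""
    return rays_per_pass * aa_passes

def is_square(n):
    """The text recommends keeping AA passes a perfect square (4, 9, 16, ...)."""
    r = math.isqrt(n)
    return r * r == n

# Keeping a roughly constant per-pixel budget when raising passes:
budget = 2000                  # hypothetical target rays per pixel
for passes in (4, 9, 16):
    assert is_square(passes)
    rays = budget // passes    # lower the ray count as passes grow
    print(passes, rays, effective_rays(rays, passes))
```

The point of the sketch: doubling AA passes without touching the ray count doubles the work per pixel, so trading one against the other keeps render times predictable.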
In order to use the GPU effectively, all of your scene's geometry must fit into GPU memory. Hair is tessellated for now. Presampling provides the data for shaders, so texture data isn't loaded onto the GPU at the same time; this lets you use legacy and/or custom shaders with the new technique without penalty. Since motion blur isn't currently supported, I wouldn't use this for animation unless you apply blur in post. If your scene is particularly dense, you can fall back to the CPU mode via a separate option.

Improved GI (using GPU)
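Since the geometry-must-fit-in-memory constraint above is the main practical limit on the GPU path, here is a rough back-of-the-envelope feasibility check. The bytes-per-triangle figure is entirely hypothetical (the real footprint depends on mental ray's internal acceleration structures and vertex data), and the function name is my own.

```python
def fits_on_gpu(triangle_count, gpu_mem_gb, bytes_per_triangle=144):
    """Rough feasibility check: hypothetical 144 bytes per triangle
    (3 vertices x position/normal/UV plus index and BVH overhead).
    The actual per-triangle cost in mental ray is not documented here."""
    needed = triangle_count * bytes_per_triangle
    return needed <= gpu_mem_gb * 1024**3

# 10 million triangles against a 4 GB card: comfortably fits
# under this assumed footprint; 200 million would not.
print(fits_on_gpu(10_000_000, 4))
print(fits_on_gpu(200_000_000, 4))
```

If a check like this fails for your scene, the CPU mode mentioned above is the fallback.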
For those of you looking for more tools or updates to the plugins you use, MentalCore is releasing its next version, 1.7v1, adding compatibility with Maya 2015. They have also improved and streamlined their support system.
iray for Maya has also been updated with new features and workflow improvements. (Currently Windows 64-bit only; a Linux 64-bit version is coming soon.)
The Material Definition Language (MDL) Spec 1.1 has been released.
Currently used in iray, MDL is coming to mental ray 3.13 in less than a year's time. Flexible, layerable material components with cross-compatibility between iray and mental ray will become a reality, allowing you to render scenes in whichever renderer you prefer rather than being limited by the time it takes to transfer materials and looks between them.
This also opens up many other doors to improved, flexible rendering in mental ray using the workflow introduced by the Layering Library (MILA).