Monthly Archives: May 2014
(More information can be found on the official mental ray blog here.)
It’s been mentioned that the Application Manager has been making the mental ray update available before the corresponding Maya Service Pack, which may cause mental ray to break until that Service Pack is installed. In this case the mental ray plug-in update relies on the corresponding Service Pack, but the Application Manager doesn’t know this. There may not always be a corresponding Maya update, and I am not sure how Autodesk will make this work in the future.
For those of you upgrading to Maya 2015 Service Pack 2 and updating mental ray there are some things to know:
- Maya 2015 was released with a flaw in the node IDs for mental ray materials that was fixed in SP2. This may break some of your scenes. I suggest either finishing current projects in SP1 before upgrading, or starting new ones in SP2.
- There is a possible problem with the .mod file when you update that causes mental ray to fail to load. Look here for a solution from Autodesk.
You may also find performance improvements in MILA rendering.
Note that this is an important release for a few reasons:
- You see an update of mental ray inside Maya, which was previously very rare
- The MOD file bug is an important part of working out the kinks in making mental ray reliably update as a separate Maya plug-in
- Autodesk’s commitment to more frequent updates to mental ray will mean more changes and fixes available to users earlier, making Maya a more valuable package for rendering
Thanks to the users who pushed for this change, Autodesk and NVIDIA can now react to requests and complaints about mental ray with more agility than before.
More to come….
One of the features artists have asked for is the ability to do dispersion inside mental ray without needing a custom shader.
Unexposed in the main UI is a “weight tint” option, a luminance control inside a mila_mix or mila_layer node that you can use to get the effect. Slightly offsetting the IOR of each transmission node controls the spread of the dispersion effect.
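To choose those slightly offset IOR values, one option is a Cauchy-style approximation of wavelength-dependent refraction. This is just a sketch for deriving per-band numbers to type into the transmission nodes; the coefficients below are illustrative values in the neighborhood of common crown glass, not data from this post.

```python
# Sketch: derive slightly offset IORs for R/G/B transmission bands
# using Cauchy's approximation n(lambda) = A + B / lambda^2.
# A and B below are illustrative (roughly crown-glass-like) values.

def cauchy_ior(wavelength_nm, a=1.5046, b=4200.0):
    """Index of refraction at a wavelength in nm; b is in nm^2."""
    return a + b / (wavelength_nm ** 2)

# Approximate center wavelengths for the three bands.
bands = {"red": 650.0, "green": 550.0, "blue": 450.0}

for name, wl in bands.items():
    print("%-5s ior = %.4f" % (name, cauchy_ior(wl)))
```

Note that blue comes out highest and red lowest, which is the normal dispersion ordering you want across the three transmission nodes.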
I will update this post again when I have a useful phenomenon to provide to make this simpler, but in the meantime you might come up with your own experiments. Updated below.
You can find the three band (color) phenomenon here. (Copy and paste if your browser doesn’t download the file. Place in your “include” folder for mental ray shaders)
The phenomenon exposes the following controls for the user as a transmission-only component. You can layer this with other components manually by connecting it through the Hypershade; the current incarnation of MILA in Maya isn’t set up to use custom phenomena flexibly just yet. By altering the phenomenon you can include other controls. For example, you can add a tint control instead of having only colorless transmission, or for nicer-looking dispersion you can create six bands instead of the three in this version (think ROYGBIV).
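As a rough sketch of what such a construct can look like in a .mi file: the structure below follows the standard declare phenomenon syntax, but the mila shader names, parameter names, and defaults are assumptions based on the description above, not copied from the downloadable file — check them against the mila declarations that ship with your mental ray before relying on this.

```
# Sketch only: three-band dispersion as a phenomenon. The mila shader
# and parameter names here are assumptions; verify them against the
# mila .mi declarations installed with your mental ray version.
declare phenomenon
    color "simple_dispersion3" (
        scalar "ior_r" default 1.51,    # slightly offset per band
        scalar "ior_g" default 1.52,
        scalar "ior_b" default 1.53
    )
    shader "band_r" "mila_specular_transmission" ( "ior" = interface "ior_r" )
    shader "band_g" "mila_specular_transmission" ( "ior" = interface "ior_g" )
    shader "band_b" "mila_specular_transmission" ( "ior" = interface "ior_b" )
    # weight_tint confines each band to one color channel
    shader "mix" "mila_mix" (
        "elements" [
            { "weight" 1.0, "weight_tint" 1 0 0, "element" = "band_r" },
            { "weight" 1.0, "weight_tint" 0 1 0, "element" = "band_g" },
            { "weight" 1.0, "weight_tint" 0 0 1, "element" = "band_b" }
        ]
    )
    root = "mix"
end declare
```

A six-band version would follow the same pattern with narrower weight_tint colors per band.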
You can see the power of Phenomena and MILA here: you can create and store constructs and expose only the controls you want for yourself or others. Using the conventions outlined in the MILA documents, you can continue to build complex materials with all the benefits of importance sampling and light sharing.
*hint: if you find the result is a little “green” (assuming you have connected this as RGB), you can reduce the ray cutoff to solve it. It’s not always necessary and depends on the model. The link to the string options for MILA is HERE. Eventually such controls will be exposed natively in the UI.
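In Maya, mental ray string options live on the miDefaultOptions node, so you can set one from the Script Editor. The sketch below shows the mechanism only; the option name "mila cutoff" and the index are placeholders of mine — take the exact name from the string options page linked above. This requires a running Maya session.

```python
# Sketch only: set a mental ray string option from Maya's Script Editor.
# The option name below is a placeholder; use the exact name from the
# MILA string-options documentation. Requires a Maya session.
import maya.cmds as cmds

def set_string_option(index, name, opt_type, value):
    """Write one entry of miDefaultOptions.stringOptions."""
    base = "miDefaultOptions.stringOptions[%d]" % index
    cmds.setAttr(base + ".name", name, type="string")
    cmds.setAttr(base + ".type", opt_type, type="string")
    cmds.setAttr(base + ".value", value, type="string")

# Placeholder option name; pick an unused index in your scene.
set_string_option(40, "mila cutoff", "scalar", "0.001")
```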
Using the new GI with GPU enabled, I rendered the image below in 6 minutes on a notebook PC with a GTX 765M. The leftover noise is from portal lights. (A higher-powered machine with a K6000 and more CPU cores renders this frame in less than 3 minutes.)
The new GI is a brute-force technique with many improvements over regular brute-force rendering, including better filtering control. The image below does not use the filter and instead renders as-is.
This feature is not finished, and I used “diffuse” paths as they are the most complete and fast. You can also render using the same method on the CPU. If you’re curious why it presamples the scene: custom and CPU shaders cannot be run automatically on a GPU, and presampling allows you to render legacy scenes without changes if the scene contains supported effects.
There are features not yet supported by this technique since it’s not complete. More conversation and examples will follow, so consider this experimental for now. To begin testing it along with other users, look here.
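For reference, the experimental GI mode is switched on through mental ray string options, using the same miDefaultOptions mechanism as other hidden features. The option names “gi” and “gi gpu” below are an assumption based on how the prototype was commonly described at the time — confirm them against the release notes for your mental ray version. This sketch requires a running Maya session.

```python
# Sketch only, for a Maya session: enable the experimental GI mode via
# mental ray string options. The option names ("gi", "gi gpu") are an
# assumption -- confirm them in your mental ray release notes.
import maya.cmds as cmds

def set_string_option(index, name, opt_type, value):
    """Write one entry of miDefaultOptions.stringOptions."""
    base = "miDefaultOptions.stringOptions[%d]" % index
    cmds.setAttr(base + ".name", name, type="string")
    cmds.setAttr(base + ".type", opt_type, type="string")
    cmds.setAttr(base + ".value", value, type="string")

set_string_option(41, "gi", "boolean", "on")       # new GI prototype
set_string_option(42, "gi gpu", "boolean", "on")   # run it on the GPU
```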