
MILA tip and trick – Dispersion *updated with Phenomenon*

One of the features artists have asked for is the ability to do dispersion inside mental ray without needing a custom shader.

The Material Definition Language contains such an effect, but until it is available in mental ray you can use MILA to create a reasonable approximation.

Unexposed in the main UI is a “weight tint” option, a luminance you can control inside a mila_mix or mila_layer node to get the effect. Slightly offsetting the IOR of the transmission nodes controls the spread of the dispersion.
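
If you would rather script the components than build them by hand, here is a minimal Python (maya.cmds) sketch of the idea. It assumes Maya with the mental ray plug-in (Mayatomr) loaded so the mila_* nodes exist, and that the transmission shader exposes its index of refraction as an attribute named “ior” (verify in the Attribute Editor). It only creates the three transmission components with offset IORs; wiring them under a mila_mix and giving each band a red, green, or blue weight tint is left to the Hypershade, since the mix node’s exact array attribute names should be checked in your version. The values are illustrative, not a prescribed setup.

```python
# Minimal sketch, not the exact setup shown in the images.
# Assumes: Maya with the mental ray plug-in (Mayatomr) loaded, and that
# mila_specular_transmission exposes its index of refraction as ".ior".
import maya.cmds as cmds

BASE_IOR = 1.5   # IOR of the middle (green) band
SPREAD   = 0.02  # total IOR offset from the red band to the blue band

def band_iors(base, spread, bands=3):
    """Return one IOR per band, spread evenly around the base IOR."""
    if bands < 2:
        return [base]
    return [base - spread / 2.0 + spread * i / (bands - 1) for i in range(bands)]

nodes = []
for label, ior in zip(("red", "green", "blue"), band_iors(BASE_IOR, SPREAD)):
    node = cmds.shadingNode("mila_specular_transmission",
                            asShader=True,
                            name="dispersion_%s" % label)
    cmds.setAttr(node + ".ior", ior)
    nodes.append(node)

# Connect these three under a mila_mix and give each band a pure red, green, or
# blue weight tint so only that band's IOR contributes to that color channel.
print(nodes)
```

The SPREAD value plays the role described above: the larger the offset between the band IORs, the wider the colored fringes.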

I will update this post again when I have a useful phenomenon to provide that makes this simpler, but in the meantime you might come up with your own experiments. Updated below.

MILA Dispersion Phenomenon (3 Bands)

MILA Dispersion Phenomenon (6 colors)

MILA Dispersion Phenomenon (6 bands)

You can find the three-band (color) phenomenon here. (Copy and paste if your browser doesn’t download the file, and place it in your “include” folder for mental ray shaders.)

The phenomenon exposes the following controls for the user as a transmission-only component. You can layer it with other components manually by connecting it through the Hypershade; the current incarnation of MILA in Maya isn’t set up to use custom phenomena flexibly just yet. By altering the phenomenon you can expose other controls as well: for example, you can add a tint control instead of having only colorless transmission, or, for nicer-looking dispersion, you can create six bands instead of the three in this version (think ROYGBIV).
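
As a starting point for the six-band variation, here is a small Python sketch that computes a weight tint and an IOR per band. The band colors, the even IOR spacing, and the normalization are my own assumptions rather than values taken from the downloadable phenomenon; the tints are scaled so the six bands sum back to white, which keeps the overall transmission colorless while each band refracts at a slightly different IOR.

```python
# Rough sketch of a six-band split (ROYGBIV minus indigo); the colors, spacing,
# and normalization are assumptions, not values from the phenomenon itself.
base_ior = 1.5
spread   = 0.03  # total IOR difference from the red band to the violet band

band_colors = [                 # approximate RGB per band, red through violet
    ("red",    (1.0, 0.0, 0.0)),
    ("orange", (1.0, 0.5, 0.0)),
    ("yellow", (1.0, 1.0, 0.0)),
    ("green",  (0.0, 1.0, 0.0)),
    ("blue",   (0.0, 0.0, 1.0)),
    ("violet", (0.5, 0.0, 1.0)),
]

# Per-channel totals, used to rescale the tints so all six bands sum to white.
totals = [sum(color[i] for _, color in band_colors) for i in range(3)]

for k, (name, color) in enumerate(band_colors):
    tint = tuple(c / t for c, t in zip(color, totals))
    ior  = base_ior - spread / 2.0 + spread * k / (len(band_colors) - 1)
    print("%-7s weight_tint=(%.3f, %.3f, %.3f)  ior=%.4f" % ((name,) + tint + (ior,)))
```

You can paste the printed tints and IORs into the weight tint and IOR slots of six transmission components, whether built by hand or inside your own copy of the phenomenon.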

You can see the power of Phenomena and MILA here: you can create and store constructs and expose the controls you want for yourself or others. Using the conventions outlined in the MILA documents, you can continue to build complex materials with all the benefits of importance sampling and light sharing.

MILA Dispersion Phenomenon, simple 3 band transmission

Happy exploring!

MILA Dispersion using Specular Transmission

*Hint: if you find the result is a little “green” (assuming you have connected this as RGB), you can reduce the ray cutoff to solve it. It’s not always necessary and depends on the model. The link to the string options for MILA is HERE. Eventually such controls will be exposed natively in the UI.
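
For reference, this is roughly how a mental ray string option is added from Maya with Python. The stringOptions array on miDefaultOptions is the standard mechanism; the option name below is deliberately a placeholder, because the real MILA ray cutoff name should be taken from the list linked above, and the value is only an example to tune per scene.

```python
# Sketch of adding/updating a mental ray string option on miDefaultOptions.
# Assumes the mental ray plug-in is loaded. OPTION_NAME is a placeholder;
# use the actual MILA ray cutoff option name from the linked string options list.
import maya.cmds as cmds

OPTION_NAME  = "REPLACE WITH THE MILA RAY CUTOFF OPTION NAME"
OPTION_VALUE = "0.01"    # example only: a lower cutoff than you are using now
OPTION_TYPE  = "scalar"  # string options carry the value as text plus a type

def set_string_option(name, value, opt_type):
    """Reuse an existing stringOptions entry with this name, or append a new one."""
    if not cmds.objExists("miDefaultOptions"):
        cmds.error("miDefaultOptions not found; is the mental ray plug-in loaded?")
    indices = cmds.getAttr("miDefaultOptions.stringOptions", multiIndices=True) or []
    slot = None
    for i in indices:
        if cmds.getAttr("miDefaultOptions.stringOptions[%d].name" % i) == name:
            slot = i
            break
    if slot is None:
        slot = (max(indices) + 1) if indices else 0
    cmds.setAttr("miDefaultOptions.stringOptions[%d].name"  % slot, name,     type="string")
    cmds.setAttr("miDefaultOptions.stringOptions[%d].value" % slot, value,    type="string")
    cmds.setAttr("miDefaultOptions.stringOptions[%d].type"  % slot, opt_type, type="string")

set_string_option(OPTION_NAME, OPTION_VALUE, OPTION_TYPE)
```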

MILA Material – mixed specular transmission

MILA Mix node for simple transmissive dispersion

New GI Preview

Using the new GI with the GPU enabled, I rendered the image below in 6 minutes on a notebook PC with a GTX 765M. The leftover noise is from the portal lights. (A higher-powered machine with a K6000 and more CPU cores renders the same frame in less than 3 minutes.)

The new GI is a brute-force technique with many improvements over regular brute-force rendering, including better filtering control. The image below does not use the filter and instead renders as-is.

This feature is not finished, and I used “diffuse” paths since that mode is the most complete and fast. You can also render using the same method on the CPU. If you’re curious why it presamples the scene: custom and CPU shaders cannot be run automatically on the GPU, so presampling lets you render legacy scenes without changes as long as the scene contains supported effects.

Some features are not yet supported by this technique since it is not complete. More discussion and examples will follow; consider this experimental for now. To begin testing it along with other users, look here.

Improved GI (using GPU)

Chipotle

Another example of mental ray in production is this spot for Chipotle.

A nicely art-directed piece, it is another example of art-directed realism using mental ray in Maya. The haunting audio is a cover sung by Fiona Apple. (We’re hoping someone saved that cow...)

Find out more in the article on AdWeek: Ad of the Day: Chipotle

Learn more about Moonbot Studios at their site: http://moonbotstudios.com/

Like a Boss

RTT Germany recently released a short animation using mental ray for the exterior shots.

Using Unified Sampling and a combination of the user_ibl_env and a physical sun or the native IBL, they rendered the frames with motion blur, taking anywhere from nine minutes to two hours a frame.

From one of the artists involved on the project, Adrian Chifor:

The unified sampling also allowed me to have 3d motion blur and the biggest render time was no longer than a couple of hours with most of the frames rendering below 30 min per frame. Since the lighting was very simple the render times were never an issue. As far as passes I only had the beauty and on some shots I used an AO pass. I think for one shot I used a normal pass for a post relighting tweak, the rest were just masks and shadow.

Take a look at the finished animation below as well as the RTT Showreel here: RTT Work

chase like a boss from adi chifor on Vimeo.