Using Framebuffers with the Layering Library (MILA)

For a brief look at the main structure of the MILA shaders inside Maya, see the first post explaining their usage: The Layering Library (MILA)

One of the most important things to remember about MILA is how the framebuffer passes work.

The builtin framebuffers use the modern framebuffer mechanism in mental ray, where each framebuffer has a name and a data type.

Your main framebuffers are additive: in compositing you simply add (plus) the passes together to recreate the beauty. This avoids problematic operations such as multiplication, which breaks edges and makes it impossible to recreate the beauty render exactly. Multiplication also complicates compositing objects onto one another or onto plates.
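
To make the additive relationship concrete, here is a minimal sketch (plain Python with per-pixel RGB tuples standing in for real render output; the numbers are illustrative, not from a render):

```python
# Per-pixel RGB values for three of the additive MILA passes
# (illustrative numbers, not from a real render).
direct_diffuse   = (0.30, 0.20, 0.10)
indirect_diffuse = (0.05, 0.04, 0.03)
direct_specular  = (0.10, 0.10, 0.10)

def add_passes(*passes):
    """The beauty is the plain channel-wise sum of the additive passes."""
    return tuple(sum(channels) for channels in zip(*passes))

beauty = add_passes(direct_diffuse, indirect_diffuse, direct_specular)

# Subtracting a pass out and adding it back is lossless, so a compositor
# can pull one pass, grade it, and re-add it without breaking the beauty.
graded = add_passes(beauty, tuple(-c for c in direct_specular))
```

No multiplication or division appears anywhere, which is exactly why these passes recombine to the beauty without edge artifacts.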

Your main passes are (given first as a Light Path Expression (LPE), then by name):

  • L<RD>E  or direct_diffuse
  • L.+<RD>E or indirect_diffuse
  • L<RG>E or direct_glossy
  • L.+<RG>E or indirect_glossy
  • L<RS>E or direct_specular
  • L.+<RS>E or indirect_specular
  • L.+<TD>E or diffuse_transmission
  • L.+<TG>E or glossy_transmission
  • L.+<TS>E or specular_transmission
  • LTVTE and/or front_scatter and back_scatter
  • emission (in LPE, emission is actually a light)

Direct effects are usually the result of light arriving straight from a light source.

Indirect effects are usually the result of light bounced off other objects.

Why include LPE? LPE makes specifying passes the same across rendering solutions, unifying the conventions used to get the same data regardless of the renderer.
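
To illustrate how an LPE describes a light path, here is a hedged sketch in plain Python. The one-character event encoding is invented for this example (real LPE grammars are richer), but it shows how `L`, `E`, `.` and `+` combine into a pattern over bounce events:

```python
import re

# Invented one-character codes for the scattering events listed above
# (this encoding is an assumption of the sketch, not part of any LPE spec).
EVENTS = {"<RD>": "d", "<RG>": "g", "<RS>": "s",
          "<TD>": "D", "<TG>": "G", "<TS>": "S"}

def lpe_to_regex(lpe):
    """Translate an LPE into a regex over event-code strings.
    'L' (light) and 'E' (eye) match literally; '.' and '+' already
    behave as intended in regex syntax."""
    for token, code in EVENTS.items():
        lpe = lpe.replace(token, code)
    return re.compile("^" + lpe + "$")

direct_diffuse   = lpe_to_regex("L<RD>E")    # exactly one diffuse bounce
indirect_diffuse = lpe_to_regex("L.+<RD>E")  # extra bounces before the diffuse one

# "LdE"  = light -> diffuse bounce -> eye: a direct-diffuse path
# "LgdE" = light -> glossy bounce -> diffuse bounce -> eye: indirect diffuse
```

The same expressions name the same light paths in any renderer that speaks LPE, which is the whole point of the convention.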

You also have the option to add custom shaders on top of this in the material root node. Keep in mind that anything added here may increase render time, since these shaders run separately from the material; we typically reserve them for inexpensive utility passes like noise, fresnel, and ID mattes.

The root node, the mila_material. This includes the ability to create and attach custom framebuffers for output.

Getting these framebuffer passes from Maya requires a bit of a workaround using a legacy user-pass system rediscovered by Brenton. I find it easier than using Custom Color, with the exception that you have to keep track of your pass names and spell them correctly so they match up. MILA also makes it a universal solution, since all shaders automatically write to these buffers without extra work. This is part of the idea behind LPE: the light path stored is always the same for the LPE chosen, regardless of renderer, so making this automatic is an easy decision.

For the passes built into MILA, you simply need framebuffers ready with the correct names and MILA will write to them automatically. Keep in mind that Maya’s current system overwrites data written to its default passes like “diffuse”, so you cannot reuse those names if they are used in the scene.
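
Since the names must match exactly, a small sanity check can save you from a mysteriously black buffer later. This sketch (plain Python, using the builtin pass names listed above) flags any buffer name MILA will not recognize:

```python
# The builtin MILA pass names listed earlier; framebuffer names must
# match these exactly (including case) for the automatic writes to work.
MILA_PASSES = {
    "direct_diffuse", "indirect_diffuse",
    "direct_glossy", "indirect_glossy",
    "direct_specular", "indirect_specular",
    "diffuse_transmission", "glossy_transmission",
    "specular_transmission",
    "front_scatter", "back_scatter",
    "emission",
}

def unknown_buffers(names):
    """Return the buffer names MILA will not write to (typos, wrong case)."""
    return [n for n in names if n not in MILA_PASSES]

bad = unknown_buffers(["direct_diffuse", "Direct_Specular", "emission"])
# "Direct_Specular" is flagged: case matters, so that buffer would stay black.
```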

First: select the miDefaultOptions node

select miDefaultOptions;

Second: create a framebuffer:

AEmrUserBuffersAppend miDefaultOptions.frameBufferList;

The above command creates a user framebuffer, shown with its default settings below.

A new user framebuffer

You have two selections above: Data Type and whether or not to interpolate (filter) the result.

You typically want to interpolate results for color framebuffers like direct diffuse, ID mattes, fresnel passes, etc. You do NOT want to interpolate data buffers like z-depth, normals, world points, etc.
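
A quick numeric sketch shows why filtering a data buffer is wrong (plain Python; the “keep the closest sample” policy at the end is just one common convention for unfiltered depth, not necessarily what mental ray does internally):

```python
# Four z-depth samples in one pixel straddling an object edge:
# two samples hit the near object, two hit the far wall.
samples = [2.0, 2.0, 10.0, 10.0]

filtered = sum(samples) / len(samples)  # what interpolation would store
closest = min(samples)                  # an unfiltered per-sample policy

# filtered is 6.0, a depth that belongs to neither surface. Any compositing
# operation keyed on depth (fog, defocus) would misbehave on that pixel,
# which is why depth, normal, and world-point buffers must not be filtered.
```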

You can see the typical data types that should not be interpolated at the bottom of the list. mental ray does not interpolate these data types because doing so is mathematically incorrect for compositing. They also require high precision, so you will notice they default to 32-bit floating-point data.

Framebuffer Data Types - Data Passes

I have not used the LPE name for direct diffuse because Maya currently does not allow angle brackets and other symbols in name text fields. After creating and naming your passes, add them to the camera Output Shaders as the last step to render them.

mental ray tab, add an output pass

When you create an entry, you will see an Output Pass that looks like the default one below.

Default Output Shader

Since we have already created passes, select the “Use User Buffer” option, then in the “User Buffer” dropdown menu select the pass you want. Below is the direct_diffuse example:

Direct Diffuse Output Shader

I then select the following options:

  • File Mode: I want to write to the rendered file
  • Image Format: OpenEXR. I have already specified 16-bit half float and EXR as my rendering format in the main Render Settings editor.
  • File Name Postfix: I leave this blank. This way all of the passes are written to the same EXR and packed together as layers.
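
With the postfix left blank, the passes land in a single EXR as layers. In the usual OpenEXR convention a layer is just a dotted channel-name prefix, so the packed file carries channels like the ones built in this sketch (plain Python; which passes you actually get depends on your setup):

```python
# Build the channel names a layered EXR would carry for three packed
# passes, using the "layer.channel" naming convention from OpenEXR.
passes = ["direct_diffuse", "indirect_diffuse", "direct_specular"]
channels = [f"{p}.{c}" for p in passes for c in ("R", "G", "B")]
# e.g. "direct_diffuse.R", "direct_diffuse.G", ... one layer per pass
```

Compositing packages that understand layered EXRs (Nuke, for example) split these back out into one plate per pass automatically.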

You can follow this same method when adding user passes to the mila_material root. Be sure to name them the same as the pass you create and then reference in the camera Output Shader.

Added Color and Vector buffers

Keep in mind that the usual Maya default passes for data will still work with MILA, and you can select those instead of adding them here. Adding them here is useful when you need different or additional data per shader; ID mattes are very useful in this case. In fact, this shader can detect and use the user_data shaders, letting you assign ID groups and other data to objects. This means you can render complex scenes with fewer shaders and still organize the passes logically. This will be a future explanation, since it introduces a new workflow/pipeline for getting information from Maya while avoiding the Render Layers system when possible.

In the example file below you’ll see I am driving some parameters of the shader with attached object data. This has a few benefits. One such benefit is that the data follows the object rather than the shader, so you can change the result of the shader by manipulating the object’s user_data. I also have a single sphere that sits in one ID matte group but is also included in another group of ID mattes, giving me different ways to handle the object in post.
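
The multi-group matte idea can be sketched like this (plain Python; the attribute and group names are made up for illustration, not taken from the actual user_data setup):

```python
# Each object carries its own matte-group memberships (hypothetical
# "matte_groups" attribute), so one shader can serve many objects and
# one object can appear in several matte passes at once.
objects = {
    "sphere1": {"matte_groups": ["statue_id", "hero_id"]},  # in two groups
    "cube1":   {"matte_groups": ["statue_id"]},
    "floor1":  {"matte_groups": []},
}

def matte_pass(group, objects):
    """White where an object belongs to the group, black elsewhere."""
    return {name: 1.0 if group in data["matte_groups"] else 0.0
            for name, data in objects.items()}

statue = matte_pass("statue_id", objects)
hero   = matte_pass("hero_id", objects)
# sphere1 shows up in both passes, giving separate handles in post.
```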

You can find an example workflow in this Maya File [removed since MILA updates broke it, need to make a new one]. The scene has the default framebuffers and a couple of ID mattes set up. You can play with the materials and quality settings to get other buffers to show a result (for example, emission is empty because I am not using that effect). I also have single-layer materials; try mixing and matching and watching the resulting framebuffers. Be sure to attach your own HDRI to the Maya IBL image.

(Maya 2013 SP2)

Additional layering/mixing is left to user experimentation.

About David

I am a VFX artist who specializes in Lighting and Rendering. I spend a fair amount of my time supplying clients with artistic solutions as well as technology solutions. With a background in fine art and technical animation training, I strive to bridge the divide between the artist and the technologist.

Posted on January 23, 2013, in Example, maya, shaders. Bookmark the permalink. 30 Comments.

  1. Thanks very much David really good^^

  2. Hi David,
    I tried this; everything seemed to work, except all passes but ‘main’ came out black. I tried looking at your test scene, but it opened with errors and there were no mila shaders; possibly they were not the latest build. There may be a thread on this at the ARC forums, but I did not find it. I will keep poking around and see if I can get it working, but if you have any ideas let me know.

    thanks:)

    • Correct, the shaders changed. Some more changes are coming so I’m waiting to update the scene. Main buffer? You mean the beauty buffer?

      • ‘main’ is the name displayed for beauty in imf_disp. Anyway, this is all working; I’m just missing the part where the output shader gets linked to the pass data. I will read up on that.
        thanks

  3. These user data nodes could use a relationship editor. The UI posted at ARC is a nice start, but it’s a mess keeping track of the nodes once they have been created. A relationship editor would also help people wrap their heads around the process by letting them visualize how things connect.
    It’s a great way to work, but most people will probably not want to use MEL for routine tasks like selecting nodes.

    • You can use the array version of the user data nodes. It will reduce the number of nodes in the editor.

      However, we have asked that it be integrated correctly as transform and/or shape attribute for mental ray in Maya. That would be the correct solution. As users begin to realize this potential, they will come to a similar conclusion and ask Autodesk for the same thing.

  4. How did you add the ‘miData’ connections to the transforms?

  5. I have found these shaders brilliant, working in Maya. But after setting up passes (diffuse_direct, glossy_transmission, front_scatter, back_scatter, emission) with the above method and adding an additional camera depth pass, the batch render time is 10+ times as long as the Render View render. Have you noticed this at all??

    Thanks

    james

    • Although I was rendering at a 50% test resolution in Maya – that might explain most of it! Sorry!

    • Framebuffers will be a render hit, since mental ray samples each buffer for anti-aliasing (triggering more samples through all framebuffers). If you find you don’t need that much work on a buffer, you can turn off “contrast all buffers” in the Framebuffer rollout. Sampling is then based purely on the beauty framebuffer, but you may find buffers like glossy reflection are noisy by themselves. That might not be a problem if you aren’t using them for compositing, or don’t push them very hard in compositing where the noise would be revealed.

      Keep in mind, vector and data passes do not trigger more samples through anti-aliasing. This is why a motion or z-depth pass should not add significant time.

      • Cheers David,

        Is there a way to get out diffuse material colour, glossy/specular material colour?

        Thanks

        James

      • You mean without shadow or shading? Not currently. That’s because it presents an incorrect compositing workflow based on multiplication. MILA is designed to use the correct additive passes for compositing. This means you can avoid multiplication and the artifacts it causes. The original raw/level outputs used previously should be avoided.

      • Yeah, but not to multiply with raw, to divide the “result” passes to get a sort of raw in the composite. But from what you’re saying, that’s not possible?

      • Division can introduce the same problems. Add and subtract are safe. I would avoid the other operations. MILA was designed to prohibit those operations to simplify workflow and basically force correct math. This might change in the future but for now it’s not possible. Other render systems are moving in this direction too. Is there a specific case where you’re forced into multiplication or division?

      • The compositor wants to have that extra control. To divide the result by the level and get a version of the raw. Then re-multiply that by the level to return to the result. Grade nodes could then be put in between to treat the level and “raw” separately. Probably not huge changes, that may be why I’ve not noticed artefacts that much with this technique. Although I do agree multiplying a rendered raw and level should be avoided, artefacts are pretty obvious with that technique.

      • When they re-multiply it after the grade they are essentially doing the same thing. At the pixel level this will cause problems. At The Mill we typically composite the beauty and use passes as necessary to adjust the result. But in those cases we subtract them and add them back in the amount we want.

  6. Hey David, These posts have been incredibly helpful and I’m very excited at the progress seen in Mental Ray dev. I had some questions regarding the “emission” and “transmission” passes, particularly when it comes to objects with some transparency. Do all the layer types output the same type of render? Like Weighted, Fresnel, Custom, does each one deliver the same kind of information? Are there differences in the way these perform, or is it just fewer options between each type? Every time I render an object with emissive I only seem to get a black frame. The same appears to be the case with transmissive-only surfaces. I’m not sure what I’m missing or if I’m asking the right question, but I figured I should ask. Thanks for your time.

    • They all output to color buffers. Do you have a simple scene you can upload somewhere? Not quite sure why you’re getting black.

      Also, whether there are fewer or more options depends on the nature of the directional weighting you chose. A standard weight is like a transparency weighting, so there’s only a single control. Others involve directionality that can be controlled in different ways.

      • Hey David, Thanks for your response. After some testing it turns out the alpha was what caused the problem. I had each buffer storing RGBA instead of just RGB, so when importing, I have to manually tell AE to ignore the alpha channel in order to see the pass correctly. The additive passes are fantastic! Very near 1:1 of the beauty render. I do notice some edges that get a bit dark where the image gets anti-aliased over an alpha, but this is probably a premult issue with AE. Thanks for putting all this together!

  7. Oh I also had one other question regarding Linear workflow and 32bit files. (Half or Float) We don’t immediately have access to Nuke, so I don’t know what it looks like there, but when previewing or importing my renders into After Effects, they appear washed out. I have to preserve RGB to maintain their color. All textures have been labeled appropriately as sRGB or Linear, I have no Exposure node Lens Shader, and I have disabled color Management because I’ve found that at least this way I’m getting consistent results. Any thoughts or insight on how to make sure the initial gamma when viewed in AE is correct? As of now I have to use the EXtractoR plugin or Preserve RGB to see appropriate color space on each individual sequence imported. The AE project color space is set to NONE, but it seems “Linear” is not an option under the settings.

    • I would use 16-half. As for AE, I honestly can’t help you much there. It’s been a really long time since I had to deal with that. You might find better luck on an AE forum.

  8. Hi David,

    Do you still have that scene handy for Maya 2016? I really would like to analyze how you’ve set things up.

    I’ve followed along and got things to work, but only on a per-material level. Much like your “statue_id” there, I have “facingRatio” with a ramp connected to the color slot, which is connected to a samplerInfo node. The result is that I get an EXR buffer with facingRatio only on the objects with that material. How do I get this to work on a global level without having to do this for all mila materials?

    btw.. What do you mean with.. “Keep in mind that the usual Maya Default passes for data will still work with MILA and you can select those as well instead of adding those default passes here. ”

    Can you elaborate more on the.. Maya Default passes for data?

    • Default passes like normals, etc., are available in Maya. These are builtin data passes.

      The scene showing that is old, and integration has changed how this works; you can now see the connections in MILA as Extra Framebuffers.

  9. Hi, I have a few questions regarding this post I hope David can answer in full 😉

    >>And in fact, this shader can detect and use the user_data shaders for you to assign ID groups and other data to objects. This means you can render complex scenes with fewer shaders and still organize the passes logically.

    What is meant by, you can render complex scenes with fewer shaders and still organize the passes logically ?

    >>In the example file below you’ll see I am driving some parameters of the shader with attached object data. This has a few benefits. One such benefit is the data follows the object rather than the shader and you can change the result of the shader by manipulating the object user_data. I also have a single sphere in one ID matte group but also included with another group of ID mattes, giving me different ways to handle the object in post.

    I didn’t understand what is meant by this paragraph ?

    >>I also have single-layer materials. Try mixing and matching and seeing the resulting framebuffers.

    What is meant by single-layer materials & by mixing and matching ?

    Finally, David replied to a comment regarding Array version of the user data nodes, I didn’t understand, here is the quote;

    >>You can use the array version of the user data nodes. It will reduce the number of nodes in the editor.

    I hope, someone or David can reply to these questions, thanks.

  10. Hi there, great information found here.
    I have a problem though: when I use the matte passes feature, sometimes in some shaders I get the matte pass, but when I render another frame, the former matte pass is disconnected from the shader and a new one is created with an increasing number at the end of the name.
    This behaviour creates infinite matte passes and I can’t nail down the cause.
    Is anyone experiencing this kind of error?
    Thanks in advance.
