About

Elemental Ray is a repository of knowledge for artists and developers using mental ray. This blog presents the work and inspiration of professionals working in the fields of Animation and Visual Effects (VFX). Contributors to Elemental Ray provide information grounded in production workflows, so you can benefit from their experience as well as the knowledge of NVIDIA’s Advanced Rendering Center (ARC).

David Hackett taught lighting and rendering at the college level for several years at Full Sail University before moving to Los Angeles to work in feature film and television. In the past few years David has worked at studios such as Luma Pictures, Scanline VFX, and The Mill, among others. David graduated with a Fine Art degree in Graphic Design, concentrating on photography. He still misses the smell of stop bath. He is currently Senior Lighting and Look Development TD at The Mill in Chicago.

David’s work has appeared in films such as 2012, Thor, X-Men: First Class, Hereafter, and more.

Brenton Rayner currently works in Research & Development at The Mill, an animation, post-production, and visual effects studio. Before The Mill, Brenton spent a year working at the NVIDIA Advanced Rendering Center, the makers of mental ray. Brenton graduated from Dartmouth College in 2010, where he studied Physics and Digital Art. He currently lives in New York City, NY.

If you would like to present a topic for learning mental ray on the blog, please contact us through this site.

  1. Thanks a lot for this great blog. If you don’t mind, I’ll post a link on my website.

  2. Thanks a lot for this blog; very interesting and well-explained articles.

    I don’t know if you have these in the works, but after reading your discussion on the mental images forum about progressive rendering and unified sampling, I would be very interested to read more on the matter. An article regarding Iray would be nice too.

    Looking forward to your next posts, all the best

  3. Thanks a lot for this! I’ve always been following your posts on cgtalk closely – all highly informative and straight to the point. It is just great that you now decided to take it one step further and make this blog. Thanks a lot to all involved and keep up the grand work!

  4. Would some kind soul be able to upload these 3 files from their 3DS Max 2013 install?
    libiraymr.dll, libiray.dll, cudart_64_*.dll

    They are very small, it would only take a minute! Many thanks!

  5. I’m enjoying this blog way too much. Unified Sampling has made me reconsider mental ray for Maya as a production option. Wish I had found this information years ago!

  6. Great blog! One question though – are there any plans to publish articles with explanations for 3ds max as well?
    Yes, yes, I know, mental ray is mental ray… But we know that its integration can be, and mostly is, very different between Maya and Max.
    The information given here is excellent; there are a lot of Max users, and after reading here, we usually go on a Google quest to dig out how to do in Max that damn thing that’s done in Maya 🙂

    Thanks in advance for any info you can give me regarding this issue.

  7. Hi,

    I am a big follower of the blog, and you guys share a lot of useful topics and articles; much respect for that.

    Thanks.

  8. Hi David, I just came across your post on the NVIDIA/MR forums where you posted this in relation to the builtin_bsdf shaders:

    “I’m hoping to have some more complex scenarios/renders showing these shaders in use but I have been busy lately. Hopefully in the next month or so I can show some more things. I was hoping to do it with some assets I could distribute but that might not be possible.

    However, I could possibly dig up the classroom scene. Haven’t decided yet. (Kinda tired of the classroom scene personally.)”

    Do you have more complex scenarios/scene files with those renders? I am curious to see them as I am doing some research on those shaders with more complex examples in Maya/Hypershade.

    Thank you,
    Chrisots

    • I did have one posted at one point with the BSDF shaders. But at this point we’re looking for more users to test and look at the MILA shaders. These will lead to a more flexible and faster implementation of BSDF shading in the future. Their performance for some things might even be a little better.

      • Thanks for the response, David. Do you still have that scene file you said you posted with the BSDF shaders? I’d still be curious to look at it as we are mimicking these shaders in the Caustic Visualizer and would like to test some more complex setups so they match the Maya/Mental Ray renders.

      • I don’t think so. 😦 These materials were designed as example shaders for developers to look at and eventually request more features, etc. But that didn’t happen. Now the Layering Library moves towards component-based, flexible BSDF materials. This is the next logical step.

  9. Thanks for sharing your knowledge, very generous. I’ve learned so many things…
    Really appreciate it.

    I noticed that you didn’t cover irradiance particles in much depth, and I’d be curious to hear your opinion on the subject. You guys seem to be using final gather as your GI method.

    After testing around I realized that IP was more efficient and also faster than FG (same lighting setup with both methods). What are the downsides of IP exactly? I really don’t see any (except for a problem I have with SSS; apparently it doesn’t bounce off any subsurface shader).

    • IP does not include specular interaction (passing through glass as an example). It’s also more difficult to smooth for most animations than FG and cannot be cached as easily. We also have the fgshooter lens shader to help control FG. IP also ignores incandescent materials like fake light cards.

      IP is possibly a forward-looking lighting method. It samples complex scenes much more intelligently and with fewer rays than FG. But for now it is more time-consuming overall to use in a scene. We do expect it to evolve, though…

      • Thank you for the very fast answer 🙂

        I wasn’t aware of the specular interaction. I actually have some difficulty grasping this concept. In my test, I didn’t notice that glass blocked any light (direct or indirect).

        My test scene is a simple bathroom interior lit by an HDRI (native IBL) and a simple IES light (archlight), and I really couldn’t get as much detail with FG as with IP, even with quite high settings.

        Anyway, thanks again for your blog. Looking forward to new articles!

        Cheers

  10. David your full beard looks identical to mine, when mine is full 🙂

  11. Is there an email address I can contact David and/or Brenton at?

    • We don’t usually post it; it gets spam bots all over us. If you have a question you can use the mental ray ARC forums. We’re usually in there and might be able to answer something.

  12. Hey Guys,

    Great blog! I am a V-Ray user working on a mental ray job, and I had a question about area light reflections. My goal is to use an area light as a light card for reflections. I do this often in V-Ray, and the benefit of using a light is that I can light link the reflection card to certain objects. My problem is that in mental ray, if I have area lights visible, they are visible in reflections but also visible to the camera. Is there a workflow to make area lights visible to reflections but not to the camera?

    Thanks,

    Michael

    • Hi Michael. Sadly the Autodesk-created light shaders aren’t designed to take this effect into account. But there is a way to do this in Maya 2014 using Object Lights; see the sketch below.

      I would suggest the newest release, Service Pack 3.
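
      Here’s a minimal sketch (Maya Python) of that idea, assuming the light card is a mesh being used as a mental ray object light; the node name is a placeholder:

      ```python
      import maya.cmds as cmds

      # Hypothetical shape node for the mesh acting as the object light / light card.
      card = "lightCardShape"

      # The Render Stats on the shape control per-ray-type visibility in mental ray:
      cmds.setAttr(card + ".primaryVisibility", 0)     # hidden from camera rays
      cmds.setAttr(card + ".visibleInReflections", 1)  # still appears in reflections
      ```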

  13. Hello Folks,

    Wondering if there’s a blog entry or write-up you could refer me to which discusses the proper implementation of a Z-Depth pass when using Maya, Mental Ray, a Linear Workflow, and OpenEXR in scenes built exclusively with MIA materials… specifically, a workflow that accommodates transparency and anti-aliasing in the Z-Depth pass. I’ve found numerous posts on other sites that discuss different approaches for use with AE and Nuke, but most are outdated and do not really touch upon the proper workflow when using OpenEXR.

    Understanding that a Z-Depth pass isn’t specific to Mental Ray per se, it would be helpful to see an ElementalRay write-up – should time in your busy schedule ever permit – covering how professionals do it in the “real world,” as well as some strategies to address challenges often presented when rendering a Z-Depth pass. I have opened a support ticket with the kind folks at Autodesk and am told that there is no way to obtain a comprehensive Z-Depth pass via Mental Ray and OpenEXR.

    REF: http://forums.autodesk.com/t5/Shading-Lighting-and-Rendering/Guidance-Z-Depth-Workflow/td-p/4674691

    In the interim, thanks again for the information you share on this blog.

    Cheers,

    • We might be able to explain some things in a few weeks about how it’s usually used inside Nuke. But the output is pretty straightforward in Maya using the pass system.

      As for transparency, all renderers write to the z-buffer when they strike geometry unless the intersection is ignored. Anything using trace_continue should cause mental ray to ignore the intersection and avoid writing to the z-buffer.

      There’s not technically anything to “fix”; this buffer is correctly storing intersections in the pixel. Making it ignore geometry, while possibly useful, is actually a way to break the pass.

      You could use IDs to punch a hole in the z-depth in post, I suppose.

      In the future, deep data will make this very easy to do at the cost of extremely large EXR files.
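
      If it helps, a rough way to sanity-check what a rendered Z channel actually contains is to read it back with the OpenEXR Python bindings. This is only a sketch; the file path is a placeholder, and a single-part EXR with a float channel named “Z” is assumed:

      ```python
      import array
      import Imath
      import OpenEXR

      exr = OpenEXR.InputFile("render_z.exr")  # placeholder path
      dw = exr.header()["dataWindow"]
      width = dw.max.x - dw.min.x + 1
      height = dw.max.y - dw.min.y + 1

      # Read the Z channel as 32-bit floats.
      pt = Imath.PixelType(Imath.PixelType.FLOAT)
      z = array.array("f", exr.channel("Z", pt))

      # Each pixel stores the depth of the first surface hit; anything behind
      # that surface simply isn't in this buffer.
      print("%dx%d pixels, depth range %.3f to %.3f" % (width, height, min(z), max(z)))
      ```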

      • Hi David,

        Appreciate the quick reply.

        Using the test scene from the link I posted above – which is composed of angled glass geometry in front of spheres – for relevance purposes…

        What I think I’m understanding from your reply is that although the Z-Depth pass does not show the objects behind the glass, the geometry data behind the glass is indeed stored in the Z-Depth pass and will be correctly referenced by AE, Photoshop, Nuke, etc. for the purposes of DOF. Realizing, of course, that Z-Depth serves more purposes than simply DOF.

        If my understanding is incorrect, which is likely: using TRACE_CONTINUE, which I assume is an attribute (or set of assignable attributes) on materials, would instruct the renderer to ignore the glass geometry. This, however, isn’t recommended as it introduces problems in the pass.

        Assuming I’m incorrect in both cases above, 🙂 , I guess I’m wondering if *not* obtaining transparency on the glass is a “show stopper” if I’m intending to use the Z-Depth pass to determine DOF in AE, Photoshop, Nuke, etc.

        … maybe I’m trying to read more into the proper implementation of Z-Depth than I need to. In any case, I need to make some extra time to wrap my head around IDs and their purpose.

        Cheers,

      • No, Z-depth will store the first object struck into the buffer. Anything behind it goes away (it is not stored).

        You can possibly alpha blend it using something like a partial value for trace continue, but in the beauty render it will make the object semi-transparent (trace continue decides if an object is there, not there at all, or only partially there).

        Deep data is the solution that stores all of the information you would need for a true depth of field solution in post; this is one reason it’s popular for compositing scenes from different elements and placing them correctly in post. Nuke understands this workflow, but it is not part of mental ray in Maya yet (it is standardized in EXR 2.0). It also makes for huge frames (gigabytes for complex scenes, meaning a single frame would be gigantic in memory consumption).

        For now your solution would be to render in layers and composite in post.

        Or render with depth of field in-camera using Unified Sampling and a single sample for the lens shader.

      • Appreciate you taking time out of your busy day to check in and answer my n00bie questions.

        Keeping to my test scene for relevance… so, I’d be best off rendering out a layer (and associated Z-Depth) for everything up to and including the glass, rendering a separate layer (and associated Z-Depth) for everything behind the glass, and compositing the outputs together, as opposed to attempting a single Z-Depth… something I get the feeling you’re going to tell me is a no-brainer in Nuke but likely a challenge in AE.

        I guess I’m struggling with understanding how professionals such as yourself deal with complex scenes where layering elements might be a tremendous work effort to achieve the “feeling” of depth. For example, understanding that glass is still physically present, would a scene that displays the inside of a car (viewed from outside the car) simply have the glass geometry omitted from the Z-Depth pass?

        I guess I’m splitting hairs aren’t I ? 🙂

      • A car is large enough that using depth of field from exterior to interior is hard to envision.

        If it were something like a still life, then it would be layers. And only then if it mattered enough visually. Often we omit that physical detail because no one will notice in motion.

        Or we render with depth of field in-camera with the renderer.

      • I think I understand;

        I figured that since one can see through the front windscreen to the objects inside the car, and through the rear window to the scene behind the car, omitting the glass would likely be considered poor practice. I assumed that the ‘Z’ numerical data would be relevant to successfully achieving DOF.

        On a semi-related topic, most of the write-ups and tutorials I’ve found on topics such as Z-Depth, seem to fall short – i.e. just enough information to realize how deep the rabbit hole goes. 😉

        Of the entries I’ve read (and digested) from this site, I always seem to obtain insights into real-world applications. As I learn more about Mental Ray, I continue to appreciate the many capabilities it has to offer … it certainly helps me better understand the strengths/weaknesses amongst all of the renderers offered out there.

        Cheers,

  14. David, what is your opinion on The Foundry’s renderer in Modo? Most seem to be giving it the WOW factor. I gave Modo a spin and the renderer is fast, but I don’t see any difference in workflow between Maya and Modo; I feel I can still do more in Maya.

  15. Hi guys, thanks for the info and great site. Unified Sampling has saved me numerous times, lately in rendering some mia metal with very blurry reflections. I was wondering if you would be able to discuss the final gather caching system at all, particularly the secondary final gather file. I can’t seem to find anything on the web about the secondary file and how to use it, and the mental ray documentation on it is very, very sparse. Perhaps the FG cache is an outdated way to do things and you’d recommend a different way to store and reuse lighting information. Thanks!

    • You mean the cached FG map file? We usually recommend the FG Shooter (with a script on this blog) for animations where high-quality FG is necessary. But for most FX and animation work we suggest the Native IBL (Environment Light) with minimal/fast FG settings (also covered on this blog and integrated into Maya 2015); a rough sketch of the older string-option route is below. This usually looks best and requires the least amount of work. Only very indirectly lit scenes won’t work well like this.
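
      For reference, before the 2015 UI integration the environment light was typically switched on through mental ray string options. This is only a sketch from memory; treat the option name and value as assumptions and check them against the blog’s IBL article for your mental ray version:

      ```python
      import maya.cmds as cmds

      # Append a string option to the mental ray globals. The option name/value
      # below ("environment lighting mode" = "light") are assumptions; verify
      # them against your mental ray version before relying on them.
      idx = cmds.getAttr("miDefaultOptions.stringOptions", size=True)
      opt = "miDefaultOptions.stringOptions[%d]" % idx
      cmds.setAttr(opt + ".name", "environment lighting mode", type="string")
      cmds.setAttr(opt + ".value", "light", type="string")
      cmds.setAttr(opt + ".type", "string", type="string")
      ```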

  16. David, do you use, or have you used, MentalCore?

  17. Would you recommend it?

  18. Hey guys. Wonderful site and a great resource that I find myself constantly coming back and referring to. Thanks for putting it together.

    I’ve been working on a project using the new MILA shaders in 2015 and I came across a very strange problem I can’t explain. I’ve been testing to figure out what it is and I still don’t have an answer, so I was hoping you guys might be able to shed some light on it.

    Basically I have 2 cameras that are identical except that one has the MILA LPE frame buffers and one does not. The MILA renders and passes look great, but my mattes from another render layer using the non-MILA camera do not line up correctly. They are off by a small amount. I keep testing but can’t seem to find out why, or how to get them to line up. Have you experienced any situations where the render is different from the actual camera’s view in the viewport?

    Thanks again guys for the wonderful site.

    • The difference may be that you have “contrast all buffers” on (the default). I would recommend not using that feature under Framebuffers unless you really need it; see the sketch below. It increases render times, as more samples are drawn through the passes to resolve noise on non-beauty color passes. So the camera without the passes might be sending fewer samples and therefore not match.
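
      As a rough sketch (Maya Python) of toggling this from script; the attribute name on the mental ray globals node is an assumption and may differ between Maya versions, hence the existence check:

      ```python
      import maya.cmds as cmds

      # "Contrast all buffers" is assumed to live on the mental ray globals node
      # as miDefaultOptions.contrastAllBuffers; verify with cmds.listAttr() first.
      if cmds.attributeQuery("contrastAllBuffers", node="miDefaultOptions", exists=True):
          cmds.setAttr("miDefaultOptions.contrastAllBuffers", 0)  # resolve noise on beauty only
      ```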

  19. Hi David, I wrote a reply to a post a few days ago. I suspect you have been busy; I was hoping you could help me with setting up gamma on color swatches?

  20. I read most of the posts on color management; which post are you referring to? I can let you know whether I read it or not.

  21. Hey David. I’m looking for some direction on a snow shader I’ve been working on. I’ve just begun to work with the MILA material shader, so I’m a little new to the process. Anyway, I came across your post about flakes, where you even mention its usage for sparkles in snow, and I’m wondering how to use the node with a MILA material shader?

    I can’t seem to find the right way to connect it to my shader layers to get the effect of sparkling snow.

    Any thoughts would be much appreciated.

    https://elementalray.wordpress.com/tag/flake/

    • It can be connected as a bump shader to a layer to get the sparkle effect. The strength of that can be controlled through the flake shader itself. Be sure to create a 2d Bump node for it; see the sketch below.
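
      A minimal sketch (Maya Python) of the wiring, assuming the Maya 2015/2016 layering nodes; the layer node name and the attribute names are assumptions, so match them to the nodes in your scene:

      ```python
      import maya.cmds as cmds

      # Create the flake texture and a standard 2d bump node.
      flake = cmds.shadingNode("mila_bump_flakes", asTexture=True)
      bump = cmds.shadingNode("bump2d", asUtility=True)

      # The flake output drives the bump amount, and the bump node's normal
      # output feeds the layer's bump input (attribute names assumed).
      cmds.connectAttr(flake + ".outValue", bump + ".bumpValue", force=True)
      cmds.connectAttr(bump + ".outNormal", "mila_specular_reflection1.bump", force=True)
      ```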

      • Thanks for the response; however, I must not be applying it correctly.

        I’m assuming the sparkles should be applied on top as a ‘specular reflection’ layer; then from there I apply to the bump slot a bump2d node connected to a mila_bump_flakes node. This just results in a sheen across the surface.

        Am I missing something?

      • Make sure you have the right connections to the bump and that your transform is frozen. It works for me.

  22. Hi guys! Thank you for a great site! Just wanted to say that your articles are very well written and help a lot.

  23. David Halbstein

    I’m trying to learn what there is to know about the MILA shader; I’ve gotten to this point and wonder if I’m missing something:

    I have created a MILA material and applied it to a simple sphere sitting on a checkered surface. I am using an LDR Panoramic Sky image as an IBL node. I have illuminated this scene with a single spotlight.

    I added a “Glossy Reflection” layer to the MILA material and got the result I expect. However, there is a big, white specular highlight on this sphere that I do not want, or that I at least wish to control. There seems to be no way around this.

    I’ve tried rendering in passes and separating the “reflection” from the “specular”, but it seems that the passes system now regards all reflection as “specular” (my reflection pass came out black; my specular pass came out like a beauty).

    In the MIA material, under the “Advanced” section, you could adjust the “Specular Contribution” slider down to zero, but in this shader it seems to be tied to the roughness. If I adjust the roughness to zero, the highlight becomes a pinpoint but the reflection becomes a mirror.

    I can find no control for this ubiquitous specular highlight – is this a limitation of the shader, or am I missing something?

    • You want to manipulate the direct and indirect contribution sliders in the MILA material. A reflection pass? Are you not in Maya 2016?

  24. David Halbstein

    Thanks for your quick reply.

    I understand the concept of the direct and indirect contribution (thanks to this blog), but the issue is still this intense, round, white specular highlight that appears on the glossy reflection layer when the direct contribution attribute is increased to a non-zero value.

    The reflection of the IBL node is direct illumination also – so as I adjust that attribute, BOTH the specular highlight AND the reflection of the IBL are affected.

    I can find a way to manufacture the result I want by creating render passes that exclude lights and exclude the IBL node, etc., but this seems counterproductive.

    It feels to me as though there is a very simple solution to this that I’m just not seeing, but I have read through a lot of blogs and docs, and I haven’t seen this issue addressed. The visual result is very undesirable.

    Thanks for taking the time to read this, and I apologize if I’m being obtuse.

  25. Thanks again for your answer; so what you are saying is that, no, this is not something that can be controlled in the shader.

  26. Hey Guys,

    I am just getting into rendering and was wondering where to learn about mental ray. I understand that some things changed in mental ray for Maya 2016.

    Now my question would be whether you could recommend anything for learning mental ray in Maya 2016 to someone with no prior experience.
    Of course I already looked around this page, and I saw there’s a lot of information, but it feels like way too much for me and it is all very overwhelming. I am now trying to decide if I should look into older mental ray books/courses, because there are some out there. They might be outdated, though, due to the new improvements.

    I am sorry to come to such a high-end place to ask this beginner question, but if you check for learning resources online, like Amazon, there are only a few books – and especially nothing about mental ray for Maya 2016.

    Any kind of help would be highly appreciated!

    Cheers,

    Oli

    • You can try the official mental ray blogs and forum:
      blog.mentalray.com
      forum.nvidia-arc.com/forum.php

      You can ask anything you’d want to on the forum pages.
