Linear Color Workflow(s) in Maya – Part 2: The Preferred Method
Continued from Part 1: Maya Tools
Now we come to the part that is the most robust and effective, but also the most complicated:
the preferred method (and the one you may find at a visual effects studio with a color pipeline).
Photoshop -> Linearize in Nuke based on original color space -> Render (view through LUT) -> Composite (view with correct LUT) -> Output to correct Color space
After painting your textures in Photoshop, take them into Nuke. Nuke has a “colorspace” node where you can specify the input and output color spaces and even the white point. Write the file out in linear format, preferably to an EXR that can later be cached and mipmapped (https://elementalray.wordpress.com/2012/04/17/texture-publishing-to-mental-ray/). To clarify: when writing to a non-floating-point format, the Write node in Nuke will choose an appropriate colorspace, generally sRGB for 8-bit images. When saving to EXR, the Write node understands that a floating-point format is linear, and the colorspace node can be omitted. You can also override the colorspace directly on the Write node, but not the white point.
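If you script this step, here is a minimal Nuke Python sketch of the idea, assuming a simple sRGB-painted source; the file paths and names are placeholders, and a studio publish tool would do more:

```python
import nuke

# Read the painted texture and tell Nuke it was authored in sRGB,
# so the Read node linearizes it on input.
read = nuke.nodes.Read(file="/textures/src/wall_diffuse.tif")
read["colorspace"].setValue("sRGB")

# Write a linear 16-bit half EXR, ready to be cached and mipmapped.
# Nuke treats EXR as linear by default, so no Colorspace node is needed.
write = nuke.nodes.Write(file="/textures/pub/wall_diffuse.exr")
write.setInput(0, read)
write["file_type"].setValue("exr")
write["datatype"].setValue("16 bit half")
nuke.execute(write, 1, 1)
```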
If you are on a project with a specific LUT/color space, you will have to take care to strip the original color space out (linearize it). This way, when the render is viewed through the LUT, it will look as expected based on what you painted. You’ll notice that the Maya options mention linearization based on “primaries” as well as just a gamma curve: LUTs may alter the primaries to match the correct source, such as DCI-P3. Your digital supervisor will generate one of these for use. How to make one is beyond the scope of this tutorial, since it delves too deeply into Nuke.
Once linearized, load these textures into your texture nodes inside Maya.
What about the color picker? That’s a sticky problem: Maya colors are stuck in sRGB unless you reverse engineer the color you need, or simply drop in a gamma node, use its color picker, and then correct it (0.4545 for RGB). That is generally “good enough.” Then use the Render View’s Color Management to load the LUT for viewing as you render.
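As a sketch of that gamma-node trick in script form with maya.cmds (the shader and attribute names in the last line are placeholders for whatever material you are feeding):

```python
import maya.cmds as cmds

# Create a gammaCorrect utility to de-gamma a color picked in sRGB.
gamma = cmds.shadingNode("gammaCorrect", asUtility=True)

# Pick the color on the gammaCorrect node's own swatch (or set it here),
# then use gamma 0.4545 so outValue = value^(1/0.4545) = value^2.2 (linear).
cmds.setAttr(gamma + ".value", 0.2, 0.5, 0.8, type="double3")
cmds.setAttr(gamma + ".gammaX", 0.4545)
cmds.setAttr(gamma + ".gammaY", 0.4545)
cmds.setAttr(gamma + ".gammaZ", 0.4545)

# Feed the linearized result into the shader's color slot
# ("mia_material_x1.diffuse" is a placeholder name).
cmds.connectAttr(gamma + ".outValue", "mia_material_x1.diffuse", force=True)
```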
Take care that your output colorspace is what your LUT is designed to accept, be it linear or logarithmic (Cineon log format). Your input will be the best linear approximation as created by Nuke. Example: use logarithmic output when your LUT expects Cineon-format input.
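For example, a minimal Nuke sketch of feeding a log-expecting show LUT, assuming the LUT wants Cineon input (the LUT path is a placeholder):

```python
import nuke

# The render comes in linear; convert to Cineon log first, because this
# particular LUT expects log input.
to_log = nuke.nodes.Colorspace(colorspace_in="linear", colorspace_out="Cineon")

# Apply the show LUT on top of the log image (for viewing, not baked in).
lut = nuke.nodes.Vectorfield(vfield_file="/show/luts/film_print.cube")
lut.setInput(0, to_log)
```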
Render away, and your passes will automatically be output in linear form (EXR 16-bit half, please!). Load these into your compositing package and view the compositing process through the correct color space. Nuke has several mechanisms for this, but the input process is preferred.
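One convenient Nuke convention: any node named VIEWER_INPUT is automatically used as the viewer’s input process. A minimal sketch (the LUT path is a placeholder):

```python
import nuke

# A node named VIEWER_INPUT becomes the default viewer input process, so
# every Viewer shows the comp through the show LUT without baking it in.
ip = nuke.nodes.Vectorfield(vfield_file="/show/luts/show_look.cube")
ip.setName("VIEWER_INPUT")
```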
You now have a color-correct pipeline where you are rendering with correct linear textures and viewing them as they will appear in your final project.
This means color-correct decisions can be made during all phases, which reduces artist “guessing” and surprises. Your images will operate correctly inside the renderer, and with some care in choosing your materials, the result will be physically plausible and achieve photorealism more quickly. You also avoid the extra trouble of compositors relighting scenes instead of integrating elements. It should look like the flow below but feel like Heaven. . .maybe.
Paint (sRGB) -> Linearize based on original colorspace -> Render (linear) -> Composite (linear) -> Output (sRGB or specific colorspace)
Some Simple Examples:
The original workflow was simple: paint your textures and render. The problem is that the image comes out dark while the lighting looks blown out by comparison. Complicated by physically inaccurate shaders, the result was a look that could take hours of “eyeballing” to reach a plausible solution.
Corrected to sRGB from sRGB-painted textures: quick fix, right? No, now everything is “double” corrected, 2+2=5 so to speak. Everything washes out while your black and white areas are unchanged. This also means your lighting will be much too strong and will wash out entire areas of your scene.
Rendered with correctly linearized textures but viewed incorrectly: now it’s certainly too dark, but your overall contrast and lighting information are correct. As a 16-bit half floating-point image, this result can easily be corrected and viewed.
Linear to sRGB, rendered and viewed correctly: you have a wider range of values and contrast without anything being washed out.
Additional notes:
- You do not need to render through a lens shader for sampling reasons; mental ray samples internally in perceptual space automatically. In combination with Unified Sampling, correctly rendered images should be free of artifacts. However, if you are rendering a beauty directly to an 8-bit image format, it would benefit you to bake your color space into the render. Post operations to correct a lower bit-depth image will introduce artifacts and banding.
- What about using a lens shader for aesthetic reasons? When rendering for beauty, the mia_exposure_photographic lens shader is very nice. But a 2D package like Nuke or Lightroom has much more powerful tools to give you the look and color grading you desire.
- There is a framebuffer gamma setting in the Render Settings. Ignore it. This control applies a gamma correction to all of your inputs and will cause undesirable artifacts.
- Output passes (diffuse, specular, etc.) are not affected by the lens shader. Correct: these passes are meant for compositing operations. As mentioned previously, those operations should be done in linear color space so that your results are correct, then output to the desired color space at the end. Ideally these operations should be additive (see the sketch after these notes).
- The color picker being sRGB is a bit of an issue that complicates things; it might be nice to log a suggestion for Autodesk to further refine the linear workflow controls and actions. Under the hood these colors are most likely floating point.
- The easiest way to decide what should be corrected: anything that will be seen as a color in the final rendered image. Bumps, displacement, cutout maps, etc. are not seen as color in the final render and can be left alone.
- Normalized colors (range 0-1) are best for diffuse textures; in Nuke you can normalize a texture as well as change its color space. Emissive textures (like an HDR environment) should not be normalized, as that defeats the purpose of having that lighting information and will flatten out your lighting. The same is true of geometric light sources where you apply a texture as a light, but those textures should still be in linear color space.
- If you have a plate (or environment) you are rendering with, it needs to be linearized correctly if you are going to use it in the renderer and correct it later; otherwise you will accidentally double correct it. Maya is primarily a VFX package, so it assumes you will composite your results later. It’s a best practice to hide the primary visibility of these elements so they can be added back in the compositing stage rather than inside Maya.
- If you always paint a texture in sRGB space and linearize it, then output to a different color space, there will be some difference in what you painted versus the final output. The solution there is to work and view your texture in the final color space as you paint. This isn’t always easy to do in something like Photoshop unless you have a correct working color space and calibrated monitor.
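To illustrate the “additive” point from the notes above, here is a minimal Nuke sketch of rebuilding a beauty from passes in linear space (the pass file names are placeholders):

```python
import nuke

# Read the linear EXR passes (file names are placeholders).
diffuse = nuke.nodes.Read(file="/renders/shot_diffuse.####.exr")
specular = nuke.nodes.Read(file="/renders/shot_specular.####.exr")

# In linear space the beauty is simply the sum of the light contributions,
# so the passes are combined with a "plus" Merge.
beauty = nuke.nodes.Merge2(operation="plus")
beauty.setInput(0, diffuse)   # B input
beauty.setInput(1, specular)  # A input
```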
Posted on November 23, 2011, in colorspace, maya. 54 Comments.
Hi, and thank you for another great post. I have a little question: why do you suggest using the “Input Process” in Nuke? I mean, when I load my (linear) passes into Nuke I already see them in the correct colorspace (sRGB): did I miss something?
You don’t have to use that; Nuke is usually good at guessing the correct standard colorspace. But in most productions we have a specific LUT we view things through, and that’s not part of Nuke’s standard color spaces. So we may end up using the Input Process for a particular show. (A lot of what you see on the screen has a “look” applied and isn’t straight-up sRGB.)
Thanks, now I think I understand!
Hi David, I’ve been having a horrible time with this and I was wondering if you could help. Even though all my settings are exactly as you said, I seem to be getting horrible texture crushing, almost as if it is low quality. I’ve started a thread on CGTalk in the hope of some advice. Thanks for this 🙂
James
Sorry, the link is here: http://forums.cgsociety.org/showthread.php?p=7598972#post7598972
Looking at the thread: don’t use a tonemapping lens shader; do that in post. Instead, use your Render View to view the render in the correct colorspace, as mentioned in the blog post.
When you removed the tonemapping lens shader, you most likely started viewing it in the wrong colorspace.
Hi David,
As usual, a wealth of knowledge to be had at Elemental Ray and from you … thank you! I have a few questions:
Q – In Part 1, you indicate that the color picker needs to be gamma corrected. I assume this applies to mia materials (diffuse) as well as to utilities such as ramps, etc.?
I’ve set up my linear workflow as per the Maya 2014 documentation, with adjustments as per Part 1 & Part 2. I have a very simple scene (for learning purposes) comprised entirely of mia materials and Physical Sun & Sky (lens shader disconnected). I set the renderer to mental ray with Unified Sampling, an output format of OpenEXR, and a framebuffer of RGBA (half 4×16).
I’m having a heck of a time trying to find an output combination that Maya doesn’t throw up on (rgba_h not supported, z-depth not supported, etc.). I’m trying to output a few frames using render layers to try my hand at compositing in Adobe AE. I realise Nuke is the “bomb” at the moment, but I haven’t saved up enough to purchase it yet.
Q – Could you point me to a proper workflow for outputting an alpha channel and a Z-depth, that I can further read up on?
Would appreciate your insights and expertise on the best approach to take.
Cheers,
Yes, colors in Maya are not currently gamma corrected for linear workflow. Only gamma correct colors that are seen in the final image, not the ones used for data like displacement, etc. This applies to anything: mental ray, V-Ray, Arnold, diffuse, reflection, and so on. . .
I am not sure what you mean by Maya having an issue with z-depth, etc. When you view these EXR layers with imf_disp, are they correct? Z-depth does not support rgba_h and will instead be float, as this is the required precision for depth passes and most data passes.
Appreciate the quick response!
When I look at the EXR file, I get the conventional RGB, and the A is pure white for the objects and black for everything else.
I’m running into two issues: 1) glass geometry in the A channel is indistinguishable from the rest of the geometry – I was expecting it to be black/transparent; 2) I figure I’m going to need a Z-depth pass for use when compositing in AE for DOF.
Being relatively new to this, it’s clearly something I’m not doing correctly.
Cheers,
Looks like I’m running into another issue … bad spelling, sorry.
😉
I can’t type today either, I have to edit my comments a lot.
If you’re using the mia_material, there is an option for “propagate alpha” if you would like an alpha for later compositing onto a plate that doesn’t exist in the render.
Working as expected once I selected the same option in the mia “glass” materials I created; I figured setting the option in the Render Settings was sufficient … should have known better {slaps forehead}.
I’d still like to get a Z-depth pass, which leaves me with figuring out why I keep getting “Warning: (Mayatomr.Scene) : format does not support depth, using IFF instead”. I suspect I need to review the Passes tab functionality – specifically, Camera Depth or Camera Depth Remapped. I’ve managed to get away without using the Passes tab so far … game over.
I’m certain there’s a definitive how-to (workflow) guide/book out there for newcomers but I haven’t found it yet. I’ll say this: “the devil is most certainly in the details” … thank goodness for online resources such as this.
Cheers,
Use the depth from the Passes system, and most people prefer the remapped version for AE. The old checkbox for depth is deprecated; it should be removed.
Much appreciated!
Although I suspect it will make little difference in the big AD machine, I did add a comment “… better integrated Linear Workflow for Color, Ramps, etc. and more Mental Ray ‘awareness’ …” to my AD Support Survey comments. 😉
I have another AD Support Survey to reply to so I’ll try and find a way to slip in something about removal of deprecated options from dialog boxes … or at least an option in preferences to hide deprecated options in UI.
😉
The 2014 UI did adopt a lot from our community UI. But the feedback that more can be done is very useful now that NVIDIA works on the integration under Autodesk’s direction. Correct options and workflows can be implemented better with more user feedback.
From what I understand, my rendered image should come out darker so I can gamma correct it to 2.2 inside Nuke. However, my batch renders keep coming out gamma corrected. Color Management is on; input is sRGB; output is Linear sRGB. My framebuffer is set to 16-bit half, and 32-bit floating point is enabled in the options. The image color profile is Linear sRGB and the display color profile is sRGB. When I render in my Render View, the image looks exactly like it should (gamma corrected), so my problem lies in the batch render, because it is spitting out what I see in the Render View instead of the Linear sRGB that it should. I’ve tried opening a new scene and setting everything up fresh, and I still have the same problem. Is there something wrong with my Maya, or am I making a mistake somewhere? Also, I am not using a lens shader and my gamma is set to 1 in the framebuffer. And I’m using Unified Sampling in Maya 2014, which is so awesome. Thanks a bunch.
My .exr comes out looking like this http://www.flickr.com/photos/77045870@N04/11915539626/
If I switch my display color profile to Linear sRGB, my image looks like this in the Render View (which I’m pretty sure is correct), but it won’t batch render out like this.

I’ve not had this issue. Does this happen with a very simple scene, like a sphere? If so, maybe I can look at it. You can also export a renderable .mi file and look for the camera block (section); you should see colorprofile micsSRGB. If you see micsSRGBg, then somewhere it’s not outputting the correct color management choice. I might try turning off color management (you can leave the display the same) and rendering that way, if your textures are treated correctly.
Thanks for getting back to me so fast. I tried exporting the .mi file, but I don’t know how to locate the camera block section or even open the file.
here’s the link for the simple scene file
https://www.dropbox.com/s/r2pdn695qwydzy2/sphereTest.zip
I was able to open it with Notepad, did a search, and found my camera shape / framebuffer – it said micsSRGB.
That’s technically correct. The rendering colorspace at the top should be the same. What are you viewing the renders with?
The default Maya Render View
Ohh sorry. I tried viewing it both in Photoshop and Nuke, and they look like the first picture I posted, which looks as though gamma had been applied.
Nuke and Photoshop try to auto-correct for your viewing space. Open it with Nuke set to linear viewing, or (easier) imf_disp, which defaults to gamma 1.0.
Nuke automatically sets it to linear when I read it.
I opened it in imf_disp and it looks exactly as it should (linear).
So is it a problem with nuke then?
Nuke opens it as linear, but the Nuke viewer is set to sRGB by default. This does the same thing as the Maya Render View; it’s operating as designed. You can change the default viewing colorspace in Nuke to whatever you want. I recommend whatever your destination is.
So right now Nuke is gamma correcting the image for me through the viewer? Do I just tweak the image how it looks now, or should I change the colorspace in Nuke to linear, apply a gamma of 2.2, and then make color corrections to the image? My final renders would be in sRGB.
Use the viewer through your correct colorspace. Then make sure when you write files that the Write node is set to the correct colorspace and/or passes through a LUT you have. Nuke defaults colorspace by format: EXR = linear output, JPEG = sRGB output.
Awesome I got it. Thanks so much. I was going crazy trying to figure it out.
Hi David,
Again, thank you so much for what you do. This is very valuable knowledge. These things are generally explained quite badly on the internet, and unless you work in an actual VFX/post studio, it’s very hard to understand all this, at least for me.
Anyway, there’s still one thing that I can’t seem to grasp. I’m reading up on DCP and trying to set up my whole pipeline for DCI-P3 delivery. And even though I understand the concept of colorspace (though the P3/XYZ color space thing is still a bit blurry to me), I still have problems with bit depth.
I know that the more bit depth you have, the more latitude you get at the grading stage.
But in all the “tutorials” and routes that I’ve seen so far, it seems best to feed the JPEG 2000 converter 16-bit TIFFs, even if you’ve worked in 8-bit. And that seems like total nonsense to me. Is there any advantage to exporting an 8-bit project to 16-bit TIFF?
Thanks for your time !
That part is a little outside my scope. Typically for film we deliver DPX files. From there, at some point, they go through even more work and end up being delivered to theaters as JPEG 2000 (or at least they did).
Going from 8-bit to 16-bit won’t create more fidelity in the original file, but if anything is done to the file to make changes or transform it, then 8-bit will show artifacts sooner.
Thanks for the super fast answer 🙂
To get back to your scope: about colorspace, for the sake of better understanding, let’s say there is no CG. Say we just have to grade some 5D DSLR footage, we work on sRGB displays, and the target is DCI-P3 (theatrical release).
And to make it even simpler, let’s say I work on a properly calibrated HP DreamColor (which can display 97% of DCI-P3, if I’m not mistaken).
So that’s where I need your expertise. From what I’ve understood so far, all I have to do is set my monitor to the DCI-P3 colorspace (my target colorspace), grade my 5D footage (which is 8-bit, by the way) through it, and that is all.
Is it that simple?! I’ve read so many things about LUTs, gamma curves, white points, P3 D55/D65…
I know it’s still out of your scope, but I assume you deal with this kind of thing at the VFX stage, and if you could enlighten me with your experience on the subject, that would be hugely helpful (you have no idea)!
What are you grading this in? Make sure whatever you’re grading in lets you look at the image in the correct colorspace for P3 on the DreamColor. Nuke lets you choose this easily. You can even create a LUT and use it later as an input process.
(I’m replying here because I couldn’t do it from your last post)
That’s exactly where I’m getting confused. When you set your monitor to a particular colorspace (for instance P3), I thought there shouldn’t be any color space applied to what you’re working on. So the conversion is straight sRGB > DCI-P3, right?
Isn’t it better to work directly in the target colorspace from the beginning, even before the grading stage? I’m trying to figure things out (empirically) here, and I don’t see the point of using LUTs as long as you work in the colorspace your image will be displayed in. This is how I see it:
-You start working on your RAW footage on a calibrated monitor set to the DCI-P3 preset (so your footage is still untouched so far).
-Say you work on some VFX in Maya (and Nuke), still with the DCI-P3 preset on your monitor. At this point, the whole pipeline is free from color interpretation, except for the monitor.
-Grading step, again with no colorspace conversion other than the monitor’s.
-And finally, you apply the final conversion, from your image that has been viewed through an RGB DCI-P3 colorspace throughout the whole process to an XYZ DCI-P3 colorspace ready for theatrical release.
So as I see it, every piece of software in the pipeline should be set to “no colorspace,” because the monitor is handling it. I really don’t see the point of using LUTs in that particular case. But I might be wrong.
Thanks again for your patience, that’s very nice of you to take the time to explain.
(and please excuse the weird English, I’m French :))
The monitor still comes into play, because if it’s set wrong then you can be transforming the image incorrectly. (Looking at a linear image on an sRGB monitor without the correct software or monitor setting will still look wrong.) You also need to inform the software of the correct colorspaces, or it might interpret them incorrectly for the same reason. This article seemed alright at first glance: http://vfxio.com/PDFs/Nuke_Color_Management_Wright.pdf
I have a question regarding linear workflow. I know to save my images in Photoshop as linear sRGB. David, you mention saving images as 16-bit float EXR – can you recommend an EXR plugin for PsCC-2014, as there are none pre-installed with PsCC-2014?
Also, Maya has improved Color Management with the extension release, and it does appear to be a major positive overhaul. Unfortunately I’m not on the extension, which raises this question: when using the gamma correction node to gamma correct colors on an old Lambert shader and on the mila shader, the gamma values for both are set to .454 x 3, yet the shader ball is black? I can’t attach a screenshot of my setup unfortunately, unless I can send it to you another way?
We don’t use Photoshop for that, and we don’t have any plug-ins. 😦 Always Nuke.
No idea on the swatch issue; possibly a bug or a double correction. The new color system will continue to improve and make things much better than before.
Everything is Nuke!
Do you have any idea how to fix my problem?
Are you having an issue with 16 half exr in Photoshop?
I want to make my color swatches linear instead of non-linear?
Also, I’m color correcting the mila shader using the gamma node, as I’m on the pre-extension release of 2015. I’d have to create a quick setup and send you the image, as I can’t attach an image to these messages.
Just a gamma code into the color swatch/picker works fine. But if you mean the little shader ball, then no, it cannot be corrected.
What do you mean, just a gamma code?
Sorry, I meant “node”
How do you attach a gamma node to the color swatch picker? It is strictly a little pop-up menu.
On the color or tint connection button, you apply a gamma utility. Then choose the color there and adjust the gamma setting.
There is no color or tint connection button on the color swatch popup?
Hit the connection (square checker) not the color swatch. Connect a gamma utility there. Use the color picker inside the gamma node.
Then it’s safe to assume all textures for color get plugged into the gamma node!
As long as you’re not also using color management (which applies to textures)
Do you mean having linear textures? If so, yes I will!
I want to verify: if the default output profile for a pass is set to sRGB and there is a gamma node plugged into the color slot, the color – in this case a red sphere – should be lighter, correct? If I change the output profile to Linear, the red sphere will be darker, and therefore everything is set up correctly?
If you’re looking at it as sRGB, then it should be correct for *viewing*. But you typically want linear for compositing (linear will be darker).