Linear Color Workflow(s) in Maya – Part 2: The Preferred Method
Continued from Part 1: Maya Tools
Now for the part that is the most robust and effective, but also more complicated.
This is the preferred method (and the one you are likely to find at a visual effects studio with a color pipeline):
Photoshop -> Linearize in Nuke based on original color space -> Render (view through LUT) -> Composite (view with correct LUT) -> Output to correct color space
After painting your textures in Photoshop, take them into Nuke. Nuke has a “Colorspace” node where you can specify the output color space and even the white point. Write the file out in linear format, preferably to an EXR that can later be cached and mipmapped (https://elementalray.wordpress.com/2012/04/17/texture-publishing-to-mental-ray/). To clarify: when writing to a non-floating-point format, the Write node in Nuke will choose an appropriate colorspace, generally sRGB for 8-bit images. If saving to EXR, the Write node understands that a floating-point format is linear, and the Colorspace node can be omitted. You can also change the colorspace to be written in the Write node itself, but not the white point.
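As a minimal sketch, that step might look like this in Nuke’s Python (node classes and knob names are Nuke’s own; the file paths are placeholders):

```python
import nuke

# Read the painted texture with no automatic conversion so we can
# strip the original colorspace explicitly, as described above.
read = nuke.nodes.Read(file="/textures/src/wall_diffuse.tif")
read["raw"].setValue(True)

# Colorspace node: declare the painted space in, linear out.
cs = nuke.nodes.Colorspace(colorspace_in="sRGB", colorspace_out="linear")
cs.setInput(0, read)

# Write to EXR; floating point is assumed linear, so no curve is added.
write = nuke.nodes.Write(file="/textures/pub/wall_diffuse.exr")
write["file_type"].setValue("exr")
write["colorspace"].setValue("linear")
write.setInput(0, cs)

nuke.execute(write, 1, 1)  # render frame 1
```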
If you are on a project with a specific LUT/color space, you will have to take care to strip the original color space out (linearize it). That way, when the image is viewed through the LUT, it will look as expected based on what you painted. You may notice that the Maya selections mention linearization based on “primaries” as well as just a gamma curve; LUTs may alter the primaries to match the intended target, such as DCI-P3. Your Digital Supervisor will generate one of these for use. How to make one is beyond the scope of this tutorial since it delves too much into Nuke.
Once linearized, load these into your texture nodes inside Maya.
What about the color picker? That’s a sticky problem: Maya colors are stuck in sRGB unless you reverse-engineer the color you need, or simply drop in a gamma node, use its color picker, and then correct the result (0.4545 for RGB). That is generally “good enough.” Then use the Renderview Color Management to load the LUT for viewing as you render.
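A hedged Maya Python sketch of that gamma-node trick (the shader name and attribute are placeholders; point the connection at your own material’s color input):

```python
import maya.cmds as cmds

# Build a gammaCorrect utility node and set the colour you actually picked.
gamma = cmds.shadingNode("gammaCorrect", asUtility=True)
cmds.setAttr(gamma + ".value", 0.8, 0.1, 0.1, type="double3")

# Maya's gammaCorrect computes out = value^(1/gamma), so 0.4545
# applies the ~2.2 power that linearizes an sRGB pick.
for axis in ("X", "Y", "Z"):
    cmds.setAttr(gamma + ".gamma" + axis, 0.4545)

# Feed the linearized colour into the shader (placeholder names).
cmds.connectAttr(gamma + ".outValue", "myShader.color", force=True)
```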
Take care that your output colorspace is what your LUT is designed to use, be it linear or logarithmic (the Cineon log format). Your input will be the best linear approximation as created by Nuke. Example: use logarithmic output when your LUT expects Cineon files.
Render away and your passes will automatically be output in linear form (EXR 16-bit half please!) Load these into your compositing package and view the compositing process through the correct color space. Nuke has several mechanisms for this, but the input process is preferred. (image)
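For example, registering a show LUT as an input process might look something like this sketch (the Vectorfield node is one way to apply a 3D LUT file; the LUT path is a placeholder your pipeline would supply):

```python
import nuke

# Make the show LUT selectable in every Viewer's LUT menu.
nuke.ViewerProcess.register(
    "Show LUT",
    nuke.createNode,
    ("Vectorfield", "vfield_file /path/to/show_lut.cube", False),
)
```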
You now have a color-correct pipeline where you are rendering with correctly linearized textures and viewing them as they will appear in your final project.
This means color-correct decisions can be made during all phases, which reduces artist “guessing” and surprises. Your images will operate correctly inside the renderer, and with some care in choosing your materials the result will be physically plausible and achieve photorealism more quickly. You also avoid the extra trouble of compositors relighting scenes instead of integrating elements. It should look like the below flow but feel like Heaven. . .maybe.
Paint (sRGB) -> Linearize based on original colorspace -> Render (linear) -> Composite (linear) -> Output (sRGB or specific colorspace)
Some Simple Examples:
The original workflow was simple: paint your textures and render. The problem here is that the image is dark while the lighting is blown out by comparison. Complicated further by physically inaccurate shaders, the result was a look that could take hours to “eyeball” into something plausible.
Corrected to sRGB from sRGB painted textures: quick fix, right? No, now everything is “double” corrected; 2+2=5, so to speak. Everything washes out while your black and white areas are unchanged. This also means your lighting will be much too strong and wash out entire areas of your scene.
Rendered with the correct linearized textures but viewed incorrectly. Now it’s certainly too dark. But your overall contrast and lighting information are correct. As a 16-bit half floating point image you can easily correct and view this result.
Linear to sRGB rendered and viewed correctly. You have a wider range of values and contrast without anything being washed out.
Additional notes:
- You do not need to render through a lens shader for sampling reasons; mental ray samples internally in perceptual space automatically. In combination with Unified Sampling, correctly rendered images should be free of artifacts. However, if you are rendering a final beauty directly to an 8-bit image format, then it would benefit you to bake your color space into the render. Post operations to correct a lower-bit-depth image will introduce artifacts and banding.
- What about using a lens shader for aesthetics? When rendering for beauty, the mia_exposure_photographic lens shader is very nice, but a 2D package like Nuke or Lightroom has much more powerful tools to give you the look and color grading you desire.
- There is a framebuffer gamma setting in the Render Settings. Ignore it. Using this control will apply a gamma correction to your inputs overall and will cause undesirable artifacts.
- Output passes (diffuse, specular, etc.) are not affected by the lens shader. Correct. These passes are meant for compositing operations. As mentioned previously, these operations should be done in linear color space so that your results are correct. Then output to the desired color space at the end. Ideally these operations should be additive.
- The color picker being sRGB is a bit of an issue that complicates things; it might be worth logging a suggestion for Autodesk to further refine the linear-workflow controls and actions. Under the hood these colors are most likely floating point.
- The easiest way to decide what should be corrected: anything that will be seen as a color in the final rendered image. Bumps, displacement, cutout maps, etc. are not seen as a color in the final render and can be left alone.
- Normalized colors (range 0-1) are best for diffuse textures. In Nuke you can normalize a texture as well as change its color space (see the sketch after these notes). Emissive textures (like an HDR environment) should not be normalized; that would defeat the purpose of having the lighting information and will flatten out your lighting. The same is true of geometry light sources where you apply a texture to a light, but those textures should still be in linear color space.
- If you have a plate you are rendering with (or environment), it needs to be linearized correctly if you are going to use it in the renderer and later corrected. Otherwise you will accidentally double correct it. Maya is primarily a VFX package so it assumes you will composite your results later. It’s a best practice to hide the primary visibility of these elements so they can be added again in the compositing stage and not inside Maya.
- If you always paint a texture in sRGB space, linearize it, and then output to a different color space, there will be some difference between what you painted and the final output. The solution is to work on and view your texture in the final color space as you paint. This isn’t always easy to do in something like Photoshop unless you have a correct working color space and a calibrated monitor.
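For the normalization note above, a minimal Nuke sketch: measure the texture’s maximum once (a CurveTool set to Max Luma Pixel works), then map it to 1.0 with a Grade. The measured value and file path here are placeholders:

```python
import nuke

read = nuke.nodes.Read(file="/textures/pub/wall_diffuse.exr")

# Whitepoint maps the measured maximum to 1.0, pulling the whole
# texture into the 0-1 range without touching black.
grade = nuke.nodes.Grade()
grade.setInput(0, read)
grade["whitepoint"].setValue(1.8)  # assumed measured max; yours will differ
```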
Linear Color Workflow(s) in Maya – Part 1: Maya Tools
I previously explained the sRGB color space here: Color space and Your Screen
Now I will talk about ways to render your images correctly from inside Maya.
Renderers operate in linear color space. Their color calculations are designed such that 2+2 = 4 in a direct fashion, without applying curves, etc. to the color inputs and outputs (see the short demonstration after this list). Here are a few reasons you will need to understand this and appreciate it.
- Color-correct workflow ensures that your result is predictable.
- Color decisions can be made in each phase without major disruption, since you aren’t using your 3D package for work that 2D software is better suited to.
- Viewing your textures in the correct color space as you create them gives a consistent result when you output them: paint in Rec. 709 > linearize > output to Rec. 709 will look the same.
- Using correct, physically plausible materials (like the mia_materials) means they respond correctly to light, giving you photorealism more quickly and reducing tweaking time.
- Tweaking settings with curves baked in them is counter-intuitive.
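A quick Python demonstration of that 2+2 = 4 point, using the common gamma-2.2 shorthand for the sRGB curve: summing two equal light contributions in linear space doubles the result, while the same math on encoded values does not:

```python
def srgb_encode(x):
    return x ** (1.0 / 2.2)  # display encoding (gamma-2.2 approximation)

def srgb_decode(x):
    return x ** 2.2

a = b = 0.25  # two equal light contributions, in linear radiance

linear_sum = a + b                                          # correct physics
encoded_sum = srgb_decode(srgb_encode(a) + srgb_encode(b))  # math on curved values

print(linear_sum)   # 0.5
print(encoded_sum)  # ~1.15, more than double what it should be
```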
There are two main ways to deal with this situation inside Maya. The first is possibly the easiest. But the final solution, while more complicated, is generally preferred for reasons I will explain.
I’m going to assume a few things (I know assuming is bad, but hey, gotta start somewhere.)
1. You are using Photoshop to paint your textures. Photoshop assumes you’re painting in perceptual space (sRGB), but you will probably want to turn off color management to make sure it’s not making too many decisions for you. This is fine because you want to paint what you expect to see later.
2. You have a decent monitor that has been calibrated. CRTs have great reproduction but you have probably migrated to LCD by now. IPS monitors are best because their viewing angles are wider and color reproduction is better. Higher-end monitors like HP’s Dreamcolor Monitor will also allow greater bit depth to be displayed (30-bit color, etc) when combined with supporting software and hardware.
3. You know your destination color space. If you are generating content for most sources it’s probably sRGB again. If it’s HDTV then it’s probably rec 709 and for film and special projects (shot on film or otherwise) you can have a specific color space/LUT you need to view your images with. (Sidenote: film is often in Log(arithmic) space because of how film responds to light. The Cineon format is often used here and is well documented. Cineon File Format )
Basically:
- 1. Paint in Photoshop
- 2. Linearize your image (based on its original colorspace)
- 3. Render (this is a linear process) and view in the correct destination colorspace
- 4. Composite with floating point linear files viewing them in the correct destination colorspace
- 5. Output to the correct colorspace from the compositing program
Let’s use Maya to help us this time.
Step 1 is easy: paint your textures in Photoshop. The majority of images used for texturing are collected from other sources that are sRGB, like the internet or texture collections. HDR formats, however, are floating point, and by standard those are assumed to be in linear color space (avoid correcting them for anything other than viewing). Remember: floating point does NOT mean an image is linear; bit depth and color space are different concepts. But floating-point images are assumed to be in linear color space.
Step 2, linearize the file. Current Maya versions provide a mechanism for correcting your images so they are rendered in linear color space. Renderers (mental ray here) will assume the data you are feeding them is linear. Maya has an option in the Render Settings called “Enable Color Management”.
You have a selection of color spaces to choose from: Linear sRGB, sRGB, Linear Rec. 709, and HDTV Rec. 709, with an additional output option for CIE XYZ. (image)
An explanation can be found here: Maya Render Settings Docs
The recommended input is “sRGB non-linear. Applicable for use with most viewing arrangements. Implicit D65 white point.” This means you painted in Photoshop on a monitor and are viewing in the sRGB color space.
So far so good.
Output is Linear sRGB. This means your output will be in linear color space. This is preferred (required in order to be correct) for compositing. Compositing packages like Nuke will operate linearly when given floating point files. You will also notice the file nodes have similar controls for overrides, etc. (image)
Ok, so far the Maya controls seem to do the job. But there’s a problem. The color picker for Maya is in sRGB colorspace (despite internally being floating point). This means that red you chose just won’t do! (image)
How do you fix this? Well, sadly for now you must attach a gamma node to the color connection and apply a correction of 0.4545 (1/2.2) to the color you want, which now becomes the input color of the color picker. Now things should be linear through and through. (Applying the inverse function will flatten out the sRGB curve.) (image)
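For reference, here are the exact sRGB transfer functions from the IEC 61966-2-1 definition alongside the 1/2.2 shorthand used above; this is a sketch operating on per-channel values in 0-1:

```python
def srgb_to_linear(c):
    """Exact sRGB decoding: a small linear toe, then a 2.4 power segment."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Exact sRGB encoding, the inverse of the above."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

print(srgb_to_linear(0.5))  # ~0.2140 (exact curve)
print(0.5 ** 2.2)           # ~0.2176 (the gamma-2.2 approximation)
```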
But you have a bump texture? Textures provided as data for the shader to use, such as displacement, bumps, and maps controlling things like glossiness, can be left alone. They are not going to be visible as a color in the render; they provide data the shader uses to produce a result not directly related to color. Color management for these textures should be disabled.
Displacement maps should be floating point and therefore linear color space by default.
Be sure your Render Settings > Quality > Framebuffer is set to RGBA 16 half. (image) Also select 32-bit in the Renderview > Display menu (requires a restart). Render to the Renderview and, using Display > Color Management, choose Image Color Profile = Linear sRGB. (image) You will now view your image in the correct colorspace without making alterations to the rendered file (it will still be written as linear). This is a preview of what your image will look like when composited and output as a final to sRGB.
Now let’s recap this:
–Paint in Photoshop and save. (Your texture is going to be in sRGB format if saved to a standard format that is not floating point.)
–Enable color management in Maya as default (sRGB to Linear sRGB) and render to a floating point format, generally speaking OpenEXR RGBA 16-half. (You can render to 16-half because it is still considered a floating point format but saves space compared to 32-bit by losing some unnoticeable precision) Take care not to alter your bump and displacement, etc.
–Your images are linear and ready to composite in something like Nuke.
That’s one solution. But here’s the problem(s) I have with that solution: It’s tied to Maya. This means that your success is tied to using Maya’s mechanism even if it’s faulty or changes from one version to the next. And what if you change packages for rendering? What about those nodes reading in bump and displacement to fool with?
Well, you could attach gamma nodes everywhere and omit them for data-type textures like bumps. But why?! This not only increases your workload for every texture and color picker, but what if you forget one or fumble-thumb a setting? So let’s not go there. I’ve never quite understood that workflow. (I try to name my nodes, and all those gamma nodes become an accounting nightmare.)
So why not linearize before taking the image to Maya? Great! Maya is a 3D package. Try not to make it your color package too. There are much better pieces of software for that. (Sidenote: You can generate color profiles for Maya using a colorProfile node. More information can be found here: colorProfile Node But this may be a bit complicated for most users. And again you are tied to the internal mechanism of a single package for rendering.)
Can you make this a little easier?
You can linearize a color texture from Photoshop to Linear sRGB easily.
In Photoshop, change your image bit depth to 32-bit float (Image -> Mode) and save as EXR when you’re done. Remember that floating-point files are assumed to be linear. This means Photoshop saves a linear color space file you can use for rendering. Now you can ignore the Color Management settings on your texture nodes and in the Render Settings. View your render as you did before with the Color Management in the Renderview. You still must correct the color picker.
Now you also have a library of textures that can be rendered in any package for sRGB because they are saved correctly in Photoshop.
Photoshop -> Linear sRGB -> Render (view as sRGB)-> Composite (view as sRGB) -> Output to sRGB
But what about a project where you are rendering to a specific LUT? (Film Still) Maybe you have a project shot on film. Your color space is not sRGB. Now what?!
For the preferred workflow, look here: Part 2: The Preferred Method
. . . .
Color space and Your Screen
We live in a world of color and technology. And because technology handles color differently from how humans perceive it, we have to account for that when we work with color. Below is a brief summary of the how and why of colorspace.
sRGB or Perceptual Color space
The standard for viewing most content is called sRGB. This color space is defined by a specific chromaticity and gamma curve. Wikipedia sRGB Most viewing devices are designed with this standard in mind and even the internet has adopted this color space as the standard for viewing.
Why this one and not another one?
The original 1953 standards (NTSC) selected a camera gamma of 1/2.2. This was chosen based on assumptions about the viewing conditions of the image as well as the actual non-linear light response of the human eye, so some thought has been given to the best calculations for reproducing an image. You are also taking information captured by a camera that cannot possibly be displayed on a screen, like the actual luminance of the Sun, for example. This needs to be correctly represented to the human eye as well. An 8-bit image, for example, has only a limited amount of space (bit depth, or reproducible colors) to represent a realistic image. A perceptual adjustment based on human vision makes this possible. For a more detailed breakdown of available color depth, look here: Wikipedia Color Depth
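A tiny numeric illustration of why that perceptual adjustment matters at 8 bits: the step size near black is far finer when the codes are gamma-encoded:

```python
# Smallest step above black, out of 256 codes:
linear_step = 1 / 255          # linear encoding: ~0.39% of full brightness
gamma_step = (1 / 255) ** 2.2  # gamma-2.2 encoded code 1, decoded to linear light

print(linear_step)  # ~0.0039
print(gamma_step)   # ~0.0000051, much finer shadow gradation
```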
CRTs also have a non-linear power transfer characterized by a parameter called gamma. (CRTs used an electron “gun” to light up your pixels, and this was not a linear transfer of energy.) Wikipedia Gamma Correction
I have an LCD monitor; why do I need to deal with this?
Keeping in mind the history of a 2.2 gamma curve, there are a myriad of things that use this sRGB color space as their default. Everything from analog video to TIFF has been based on this color space as the default. Changing this to something else would cause a great deal of confusion and loss of compatibility.