Linear Color Workflow(s) in Maya – Part 1: Maya Tools

I previously explained the sRGB color space here: Color space and Your Screen

Now I will talk about ways to render your images correctly from inside Maya.

Renderers operate in linear color space. Their color calculations are designed so that 2 + 2 = 4 in a direct fashion, without gamma curves applied to the color inputs and outputs. Here are a few reasons you will need to understand and appreciate this.

  • A color-correct workflow ensures that your result is predictable.
  • Color decisions can be made in each phase without major disruption, since you aren’t using your 3D package for work that 2D software is better suited to.
  • Viewing your textures in the correct color space as you create them gives a consistent result when you output them: paint in Rec. 709 > linearize > output to Rec. 709 will look the same.
  • Physically plausible materials (like mia_material) respond correctly to light, giving you photorealism more quickly and reducing tweaking time.
  • Tweaking settings with curves baked into them is counter-intuitive.
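To see why baked-in curves break the math, compare averaging two sRGB-encoded pixel values directly against averaging them in linear space. A minimal Python sketch (the pixel values are arbitrary examples):

```python
# Piecewise sRGB transfer functions (IEC 61966-2-1).
def srgb_to_linear(v):
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v):
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

a, b = 0.2, 0.8  # two sRGB-encoded pixel values

# Wrong: average the encoded values directly.
naive = (a + b) / 2  # 0.5

# Right: decode to linear, average the light there, re-encode for display.
correct = linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) / 2)

print(round(naive, 3), round(correct, 3))  # 0.5 vs. roughly 0.6
```

Averaging real light gives a visibly brighter result than averaging the encoded values; a renderer doing its math on curve-baked inputs makes this kind of error everywhere.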

There are two main ways to deal with this situation inside Maya. The first is possibly the easiest. But the final solution, while more complicated, is generally preferred for reasons I will explain.

I’m going to assume a few things (I know assuming is bad, but hey, gotta start somewhere.)

1. You are using Photoshop to paint your textures. Photoshop assumes you’re painting in Perceptual Space (sRGB) but you will probably want to turn off color management to make sure it’s not making too many decisions for you. This is fine because you want to paint what you will expect to see later.

2. You have a decent monitor that has been calibrated. CRTs have great reproduction but you have probably migrated to LCD by now. IPS monitors are best because their viewing angles are wider and color reproduction is better. Higher-end monitors like HP’s Dreamcolor Monitor will also allow greater bit depth to be displayed (30-bit color, etc) when combined with supporting software and hardware.

3. You know your destination color space. For most content it’s probably sRGB again. If it’s HDTV then it’s probably Rec. 709, and for film and special projects (shot on film or otherwise) you may have a specific color space/LUT you need to view your images with. (Sidenote: film is often in Log(arithmic) space because of how film responds to light. The Cineon format is often used here and is well documented: Cineon File Format.)


Basically:

  1. Paint in Photoshop
  2. Linearize your image (based on your destination colorspace)
  3. Render (this is a linear process) and view in the correct destination colorspace
  4. Composite with floating point linear files, viewing them in the correct destination colorspace
  5. Output to the correct colorspace from the compositing program

Let’s use Maya to help us this time.

Step 1 is easy: paint your textures in Photoshop. Most images used for texturing are collected from other sources that are sRGB, like the Internet or texture collections. HDR formats, however, are floating point, and by convention those are assumed to be in linear color space (avoid correcting them for anything other than viewing). Remember: floating point does NOT automatically mean linear; bit depth and color space are different concepts. But floating point images are assumed to be in linear color space.

Step 2, linearize the file. Current Maya versions provide a mechanism for correcting your images to be rendered in linear color space. The renderer (mental ray here) will assume the data you feed it is linear. Maya has an option in the Render Settings called “Enable Color Management”.

Maya Color Management

Maya Render Settings

You have a selection of color spaces to choose from: Linear sRGB, sRGB, Linear Rec. 709, and HDTV Rec. 709, plus CIE XYZ output. (image)

An explanation can be found here:  Maya Render Settings Docs

The recommended input profile is “sRGB non-linear. Applicable for use with most viewing arrangements. Implicit D65 white point.” This means you painted in Photoshop on a monitor and are viewing in a color space of sRGB.

So far so good.

Output is Linear sRGB. This means your output will be in linear color space.  This is preferred (required in order to be correct) for compositing. Compositing packages like Nuke will operate linearly when given floating point files. You will also notice the file nodes have similar controls for overrides, etc. (image)

File Node Overrides

Ok, so far the Maya controls seem to do the job. But there’s a problem. The color picker for Maya is in sRGB colorspace (despite internally being floating point). This means that red you chose just won’t do! (image)

Maya Color Picker

How do you fix this? Well, sadly, for now you must attach a gamma node to the color connection and apply a correction of 0.4545 (1/2.2) to the color you picked. Now things should be linear through and through. (Applying the inverse function flattens out the sRGB curve.) (image)

Maya Gamma Utility
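For reference, the arithmetic of that fix can be sketched in Python: the net effect of a 0.4545 gamma correction is raising each channel to roughly 2.2, flattening the sRGB curve (the picked color below is just an example swatch):

```python
# A "mid red" picked in Maya's sRGB color picker (example values).
picked = (0.8, 0.2, 0.2)

# The gamma node's 0.4545 correction amounts to a 1 / 0.4545 ≈ 2.2 exponent,
# which converts the picked sRGB value to the linear value the renderer needs.
linearized = tuple(c ** 2.2 for c in picked)

print([round(c, 4) for c in linearized])
```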

But what about bump textures? Textures that provide data for the shader, like displacement, bump, and maps controlling things like glossiness, can be left alone. They are not visible as color in the render; they supply data the shader uses to produce a result not directly related to color. Color management for these textures should be negated.

Displacement maps should be floating point and therefore linear color space by default.

Be sure your Render Settings > Quality > Framebuffer is set to RGBA 16 half. (image) Also select 32-bit in the Renderview > Display menu (requires restart). Render to the Renderview and, using Display > Color Management, choose Image Color Profile = Linear sRGB. (image) You will now view your image in the correct colorspace without altering the rendered file (it will still be written as linear). This is a preview of what your image will look like when composited and output as a final to sRGB.

Render View Color Manager

Now let’s recap this:

–Paint in Photoshop and save. (Your texture is going to be in sRGB format if saved to a standard format that is not floating point.)

–Enable color management in Maya as default (sRGB to Linear sRGB) and render to a floating point format, generally speaking OpenEXR RGBA 16-half. (You can render to 16-half because it is still considered a floating point format, but it saves space compared to 32-bit by losing some unnoticeable precision.) Take care not to alter your bump and displacement maps, etc.

Render Settings Quality Tab
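The precision given up by 16-half is easy to quantify: Python’s struct module can round-trip a value through half-float storage (the sample value is arbitrary):

```python
import struct

# Half floats carry a 10-bit mantissa, so the relative error of storing a
# normalized value is on the order of 2 ** -11.
v = 0.3183  # an arbitrary linear pixel value
half = struct.unpack('<e', struct.pack('<e', v))[0]

print(half, abs(half - v) / v)  # relative error well under 0.1%
```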

–Your images are linear and ready to composite in something like Nuke.

That’s one solution. But here are the problems I have with it: it’s tied to Maya. This means your success depends on Maya’s mechanism even if it’s faulty or changes from one version to the next. And what if you change rendering packages? What about those nodes reading in bump and displacement to fool with?

Well, you can use the gamma nodes attached to nodes and omit them for data type textures like bumps. But why?! This not only increases your workload for every texture and color picker, but what if you forget one or fumble thumb a setting? So let’s not go there. I’ve never quite understood that workflow. (I try to name my nodes and all those gamma nodes become an accounting nightmare.)

So why not linearize before taking the image to Maya? Great! Maya is a 3D package. Try not to make it your color package too. There are much better pieces of software for that. (Sidenote: You can generate color profiles for Maya using a colorProfile node. More information can be found here: colorProfile Node But this may be a bit complicated for most users. And again you are tied to the internal mechanism of a single package for rendering.)

Can you make this a little easier?

You can linearize a color texture from Photoshop to Linear sRGB easily.

In Photoshop you should change your image bit depth to 32-bit float (Image > Mode) and save as EXR when you’re done. Remember that floating point files are assumed to be linear. This means Photoshop saves a linear color space file you can use for rendering. Now you can ignore the Color Management on your texture nodes and Render Settings. View your render as before with the Color Management in the Renderview. You still must correct the color picker.

Now you also have a library of textures that can be rendered in any package for sRGB because they are saved correctly in Photoshop.

Photoshop -> Linear sRGB -> Render (view as sRGB)-> Composite (view as sRGB) -> Output to sRGB

But what about a project where you are rendering to a specific LUT? (Film Still) Maybe you have a project shot on film. Your color space is not sRGB. Now what?!

For the preferred workflow, look here: Part 2: The Preferred Method

. . . .

About David

I am a VFX artist that specializes in Lighting and Rendering. I spend a fair amount of my time supplying clients with artistic solutions as well as technology solutions. With a background in fine art and technical animation training, I strive to bridge the divide between the artist and technologist.

Posted on November 23, 2011, in colorspace, maya. 73 Comments.

  1. Hey David, I just wanted to report that your first technique is apparently broken in Maya 2012. I was using this technique in 2011 without any problems as far as I remember, but I recently tried it in Maya 2012, and I get bad compression artifacts on my textures. If I make the gamma nodes manually, it’s fine; if I use EXRs, it’s also fine. But the Maya way… is now wrong. I tried on different PCs, can you confirm that or is it just me?

    • I almost never use that technique but somewhere I was earlier was not having problems with 2012. Are you on a service pack? I’ll see what happens here.

      • I used the Subscription Pack and SP1 but got the same problem. I tried rendering an old project from 2011 with 2012 and didn’t see any of these artifacts. Actually I think it was specific to one of my textures. I tried this same texture in different projects and scenes and had the same bug, while the others were fine. But why would it do that only with the Maya color management and not with gamma correct nodes or EXRs? No idea. I don’t have the time to investigate at the moment unfortunately. I already switched my pipeline to the “EXR 32-bit from Photoshop” way and it works great, as I’m only working in sRGB.

  2. So you narrowed this issue to a specific texture? What type of texture is this?

    • I made some quick tests tonight, it’s only visible in the dark parts of the textures (colors with a value around 20%). That’s maybe why I couldn’t see it on other textures. I tried png, tiff, tga, jpg, and get the same banding effect. Also tried a lambert instead of a mia_material_x.

      So I think I can say that it’s related to maya color management.
      It looks like the render view is actually not in 32 bits mode, but it is. And the exr I get from a batch render has the same problem. You can easily try it by yourself with just a light or a physical sky (I didn’t forget to put the gamma back to 1 in the camera lens). Or I can send you a test scene if you want.

    • dot87//chafouin

      I know this is a little weird but dot87 and chafouin are the same person. I’m in a nickname transition period, sorry about that 🙂

      So I already read your message but this is not the problem I’m having. Tomorrow I’ll be posting images and a scene so you guys can see the difference.

  3. Hi Bitter, did you have the opportunity to take a look at the scene I uploaded on cgtalk? http://forums.cgsociety.org/showthread.php?f=87&t=1022895&page=2&pp=15

    • I did awhile back. I think if the rendered images are correct then there is something wrong with how the Maya Render View displays them. A bug should be reported to Autodesk for that.

  4. Wow. Your posts are well written and very helpful. I’ve been learning a lot!

  5. I have to say, I’m really puzzled by this workflow. I’ve never used linear EXRs for textures, just the old gamma correct method, and never had a problem. But tonight, after six hours of playing with exported EXRs from Nuke AND Photoshop, this just doesn’t seem to work. The EXRs look awful (and by awful I mean dis-freakin-gusting) no matter what viewport tonemap or whatever I apply to them. Of course I first did everything letter by letter (applying nothing special to anything, just rendering EXRs from Nuke and disabling color management except for the viewport), but it doesn’t work, no matter what. I would really like to see an actual scene with EXRs as diffuse textures and this color workflow working out, because, at least in my Maya 2012, the only working workflows are either color management with sRGB textures or plain old gamma correct nodes.
    Thanks for the info though, the blog is really top notch.
    Regards, Martin

    • Hi Martin,

      I’m curious what exactly the issue is you’re seeing. This is the general method employed at several studios as well as our own with pretty good success. (The method being the one found in Part 2.)

      Do you have a simple example you can upload somewhere?

      • problem is it looks like the textures are not being interpreted correctly by Maya, they look awfully color squashed (ultra-saturated and dark), like when you double gamma correct a texture, but tripled or more. I will post a sample and recheck everything as soon as I finish a freelance job, thanks!!

  6. Hi David,

    Thank you for what you guys are doing here and other places for mental ray users.

    Just to get this correct in my head and make sure I’m not making a rookie error….

    Colour textures etc for a quick LWF outside of Maya in Photoshop = create texture in 8bit mode until satisfied. Convert to 32bit mode and save as an exr for use in maya, without colour management. I’m good with that, if that is correct.

    Bump/gloss/data created in 8-bit should not be converted to 32-bit float and then saved as an EXR – otherwise the painted bump depth (for example) would be altered by the conversion and would not give the painted values when used in Maya. E.g. I paint 50 percent grey in 8-bit/sRGB space in Photoshop for 0.5 bump depth, but after converting to 32-bit float in Photoshop this is more like 0.22 bump in Maya.

    If you were painting a bump map or making a bump map from photograph in photoshop how would you approach this. Would you work in 8bit and save the texture in 8bit as well? or do you have a different approach?

    Thanks again for your time

    Best,

    James

    • I would still save as an EXR when possible because of the benefits of tiling (caching) and filtering. But for data textures you would like the 0.5 painted to stay 0.5 because it’s not going to be corrected visually later. (There’s no curve applied to that when you view the rendered image in sRGB because it’s not seen as a color, only an effect on the surface.)

      I’m not sure about Photoshop, I would have to look, but I know I can save out an EXR in Nuke with the sRGB intact.

  7. I tried your workflow of using color management in Maya to input sRGB and output as Linear, and I get really bad results. Even after setting up the right gamma color profile on all the colors and textures in my scene, and switching the post display settings after the render to linear, I get a render where my subject just doesn’t look like it’s being lit properly by the light source. It usually appears too dark, whereas the parts right in front of the light look blown out. It defeats the whole point of the gamma option in the first place, which is to see more of the elements in your scene that are being lit by the light source.

    I did however try switching the input and output around, to Linear input and sRGB output, and now my scene renders the way it should: the main subject of my composition is lit properly, while objects further into the background away from the light seem to be lit and not too dark. And you don’t even have to touch the display settings of your render in post at all. All I am saying is, shouldn’t the right workflow be the other way around? Here’s a tutorial video I followed which states the exact opposite of yours, http://cg.tutsplus.com/tutorials/autodesk-maya/correctly-setting-up-a-linear-workflow-in-maya/

    • If your textures are sRGB from either the internet or your paint program (almost always true for Photoshop) then your *input* should be sRGB. This tells mental ray that your textures are currently in sRGB colorspace. When you export to .mi file you will see the texture tagged as: micsSRGBg

      Your output colorspace is Linear sRGB. In the exported file it is: colorspace micsSRGB (notice no little ‘g’ at the end). This is necessary for correct linear workflow in most compositing packages that expect linear data in EXR files.

      You should then view as sRGB in the viewer. (Taking the linear data and viewing it as sRGB for your monitor unless you have a different destination colorspace like rec 709 or a film lut, etc.)

      If you do as you have done, Linear sRGB as input and sRGB as output, you are telling the renderer that your textures are *already* linearized and that you want to output a non-linear image, sRGB with the gamma 2.2 baked in. This is correct *if*:

      1. Your textures are already linearized
      2. You are rendering to a low bit-depth image and do not expect to composite later, therefore baking in the gamma

      If this looks correct in your viewer it is either because your textures are indeed already linear and you are viewing them expecting sRGB or perhaps you just prefer that output artistically.

      • Yes, that is the reason why I chose to turn off the mental ray shape light: I happen to like the sharper/less scattered shadows I get from it better than if I used the shape light. The banding doesn’t show much at all under gamma 2.2 for me either, but it will show more in a dimmer light setting. I could accept the banding in my renders, but it’s just my personal interest to want to know if I could get rid of it entirely through my settings, but I guess not.

        Thanks for all your help and for taking the time; I learned a lot here. Looking forward to Corey or Brenton’s reply on the Python script. To confirm, I do have a .pyc that’s been created inside my scripts folder.

    • In looking at the tutorial, he is using the Color Management in the render globals as part of the display. This is incorrect.

      There is a separate color display control in the render view for viewing linear images.

      You can find this under: Render View -> Display -> Color Management

      Be sure to also select 32-bit float in the Display menu and restart Maya.

      He is also reversing the desired workflow for textures. His workflow requires that you globally tell Maya everything is linear and then go file by file and reverse that to sRGB. In fact, you should tell Maya everything globally is sRGB and skip the file by file step. This is much easier. I would then only negate the global sRGB setting on mask, displacement, and bump textures.

  8. I see what you mean. And yes, I have gone step by step and linearized all my colors and textures; that is why I am getting correct results even though my globals have the reverse of your setting, which is Linear input, sRGB output. The thing is, when I try doing it as you described in your tutorial, I just get darker results in how my textures react with the light and gamma in the final render (with the post display output of the image set as sRGB). The result ends up with the main subject of the render still too dark compared to the light source illuminating the scene.

    I will send you a link of what i mean to illustrate to you the results i am getting.

    • Might be even easier if you have a couple simple example files you can upload somewhere.

      • have a look at this link here, http://konginchains.files.wordpress.com/2012/10/why-linear-to-srgb-is-better_.jpg

        As you can see, clearly I am getting better results with Linear input, sRGB output in my render global color management settings. Look how my character (main subject) responds to the light; it looks like he is being illuminated more accurately, and specular highlights are responding better to the intensity of the light. And I didn’t even need a post display output on the image rendered in Maya at all. I only had to change the texture of the character to read as sRGB (just like in the tutorial video I linked), so that the texture won’t look blown out when rendered under Linear in the global settings. I am using Maya 2012 by the way, and the textures I used are painted in PS as 8-bit TGA files. The final rendered image I output is an 8-bit JPG.

        I cannot, no matter how many settings I change, get the same satisfying results with sRGB input, Linear output in my render global color management settings.

        Perhaps the workflow has changed for this version of the software.

      • I see that as part of your workflow you linearized those sRGB textures in Nuke before importing them into Maya. Maybe that’s the reason my renders don’t look correct with your workflow, since I am importing my sRGB textures into Maya as sRGB and then rendering as sRGB, with the post display settings of my render as Linear input / sRGB output.

        Perhaps the linearizing of textures inside of Nuke is better than how Maya does it internally, but unfortunately I don’t use Nuke. Would you know of a way to linearize 8-bit TGA sRGB textures within Photoshop?

      • Ah, I thought you mentioned linearizing your files before using them in Maya.

        In Photoshop you can convert them to 32-bit images and then save as OpenEXR. This linearizes them for rendering. I would then recommend using imf_copy to cache and mipmap them: imf_copy -r -p -k zip Original.exr NewTexture.exr

        I would have to look a little further into the color profile settings to see if there’s a major difference for workflows that should be similar. mental ray uses the micSRGBg as the colorprofile for incoming sRGB textures. This might be slightly different than what Nuke or Photoshop use to linearize them. In many cases consistency is key as opposed to a specific approach, as long as the approach is similarly correct for rendering in linear space.
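For illustration, the gap between a pure 2.2 power curve and the piecewise sRGB function is largest in the shadows; a quick sketch (the sample values are arbitrary):

```python
# Piecewise sRGB decoding (IEC 61966-2-1) vs. a pure 2.2 gamma.
def srgb_to_linear(v):
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

for v in (0.02, 0.1, 0.5):
    print(v, round(srgb_to_linear(v), 5), round(v ** 2.2, 5))

# The two agree closely in midtones and highlights but diverge near black,
# which is where a profile mismatch between packages tends to show up.
```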

      • Thanks for the reply. It works now! Now I am getting the intended results with a linear to sRGB workflow. I finally figured out the problem: the bump map was still sRGB when it was supposed to be linear, which is why, no matter what my light was, the character was still lit too dark. I didn’t even need to convert my 8-bit TGAs to EXRs anymore. If I may ask, why would you need the textures mipmapped with that command you gave?

        The final problem I am having, though, is banding in my render wherever a pure color takes up space in my composition, like the green floor my character is standing on. See this pic here, http://i1124.photobucket.com/albums/l567/Jinian15/bandingprob.jpg

        Take note, I am still getting this even with 32-bit floating point enabled in my render display settings, and my framebuffer has dithering/premultiply on under RGBA 16-bit (half). My anti-aliasing setting is already Production: Fine Trace with jitter on. The banding only shows up in very dim lighting where not much of the room is illuminated, like before I set my image color profile to Linear in my render display settings.

      • Mipmapping and caching the textures can be used to improve both memory management and filtering of textures, to keep them from aliasing in an animation or a large still frame. You should also be able to render with a large number of textures when they are cached correctly.

        I don’t think dithering has an effect on floating point images.

        What types of lights are you using? We did find that the Autodesk area light shader can cause banding. A solution was to use Unified Sampling and a more brute-force approach to rendering. Use 4-6 samples for your area lights’ High Samples. Allow Unified Sampling to do the rest. We had clean results then. I would suggest using Unified Sampling for all rendering in Maya 2013 and beyond.

        Note that this is only really a problem when the area lights affect a flat-colored area. More textured and complex scenes do not expose this problem.

      • The problem with the banding is the lack of dithering. When rendering 8-bit images I believe dithering is automatically applied (this is the same when viewing images in Nuke, by default at least), and After Effects probably does the same thing, which is why you don’t see a problem when rendering from After Effects.

        Your floating point images are fine, there is no banding in them; the issue is that when those images are converted to 8-bit to display on your screen, you will see banding, especially in soft dark gradients. It’s really easy to fix – you just need to add dithering (noise) to your image. In Nuke there is a dither node for this; I always add one to the end of the comp. It’s subtle enough that you can’t actually see the noise, but the banding will magically disappear.

      • Also, just to add to that, the banding is really just a rounding issue from converting floating point images to 8bit. The dithering introduces a small amount of error into the image which hides the problem.

        http://en.wikipedia.org/wiki/Dither
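A toy sketch of that rounding problem and the dither fix (the ramp values are arbitrary):

```python
import random

def quantize(v):
    """Convert a 0..1 float to an 8-bit code."""
    return round(v * 255)

# A very soft dark ramp: 100 samples from 0.100 to 0.102.
gradient = [0.10 + 0.002 * i / 99 for i in range(100)]

# Plain quantization collapses the whole ramp into a single 8-bit code
# (a visible band), because the ramp's step is smaller than one code.
banded = [quantize(v) for v in gradient]

# Adding up to half a code of noise before quantizing spreads the codes out,
# so the average still tracks the ramp and no hard band edge appears.
random.seed(1)
dithered = [quantize(v + random.uniform(-0.5, 0.5) / 255) for v in gradient]

print(len(set(banded)), len(set(dithered)))
```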

      • Yes you are correct, I am using an area light, and my scene is really dim. The problem only occurs on monocolored objects that take up a lot of screen space in my render, like the green floor. If I put some textures on there, however, the banding disappears.

        What Unified Sampling are you talking about, and brute force? Currently I don’t see a Unified Sampling option in Maya 2012, in either the render settings or my light settings. I don’t have Maya 2013 yet. My current render display settings already have the framebuffer at RGBA 16-bit (half) and are set to 32-bit floating display. Would using any of the mipmapping and filtering methods work to reduce the banding?

        @ Corey: unfortunately, I don’t have Nuke, and I was wondering if there was a similar plugin for Photoshop to post-correct your renders and remove that banding by replacing it with noise. Currently, if I render my images to RGBA 8-bit, the banding gets replaced with noise, but the results are much less attractive than a 32-bit floating point display with banding in Maya 2012.

      • You can find more on using Unified Sampling here: Unified Sampling for the Artist I would not use it in 2012 if you are rendering to passes since there was a bug with filtering. Since then it was fixed for Maya 2013.

        As Corey mentions it could be the display showing the banding as well. There’s only so much information the display will show you as opposed to the data in the floating point file. Unified Sampling adds the noise to the area light shading naturally because of how it samples the scene.

      • I am currently studying and trying out Unified Sampling, but it just doesn’t seem to take away the banding no matter how high I increase the quality or sampling. Also, there is no difference at all between unified and adaptive sampling when it comes to reducing the banding; it doesn’t replace it with noise like you stated. I’m sure it is not just my display, since my monitor is properly calibrated and the banding only appears in dark, dimly lit scenes with smooth objects of a pure monocolor and no texture. Raising the exposure of my image takes away the banding but also blows the highlights way off value. I was wondering if I could just use a Photoshop or After Effects filter to get rid of this banding, if you know of any?

        I have already set my render display settings to 32-bit floating point, and have looked at the renders in Photoshop, both as linear 32-bit EXRs as well as 8-bit TIFFs and JPGs; all have the banding in them.

      • A few things here:

        1. What are your Area Light settings? Reduce them to a low number. Anywhere from 1 to 8 for High Samples and use Unified Sampling. If your area light settings have not changed, then Unified cannot do more of the work for you.

        2. Since you mentioned combining different UIs, I cannot be sure your Unified Sampling is really on. Look in the options block as it prints when your render starts.

        3. Even with a calibrated monitor, you are still looking at the image with a lower precision as defined by the monitor’s capability.

      • Try making a new layer in photoshop, fill it with black, add monochrome noise to it, set the blending mode to add and starting with the layer opacity at 0, slowly increase it till the banding disappears. Just make sure it’s in 32bit mode. I haven’t tried this myself, but should be similar to what I do in Nuke.

      • Thanks for the prompt reply David.
        My area light settings are set with a decay rate quadratic, 1500 intensity. see more of my settings in this pic here.

        As for turning it down to between 1 and 8 for the area light, that would give me a completely black render, as that is too low an intensity for a quadratic decay on the area light. But I tried a setting around that number with no decay on the light, and it still gives me the same banding that the unified settings failed to sample noise onto. I did try setting the quality of my unified settings to 2-8, with min 1-25 and max 50-100.

        I took your advice and ditched the “enjoy mental ray stringoptions” by deleting it from my scripts folder, and just sided with your UI. This did finally activate a unified settings string in my miDefaultOptions, but the results were still the same. I am certain my unified settings are working, since the calculations and sampling time differ from when I try adaptive sampling. Odd thing is, I could get just as clean a render using adaptive sampling as with unified, but about twice as fast. Isn’t Unified Sampling supposed to be faster?

        By the way i am not using mental core, just curious as to the options there compared to the new UI you guys developed.

        @ Corey:
        Thanks for that suggestion.
        I tried it and it works; it definitely helped a lot in fading that banding away, but not completely. Funny thing is, I accidentally copied my linear 32-bit render into an sRGB 8-bit document in Photoshop, and instantly my banding partially faded on its own. It’s a technique I’d probably want to use regularly from now on.

      • Ah, you’re not actually using the mental ray parameters of the light. Just shadow rays. That’s the way you would use it in the old Maya days.

        Take a look here for better examples: Area Lights 101

        It’s possible you’re still using some settings that are deprecated. And 40 shadow rays for any light can quickly become expensive. Unified Sampling speed decreases in many cases where your local samples are increased. Unified Sampling is designed to intelligently pick away at the scene, occasionally finding areas where pounding a pixel will remove noise. But in doing so you need lower samples, not higher, to let Unified make a better decision on when to stop.

        I would also observe that this scene isn’t very complex right now. Unified render times are less beneficial for this. Scenes you may have avoided doing before because of complexity and render time should now be easier with Unified Sampling. So take a leap of faith and try a few complex scenes you’ve shied away from, after learning some more of the basics of how it works.

        I recently rendered a scene (that may be released soon, I will share if I remember later) with upwards of 40 area lights and animation in about 40-50 minutes a frame at 1080HD resolution.

      • I don’t know exactly which settings I am using that may be considered deprecated. But if not using the light shape of the area light is what you are talking about (like in the tutorial you linked me to), I did initially use it. I just happened to like the shadows cast with that setting disabled better, which is why I turned it off. I didn’t really get any better a render using it compared to using no light shape at all.

        Overall, I don’t really see anything from the tutorial that I did differently. Regardless, I am still unable to remove the banding, but I guess Corey’s noise trick in Photoshop will do for now.

        I am still wondering about that miUpdateStringOptions.py script that is part of the mr-rendersettings .2 package, though. This is the only script I am unsure was really implemented when I installed my scripts, and I wonder whether calling it in Maya would have anything to do with the performance and effectiveness of my Unified Sampling settings. So far it is inside my scripts directory with all the rest of the MEL scripts of your new UI. Is there a command I can enter in Python to call it, like the miDefaultOptions and the enjoy mental ray string options I have now excluded?

      • This image on my monitor does not appear to have significant banding when viewed through imf_disp at gamma 2.2: Shadow Area Light

        A couple of things I notice:
        1. When not using the mental ray shape, the shadow shows in the Final Gather prepass; it seems like the shape bug still exists.
        2. If I do not use the mental ray shape, the shadow does not correctly soften the further away it gets from the cylinder. That’s incorrect, but possibly useful as a hack for something. This is why I recommend using it, among other reasons.

        For comparison, the non-mental ray shape light: Use shape off

        Corey or Brenton can explain the operation of miUpdateStringOptions.py. You may find that after running Maya there will be a .pyc in the folder.

      • Yes, that is why I chose to turn off the mental ray light shape: I happen to like the sharper, less scattered shadows I get without it. The banding doesn’t show much at all under gamma 2.2 for me either, but it shows more in dimmer lighting. I could accept the banding in my renders; it was just my personal interest to know whether I could get rid of it entirely through my settings, but I guess not.

        Thanks for all your help and for taking the time; I learned a lot here. Looking forward to Corey or Brenton’s reply on the Python script. To confirm, I do have a .pyc that was created inside my scripts folder.

      • Also, I am going to try the free trial of MentalCore and may eventually buy it. I was wondering, do you get free updates when a new version of the plugin comes out in the future?

      • I’m not sure. You can try emailing them here: sales[at]core-cg.com

        Replace the [at] with @. I’m trying to keep auto-spam from grabbing the address. 😉

      • Sorry to have to bring this up only now, but I just decided to try this mipmapping today.

        From your last post you mentioned, “In Photoshop you can convert them to 32-bit images and then save as OpenEXR. This linearizes them for rendering. I would then recommend using imf_copy to cache and mipmap them: imf_copy -r -p -k zip Original.exr NewTexture.exr”

        Where do I input this command? I tried it in both MEL and Python and it didn’t do anything to convert my character’s EXR texture file.

        Thanks.

      • imf_copy is an executable that comes with mental ray. In Maya it’s included in the mental ray folder. You need to have this in your command path for whatever OS you’re on. In Windows it needs to be listed in the Path environment variable.

        C:\Program Files\Autodesk\Maya2013\mentalray\bin\imf_copy.exe

        You use the command from the terminal or command prompt.

        *DANGER Will Robinson!* Alternatively, if you feel a little braver, you can alter the commands in the Maya Python script that automatically converts textures into .map files (in the preferences menu for converting to an optimized format) and make them EXRs instead, using the command listed above. It’s found at the path below:

        C:\Program Files\Autodesk\Maya2013\mentalray\scripts\mentalray\textureFileConversionUtils.py
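        As a rough sketch of what such a change amounts to (the helper name and output-naming convention here are hypothetical, not the actual contents of textureFileConversionUtils.py), you would build and run the imf_copy command line from Python instead of the .map conversion:

```python
import os
import subprocess

def build_exr_convert_cmd(src, dst=None):
    """Build an imf_copy command line that mipmaps (-r), tiles (-p),
    and zip-compresses (-k zip) a texture into an OpenEXR.
    Hypothetical helper -- not the actual Maya script contents."""
    if dst is None:
        base, _ = os.path.splitext(src)
        dst = base + "_mip.exr"  # illustrative naming convention
    # Mirrors: imf_copy -r -p -k zip Original.exr NewTexture.exr
    return ["imf_copy", "-r", "-p", "-k", "zip", src, dst]

def convert(src):
    """Run the conversion; imf_copy must be on your PATH."""
    subprocess.check_call(build_exr_convert_cmd(src))
```

The same command works verbatim from a command prompt, which is the simplest way to test it before touching the Maya script.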

      • Alright, now I got it to work. It copied NewTexture.exr into the same path, but now it’s about three times the size of Original.exr. I don’t see a difference at all when I render, so I am not quite sure what that new EXR is supposed to do differently. But I remember you stated mipmapping is a procedure where the renderer stops aliasing in areas where the texture is too far from or too small in the camera’s view, to prevent sizzling or swimming in the textures, right? But isn’t that what a .map file is supposed to do instead?

      • It’s larger because it also contains the mipmap pyramids.

        It operates the same as a .map file, but in an effort to standardize file usage, EXR is the best format for most anything, be it textures or output files for compositing later. Caching is also handled natively for EXRs. And since EXR is a universally accepted format, you can use these files in other software or renderers without conversion.
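        To illustrate the pyramid overhead with a small sketch (plain Python, no Maya needed): summing the mipmap levels of a square texture converges to roughly a third extra on top of the base image, so the remainder of any size difference typically comes from compression settings and data-type changes rather than the pyramid itself.

```python
def pyramid_pixels(size):
    """Total pixel count of a full mipmap pyramid for a size x size texture,
    halving each level down to 1x1."""
    total = 0
    while size >= 1:
        total += size * size
        size //= 2
    return total

base = 1024 * 1024
total = pyramid_pixels(1024)
# The pyramid levels add roughly 1/3 on top of the base image.
print(total / base)  # ~1.333
```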

  9. Hey David, for some reason I’m getting some banding issues while viewing in the Maya Render View, and also when I bring it into After Effects I still see the banding. But if I render out a still in After Effects as a JPG, no banding. Also when rendering as a QuickTime Animation codec, no banding. Any thoughts?

    • It’s possible you are rendering out to an 8-bit file and then viewing it with color correction.

      Set the framebuffer to RGBA (Half) 16-bit. This is floating point; use the EXR file type. In the Color Management of the Render View, make sure it says 32-bit. If you render to 8-bit, you should bake in your colorspace with either a lens shader (like the exposure node) or use the Maya Color Management in the Common tab to specify your output color space. Make sure the Render View Color Management selection is correct or you’ll be looking at it wrong.
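      As a quick illustration of why the output bit depth matters here (a plain-Python sketch, independent of Maya): quantizing a smooth gradient to 8 bits leaves at most 256 distinct levels, and those visible steps are exactly what banding is, while a half-float buffer preserves far finer gradations.

```python
def quantize_8bit(values):
    """Quantize normalized [0, 1] values to 8-bit levels (0-255 steps)."""
    return [round(v * 255) / 255.0 for v in values]

# A smooth gradient with 4096 distinct values...
gradient = [i / 4095.0 for i in range(4096)]
# ...collapses to at most 256 levels after 8-bit quantization,
# which is where the visible bands come from.
levels = len(set(quantize_8bit(gradient)))
print(levels)  # 256
```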

  10. The article says to get a decent monitor. As much as I’d like to buy a $2K monitor, it’s not going to happen. Besides, the refresh rates of those monitors are horrible, to say the least; I don’t game, but I don’t want to see ghosting.

    With that in mind, will a typical better-quality LCD TFT monitor suffice for calibration? This issue feels like a swing: you read “calibrate your monitor,” and then what appears to be in small print (but isn’t) is “make sure you have a decent monitor,” which makes me doubt whether to calibrate my monitor at all. I feel like I’m going insane. Do I, don’t I, that is the question 🙂

    • Decent means you should at least shop smartly: look for IPS screens and as good a color response as you can afford, as opposed to grabbing just anything off the shelf. I don’t own Dreamcolors, but I do have a couple of decent, calibrated IPS flatscreens.

      • What if you don’t own any IPS flatscreens? Should you still calibrate?

      • I would try and keep it as accurate as possible. But keep in mind your destination, it might be that your work will land in a lot of different places and not everyone has a calibrated TV. 🙂 We work in rec 709 mostly.

  11. In Photoshop you can convert them to 32-bit images and then save as OpenEXR. This linearizes them for rendering. I would then recommend using imf_copy to cache and mipmap them: imf_copy -r -p -k zip Original.exr NewTexture.exr

    – Didn’t you say that OpenEXR files are mipmapped? Then you say to convert them to mipmapped; the only difference is that it pyramids the images, that’s all.

    • EXR files are not mipmapped unless you tell them to be mipmapped (pyramid) using a utility like imf_copy.

      Photoshop will save them as linear but it will not mipmap or tile them.

      The above explains how to do both at once using imf_copy.

      • Argh, there is absolutely no way in Ps to save an OpenEXR with a pyramid embedded in the image. It would be nice if imf_copy could be run as part of the save in Ps, but that’s probably wishful thinking 🙂

  12. I would try and keep it as accurate as possible. But keep in mind your destination, it might be that your work will land in a lot of different places and not everyone has a calibrated TV. 🙂 We work in rec 709 mostly.

    – To conclude you recommend to calibrate your monitor regardless what type of monitor.

    • Yep. It certainly doesn’t hurt. For all you know, it could be subtly too yellow or something. A good calibration will help you be sure.

      • I want to first say that my linear workflow has been completely different from what I have read on this blog. What I read here is much easier to set up than my current linear workflow. I have a few questions. First, I created a linear profile in Photoshop from this web page:

        http://fnordware.blogspot.co.uk/2008/05/converting-to-32bpc-in-photoshop.html

        As the web page states, you have to assign the linear profile to the image; then you can convert the image to 32-bit. This makes the image brighter and, I assume, puts it in linear space, correct (similar to my old linear workflow)? Are all displacement maps, bump maps, and reflection maps in linear space as well?

        When the image is brought into a 3D package (Maya, Max, Softimage), it gets darkened as a gamma of 2.2 is applied to the images.

        When painting in Ps, if I paint in linear I have to create a gamma 2.2 profile in Ps (which I have) to preview how mental ray will render the image, arrghh.

        Also, David, I read in one of the replies that you don’t use a linear space; instead you have Ps set to HDTV (Rec. 709). Does this work the same?

      • I want to mention that Softimage has a built-in feature to convert textures to memory maps (pyramids). Unfortunately my current project’s textures don’t use OpenEXR to enhance the color; the next project will 🙂

      • Maya does as well; I usually alter it locally to generate tiled/mipmapped EXRs instead. It’s a Python script that ships with Maya.

  13. I don’t know if Softimage can do the same thing to OpenEXR, I should look into that 🙂

  14. David, what is the difference between a linear workflow and HDTV (Rec. 709), or are they one and the same?

    If you check your RGB values in sRGB and all the values are 1.0, does that mean it’s in a linear color space? But if a value is less than 1.0, as in 0.392, is it in a non-linear color space?

    • HDTV is the destination colorspace for what we do at The Mill. Meaning we transform our linear renderer output to rec 709 colorspace for delivery.

      Not sure what you mean by “checking the values”. You can’t necessarily run a colorpicker over an image to see what colorspace it is in or what gamma has been applied.

      • Reading the values for a color within RGB/HSV/HLS in Maya, unless your color is gamma corrected, as can be done in Softimage. If the values are less than 1.0, then the values are not in linear gamma. I can’t find the video link, arghhh.
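        To illustrate why a picker alone can’t settle this (a plain-Python sketch using a simple 2.2 power curve as a stand-in for the actual sRGB/Rec. 709 transfer functions): 0.0 and 1.0 map to themselves in both spaces, so only mid-range values reveal whether an encoding curve has been applied, and even then the number itself doesn’t tell you which space it is in.

```python
def encode_gamma(linear, gamma=2.2):
    """Apply a power-law encode (a stand-in for the real sRGB /
    Rec. 709 curves, which are piecewise rather than a pure power)."""
    return linear ** (1.0 / gamma)

# Endpoints are identical in linear and gamma-encoded space...
print(encode_gamma(0.0), encode_gamma(1.0))  # 0.0 1.0
# ...but mid-grey differs noticeably: 0.5 linear encodes to ~0.73.
print(round(encode_gamma(0.5), 2))  # 0.73
```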

