Category Archives: Optimization

“My render times are high. How do I fix that?”

If you don’t have access to an infinite render farm, chances are you’re concerned about render times.

With a certain amount of flexibility and exposed controls you may be tempted to try lots of different things or even combine techniques seen on the internet. In some cases this can be useful and in others this combination doesn’t work well.

For example:

If you use an inexpensive Final Gather solution you may increase the quality of, or add, Ambient Occlusion to increase details. If you then find that Final Gathering has splotches or hotspots caused by some other effect, your first instinct may be to increase the quality of Final Gather. Well, it may be that now you can reduce or eliminate the Ambient Occlusion. In some cases we forget to do that and suddenly our render takes much longer. This is both the benefit and downfall of flexibility: keeping track of your decisions.

Where’s a good place to see what might be eating your render time?

The Output Window and the Time Diagnostic Buffer with Unified Sampling.

The Maya Output Window

What effects cost you the most time?

Well, that depends on what you are rendering. Hair can be difficult. Or scenes that reach your memory limit. Layering shader on shader can also increase or double some ray counts (this will change with the introduction of the layering library in mental ray 3.11). Even texture input/output (I/O) can make rendering slower. I will try and touch on some of the more common cases and solutions.

Let’s look at some output from a render. How can you find it? Well, you can increase the verbosity of the output in the Maya Rendering Menu > Render > Render Current Frame (options box)

Render Current Frame Options Box

I usually choose “Progress Messages”. The option below that is “Detailed Messages”, which gives you more information but also tells you every time mental ray blinks, and isn’t usually necessary. Also, the more messages it prints, the more the debug output itself can impact render time.

So, I have rendered a decently complex scene from a project at 1280 by 720. I have quite a few lights in the scene, most of which are area lights (about 46 of them, most are small). I have wide glossiness and I am using the Native IBL to render the environment lighting.

I haven’t included the image here because we’re going to look at the numbers. (I know, really boring.)
RC 0.9 1072 MB info : rendering statistics
RC 0.9 1072 MB info : type                      number     per eye ray
RC 0.9 1072 MB info : eye rays                 6613564            1.00
RC 0.9 1072 MB info : reflection rays         65049860            9.84
RC 0.9 1072 MB info : refraction rays          3693155            0.56
RC 0.9 1072 MB info : shadow rays            501916475           75.89
RC 0.9 1072 MB info : environment rays        69498575           10.51
RC 0.9 1072 MB info : probe rays              33284793            5.03
RC 0.9 1072 MB info : fg points interpolated  31840843            4.81
RC 0.9 1072 MB info : on average 34.21 finalgather points used per interpolation
RC 0.2 844 MB progr: writing frame buffer mayaColor to image file D:/untitled_project.exr (frame 12)
RC 0.2 844 MB progr: rendering finished
RC 0.2 844 MB info : wallclock 0:31:52.00 for rendering
RC 0.2 844 MB info : current mem usage 844 MB, max mem usage 1091 MB
GAPM 0.2 844 MB info : triangle count (including retessellation) : 5240633
IMG 0.2 844 MB info : total for cached textures and framebuffers:
IMG 0.2 844 MB info :                 4656815552 pixel block accesses
IMG 0.2 844 MB info :                     535270 pages loaded/saved, 0.0114943% image cache failures
IMG 0.2 844 MB info : maximal texture cache size: 2700 pages, 298.781 MBytes
IMG 0.2 844 MB info : uncompressed cached texture I/O: 16650.313 MB
PHEN 0.2 726 MB info : Reflection rays skipped by threshold: 17691563
PHEN 0.2 726 MB info : Refraction rays skipped by threshold: 2272489
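If you save the Output Window text to a file, even a small script can rank which ray types dominate a render. This is a minimal sketch, not an official log parser; the regex is an assumption based on the statistics lines shown above:

```python
import re

def parse_ray_stats(log_text):
    """Collect the 'per eye ray' statistics from a mental ray log dump."""
    stats = {}
    # Lines look like: "RC 0.9 1072 MB info : shadow rays   501916475   75.89"
    pattern = re.compile(r"info\s*:\s+([a-z ]+?)\s+(\d+)\s+(\d+\.\d+)\s*$")
    for line in log_text.splitlines():
        m = pattern.search(line)
        if m:
            name, count, per_eye = m.groups()
            stats[name.strip()] = (int(count), float(per_eye))
    return stats

log = """\
RC  0.9  1072 MB info : eye rays                 6613564            1.00
RC  0.9  1072 MB info : shadow rays            501916475           75.89
"""
stats = parse_ray_stats(log)
# Sort by per-eye-ray cost so the biggest offender surfaces first.
worst = max(stats, key=lambda k: stats[k][1])
print(worst)
```

In the render above, this immediately points at shadow rays as the place to start tuning.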

What can you gain from this?

In a raytracer, you are shooting quite a few rays around in your scene. These strike other objects and more rays are sent, etc. This can grow geometrically in a scene where you are using some expensive effects.

Eye Rays:

RC 0.9 1072 MB info : type                      number     per eye ray
RC 0.9 1072 MB info : eye rays                 6613564            1.00

These are the rays shot from the camera used to sample the scene. They strike objects and cause other rays to be cast, which is part of why they are listed first. You may see more or fewer of these depending on a few things:

1. Motion blur will call more of these to smooth the blur. Each ray is jittered temporally across the shutter interval to catch changes in a pixel as objects pass through it spatially (objects crossing the frame in motion).

2. Depth of Field will call more of these to resolve the blur.

3. Scenes with high detail or contrast will need more to improve anti-aliasing.

Tuning: Reducing these is only an option if you can live with less quality (more grain or aliasing). Reducing these in Unified Sampling is done through decreasing the Quality parameter or by artificially capping the maximum through Max Samples. You can try using a Render Region on a noisy area of the most complex/blurry frame.

-Gradually lower the Quality until you reach a limit of what you’d accept.

-Introduce small amounts of “Error Cutoff”

-Lastly, alter per-object samples as needed for difficult objects.
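The interplay of Quality, Max Samples, and Error Cutoff can be sketched with a toy adaptive loop. This is purely illustrative pseudologic under assumed names, not mental ray's actual sampler:

```python
def samples_for_pixel(errors, quality, min_samples=1, max_samples=100,
                      error_cutoff=0.0):
    """Take samples until the running error estimate drops below the
    threshold implied by Quality, or Max Samples caps us.
    `errors` is a precomputed per-sample error sequence (a stand-in for
    contrast measured between samples)."""
    threshold = 1.0 / quality          # higher Quality -> tighter threshold
    taken = 0
    for err in errors:
        taken += 1
        if taken >= max_samples:
            break
        if taken >= min_samples and (err <= threshold or err <= error_cutoff):
            break
    return taken

# A pixel whose error estimate halves with each sample:
errs = [1.0 / 2 ** i for i in range(20)]  # 1.0, 0.5, 0.25, ...
print(samples_for_pixel(errs, quality=1.0))   # converges quickly
print(samples_for_pixel(errs, quality=8.0))   # higher Quality takes more
```

The point of the sketch: Quality moves the convergence threshold, Max Samples is a hard cap, and Error Cutoff lets easy pixels bail out early.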

Reflection Rays:

RC 0.9 1072 MB info : type                      number     per eye ray
RC 0.9 1072 MB info : reflection rays         65049860            9.84

These are rays cast into a scene from an object/shader to collect indirect reflection (light from other objects seen as a reflection). When these strike another object they run that object’s shader to get color information.

1. You may have more of these when you increase the shader ‘Samples’ parameter for glossy reflections.

2. You may have quite a few if you have a high trace depth set by either the shader or render settings to allow more than one ‘bounce’ of the ray. This is necessary for things like a reflection in a reflection (imagine a hall of mirrors).

Tuning: You can reduce these in a few different ways. You can:

-reduce the samples for glossy rays based on acceptable quality (the appearance of grain). For objects and scenes where this is textured or there is motion blur and/or depth of field, we recommend a brute force approach (using Unified Sampling) with a setting of ‘1’ sample. You may need more for very wide glossy lobes on perfect, untextured surfaces that are very reflective or blurred.

mia_material Glossy Samples

-reduce the trace depth where it makes little or no visual difference (you may not need a reflection of a reflection of a reflection if it is blurry or dim). Or use a falloff distance with either a color or environment attached.

Trace Depth Options: Reflection

Reflection Falloff Distance and Trace Depth overrides

-use the mia_envblur node to send only single samples to measure an environment texture that is pre-blurred. This is supported in the mia_material and the car_paint phenomenon. An example can be seen here on Zap’s blog: More Hidden Gems: mia_envblur

Single Sample from Environment (mia_envblur node as environment)


RC 0.9 1072 MB info : type                      number     per eye ray
RC 0.9 1072 MB info : refraction rays          3693155            0.56

These are rays that are bent or change direction (refracted) as they pass through objects like glass or windows. Note that this isn’t the same as transparency, where a ray passes through an object without bending. Transparency is handled differently in a scene but may still be expensive with large amounts of semi-transparent objects; there are no transparency rays in this scene or they would be listed. Glossy refraction (specular transmission) is an effect like frosted glass and can be one of the most expensive effects; it is harder to resolve than a comparable glossy reflection.

1. Frosted glass or blurry effects will increase these.

2. A high refraction trace depth may also increase these.


Tuning:

-Reduce the samples on the refraction (similar to the control seen in the mia_material reflection samples) to an acceptable amount of grain.

-Add a small amount of translucency instead

-Reduce the refraction trace depth in the shader or the Render Settings. Or use a falloff distance with either a color or environment attached. A good guide for the global trace depth is how many surfaces you must pass through before stopping. For instance: a correctly modeled (volumetric) empty bottle will have 4 surfaces to strike before passing through the other side.
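The surface-counting rule of thumb is simple enough to express as a tiny helper. The helper name and the window-pane example are hypothetical, purely for illustration:

```python
def refraction_depth_needed(surfaces_per_object):
    """Global refraction trace depth = total surfaces a ray must pass
    through before exiting the scene."""
    return sum(surfaces_per_object)

# A correctly modeled (volumetric) empty bottle:
# outer front, inner front, inner back, outer back = 4 surfaces.
print(refraction_depth_needed([4]))      # 4
# The same bottle seen through a single-surface window plane:
print(refraction_depth_needed([1, 4]))   # 5
```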

Shadow Rays:

RC 0.9 1072 MB info : type                      number     per eye ray
RC 0.9 1072 MB info : shadow rays            501916475           75.89

Shadow rays are rays sent from surfaces back to light sources. These may also sample an area light for direct reflection (specular, for invisible sources). They can be expensive and are usually the most prolific rays in a scene: the more lights casting shadows (especially soft shadows), the more of these you will have, and the larger the area light, the more you will need to reduce grain.

1. Large area lights may be sampled more to reduce shadow grain or direct reflection noise on shiny objects. This happens when the light is invisible or the shader is selected to use “highlight only” for reflections even if the area light is visible.

2. Slightly softening the shadow and increasing the rays on delta lights (lights without area, like point and spot lights) will generate more shadow rays

3. The shadow trace depth is high so you can see a shadow in a reflection or refraction, for example

4. High “Quality” settings on the Native IBL or high samples on the user_ibl shader.

A quick way to read about optimizing area lights can be seen in the Area Lights 101 post.
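As a rough, assumption-laden estimate (real counts depend on adaptivity, occlusion, and High/Low samples), shadow rays per shading point scale with the number of shadow-casting lights times their per-light sample counts:

```python
def shadow_rays_per_point(light_samples):
    """Each shading point fires roughly `samples` shadow rays per
    shadow-casting light; `light_samples` lists a sample count per light."""
    return sum(light_samples)

# 46 small area lights at, say, 2 samples each already mean dozens of
# shadow rays for every eye ray that lands on geometry:
print(shadow_rays_per_point([2] * 46))  # 92
```

This is why a scene like the one above reports 75+ shadow rays per eye ray, and why trimming light counts and samples is the first lever to pull.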


Tuning:

-Follow the guidelines in the Area Lights 101 post for tuning area lights using both High/Low Samples and a helpful shader like the Physical Light

-Reduce the trace depth as needed. You may not need the reflection or refraction of a shadow in a blurry surface; especially if Final Gather is already darkening it.

-Old trick: use a depth map shadow or preferably a Detail Shadowmap that can be baked and reused on many frames assuming the objects or lights casting those shadows do not move or are not animated. You can do this selectively per light.

Environment Rays:

RC 0.9 1072 MB info : type                      number     per eye ray
RC 0.9 1072 MB info : environment rays        69498575           10.51

Environment rays are rays that leave the scene and call the environment. These are usually fast. They exist in this scene more often because I am also using the mia_envblur to speed up the environment lookups for glossy reflections as described above in Zap’s blog.


Tuning:

-None typically.

Probe Rays:

RC 0.9 1072 MB info : type                      number     per eye ray
RC 0.9 1072 MB info : probe rays              33284793            5.03

These rays are usually the result of Ambient Occlusion rays being sent into the scene. (The Ambient Occlusion + Colorbleed mode is a separate case.) They are caused by its inclusion in a material like the mia_material or in a separate occlusion framebuffer pass. In the mia_material with Unified Sampling we usually recommend keeping the sample count to 4-6, since it behaves like a lighting effect. If used in a pass, a few things affect the quality: the distance the rays travel, the distribution of objects in the scene, and of course the sample count for the buffer. (Note: the Native IBL set to “approximate” mode will generate probe rays, as it lights from the environment through occlusion. Not usually recommended, but ok for tests.) If no distance cutoff is used, every object in the scene is a potential hit, which increases the raytracing overhead.

Ambient Occlusion has become a staple effect for most CG work, but the need for it is less than before. Generally used in the past as a fake for global illumination, its inclusion isn’t always necessary now that global illumination is faster and more detailed, unless the global illumination solution is purposefully reduced or heavily interpolated and loses details.

Creating an AO pass by default for compositing can be used to enhance or create details that aren’t there (occlusion where there is direct lighting is not realistic, but it is an artistic consideration). Using this pass as a multiplication in post is also mathematically incorrect if you are trying to reproduce a beauty render. Production is starting to move away from using Ambient Occlusion as a pass or effect in modern raytracers. Path tracers like iRay automatically include such an effect in their light transport, so adding it on top is redundant.


Tuning:

1. Avoid large sample counts in a shader. These rays may be sent when the shader is struck by some other ray, like a reflection or refraction. High trace depths will call more and more of these as the ray bounces around.

Ambient Occlusion in the mia_material

2. Use a small distance to avoid lots of unnecessary strikes on other objects. For example: buildings to scale 3 city blocks apart do not need to occlude one another.

3. Detailed indirect illumination may decrease the need to have this feature on at all.

Final Gather Points Interpolated:

RC 0.9 1072 MB info : type                      number     per eye ray
RC 0.9 1072 MB info : fg points interpolated  31840843            4.81

Final Gather is a topic in and of itself, so I will just hit the highlights here. These are the points generated in the prepass and interpolated when an eye ray strikes them (or lands nearby). Points are generated based on geometric complexity (automatically adaptive) and by altering the “Point Density” parameter in the Render Settings. (They are also altered by the old radius settings, which have since been deprecated and should be avoided for easier setup and rendering.)

Final Gather prepass time is greatly influenced by “Accuracy”, which is the number of rays sent to measure the scene, and “Point Density”, used to place points projected by the camera onto geometry. During the render phase, “Point Interpolation” can increase render time at higher settings because the renderer is doing more math per point.

Since we are talking about the render phase and not the prepass phase I will just mention those solutions here. Final Gather settings and prepass may be covered later.


Tuning:

-Avoid large interpolation values. If your scene has complex lighting, increase the “Accuracy”. If the scene has complex geometry, increase “Point Density”.

Final Gather Settings

-Use more direct lighting to stabilize the solution (such as the Native IBL or user_ibl)

-Use the fgshooter shader/script to avoid flickering in animations

Triangle Count:

GAPM 0.2 844 MB info : triangle count (including retessellation) : 5240633

This may sound silly, but most modern raytracers do not have a Scanline option. This is because we have reached a point where complex scenes with lots of triangles are common, and Scanline rendering slows down with many triangles. Instead you should turn off Scanline and select “Raytracing” as the renderer. This is the default in Maya 2013. Rasterization also counts as a Scanline algorithm, although a more modern one.

This is often why comparisons with other renderers may show a slower result: users with lots of objects or displacements fail to turn off Scanline.


Tuning:

-Stop using the Scanline algorithm!!

-Do not use overly aggressive displacements

-Use proxies or assemblies: these are pre-translated and more memory efficient since they are on-demand geometry

Texture I/O:

IMG 0.2 844 MB info : total for cached textures and framebuffers:
IMG 0.2 844 MB info :                 4656815552 pixel block accesses
IMG 0.2 844 MB info :                     535270 pages loaded/saved, 0.0114943% image cache failures
IMG 0.2 844 MB info : maximal texture cache size: 2700 pages, 298.781 MBytes

Texture usage can not only increase memory usage, it can slow down a render by quite a lot! Reasons for this can be:

1. Large textures pulled across a network

2. Un-mipmapped or un-cached textures: these force mental ray to load the entire full-resolution texture from the source, even if all of it is not seen.

3. Insufficient memory means a lot of cache flushing instead of rendering (related to point 2)

4. Poorly filtered textures may also call more eye rays to solve aliasing. This runs the shader and may increase all of the other ray counts.


Tuning:

-The easiest catch-all is to read the post on Texture Publishing

-For the image cache failures, you will want to keep this number as low as possible, preferably below 0.01%. This can be done by altering a few things manually, such as:

*The tile size of the cached texture with imf_copy

*The cached memory limit with the registry option to force more efficient handling, either an increase or decrease. This and the option above work together and are scene dependent. Not always worth a lot of tweaking unless your scene is exceptionally texture heavy. I can now render scenes with hundreds of 4k textures with only 8GB of RAM locally.


    • I didn’t cover interpolated reflections or refractions. This is because in animation they are prone to artifacts. With Unified Sampling you may not even need those features. Future shading models (BSDF) will also omit them.
    • I didn’t cover the Ambient Occlusion Cache. While it may be faster to use during a render, tuning it can be difficult and less necessary with the usage of Unified Sampling
    • Try to avoid layering shaders for some effects. A lot can be accomplished through selective layering of textures instead. 3.11 will introduce the layering library, which will help avoid these added ray counts.
    • I assume usage of Unified Sampling
    • Using ray cutoff values: these can be useful and exist in the mia_materials as a way to tell the shader not to cast a ray if the effect is not important. It’s a little tricky to use, but heavily traced scenes may see some speed-up if this parameter is increased. Do so slowly and test frames; if set too high it will erode raytraced effects with little benefit.
    • Use the Time Buffer Diagnostics as seen in the Unified Sampling for the Artist post to identify where your scene is taking longest to render. Then look at those shader settings or possibly change per-object sample settings.

Time per pixel measured in ‘S’ or seconds. Brighter is longer.

  • Dimmer reflections/refractions need fewer samples
  • The decadent “Maya Glow Buffer” is very slow on large resolution rendered frames, even if the effect isn’t used. Turn off “Export Post Effects” in the Render Settings > Options and do the effect in post.
  • Scenes rendered in motion with motion blur do not need to be perfectly smooth when viewed in motion.
  • Do not marry an image. Some tweaks may alter the look. Even client notes alter the look. Go with the best balance of what’s achievable and is possible in the time you have. Otherwise you’ll constantly be unhappy.
  • To resolve artifacts, simply “cranking up the settings” is a horrible idea. Use the progress messages and the time buffer to make faster/smart decisions.
  • Form habits of rendering with correctly prepared textures and default settings. Only tweaking where necessary by recognizing the cause of the artifact, be it FG splotch or aliasing crawl.
  • Begin to wean off of using Ambient Occlusion as a default effect or pass. The original reason to use it (AO multiplied against an ambient pass) no longer exists and it is really an artistic consideration now.
  • Always remember you are going for a good image. An improvement on reality so-to-speak. Avoid using mental ray as a physical simulation to render an image. Use iRay or similar for that type of workflow. Flexibility and choice is key to getting what you need quickly for animation and visual effects
  • I did not include Irradiance Particles or Photons. These aren’t used as often (or at all) in VFX or animation work. They are also (like Final Gathering) topics in their own right.
  • If none of the above applies, change your BSP2 to BSP and try again. If your geometry has bad bounding boxes or other problems, ray traversal can be painfully slow. This is a geometry problem; remake it if necessary.

New Maya Rendering UI Testing!

After some careful thought and a lot of tedious work by developers Brenton and Corey (mental core), you can find a Maya Render Settings UI for testing. Barton Gawboy is managing the project. The purpose of this UI is to provide a more official/intended workflow for mental ray. This also means makers of UIs for other packages can use it as a template for features of modern mental ray (3.10 and future).

This UI is in an early phase and should not be used for production. Currently the Quality settings have been re-worked to provide a simpler interface and a modern workflow for using mental ray in Maya. Over time we will be improving on this (light and passes tabs to be re-worked) and adding documentation. We recommend Maya 2013 SP1.

You can find the scripts on the Google Code page: Maya Render Settings UI

A further discussion can be found on the ARC forum (must be a member): Maya Options UI

Navigating mental ray in Maya: ideas and experience

Since the integration of mental ray in Maya may obscure modern workflows, many of the posts here explain how to make the correct choices. But typically we assume some prior knowledge of Maya and mental ray.

We will begin a series soon on how to make modern choices and avoid pitfalls in the UI (like the default absence of shadows, no passes with materials unless using the *_passes materials, etc).

If you have experience where you have had difficulty tracking down a problem only to discover it was a checkbox or some other snafu, leave it here in the comments with a short description. We can use this information to make new users more efficient and help refresh the rest of us too. We will try and include it in sections explaining basic scene setup: our cheat sheet for quickly getting a scene rendering.

If you are a developer, it might be helpful for you to see where some defaults could be changed and alter the corresponding .mel in your installation to make a new default.

Please keep the comments factual and tidy. We reserve the right to edit them down to the essence of the problem. 😉

For example:

My passes kept rendering black using the mia_material_x. I discovered I should upgrade them to the mia_material_x_passes shader.

Texture Publishing to mental ray

Nailing down a good texture pipeline can be confusing!  Hopefully this clears up some of the whys and hows of efficiently handling texture images with mental ray, providing a straightforward solution to a complicated problem.

[You can also find a test example on the ARC forum here: ]

Texture Filtering

Non-filtered images may result in artifacts such as moire patterns (click to view full size).

When sampling a textured object, color information is lost between sampling points.  This is because a single sample only calls a single texture pixel (texel).  Information for texels that don’t get sampled is lost – which can be a real problem if that information was important.  For highly detailed textures or for textures with regular patterns this sampling limitation may manifest itself as an artifact such as a moiré pattern.

Aliasing-free texture sampling is restricted by a mathematical limitation known as the Nyquist rate, which states that the sampling rate required to adequately reconstruct a signal must be at least twice the highest frequency present in that signal.  This means there are only two options to remove rendering artifacts caused by noisy textures: sample more or filter the texture.
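The Nyquist limit is easy to demonstrate numerically: a cosine sampled below twice its frequency is indistinguishable from a lower-frequency alias, which is exactly the mechanism behind moiré patterns. A small sketch:

```python
import math

def sample_cos(freq, rate, n):
    """Sample cos(2*pi*freq*t) at `rate` samples per unit time."""
    return [math.cos(2 * math.pi * freq * i / rate) for i in range(n)]

# A 9 Hz signal sampled at only 10 Hz (Nyquist would demand more than
# 18 Hz) produces exactly the same samples as a 1 Hz signal:
hi = sample_cos(9, 10, 10)
lo = sample_cos(1, 10, 10)
print(all(abs(a - b) < 1e-9 for a, b in zip(hi, lo)))  # True
```

The renderer faces the same ambiguity per pixel, so it must either sample more densely or remove the high frequencies by filtering.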

Filtering a texture removes high frequency noise from images making them easier to sample.  While this does away with many potential aliasing artifacts, it also removes detail leaving your textures looking fuzzy.  This is not a problem for textures that occupy limited screen space because that is detail that would have been lost anyway.  However it is a problem for textures that occupy lots of screen space because that detail might have been visually important.  Luckily for us, texture mipmapping provides a convenient solution for both situations.


Mipmapped images reduce texture artifacts by filtering (click to view full size).

Mipmapping (from the Latin phrase multum in parvo meaning “much in little”) is the process of pre-filtering one large image into many smaller images of progressively decreasing resolutions. The largest images retain detail that the smallest images lose to filtering.  These mipmap images are generally organized into a multi-resolution file sometimes known as a pyramid image.

Mipmapped “Pyramid Image”

The advantage of multi-resolution images is that they provide mental ray with the ability to read from appropriately sized pre-filtered images based on the amount of screen space that the textured object occupies.  This allows superior anti-aliasing performance while using fewer samples.

For the above renders, mental ray read from higher and higher mipmap levels as the checkered texture receded into the background.  The highest mipmap levels have lost virtually all detail and appear as a middle grey.  As a result of filtering, mipmapping reduced the total number of eye rays from 5552111 to 5144177!
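A sketch of the pyramid itself: halving a square texture down to one texel, and confirming the often-quoted fact that the upper levels add only about one third of the base image's pixels:

```python
def mip_chain(size):
    """Resolutions of a square mipmap pyramid, base level first."""
    levels = [size]
    while size > 1:
        size //= 2
        levels.append(size)
    return levels

chain = mip_chain(1024)
print(chain[:4], "...", chain[-1])   # [1024, 512, 256, 128] ... 1

base_pixels = chain[0] ** 2
upper_pixels = sum(s * s for s in chain[1:])
print(round(upper_pixels / base_pixels, 4))  # about 1/3
```

That 1/3 overhead is the memory cost mentioned below when mental ray computes pyramid layers on demand.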

It is advisable to use mipmapped textures.  Generally, mipmapped textures should be used with the filter declaration (see miFilter attribute below).  The purpose of the filter declaration is twofold:

  1. filter tells mental ray to mipmap images on demand if they are not already mipmapped.  When computed on demand, full upper pyramid layers are computed and kept in memory, totaling 1/3 of the base (largest resolution) image.
  2. You can control how selective mental ray is when choosing mipmap levels (and thus the blur of the textures) by setting filter size (see miFilterSize attribute below).  A filter size above 1.0 increases the texture blur by reading from a higher level (lower resolution) mipmap.  A filter size below 1.0 reduces texture blur by reading from a lower (higher resolution) mipmap.  Generally filter size should correspond to the number of times a texture repeats itself in UV space, i.e. an image that repeats itself 10 times should use a filter size value of 10.0.
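As an illustrative model only (mental ray's actual level selection is more involved), you can think of filter size as shifting the mipmap level that gets read:

```python
import math

def mip_level(texels_per_pixel, filter_size=1.0):
    """Approximate mipmap level chosen for a sample (level 0 = full res).
    A filter size above 1.0 pushes the lookup to a blurrier level;
    below 1.0 pulls it toward a sharper one."""
    return max(0.0, math.log2(max(texels_per_pixel * filter_size, 1.0)))

print(mip_level(4.0))                    # 2.0 -> quarter-res level
print(mip_level(4.0, filter_size=2.0))   # 3.0 -> one level blurrier
print(mip_level(4.0, filter_size=0.5))   # 1.0 -> one level sharper
```

This also shows why a texture repeated 10 times wants a filter size near 10.0: each repeat packs proportionally more texels under every screen pixel.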

Elliptical Lookup

Elliptical lookup has removed practically all texture artifacts (click to view full size).

Texels are square when viewed in Photoshop or some other 2D application, but this is not always the case when rendering in a 3D environment.  When textures are rendered at glancing angles, the screen space each texel occupies becomes distorted – i.e. more texels may fit into a screen pixel in one direction than another.

To avoid artifacts caused by symmetric filtering for textures viewed non-symmetrically, some textures require an additional elliptical lookup on top of mipmapping.  Elliptical lookup works by projecting the circular area surrounding a sample onto the texture image, resulting in an ellipse.  The texels within the ellipse are then averaged resulting in a more accurate return value.

The above render shows the artifact-free result elliptical lookups provide.  Not only is the strange middle grey circle gone, but the number of eye rays has been further reduced to 4705444!

While elliptical lookup is relatively fast, not every textured object requires it.  To avoid unnecessary render time, elliptical lookup should be enabled on a per-texture basis.  The default lookup size of 8.0 is generally a good starting point for most situations.  Reducing this value will speed up render times, but may reintroduce artifacts.

Note: mental ray will mipmap on demand if elliptical filtering is specified for non-pyramidal images.

Tiled and Cached Textures

Keeping all texture information in memory can become quite expensive for large scenes, especially when using pyramid images.  mental ray supports texture caching for certain tiled texture formats.  With texture caching, only certain tiles of a texture are loaded into memory.  These tiles are automatically removed or replaced by recently accessed ones, dramatically reducing memory consumption during rendering.

Starting with mental ray 3.9, texture caching can be specified globally using the “{_MI_REG_TEXTURE_CACHE}” registry variable set to “on” / “off” / “local”.  The default “local” means that only local textures are cached (see miLocal attribute below).  Native .map files are always considered local.  It is important that local textures be saved locally or on a caching file server where images can be shared across multiple machines (on a render farm) to avoid network overhead.

Another registry variable, “{_MI_REG_TEXTURE_CACHE_SIZE}”, exists to set a maximum size for the texture cache in megabytes (e.g. “512” corresponds to 512MB).  mental ray will dynamically determine the cache size for the default value of “0”.
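In a rayrc startup file, these two registry variables would look something like the following. The exact syntax here is an assumption from memory; verify it against the rayrc shipped with your installation:

```
registry "{_MI_REG_TEXTURE_CACHE}" value "local" end registry
registry "{_MI_REG_TEXTURE_CACHE_SIZE}" value "512" end registry
```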

It is advisable to use tiled/cacheable textures to reduce memory consumption.

Image Formats and Conversion

  • OpenEXR (.exr)
    • Can be saved as mipmap pyramid
    • Can be tiled/cached
    • Personally recommended by me!
  • mental ray’s native memory-mapped image (.map)
    • Can be saved as mipmap pyramid
    • Can be tiled/cached
  • TIFF (.tif)
    • Can be saved as mipmap pyramid
    • Can be tiled/cached
    • note: Photoshop has been known to do some odd things to tifs (like layering)
  • Bitmap Image File (.bmp, .dib)
    • Can be tiled/cached
  • Maya IFF (.iff)
    • Can be tiled/cached

imf_copy (which ships with mental ray and Maya) is a convenient command-line tool that can be used to create mipmapped-tiled images. For example, I could run this command if I wanted to publish my working TIFF texture to a mipmapped-tiled zip-compressed OpenEXR texture:

imf_copy -p -r -k zip working_texture.tif published_texture.exr

Alternatively, exrmaketiled and/or the OpenImageIO toolkit can provide increased flexibility for publishing textures.
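For batch publishing, a small script can build the same imf_copy invocation for every working texture. The paths and helper name here are hypothetical; the flags simply mirror the command above:

```python
from pathlib import Path

def publish_command(src, dst_dir, compression="zip"):
    """Build the imf_copy argument list (-p pyramid, -r tiled, as above)."""
    src = Path(src)
    dst = Path(dst_dir) / (src.stem + ".exr")
    return ["imf_copy", "-p", "-r", "-k", compression, str(src), str(dst)]

cmd = publish_command("textures/work/wall.tif", "textures/published")
print(" ".join(cmd))
```

You could then run each command with subprocess.run(cmd, check=True), or loop over Path("textures/work").glob("*.tif") for a whole directory.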

Setting up the Maya file node for mental ray

Maya file node set up for elliptical lookup

If using a pre-mipmapped image (which you should be):

  1. File Attributes > Filter Type → Mipmap (This enables the miFilter option)
  2. File Attributes > Pre Filter → off (we are using mipmapping with the option of elliptical lookup instead)
  3. mental ray > Override Global Auto-Conversion Settings → on (I don’t trust any global Maya auto settings)
  4. mental ray > Convert File To Optimized Format → off (texture has been already mipmapped!)
  5. mental ray > Advanced Elliptical Filtering → generally off, on if this texture is producing rendering artefacts
  6. Extra Attributes > Mi Local → preferably on for texture caching, otherwise off.
  7. Extra Attributes > Mi Filter Size → generally 1.0, higher for repeated textures.
Note: The last two are dynamic attributes so they might not be present on all file nodes.  See below on how to add them.

Python script to add texture attributes to all Maya file nodes in a scene:

import pymel.core as pm

# Add Extension Attributes to all file nodes.
pm.addExtension(nt="file", ln="miLocal", at="bool")
pm.addExtension(nt="file", ln="miFilterSize", at="float", dv=1.0)

# To remove the attributes again later, run deleteExtension instead:
# pm.deleteExtension(nt="file", at="miLocal", fd=True)
# pm.deleteExtension(nt="file", at="miFilterSize", fd=True)

Last Notes

As a rule-of-thumb, I recommend using mipmapped-tiled OpenEXRs for all textures and to declare them local.  The one exception is the environment map which will only benefit from tiling, not mipmapping.  To pre-blur the environment, try using the mia_envblur shader combined with “Single Env Sample” on material shaders.

A formalized texture publishing step between texture creation and rendering provides an excellent opportunity to both optimize image formats and linearize data.  For more information on colorspaces and linear color workflow, see Linear Color Workflow(s) in Maya.