Render Tests, Combined Lighting
In looking at features in mental ray, there are a lot of combinations that can be had. Some are more helpful than others depending on scene needs, and some results might be counterintuitive.
In rendering you typically have two options for indirect lighting.
One involves brute force rendering (also known as “unbiased”), where every image sample sends rays back into the scene to sample other objects and the environment. This is typically easy to tune (just crank it up until acceptable), requires less technical skill, and is fairly predictable in variance from frame to frame without testing. However, it can increase render times by a large amount in complex scenes.
Option two involves interpolated (or “biased”) schemes, where sparse samples are taken across the image and merged together through an interpolation algorithm. This is typically much faster to render but requires more artist tuning and may cause surprises in later frames. For Final Gather we have the option of using the fg_shooter to help mitigate this problem.
In the examples below I am combining brute force (non-caching, to be specific) Final Gather with Irradiance Particles. This means Final Gather rays are used to collect detailed information in the primary bounce (typically the most important one), while Irradiance Particles are used to collect the secondary (and higher) bounces. There are no portal lights being used. I might also call this “The Joys of Rendering On My Ancient Workstation,” which can still get the job done.
Irradiance Particles have a few interesting features:
- Additional bounces each send fewer and fewer rays, so each pass of collection is faster (Final Gather sends only one ray per additional bounce)
- It only collects diffuse information (Final Gather collects specular as well as diffuse)
- The Importon optimization phase is sadly not multi-threaded. But this matters less here, since we are using Irradiance Particles only for secondary bounces and emitting fewer Importons
- The importance based scheme means it’s smart about probing the brighter areas of your scene for information instead of wasting rays in dark spots. This effectively “aims” indirect illumination rays towards the more important areas of the scene.
Notice a few things about how long these took to render, shown in the image captions…
Unified Sampling takes less time with each increase in diffuse bounces (I do use a higher than normal setting in this case to force Unified to catch small illumination details). Adding bounces for a faster render might seem counterintuitive. There’s a reason for this: the decreased lighting contrast needs fewer samples per pixel. But there are things to think about in a more realistic scene.
In most scenes, texturing will create more contrast in the image. This may drive more samples through Unified Sampling. In this case you may be able to reduce the number of rays sent by brute force Final Gather since Unified is taking more samples anyway. This would be a balance you might need to tune. I would still recommend portal lights which means more direct lighting detail and a reduced need for indirect illumination rays. Also, using this type of combination might require less time to tune an image for the right lighting/detail balance that one or the other algorithm might need.
My settings for this scene are:
- Final Gather Force – 48 accuracy, filter 1 (the filter option means nothing if rays/accuracy are less than 8)
- Importon Density – 0.5 (this can be played with based on what you might need, but I recommend it be as low as possible; too low a density introduces artifacts in crevices and corners), bounces 3
- Irradiance Particles – rays 64 (the Maya UI limits this to 512 rays; there is no actual limit in mental ray itself. You can change it in MEL or use our UI)
Diffuse bounces are controlled through the Final Gather trace settings, not the Irradiance Particles.
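For reference, the settings above can be expressed in mental ray standalone string-option form. This is a sketch, not a verified snippet; the option names follow standalone naming conventions (and mirror what a commenter posts further down), so check the spellings against your version’s documentation:

```
# brute force Final Gather for the primary bounce
"finalgather mode" "force"
"finalgather accuracy" 48
"finalgather filter" 1            # ignored when accuracy is below 8

# the diffuse slot of the FG trace depth sets the IP bounces
# (reflect, refract, diffuse, total)
"finalgather trace depth" 3 0 3 3

# Importons drive the importance scheme for Irradiance Particles
"importon density" 0.5

# interpolated Irradiance Particles for the secondary bounces
"irradiance particles rays" 64
```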
A quick primer for Irradiance Particles:
This lighting technique is best suited for interiors, and usually as a secondary bounce. For exteriors I highly recommend the Environment Lighting Mode instead. The main reason for this: Irradiance Particles only collect diffuse information, which means something like glass will block them from getting information. Also, the optimization phase for Importons is single-threaded and cannot be cached across multiple frames.
The easiest way to increase the quality of Irradiance Particles is simple: increase the number of Importons emitted. This means:
- Better detail in geometry and cracks
- More scene information from better sampling and more rays
- Slower renders from the emission and optimization phases
However, you can achieve better performance by balancing rays, interpolation, and Importon density if you have the time and/or desire.
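Those knobs map to string options roughly as follows. The values here are illustrative assumptions to tune, not recommendations:

```
"importon emitted" 100000               # more Importons: better detail in cracks, slower emit/optimize
"importon density" 0.5                  # keep as low as artifacts in corners allow
"irradiance particles rays" 64          # raise if lighting is splotchy or uneven
"irradiance particles interppoints" 64  # interpolation smoothness vs. detail trade-off
```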
Posted on November 12, 2013, in final gather, Lighting and tagged Irradiance Particles. 62 Comments.
Thanks David! That’s a really good point about Irradiance Particles.
Any idea where to get proper documentation of IP at all? Apart from this here – as usual, I thankfully have to add – it seems to be all trial and error, with little or no documentation available on what the knobs actually do.
mental ray itself comes with an Irradiance Particle tutorial and PDF. It’s not commonly used because of its limitations.
Does Maya now finally allow you to tick Final Gather AND Importons? IIRC they grey out the box once FG is enabled, right?
Importons and Final Gather are not correctly exposed together. You’re correct. But you can change the flag to true in the string options. The importance scheme is somewhat helpful but not if using brute force.
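In Maya, that flag lives in the string options array on miDefaultOptions. A minimal MEL sketch; note the slot index 40 is arbitrary (pick any free slot) and the option name “importon” is my assumption based on the standalone option, so verify it against your version:

```mel
// force Importons on alongside Final Gather via a string option
setAttr -type "string" miDefaultOptions.stringOptions[40].name "importon";
setAttr -type "string" miDefaultOptions.stringOptions[40].value "on";
setAttr -type "string" miDefaultOptions.stringOptions[40].type "boolean";
```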
Hey David. First off, just wanted to say Elemental Ray is great! It’s been super useful in bridging the gap between artist and mental ray. Anyway, I found the strings in the miDefaultOptions but I’m not sure what to change. By default mental ray will prefer FG over IP and Importons, and even if I have all the options on I only get an FG result.
In mental ray Final Gather is the primary source of illumination with IP secondary. You can’t change that. This means the first bounce (typically more important) is final gather. This might make it hard to see the result of the IP rays since they work differently with FG on.
Do you mind providing the files for this test? Am I assuming correctly you did this in MR standalone?
I did the first frames in Maya, then exported to Standalone. I usually do this anyway to verify something doesn’t break. I can’t share the scene files anyway, but you can find them (and others that are nice) here: http://www.idst-render.com/scenes.html
Thanks for the link! I’ve never heard of final gather having the ability to be unbiased. Isn’t FG in essence a caching GI solution? Unless FG is sampling at a per-pixel level, isn’t it still smoothing an under-sampled GI cache? Do you have any resources that go into this in a more detailed manner?
Final gather shoots rays back into the scene, which is basically the same as anything else that generates rays to measure the scene. Whether or not you apply interpolation is up to the user. The “force” option has been around for some time, but machines haven’t really been fast enough to make it viable. Now that they are, everyone is getting on the “unbiased” bandwagon. FG “force” is selectable in Maya by choosing “non-caching” for Final Gather as opposed to “automatic” and has been available in Maya for some time. The trick here is that my secondary bounces of Irradiance Particles are interpolated, making this faster than completely brute force in some ways as they blend together.
Don’t get too hung up on the term “final gather”: the actual algorithm has changed over and over again and in some ways has lost all connection to the original implementations. Brute force-like FG will shoot rays at each image sample, just like Arnold, Vray, etc. do if you select their brute force methods. Convergence takes longer but it is easier to use. I still recommend combining with interpolated IP and direct lighting when possible. This test is somewhat contrived to introduce the concept in preparation for possible future lighting directions in all rendering with better hardware.
Ok, nice… Just did a simple test render here and it works great, albeit slow. Perhaps this could be a future post instead of hijacking this one, but how would the relationship between accuracy and point density work out in this type of example? Do the Unified quality controls affect the ‘noise’ in the GI? I suppose I could test all this, but asking you is so much easier! 🙂
Brute force anything will be slower. (This is why I’m confused by observations of brute force renders being “fast.”) Point density can be ignored here; I would think the UI should grey it out for simplicity.
Unified will affect all unbiased rendering. Hence the term “Unified,” which basically means one control for all effects. In these scenes I do go higher than usual, but these are flat and featureless, so subtle lighting changes can be corrected through Unified instead of increasing the rays/accuracy. (Let Unified do the work for you.) Most sections of the image only get a dozen or so samples, and each pixel takes barely more than .002 seconds in most parts.
Something I forgot: for simplicity, the name “final gather” is also retained for developers, since this probably calls mi_compute_irradiance no matter the changes in the algorithm over the years.
Final comment, I promise!
I get the speed slowdown that unbiased render solutions have, but ease of use isn’t the only thing to like; it’s also a more accurate solve. If you want to make the argument that cached solutions are better because they are faster, then check out a REYES renderer, as their fanboys make the same claim. I feel a big point of using raytracers is that they more accurately portray the way real lights work. Still loving the website, keep it up!
In my opinion, sometimes it’s less about accuracy and more about simplicity. You don’t have to know as much to run a brute force, physically-based render engine for most things. This has consequences, of course: slower renders, less flexibility, and difficulty producing non-physical realism (art direction in the renderer). This also democratizes the software. It used to be that you had to know how to code to build a website; now you can go to a website to build a website.
New rendering systems make it possible for just about anyone to achieve a certain level of photorealism without having to know anything. You can literally select from a library of materials, send it to the cloud, get it back and publish to Youtube.
This changes the paradigm entirely, from what is possible to the economics of the field.
Added: take the current raytracers and render on a machine from 2002. You will never get an image on time. Rasterization will always be faster. Look up the SIGGRAPH talk called “Ray Tracing is the Future and Ever Will Be.”
Hi David, would you mind uploading the scene file somewhere else? I get an account suspended error.
It might be your IP address. Since they forbid sharing the scenes outside their site, I can’t upload it.
Also a good source on raytracing, from a programming perspective. http://www.scratchapixel.com/
Thanks for this post!
FG Brute Force looks promising.
How can I control the diffuse bounces? Is it “Secondary Diffuse Bounces” under Final Gathering?
Could you elaborate on the different settings a bit more? In which cases do I need to increase/decrease Final Gather accuracy, Importon Density or Depth, or the Irradiance rays?
Do I need the Irradiance Indirect Passes?
Thanks in advance 🙂
Yes, secondary diffuse bounces controls the number of IP passes in this case.
You would increase the FG accuracy if you find the render is not converging well or is generating fireflies in most cases.
I would increase Importon Density if I was getting lighting artifacts in corners or splotchiness.
Irradiance Particle rays are actually a less important part of this, but increase them if your lighting has splotches or uneven areas.
Yes, use IP passes. I do not recommend FG Force alone; it will take forever to converge in its current state. Using the interpolated IP as secondary blends brute force lighting with interpolation for a reasonably easy-to-tune solution with faster results.
Thanks a lot!
That cleared things up.
I was always looking for a flicker-free solution. And a noisier render seems much more acceptable than flickering 🙂
I’ll test this on my next project. I hope this is the way to go.
This is not an unbiased result. Just look at the chairs’ shadows; it looks like the chairs are flying.
As I mention near the bottom of the comments, Irradiance Particles are interpolated. Hence the title of “combined lighting”: Final Gather is unbiased but IP is still interpolated, so you get a combination of both effects.
18 min for 960×540, white walls, with interpolated GI? I think this is too slow. What are your PC specs?
This is brute force Final Gather and interpolated IP at the same time.
Thanks for sharing.
I have followed all the steps you mentioned. However, I can’t get the correct result (Brute Force FG + IP)… >_<
I am using Maya 2014 SP1 mental ray for Maya. Would you mind sharing a simple test scene with us? So I can check which steps I have missed.
Thanks a lot~~
This is my test scene and result.
Mmm, my first guess is your MILA material SG node does not have a photon shader (the MILA material) connected. Since MILA isn’t integrated yet, you have to make the connections on your own, or alter the corresponding MEL scripts to do it automatically like the mia_material. IP works with Importons, and those rely on photon shaders.
Just a first guess, try that. Also, you should get a warning that no Importons are stored.
I tried mia_material_x and mia_material_x_passes too, and double-checked that those SG nodes already have the photon shader connected. The results are quite similar… T_T
Sorry, it was my careless mistake. The foreground wall blocks out all the FG rays. It can be fixed by moving the camera inside the room.
One more question: mental ray for Maya has an upper limit of 5 for the Diffuse Depth (maximum number of secondary diffuse final gather bounces). Should we use mental ray Standalone to hit a higher diffuse depth, or is there any way to unlock it?
Standalone doesn’t have this limit. In VFX we usually just care about the first bounce.
Without a portal light the render happens much faster. On certain scenes I can measure
a three to four times speedup. Irradiance Particles changed my approach to the whole rendering workflow. Just: add a sun-sky light, increase the number of IPs and IP bounces, and increase Final Gather accuracy and point density. Very, very, very fast!
Probably true. Portal lights act as direct lighting and are sampled. Keep their samples low like you would with area lights and Unified Sampling.
I want to batch render an animation with Importon depth, FG passes and irradiance passes all set to 4. The first frame looks fine, but from the second onwards I get this message and the images come out darker…
RC 0.2 101 MB warn 082139: FG+IP, setting the FG diffuse depth to 0 and the IP indirect passes to the current FG diffuse depth (4)
(Maya2014 on OSX)
What are your final gather depth settings? You want 4 diffuse bounces from IP? This takes the setting from Final Gather’s depth settings. For example, 4 0 4 4 is 4 reflection, 0 refraction, 4 diffuse, 4 total. Make sure you have at least a setting of 4 for reflection; it won’t do 0 0 4 4 for 4 diffuse only, because it considers diffuse as diffuse reflection.
My depth settings are more than sufficient; 4 4 4 12.
What I would want is 4 FG bounces with help from IP, but it seems no matter what I try I always end up with this warning;
RC 0.2 101 MB warn 082139: FG+IP, setting the FG diffuse depth to 0 and the IP indirect passes to the current FG diffuse depth (4)
RC 0.2 1619 MB warn 082138: FG+IP with a diffuse depth of 0, setting to 0 the IP indirect passes
First line at the start of the batch render, second one after frame 1.
Thanks for this post, it is a lighthouse in mental ray documentation.
Please, I want to ask you something. I think I have seen you talking about some script for unlocking the diffuse bounce limit, like another commenter also mentioned here in past posts.
I spent hours looking for this. Can you, or someone else, point me to where I can find it, please?
Just wondering if you know of a way to get around the maya 2014 FG diffuse depth limit of 5. What if I want to do 10 bounces in maya?
Not currently in the UI. Usually for animation we’re just interested in the first diffuse bounce. Are you doing an arch viz scene?
First of all, thanks David for the reply. I really appreciate your work guiding people rendering with mental ray; I’ve taken a lot from your posts.
RE: Yes, something like this, I do static advertising visuals.
And maybe another question, discussed on many forums: with Unified Sampling, is there a way to increase AA contrast in the sampling settings to get sharper edges, and not the blurred ones that Unified Sampling produces?
You should get crisp edges from Unified Sampling. Be sure to use the Gaussian filter set to 2.0 2.0.
David, thank you for the reassurance that these are the right settings. Sometimes edges in my renders seem somehow soft in comparison with my friend’s Vray renders, and you can imagine how problematic it can be to outmatch Vray. But now with US, MILA and ELM we are back on track. So I have to thank you again, guys. Nice job!
Greetings, David. Thank you for the article. As I am still at the stage of exploring the topic of ‘advanced’ rendering options and don’t know all the necessary background, could you please point out the exact string options for combining FG as the first ‘force’ bounce with the secondary interpolated IP rays? I use 3ds Max. And how do IP passes correspond to the secondary passes mentioned in regard to FG? I am asking because the article indeed encourages testing, though I am one of those who still have to figure out how to approach it)
strings I am using:
“finalgather mode” “force”
“finalgather accuracy” 48
“finalgather filter” 1
“finalgather trace depth” 3 0 5 5
“importon density” 0.5
“importon emitted” 100000
“importon merge” 0
“importon trace depth” 2
“irradiance particles indirect passes” 5
“irradiance particles interpolate” “secondary”
“irradiance particles interppoints” 64
“irradiance particles rays” 64
“irradiance particles rebuild” on
Your FG trace depth and settings for points (even if using force) will be used for your IP bounces and interpolation. (I might need to check on the points part.) Remember, rays for IP is a “color” for each extra bounce, so you could specify something like 64 16 4, where the first bounce takes 64 rays, then 16, then 4.
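In string-option form, that per-bounce falloff would look something like the line below. This assumes your build accepts a list of values for the rays option, as described above; verify before relying on it:

```
"irradiance particles rays" 64 16 4   # 64 rays on the first bounce, 16 on the second, 4 on the third
```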
I am also getting batch renders where the diffuse bounces disappear after frame 1 if the irradiance map rebuild is “on”. If I leave it on, the diffuse bounces remain after frame 1 but slightly increase in value, which shows up as slight discrepancies in the diffuse bounces when I render several chunks on a farm. Thanks for clearing up so many mysteries here!
I meant to say: if I leave irradiance rebuild “off”, the diffuse bounces remain after frame 1.
They persist with “off” but after frame 1 on a batch render with “on” they disappear?
Yeah, with IRR rebuild “on” the secondary diffuse bounces only render on frame 1. With IRR rebuild “off” they calculate on every frame but slightly increase in value over time.
I should clarify: the irradiance contribution to the secondary diffuse bounces is what’s turning off, I believe.
I’ve not experienced this problem. Is any of your lighting keyframed or changing?
Thanks for the prompt replies – much appreciated. No, no animation on the lights. I have a Cornell box example I could send you or point you to. In the meantime, here’s a list of my render settings:
Enable Color Management = On
Default Input profile = Linear sRGB
Default Output Profile = Linear sRGB
File name prefix = _
Image Format = OpenEXR
Image compression = RLE
Frame/Animation ext = name.#.ext
Frame padding = 4
Renderable Camera = shot cameras
Alpha channel = On
Depth Channel (Z depth) = Off
Image Size = 720p
Render Options > Pre render frame MEL = setAttr miDefaultOptions.finalGatherImportance .1;
2D Motion Vector (mv2DToxik)
Camera Depth (depth)
World Position (worldPosition)
Sampling Mode = Unified Sampling
Quality = 1
Min Samples = 1
Max Samples = 100
Error Cutoff = 0
Raytracing = On
Reflections = 2
Refractions = 2
Max Trace Depth = 4
Reflection Blur Limit =1
Refractions Blur Limit = 1
Motion Blur = Off
Keyframe Location = Start of Frame (Turn Motion Blur on to enable, then shut MB off again)
Framebuffer Data Type = RGBA (Float) 4X32 Bit
Gamma = 1
Global Illumination = Off
Caustics = Off
Importons (enabled by Irradiance Particles) = On
Density = .5
Merge Distance = 0.0000
Max Depth = 0
Final Gather = On
Accuracy = 150
Point density = .5
Point interpolation = 60
Secondary Diffuse Bounces = 2
Final Gather Map Rebuild = On
Irradiance Rays = 150
Indirect Passes = 0
Scale = 1
Interpolate = always
Irradiance map Rebuild = Off
Ambient Occlusion = Off
Force Motion Vector Computation
Force raytraced camera motion vector computation = On
Force raytraced camera clipping = On
I tried applying the FG depth of 4 for reflections and max depth of 4 as you suggested to another poster; still the same problem with IRR map rebuild “on”.
You might want to post the simple scene on the NVIDIA ARC forum. The Maya section.
I’ll do that after lunch and post the link here, thanks.
Just waiting for thread approval. I’ll send you the link as soon as it’s up
Yes and fill out your profile so a human moderator knows you’re real.
Good point – done!
They still haven’t posted my problem or sample file. I’ll try again; the problem is persisting.
Awesome post! Some great information and very helpful. Thank you to everyone who posted and shared questions and information. Will definitely be checking in on this blog more!