fgshooter UI for Maya

mip_fgshooter used to achieve flicker free final gather

mip_fgshooter is a mental ray production shader that lets you shoot final gather points from multiple cameras instead of just the render camera.  These virtual FG cameras can greatly reduce flickering by stabilizing final gather points between frames.  Increased stability reduces the need for overly aggressive final gather settings in difficult-to-light situations and can lead to faster render times as well as improved image quality.  This offers advantages similar to baking FG points (see Flicker-free Final Gather in dynamic animations) but with a significantly simpler workflow.  I have also put together a Python script (complete with a user interface!) that makes using the fgshooter easy.

Thanks to The Mill for letting me post this script.

Final Gather Flicker

Generally, flicker is the result of indirect lighting contributions changing between frames.  The indirect contribution is computed from the perceived indirect lighting at each of the FG points.  Because the location and number of FG points depend on the camera and geometry, and cameras and geometry move between frames in an animation, subtle differences in the locations of the FG points cause flicker.

For instance, if part of the scene geometry is visible to the camera in one frame but not in another, you may get flickering if the indirect contribution around that geometry is important.  Additional FG cameras, either ones that do not move or ones that can see geometry the render camera cannot see in every frame, let you stabilize the indirect lighting computation.

For the HTC advertisement above, the green laser lights that write on the buildings were causing FG flicker because their intensity was so great.  When the camera moved slightly, additional FG points inside the buildings would significantly change the indirect lighting computation.  Even brute-force indirect lighting flickered, because the addition or loss of a few primary (eye) rays changed the QMC sampling pattern so much!  We used stationary fgshooter cameras to anchor the FG points geometrically and kill the flicker at minimal cost to render time (much faster, actually, if you consider the original, unnecessarily high FG settings).

Using the fgshooter UI

First off, you need to expose the mental ray production shaders if you have not already done so.  To do that, run this simple MEL command and then restart Maya:

optionVar -intValue "MIP_SHD_EXPOSE" 1;

Because focal distance and aspect-ratio information is passed to the mip_fgshooter shader via the scale attributes of the camera’s transform matrix, the shader can be somewhat difficult to use inside Maya.

I have provided a script that makes it fairly easy to set up fgshooter cameras.  To install the script, download the compressed Python file from the bottom of this post.  Place the unzipped Python file in one of your Maya script paths.  Now, create a custom fgshooter button from the shelf editor.  You should only need to add these two lines of code (make sure you select Python, not MEL!):

import fgshooter
fgshooter.ui()

fgshooter UI

When you click the fgshooter button you just created, an fgshooter window should pop up.  This window gives you three ways to create virtual FG cameras:

  1. You can create a virtual camera at the same location as the render camera (Include Render Camera).
  2. You can create virtual cameras that are fixed at a certain frame along the path of the render camera (Stationary Cameras).
  3. You can create virtual cameras that are offset in time by a few frames from the render camera (Offset Cameras).

Virtual fgshooter cameras

The default settings create 4 virtual FG cameras: 1 at the position of the render camera and 3 stationary cameras at frames 0.0, 12.0, and 24.0.  The best settings will vary heavily from scene to scene.  To change the default virtual camera setup, raise or lower the number of stationary or offset cameras and then click “Update”.  The UI will then display the corresponding number of slots for each type of virtual camera.  When you are ready to create the actual virtual cameras and the mip_fgshooter node network, click “Apply / Refresh”.  The script is not cumulative: the entire fgshooter setup is rebuilt every time you click this button, so your scene won’t accumulate virtual final gather cameras.  You can also remove all virtual cameras and mip_fgshooter node networks by clicking “Remove All”.
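To illustrate how the default stationary spacing generalizes, here is a small helper (hypothetical, not part of the posted script) that spreads stationary-camera frames evenly across a frame range; the defaults above correspond to three cameras over frames 0–24:

```python
def stationary_frames(start, end, count):
    """Spread `count` stationary-camera frames evenly across [start, end]."""
    if count < 1:
        return []
    if count == 1:
        return [float(start)]
    # Even spacing between the first and last frame of the range.
    step = (end - start) / float(count - 1)
    return [round(start + i * step, 1) for i in range(count)]

# The default setup: 3 stationary cameras over frames 0-24.
print(stationary_frames(0, 24, 3))  # [0.0, 12.0, 24.0]
```

The same helper shows that maintaining a 12-frame spread over a 144-frame shot would take 13 cameras, though in practice camera placement should follow the camera's motion rather than a fixed increment.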

Note: The script will only create cameras when a non-default camera is set as the render camera in the Render Settings.

Offset vs Stationary

In general, the more stable the final gather points, the more stable the final gather, so it is best to use stationary cameras in combination with the render camera.  This is particularly useful for pans, where flicker is caused by small changes in the render camera’s position and orientation.  For fly-throughs, where the render camera’s position changes greatly, offset cameras may be more useful than stationary cameras.  Offset cameras help smooth out flicker by providing information from a few frames ahead of and behind the render camera.  You should always include the render camera.
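To make the three options concrete, here is a hypothetical sketch (not code from the script) of the frame plan each one produces: the render camera follows the current frame, stationary cameras stay pinned to fixed frames, and offset cameras trail or lead the current frame:

```python
def virtual_camera_plan(current_frame, stationary=(), offsets=(), include_render_cam=True):
    """Return (kind, frame) pairs describing where each virtual FG camera
    samples the scene for the current render frame."""
    plan = []
    if include_render_cam:
        # Piggybacks on the render camera: same position, same frame.
        plan.append(("render", current_frame))
    # Stationary cameras are pinned to fixed frames along the camera path.
    plan.extend(("stationary", f) for f in stationary)
    # Offset cameras trail or lead the render camera by a few frames.
    plan.extend(("offset", current_frame + o) for o in offsets)
    return plan

# A pan might use stationary cameras; a fly-through might prefer offsets:
print(virtual_camera_plan(18.0, stationary=(0.0, 12.0, 24.0)))
print(virtual_camera_plan(18.0, offsets=(-3.0, 3.0)))
```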


version 1.0 – posted 1/4/12
version 1.0.1 – posted 1/7/12
The animation below was rendered with no lights, FG only, with an animated camera and 3 fgshooter cameras (4 total).  The image was tuned to the desired quality and then sent to render.  No tuning for flickering was performed.

Simple fgshooter example with moving objects and camera

Another faster example with higher contrast


Example file (may need to cut and paste, Maya 2013), file courtesy of Narann on the ARC forums: FGshooter File

About bnrayner

I am a VFX developer specializing in 3D rendering. With a background in Physics and Digital Art, I enjoy using math to explain how light interacts with materials, and I create pretty pictures along the way.

Posted on January 4, 2012, in final gather, scripts. 101 Comments.

  1. Thank you so much!
    Rendering flicker-free FG is such a pain.
    And providing the information + the script is so blessed. It comes right in time for me.
    I’m looking forward to part 2 😉
    Thanks again,

  2. Thanks a lot for sharing the script Brenton – really handy! Ah yeah, did I mention your blog is the best thing that happened to the MR community in a LONG time? Thanks a bunch, guys!

  3. Hi there,

    I’m trying to implement this script in my scene and I can’t seem to get it to work.
    I get the following error:

    # Error: CallbackError: file C:\Program Files\Autodesk\Maya2012\Python\lib\site-packages\pymel\internal\factories.py line 744: Error executing callback <bound method ui.apply of > – apply – module fgshooter – C:/Users/U/Documents/maya/2012-x64/scripts\fgshooter.py, line 397

    Original message:
    Traceback (most recent call last):
    File “C:\Program Files\Autodesk\Maya2012\Python\lib\site-packages\pymel\internal\factories.py”, line 742, in callback
    res = origCallback( *newargs )
    File “C:/Users/Uri/Documents/maya/2012-x64/scripts\fgshooter.py”, line 403, in apply
    createFgShooters(frames=self.stationary_frames, offsets=self.offset_frames, current_camera=self.render_camera)
    File “C:/Users/Uri/Documents/maya/2012-x64/scripts\fgshooter.py”, line 233, in createFgShooters
    fg_shooter = getFgShooter(render_cam)
    File “C:/Users/Uri/Documents/maya/2012-x64/scripts\fgshooter.py”, line 185, in getFgShooter
    pm.connectAttr(mip_fgshooter.message, render_cam.miLensShader[position])
    File “”, line 2, in __getitem__
    File “C:\Program Files\Autodesk\Maya2012\Python\lib\site-packages\pymel\internal\factories.py”, line 2073, in wrappedApiFunc
    result = method( mfn, *final_do_args )
    File “c:\engserv\rtest\Maya_2012AP_Win64_Build\build\wrk\optim\runTime\Python\Lib\site-packages\maya\OpenMaya.py”, line 7600, in elementByLogicalIndex
    RuntimeError: (kFailure): Object does not exist #

    I tried it in my scene and in a new simple scene that I created for this and I got the same error.
    In the simple scene I have a camera1 with animation, I added Physical Sun & Sky with default settings, plus a poly plane and a sphere. I have the MR production shaders exposed. I created a shelf button as explained in this article, clicked the Apply / Refresh button with default settings, and got the above error.

    • Okay, I found the issue. The cause is pretty straightforward, but I do not know how to implement a fix in your script.
      The issue is that the Lens Shader slot is already taken by the mia_exposure_simple.
      I know how to manually add multiple Lens Shaders, but your script should support this situation, because it is common to have a Lens Shader connected to your camera.

      • Hi Uri, thanks for pointing this out. In the meantime you can add the other lens shaders after running the script so that they are layered. It’s not something we thought of immediately because we don’t use lens shaders here; DOF and tonemapping are done in post in most cases. (With Unified Sampling, a lens shader is also not required for correct sampling when changing color space.) But this is a good point.

        Also, you may find that (depending on the mode chosen in the fgshooter shader) you may need to increase your Final Gather “density” to maintain similar coverage based on the projected screen space you would normally have without dividing it with other virtual cameras.

      • Hi David,

        I started testing the fgshooter in a “real world” scenario and I have to say that it works amazingly!
        I have a scene with a bird flying, and I couldn’t get rid of the FG flickering caused by the bird’s wing movement against the BG. In 2 seconds, using your script, with the main cam + 3 stationary cams, I got it fixed without changing my FG settings, and it was 8%-10% faster(!).

        You mentioned the density. I noticed that my renders are darker in the indirectly lit areas.
        Do I need to multiply it, for example by 3, because I have 3 stationary cams?
        Thanks again for this method and script!

  4. Hi Uri,

    Thanks for the feedback, I am glad the script helped so much!

    Since most of the cameras are probably shooting in the same general direction, the final gather point density probably only needs a moderate increase, maybe as little as 1.2 or 1.5 times as much. The greater the difference in position/orientation, the higher you will have to set the point density to get the same coverage on the geometry that you are rendering.

    Potentially, with four cameras pointing in four completely different directions, a point density of 4 times the original would give you the same coverage as before. If you find that you need to set the point density very high because the cameras are covering such different screen space, perhaps using a few offset cameras instead might be a better solution.

    Of course, every scene is different and if the renders look good then there is no reason to raise the point density!
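      Brenton’s rule of thumb can be sketched as a simple interpolation between full overlap (the cameras share most of their view, so base density suffices) and fully disjoint views (density scales with the camera count). The function below is an illustration of that reasoning, not part of the script:

```python
def suggested_point_density(base_density, num_cameras, overlap):
    """Rule-of-thumb FG point density when several virtual cameras share
    screen space.  overlap=1.0: cameras see essentially the same view,
    so the base density suffices.  overlap=0.0: fully disjoint views,
    so density scales with the camera count."""
    # Linear blend between a factor of 1 (full overlap) and num_cameras (none).
    factor = num_cameras - overlap * (num_cameras - 1)
    return base_density * factor

# Four cameras shooting in roughly the same direction (high overlap):
print(suggested_point_density(1.0, 4, overlap=0.9))
# Four cameras pointing in completely different directions:
print(suggested_point_density(1.0, 4, overlap=0.0))
```

With high overlap this lands in the moderate 1.2–1.5x range mentioned above, and with no overlap it reproduces the 4x worst case.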

    • Hi Brenton,
      Thanks for the clarification regarding the point density.
      And thanks for a great blog and script.
      I know that I sound like a broken record with all the thanks… 🙂

  5. Hi Brenton,

    I can’t seem to download the script I really want to test this out!!

    • We’re going to change how we deliver files shortly. But in the meantime we find that this works ok but you have to wait a bit after clicking the download button. We’ll update when we get it changed. Thanks!

      • Thanks for the feedback, we would love to continue to hear about people using these techniques in the industry and maybe even see some of the results!

        We just posted a new version of the script (version 1.0.1) that fixes the bug described by Uri.

        Happy rendering!

  6. Hi

    Thanks for the script. I used to work at the Mill before Framestore. Right now I’m teaching at the University of Hertfordshire.


    I think some of my second year VFX lot are doing a project with you soon. Things like this script really help them take a second look at MentalRay rather than giving up and trying other renderers. You’ve made my life and my students’ lives far easier!

    • Hi Mark! That’s great to hear. mental ray is a perfectly capable renderer and we hope that we can help with some of the actual and perceived complexity of rendering and lighting. I have friends at Framestore as well. It’s a very small world in VFX and we hope we can spread a little joy to those on a deadline!

      • Actually, have you heard about MentalCore? I’m in the process of getting it for our Uni. It is a “better” version of a MentalRay implementation for Maya. When I was in Mill-TV, Adam wrote loads of tools to make MentalRay more usable in Maya.

        Although MentalRay is lacking some of the newer features like PTEX and DeepImage, I’m guessing they will include them in their next release. Fingers crossed!

  7. Hi guys,

    I have a scene where I have 2 render cameras.
    It is common to have more than one render cam in a scene.
    The issue is that if I run the fgshooter script on the first camera in my scene and then run it on the second camera, it removes the fgshooter setup from the first camera; basically, this script was written with a single-camera setup in mind.
    I know you mentioned in the article that the script uses the camera set in the render settings.
    I don’t know the implementation details of the fgshooter, but it would be best if one could select each camera in the scene (and individually set the number of stationary/offset cameras), and the script would just add the extra fgshooter cameras to the selected camera without touching any other camera, so there could be several cameras in the scene with different fgshooter setups.

    Thanks again for this script!

  8. True. I’m not sure it’s a modification we’ll make, because (for simplicity’s sake) we wanted to single out one camera and avoid possible confusion.

    But with the base provided it shouldn’t be too hard for someone to make that change.

    • I understand where you’re coming from 🙂
      I took a look at the code (it is very clean and readable, thanks for that!)
      I thought of a way to support multiple cameras without complicating the script: add an input field where the user types the name of the render camera.
      This is not very elegant, but this way you don’t have to fiddle with the user’s selection (whether they selected a camera at all, and whether it is a render camera or an old fgshooter camera).
      Also, this way the architecture of the script stays almost the same: the ‘getRenderCamera’ procedure just needs to read the camera from the new input field instead of the render settings, and the ‘removeFgShooters’ procedure should select only the mip_fgshooter connected to the camera in the input field rather than all the mip_fgshooters in the scene.
      You are probably asking why I don’t just do this myself; the thing is that I don’t know Python or MEL very well. I know other programming languages, but implementing it would probably take me 10 times longer than it would take you guys.
      I know that you are both busy, but I hope this is doable 🙂

  9. Hi again,

    In the end I wrote the extra code that supports multiple cameras and user selection.
    Now you can simply select any camera in the Outliner, one at a time, as long as it is not an fgshooter camera or a default camera (there are validations for those scenarios, plus validations for multiple selections and non-camera selections), and execute the fgshooter script.
    To remove the setup, simply re-select the camera and click Remove from Selected; it will remove the fgshooter setup only from the selected camera.
    Tested it in my project and it works great.

    If you guys want me to send it to you, I’ll be more than happy to do so.

    P.S.: One feature was missing for me: a way to quickly see what settings I typed into the fgshooter UI when I reopen a scene with an fgshooter setup. I thought of a way to implement it: the script could record in the camera’s Notes node the details the user typed (for example, how many static cameras and at which frames).
    In the end I didn’t have time to implement it, but I think this would be a great and helpful addition.


    • Thanks so much for sharing these solutions!

      Would you be so kind as to put your multicam solution somewhere online?
      Thank You

      • Here is a link:
        Just copy/paste it into a new .py file and save it in your scripts folder.

        I hope it’s alright with David and Brenton.

      • Sorry I have been MIA, I just moved to NYC and still have not unpacked my computer. I will update the script when I find the time.


      • Always feel free to improve, specialize, or alter scripts we post if you find it useful. In fact, we encourage it. Sadly we cannot be a vetting system for such changes, but the code being made available is how these tools will grow.

  10. Hello there, chaps. Cool script and idea, but would it be too much trouble to explain a workflow for a locked-off camera with animated objects? Excuse me if I’m being dumb, but it doesn’t seem obvious.

    • A locked camera would not experience much flicker at all in the static objects (depending on how the animated objects interact with them). But using other cameras to view the scene from different positions would still provide better adaptivity for the FG calculation. Depending on the scene, these virtual cameras may not be necessary with a locked camera. Do you have a specific scenario where you experience flicker? Maybe your animated objects are causing hotspots?

      • Thanks for the reply. The camera is static but the objects are moving. I’m aware of the classic ways of solving flicker in MR, but sometimes they’re not effective or practical enough. I guess I was hoping this script was a bit of a saver.

      • Hi Keith,

        So the mip_fgshooter shader should definitely be able to help in a situation like yours, but the script is not currently equipped to set up virtual FG cameras for it. I originally wrote the script as a useful tool for a specific problem, and it is not expansive enough to be generally applicable to every situation. I am considering ways to expand its usefulness, but I am not sure when (or if) I will be able to release anything better. If you want to set up the cameras manually, you could arrange the virtual cameras around the render camera and point them in the same direction. That should let them “see around the corners” of geometry and give you a more stable indirect result.



  11. Thanks for your prompt reply.

    I’ll give the manual positioning a go. I don’t see why you should have to find the time personally to adapt and finish this script; it’s the kind of thing that should be implemented anyway, much like a lot of things in the old MR. It’s thanks to guys like you that the seemingly obvious things happen. Not everyone has a team of tech guys to back up the artists when all we want is to get the bloody thing to ‘just work’.

    Anyway sorry for the rant. Keep up the good work with the blog.


    I’m good friends with Jon Wood. I think you know him from the Mill?



  12. Hi guys,

    We are about to do our first real animation project (normal work is high res stills) and we are doing some car interior fly through shots.

    Very thankful that you guys shared this script with everyone, I am hoping it will be just what we need to avoid any flickering.

    However, I wanted to get your advice before diving in head first.

    We have a shot where the camera dollies in from the rear of the car and the rear seats fold up as the camera moves in.

    Will fgshooter with stationary cameras work in this example? The camera is not moving a huge distance. A couple of meters in one direction.

    I am worried that because the seats do not move particularly quickly that we will still see flicker in them.

    It sounds like the offset camera approach does temporal smoothing, which I thought still causes final gather points to drag a bit?

    I do not have much time to test it out so just wanted to get some advice if possible before hand.

    Many thanks,


    • Hi Richard,

      Since you have directly visible animated objects, what I might suggest is a piggyback virtual camera on the animated camera, plus a few well-placed stationary cameras.

      Easy way to test: render one frame where there’s decent movement then the next frame and flick back and forth between them. You can try this at a smaller resolution if the frame is slow and do a few more. For the HTC commercial 3 frames were enough to see if it worked well. See how much variance you get.

      Then maybe offset.

      Offset works well for camera pans and tilts. Zooms, dollies, etc. are a bit more difficult, so you might find offset less helpful if an offset camera is looking at something the main render camera hasn’t reached yet. That’s my only major concern.

      We would love to hear what you find out so we can continually refine usage scenarios and practices. Ideally it should be stable and you can focus purely on quality of lighting.

      • Hi David,

        Thanks for your feedback and testing workflow.

        “piggyback virtual camera on the animated camera”

        What does that mean exactly? set just 1 offset camera?



      • I mean have one of the fgshooter virtual cameras follow along at the same location as the animated camera. So no offset for it. It will see what the render camera sees. This way one of your selected FG calculation positions is the render view. (The fgshooter shader doesn’t automatically include the render view as a camera to calculate from.)

  13. Ah ok. So I should just try with ‘Include Render Camera’ and set a few stationary cameras up at chosen frames in the animation.

    Many thanks,


  14. Hey guys,

    I hope you are well.
    I was wondering if you could make an “artist” POV tutorial that explains your way of getting flicker-free FG with efficient render times.
    There are too many parameters to play with inside the FG menus and sub-menus, and a good explanation from you guys could make my life, and I bet a lot of other people’s lives, easier.


  15. Hey guys,
    Thanks for the great script.

    I have one problem though. I have setup everything. I am working on an interior scene with main camera panning left (nothing fancy). I have one directional & 2 spot lights. I have got 3 FGshooter cameras + 1 main camera.

    These are my FG render settings in mental ray:
    accuracy – 400
    point density – 1.0
    point interpolation – 300
    secondary diffuse bounces – 2

    But when I render my main camera, it gives me a dark (almost no light) render. Could you guys help me figure out what’s happening here?

    Thanks in advance!

    • I might have to see the scene to be sure. Try it without the fgshooter; it should not affect the amount of light in the scene.

      But interpolation of 300 is pretty insane! Maybe 40-45 is my usual upper limit. Is there a reason for that?

      • Hey David, thanks for your reply.
        I increased the interpolation & accuracy because the indoor scene is measured in feet & it is pretty big! So the FG points were not merging together when I kept these settings low, & the render was full of colorful blotchy spheres. (Maybe I am wrong in increasing interpolation… so any kind of help is welcome.) I tried rendering on the same computer today & the lights are coming out good.

        But now when I render the scene with my main camera, I get the render divided into 4 squares, each showing what a different fgshooter camera is seeing.
        Is there any way to render only what my main camera is seeing?


      • Ah, I think I know your problem then. Final gather points are based on screen space. This means they are based on the camera view, so time is spent computing what is actually going to be seen (unlike photons, which may bounce to hidden areas).

        When you add the fgshooter, it divides this screen space among the cameras viewing the scene, so you may need to increase your Point Density to make up for the fact that multiple cameras now share the same area. That’s probably why you aren’t getting enough points.

        For example: if you use 4 cameras for the fgshooter, you may need to multiply your FG Density by 4 to achieve coverage similar to what you had before. Keep in mind that doesn’t necessarily mean you’ll have 4 times as many points as before; you may have just as many (give or take), but they are shared in the same screen space the original single camera had.

        Depending on the mode you’re using, the fgshooter presample density preview showing multiple views is normal.

  16. Hey! THANK YOU. This script is a LIFE SAVER. Just wanted to share a small possible bug (super easy to work around anyway, but just in case you were interested). If the camera you are rendering from has locked rotation attributes, and you have “Include Render Camera” selected in the fgshooter window, the fgshooter camera attached to the render camera ends up pointing in the wrong direction. Again, thank you so much for the awesome script! 🙂

  17. Joshua Jones

    Wonderful script. Thank you very very much. It’s stunning that something like this isn’t built into final gather. I sincerely appreciate you slogging through a solution.

    One question though… I’m rendering a film and all my cameras use Overscan as their “Fit Resolution Gate” setting. The virtual cameras are created automatically with “Horizontal”. When I render, any area in the “overscanned” part of the frame is clearly not being calculated as part of the final gather solution (it comes up swirling, flashing, and dark). If I change all the virtual cameras to Overscan as well, it doesn’t seem to make a difference. It seems like FG is ignoring this setting on the virtual cameras.

    Any ideas of how to fix this so I don’t have to go through the horror of repositioning all my cameras?

    • That’s something that would be nice as a core feature. But exposure might not be as simple.

      As for your specific problem, we don’t render with overscan often (occasionally with stereo) so it’s not something we thought about. You could technically alter the math used by the script if you look at the exported .mi file to see how overscan affects the camera export. That’s a little out of my realm for this sort of thing. 😉

    • Hi Joshua,

      Time-based interpolation of finalgather points is indeed handy and would be nice to see built natively into finalgather. That being said, direct control over virtual finalgather cameras is more flexible than an automatic solution, as it lets you arbitrarily shoot finalgather points from wherever your particular situation requires.

      Right now, my script just creates the virtual cameras as visual representations of where the points are shot from. The only information actually passed to the mip_fgshooter shader is the virtual transform, aperture, aspect, and focal length. Changing overscan will have no effect.

      If you wish to use fgshooters for your project, I recommend manually moving or keying the “static” virtual cameras to better cover the area that your overscan is rendering. While not very user friendly, it should give you the desired result.


  18. Joshua Jones

    Thanks for the feedback. I was able to switch my cameras back to a horizontal fit and adjust their horizontal aperture setting to approximate the framing that I did with overscan turned on. It was relatively painless and with the actual camera and virtual cameras in sync, the render worked out great.

    I do have another question about the script after working with it a bit. When I update the number of stationary cameras, they come up automatically in 12-frame increments. Is this a reasonable increment to maintain? For a 144-frame shot, would you then be looking at 13 cameras to maintain the 12-frame spread? In the few tests I had time to run, the results looked good; I just want to be as efficient as possible. I have some whopper shots at 500 frames and am nervous at the prospect of 42 stationary cameras!

    Thanks again for such an amazing script. It was extremely easy to use and the results so drastically better than what FG by itself is capable of that I shed a tear of happiness at the end of my render.

    • The value of 12 was set arbitrarily. The placement of cameras should be determined by the camera’s particular motion more than anything else. As long as you have good FG point coverage and the pre-render finalgather pass is not too long, you should be good to go.

  19. Hey guys,

    thank you for such a wonderful solution and for helping the community with an insightful blog! A question, and apologies as I’m still a noob when it comes to Mental Ray: what should I expect once I hit render? From what I see, I get 5 tiled images of the frame I selected in the timeline, with final gather rendering over the entire frame. Once that’s finished, I get my single image with render time stats, etc. When I batch render, will this be the new method of working, or do I need to turn rebuild off, or freeze, etc., to see the results? Apologies again if this is elementary; looking forward to your response.

    • You should see a tiled pre-compute phase based on the number of cameras used. For example, 6 tiles for 6 cameras. The rest of the mental ray controls for final gathering work like before except now you have better adaptive coverage.

      But the point of this is that you should be able to more easily render dynamic scenes without needing to build a file or freeze FG. But difficult scenes might still benefit from that workflow.

      Try it without first and maybe different camera positions to see how successful you are without building a FG file. Once you get the hang of it you may be able to render more often without creating a FG cache file at all.

      • Thank you, David! I will try your suggestions, as it would be wonderful not to have to render into an FG file. My concern mostly has to do with what happens when I submit to the farm and how that gets affected; other than that, it’s pretty straightforward and genius.

      • Each frame will render on the farm as you see locally. Since you’re not dealing with cached files, it should be easier: as each frame renders on a node, it doesn’t need to access an FG file or wait to append to one.

        You might want to try a few frames locally, maybe 3-5 on a difficult/complex part of the animation to check for flicker first. And adjust as necessary.

  20. That’s a great idea! Thanks again for the suggestion.

  21. First of all, great script! Thanks so much for it!

    The only thing I have noticed (and it may just be me): has anyone had any problem with the script affecting undo? A lot of the time after I run the script, my undo queue gets turned off randomly. I wasn’t sure at first why it was getting turned off, but I seem to have pinpointed it to the script. Has anyone else had this issue?

    • I didn’t necessarily connect this with the fgshooter script, but yes, this happened to me once over the last month of usage. It had not happened before running the script, but if it was caused by the script, it is not happening consistently.

  22. I have a shot where my camera tracks in on a subject.
    When I create the fg cams they stay in place, how do I go about using them for an animated shot like that?
    Is it fine just to parent the fg cameras to the render cam?

    Thank you

  23. Hi David,

    I'm kind of lost on how to use the script. What should I do next for the FG settings after setting up virtual cameras? Rebuild = Off/On/Freeze? What should the final FG settings usually be (Accuracy / Pt Density / Pt Interp.)? After creating cams, do I need to build a fgmap or just batch render the whole sequence? For moving objects, should I use an offset or stationary cam? I'd appreciate an explanation. Thank you.

    • The main point of the fgshooter is the ability to render without caching the information. It should allow you to more easily just hit “render” and have a good result.

      So with that in mind: Rebuild = On (if you were going to cache it, then why even use the fgshooter?) 😉

      Accuracy depends on the complexity of your lighting and if it’s very evenly lit by something like the Native IBL. So this is scene dependent.

      Point density is influenced by the complexity of your geometry (1.0 is typically enough) and how many virtual cameras you are using. The more the virtual cameras share screenspace, the more density you may need to catch geometric details.

      Interpolation is the same as before; I can usually get away with 15-25 in most cases, but this also depends on your scene complexity and density.

      I would use an offset camera for most situations where the camera moves. Stationary is useful where you may need more coverage, or where the render camera is stationary but the animated objects could benefit from more angles.
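      To make the offset idea concrete, here is a minimal sketch (pure Python; `offset_sample_frames` is a hypothetical helper for illustration, not part of the fgshooter script): each offset camera mirrors the render camera's animation at the current frame plus its offset, so the virtual cameras shoot FG points from where the render camera was, is, and will be.

```python
def offset_sample_frames(current_frame, offsets):
    """For each virtual offset camera, return the frame of the render
    camera's animation that it mirrors: the current frame plus that
    camera's offset (in frames)."""
    return [current_frame + off for off in offsets]

# At frame 100, offsets of -10, 0, and +10 frames give three virtual
# cameras anchored to the render camera's past, present, and future:
print(offset_sample_frames(100, [-10, 0, 10]))  # → [90, 100, 110]
```

      Because the surrounding frames contribute FG points every frame, small camera moves no longer add or drop points abruptly, which is what stabilizes the result.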

      • Now I think I'm starting to get what’s going on. Thanks a lot David. Testing right now and will get back to you if there's any problem. Appreciate it!

  24. Thank you guys for this awesome script, it works great

  25. Hello.. i keep getting this error.. not sure what's going on.. any help would be appreciated
    Traceback (most recent call last):
    File "C:\Program Files\Autodesk\Maya2013\Python\lib\site-packages\pymel\internal\factories.py", line 708, in callback
    res = origCallback( *newargs )
    File "D:/Users/MyNewBitch/Documents/maya/2013-x64/scripts\fgshooter.py", line 391, in apply
    createFgShooters(frames=self.stationary_frames, offsets=self.offset_frames, current_camera=self.render_camera)
    File "D:/Users/MyNewBitch/Documents/maya/2013-x64/scripts\fgshooter.py", line 221, in createFgShooters
    File "C:\Program Files\Autodesk\Maya2013\Python\lib\site-packages\pymel\internal\factories.py", line 730, in newUiFunc
    return beforeUiFunc(*args, **kwargs)
    File "C:\Program Files\Autodesk\Maya2013\Python\lib\site-packages\pymel\internal\pmcmds.py", line 134, in wrappedCmd
    res = new_cmd(*new_args, **new_kwargs)
    RuntimeError: Plug-in, "decomposeMatrix.mll", was not found on MAYA_PLUG_IN_PATH. #

    Also, this is in a new scene with only a camera.. nothing else.. It wasn't working in my project scene.. so I tried to get it to work in a simple scene

  26. I was able to figure it out.. if anyone is having that issue, go to Settings/Preferences > Plug-in Manager and load matrixNodes.mll. That's if you're getting the same runtime error: plug-in "decomposeMatrix.mll" was not found on MAYA_PLUG_IN_PATH
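    If you hit this often, the fix can be scripted before running fgshooter.py. A hedged sketch (the `ensure_plugin` helper is hypothetical; the commented Maya calls use the standard `maya.cmds` plugin commands, and the check/load callables are injected so the logic is testable outside Maya):

```python
def ensure_plugin(name, is_loaded, load):
    """Return True once the named plugin is loaded, loading it if needed.
    is_loaded(name) -> bool reports whether the plugin is active;
    load(name) loads it."""
    if not is_loaded(name):
        load(name)
    return is_loaded(name)

# Inside Maya's script editor you would wire it to maya.cmds, e.g.:
#   from maya import cmds
#   ensure_plugin("matrixNodes",
#                 lambda p: cmds.pluginInfo(p, query=True, loaded=True),
#                 lambda p: cmds.loadPlugin(p))
```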

  27. Hello guys.. for some reason.. the fgshooter cams are not visible in the scene.. not sure what's going on but any help would be appreciated

  28. I hope I'm not aggravating anybody.. basically the cameras were scaled really small, since I'm working in real-world units..

    • You can change the visible manipulator scale of the cameras in Maya without changing their actual scale. DO NOT scale your cameras in Maya. This is true for most renderers, scaling the camera can cause problems when rendering.

      Also be careful using real-world units if you export in centimeters. Positions exported for rendering can become very large numbers, which causes precision problems in both animation and the raytrace acceleration structure. Renderers are unit agnostic: 1 = 1, the renderer does not know if that’s 1cm or 1km, it’s just 1.
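      A quick way to see that precision point, using Python's `struct` module to round-trip values through 32-bit floats (a common storage size for renderer positions; the exact internals of any given renderer are an assumption here):

```python
import struct

def as_float32(x):
    """Round-trip a Python double through 32-bit float storage."""
    return struct.unpack("f", struct.pack("f", x))[0]

# Near the origin, a 0.001-unit offset survives the round trip...
assert as_float32(1.0 + 0.001) != as_float32(1.0)

# ...but 100,000 units out (e.g. a large scene exported in centimeters),
# the spacing between adjacent float32 values is about 0.0078, so the
# same 0.001 offset is lost entirely.
assert as_float32(100000.0 + 0.001) == as_float32(100000.0)
```

      This is why huge coordinate values can produce crawling or quantized results in animation: sub-unit motion simply rounds away.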

  29. thx a lot david.. never knew that scaling the camera had any effect at all.. I've definitely seen the effect of scaling the fgshooter cameras on the distribution of FG points.. when I reset the scale to its original value, I had better distribution.. thx again for all your help bro.. i've learned so much from this site

    • Scaling a camera causes artifacts with the environment blur node, the SSS shaders, and lens shaders at times. Other rendering software also has trouble when you scale a camera, so it’s best to leave it alone. 😉

  30. Is this the usual workflow for animation with Unified Sampling? Do you have to tune FG separately when using Unified Sampling, or does it work similarly to the glossy rays in the mia shader?

    • For now Final Gathering is independent of Unified Sampling: it works like it always has. The fgshooter is a helpful utility to help FG be more adaptive and render with less flicker.

  31. I just installed the script and everything is working fine, except the frame buffer is now divided into 4 quarters and is rendering the 4 cameras in one image. Anyone else seen this? Thanks

    • Hey bro.. that's perfectly normal.. if you have 8 fgshooter cams then you get 8 different views in the render view

      • Ok, I am using it at the moment and had one more question. Do you run it for every frame like brute force final gather, or should you save out the FG map it creates and reuse that?

      • I use it for every frame.. that's the reason why that production node was made.. From what I've read, FG maps are for stills or shots where objects are not moving within the scene.. but honestly I've rendered a 960 x 540 image using the fgshooter with great success in under an hour.. like 40 mins, and I have a lot of lights and geometry in my scenes..

  32. Hey,
    I just have to thank you for this great script. I finished my short film on a very tight deadline (for a CG Talk challenge) and it would not have been possible without the fgshooter script! Here is the short 🙂

  33. Wow! Great release, thanks for sharing!

  34. Hi,
    Thanks for your great help! This tool just helped us solve our flicker issue. However, the offset mode in your tool still gives us a flickering result, and I don’t know why…
    Can you explain the usage of this mode? I tried offset mode in different camera situations (camera fly-through, dolly, pan…), but got the same flickering result… Is it possible to solve the flicker when using offset mode? Thank you.

    • We use different modes depending on the shot itself. It’s possible that:
      1. Your shot doesn’t benefit from that method (maybe the camera moves really quickly, for example), or
      2. The amount of offset needs to be changed

      It's probably harder to get rid of flicker with offset mode since the cameras are all still moving. It performs best when the camera isn’t moving too quickly.

      • Thanks for your answer!
        I’m just curious why we don't just use stationary mode? It seems more stable in any situation. Is there any special situation (or benefit) where we MUST use offset mode rather than stationary mode? (a really slow moving shot?)
        Thank you.

      • No, there’s no hard and fast rule requiring any method over another.

      • Offset cameras may be useful for long fly-through animations where stationary cameras would provide inefficient coverage. In general, I recommend using stationary cameras if possible.

    • Thank you!
      That’s more clear to me. Really appreciate your help, and thank you for your kind share! 🙂

  35. Sir, do you have a video tutorial for this? thanks…

  36. Great script! I did have problems with the edges of the screen: objects aren't visible at the very edges and render black, so I had to modify the scale of the fgshooter cams.

  37. Hey there, and thx for this incredible script. I read the whole thread to get my first shot rendered today. I have a circular motion path for my animated camera, turning around moving objects. From what I understand from reading this post, it is best not to attach the FG cameras to the motion path, so I set them every 90 degrees outside the motion path. My question is: does each stationary FG cam have to render FG every frame, or is there a particular increment to work out for each situation? My sequence is 200 frames long in PAL mode.
    Thx a lot for your support guys and keep it up, I had no knowledge whatsoever of the fgshooter shader before.

    • Is this just a turntable render? No animation other than the camera? If so, then you can bake the FG map with enough cameras in a single go.

      If not, then how to apply the cameras is based on the scene itself. If you don’t travel a lot, then not having them on the motion path is fine. If you travel with the camera a bit, then you need to have them follow to be most efficient.

  38. Hi, first of all thanks for sharing this script.
    Can you give me a tip on which settings for the cameras would be best?
    I only have moving cameras, without moving objects, in my scene.

    So what would be the best settings, or which cameras should I set in your script, for scenes where I only have camera movement?
    – Include Render Camera
    – Stationary Cameras
    – Offset Cameras

    sorry for my bad english

    PS: I use Maya 2014 x64
    and the mental ray renderer

    thanks in advance

  39. Love this. Thanks very much Brenton.

  40. Hi, the link to the script seems to be broken.. can you please re-upload it?


    • This was kept in a Dropbox public folder and Dropbox has since made those private; I will try something else. However, mental ray 3.14 makes use of GI Next, and you might try that instead of final gather for simplicity’s sake.

      • Hi, what do you mean use GI Next? I haven't used mental ray for quite some time

      • Hi @gpgpu4mr, is it possible to re-upload the script? I think it will be helpful for a lot of users. I'm afraid we're not using mr 3.14, and can't upgrade either for some reason


  1. Pingback: Maya – Flicker-free Final Gather « i have a mental blog

  2. Pingback: “My render times are high? How do I fix that?” « elemental ray

  3. Pingback: fgshooter UI for Maya Creates Final Gather Shooter Cameras in a Maya Scene for Flicker Free Final Gather Animation

  4. Pingback: Özgur Yıldırım

  5. Pingback: Render Tests, Combined Lighting | elemental ray
