Refinement Workflow


Refinement Workflow Requires Depthkit Core or Depthkit Cinema

Subscribe to Depthkit Core (formerly known as Depthkit Pro) or Depthkit Cinema for high-quality Depthkit assets exported with the Refinement Workflow.

The Refinement Workflow consists of a robust algorithm that refines depth data, allowing for:

  • Export resolutions that match your color video, ensuring the highest export resolution possible.
  • Cleaner edges for your 2.5D assets.
  • Reduction of depth noise, an artifact commonly seen in most depth sensors.
  • Removal of depth artifacts.
  • Recovery of lost depth information, often caused by materials, lighting, and other capture conditions.

For best results, refine footage shot with a green screen by applying a Refinement Mask created by pulling a key from the color video. This acts as an input to tell Depthkit what area to apply the algorithm to.

If you have not captured on a green screen, you can also use tools like After Effects' Roto Brush to create a mask without pulling a key.
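Conceptually, pulling a key to build a Refinement Mask amounts to classifying each color pixel as background (black, excluded from refinement) or subject (white, refined). Below is a minimal sketch in plain Python using a naive green-dominance test; real keyers are far more sophisticated, and this function is purely illustrative, not part of Depthkit:

```python
def key_to_mask(frame, threshold=1.2):
    """Naive green-screen key: mark a pixel as background (0, black)
    when its green channel dominates red and blue by `threshold`.
    `frame` is a 2D grid of (r, g, b) tuples; returns a grid of 0/255."""
    mask = []
    for row in frame:
        mask_row = []
        for r, g, b in row:
            is_green = g > threshold * max(r, b, 1)
            mask_row.append(0 if is_green else 255)  # black = excluded
        mask.append(mask_row)
    return mask

# A 1x3 frame: green-screen pixel, skin tone, green-ish shadow
frame = [[(20, 200, 30), (210, 160, 140), (40, 90, 35)]]
print(key_to_mask(frame))  # → [[0, 255, 0]]
```

A dedicated keyer (or Roto Brush for non-green-screen footage) replaces this crude color test, but the output is the same idea: a black-and-white matte with your subject in white.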


Depthkit v0.5.7 introduces automatic refinement, which activates the Refinement Algorithm without requiring a Refinement Mask. This feature is experimental and under active development. See Automatic Refinement Best Practices for details.

The Refinement Workflow

  1. Set your depth range (the near and far planes, hue-encoded with red as the near plane and pink as the farthest plane before clipping) in the Isolate panel to best suit the range of motion in your clip or selection.
  2. In the Refine panel, select the Enable Refinement checkbox. This reveals a selection of parameters for refining your footage.


Refinement slider values will vary depending on whether you apply a Refinement Mask or export without a linked mask. See Automatic Refinement Best Practices for details.

  3. Apply your Refinement Mask in the Isolate panel. See our guide for creating a Refinement Mask for details.

The Refinement Mask is recommended for high-quality results from the Refinement Workflow, since it defines the area the algorithm will process.

Refinement Mask

  4. Click Add Refinement Mask and link your previously created mask. Ensure your mask has the same duration and aspect ratio as your source color video; you will also need to match the source codec.

You will notice that the mask automatically removes all areas that are black, allowing for cleaner edges.
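The masking step itself can be pictured as a per-pixel gate: wherever the mask is black, the depth value is discarded. The sketch below assumes depth frames and masks as simple 2D grids; it illustrates the behavior, not Depthkit's internal implementation:

```python
def apply_refinement_mask(depth, mask):
    """Zero out depth wherever the mask is black (0), keeping only
    pixels inside the white (non-zero) region of the mask."""
    return [
        [d if m != 0 else 0 for d, m in zip(d_row, m_row)]
        for d_row, m_row in zip(depth, mask)
    ]

depth = [[1200, 1250, 0], [1300, 1280, 900]]   # e.g. millimeters
mask  = [[255,  255,  0], [0,    255,  255]]   # white keeps, black drops
print(apply_refinement_mask(depth, mask))
# → [[1200, 1250, 0], [0, 1280, 900]]
```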


For best results, crop your footage to remove excess data. When enhancing data, the crop lets you take advantage of every depth pixel to maximize quality. This is ideal when exporting for Unity: it keeps the pixel dimensions low in your combined-per-pixel exports (a video or image sequence format, optimized for Unity playback, that consists of the color video on top and the depth data on the bottom in a single export) for optimal playback performance in the game engine.
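The combined-per-pixel layout is simply a vertical stack: color frame on top, depth frame below, in one image of double height. Depthkit produces this export for you; the sketch below only illustrates the frame layout, with rows of a frame modeled as lists:

```python
def combine_per_pixel(color_frame, depth_frame):
    """Stack color (top) over depth (bottom) into a single frame,
    mirroring the combined-per-pixel layout used for Unity playback.
    Both inputs must share the same width."""
    if any(len(c) != len(d) for c, d in zip(color_frame, depth_frame)):
        raise ValueError("color and depth rows must have equal width")
    return color_frame + depth_frame  # color rows first, then depth rows

color = [["c00", "c01"], ["c10", "c11"]]  # 2x2 color frame
depth = [["d00", "d01"], ["d10", "d11"]]  # 2x2 depth frame
combined = combine_per_pixel(color, depth)
print(len(combined))  # → 4 rows: color on top, depth below
```

Cropping before export shrinks both halves of this stack, which is why the crop directly affects the final export resolution.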

Note that the crop will impact the final export resolution.

Cropping in allows you to optimize the frame around your subject.


The Refinement Algorithm consists of the following parameters, which are used to refine your footage, reduce depth noise, and remove artifacts.

Refinement parameters include:

  • Filter Size: You can think of this as the pixel size of the enhanced depth. This parameter fills holes in your depth data and reduces depth noise (noise caused by the way a sensor detects depth, by projecting an infrared pattern onto a surface). Start with a moderate value around 2-4. Increase it if you have a lot of depth noise; decrease it if you are losing depth details, which is most noticeable around facial features.

  • Sharpness: Adjusts the sharpness of the filter, that is, how sharply the refinement resolves edges and depth transitions. This is most visible where areas of the subject overlap, helping to avoid artifacts from occlusion.

  • Color Contribution: The percentage of influence the color video has on the depth data. This is particularly noticeable when edges that are well defined in your color video are not clear in the depth data alone. Increasing the color contribution in this case allows the refinement algorithm to pull more cues from the color in order to modify the depth. In many cases, this smooths your depth data and can act almost like a Gaussian blur.

Please note that when you have a color input with high contrast, increasing the color contribution may be too strong a cue for the depth and may result in inaccurate depth details. For example, below I have a capture of a black and white checkerboard. By increasing the color contribution to 100%, I am creating a color influence from the black/white contrast that should not actually be represented in the depth.

Depth preview at right includes edges that are only in the color view.

  • Depth Contribution: The percentage of how much the depth data influences the refinement algorithm. At 100% contribution, your data will reflect the look of raw depth data. Decreasing the value will soften your data, putting more weight onto the other enhancement parameters.
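One way to picture how these sliders interact: the refined depth behaves like a weighted blend of the raw sensor value, a smoothed estimate, and a color-guided estimate, where Depth Contribution anchors the result to the raw data and Color Contribution divides the remaining weight. This is only a conceptual sketch, not Depthkit's actual algorithm; the blend formula and parameter names below are assumptions for illustration:

```python
def refine_pixel(raw_depth, smoothed_depth, color_guided_depth,
                 depth_contribution, color_contribution):
    """Blend raw depth with filtered estimates (all contributions 0..1).
    At depth_contribution=1.0 the raw sensor value passes through
    unchanged; lowering it shifts weight onto the smoothed and
    color-guided estimates, split by color_contribution."""
    remainder = 1.0 - depth_contribution
    color_w = remainder * color_contribution
    smooth_w = remainder - color_w
    return (depth_contribution * raw_depth
            + color_w * color_guided_depth
            + smooth_w * smoothed_depth)

# Full depth contribution: the raw value is untouched.
print(refine_pixel(1000, 980, 1040,
                   depth_contribution=1.0,
                   color_contribution=0.5))  # → 1000.0
```

This also shows why the parameters work against each other: raising Color Contribution has no visible effect until Depth Contribution leaves it some weight to act on.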

Advanced Settings

  • Fill Amount: Complements the Filter Size by providing a secondary fill value. Leave at the default value of 4 unless you are dealing with holes or abrupt clipping in your depth. Decrease the value to remove these artifacts.

  • Apply automatic Kinect Mask: Allows you to remove any background data that may still be present in your custom mask.

What Enhancement Parameters are best for me?

Sometimes it is tricky to determine the ideal settings, especially since depth data can vary based on lighting conditions, materials, and other factors. When getting started, stick with the moderate default values. When you do adjust these values, keep in mind that the parameters work with each other. For example, to reduce depth noise you may find it more effective to increase your Filter Size and Color Contribution, but only if your Depth Contribution is reduced as well.


Tool Tip

Get a close-up on the Enhancement Settings in action by "zooming in" with the crop sliders. Just remember to set the crop back to the full frame before exporting!

Automatic Refinement Best Practices


This feature is experimental and actively in development.

The Refinement parameters will work differently without a Refinement Mask applied. If you are not sure where to start with these settings, try the following:

  1. Reduce all sliders to the minimum values to remove any visual artifacts.
  2. Increase Filter Size slightly. When a mask is not applied, this value should be kept at a minimum. If this value is too high, it may create an unwanted extrusion, or halo, around your subject.
  3. Increase Fill Amount if you need to fill holes in your depth data. Similar to Filter Size, this value should be kept at a minimum when a Refinement Mask is not linked, in order to reduce a halo effect surrounding the edges of the subject.
  4. If you are facing edge artifacts, increase Sharpness and/or Color Contribution to clean up any unwanted edge extrusions or similar artifacts.
