

Refinement Workflow



Refinement Workflow Requires Depthkit Pro

Subscribe to Depthkit Pro to export high-quality Depthkit assets with the Refinement Workflow.

The Refinement Workflow consists of a robust algorithm that refines depth data, allowing for:

  • Export resolutions that match your color video, ensuring the highest export resolution possible.
  • Cleaner edges for your 2.5D assets.
  • Reduction of depth noise, an artifact commonly seen in most depth sensors.
  • Removal of depth artifacts.
  • Recovery of lost depth information, often caused by materials, lighting, and other capture conditions.

For best results, refine footage shot on a green screen by applying a Refinement Mask created by pulling a key from the color video. The mask acts as an input that tells Depthkit which area to apply the algorithm to.

If you have not captured on a green screen, you can also use tools like After Effects' Roto Brush to create a mask without pulling a key.
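If you did capture on a green screen, pulling a key essentially means turning green-dominant pixels black. A minimal sketch of that idea (the `green_dominance` threshold and the simple channel comparison are assumptions for illustration; real keyers are far more sophisticated):

```python
import numpy as np

def key_to_mask(rgb, green_dominance=1.4):
    """Toy green-screen key: mark a pixel as background when its green
    channel dominates red and blue. Depthkit expects the subject in
    white and the removed area in black."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    background = (g > green_dominance * r) & (g > green_dominance * b)
    return np.where(background, 0, 255).astype(np.uint8)

# A 2x2 frame: two green-screen pixels, one skin-tone pixel, one grey pixel.
frame = np.array([[[20, 200, 30], [15, 180, 25]],
                  [[200, 150, 120], [90, 90, 90]]], dtype=np.uint8)
mask = key_to_mask(frame)
```

The resulting black-and-white frame, exported for every video frame, is the kind of mask the Refinement Workflow expects as input.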

The Refinement Workflow

  1. Set your depth range (the near and far planes, represented as hue-encoded values, from red at the near plane to pink at the farthest plane before clipping) in the Isolate panel to best suit the range of motion in your clip or selection.

  2. Create a Refinement Mask using the color video. If using the sensor color video as an input, you can find this video in your project folder, in the take folder, under _sensor, then in a folder called sensor01. Learn how to generate a Refinement Mask.

The Refinement Mask is a required input to the Refinement Workflow, since it defines the area the algorithm will process.
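The hue-encoded depth range from step 1 can be pictured as a ramp from red at the near plane toward pink at the far plane. A toy sketch of that mapping, assuming a simple linear hue ramp (Depthkit's actual encoding may differ):

```python
import colorsys

def depth_to_hue(depth_m, near_m, far_m):
    """Map a depth in meters to a hue between red (0.0, near plane)
    and pink (~0.83, far plane). The linear ramp is an assumption
    for illustration; depths outside the range are clipped."""
    t = (depth_m - near_m) / (far_m - near_m)
    t = min(max(t, 0.0), 1.0)
    return 0.83 * t

near, far = 0.5, 3.0                      # hypothetical near/far planes
hue_near = depth_to_hue(0.5, near, far)   # near plane -> red
hue_far = depth_to_hue(3.0, near, far)    # far plane -> pink
rgb_near = colorsys.hsv_to_rgb(hue_near, 1.0, 1.0)
```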

Refinement Mask

  1. Click Add Refinement Mask in the Isolate panel to activate the Refinement Workflow.

Once the Refinement Mask is enabled, you can apply your custom mask in the selection box under the Refinement Mask button.


Notes on the Refinement Mask

Make sure to export your mask with the same duration and aspect ratio as your source color video. You will also need to match the source codec.
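The duration, aspect-ratio, and codec checks above can be sketched as a small helper. The metadata dictionaries here are hypothetical; in practice you might populate them from an inspection tool such as ffprobe:

```python
def mask_matches_source(mask_meta, source_meta):
    """Check that a Refinement Mask's duration, aspect ratio, and codec
    match the source color video. Cross-multiplying width and height
    compares aspect ratios without floating-point division."""
    same_duration = abs(mask_meta["duration_s"] - source_meta["duration_s"]) < 1e-3
    same_aspect = (mask_meta["width"] * source_meta["height"]
                   == source_meta["width"] * mask_meta["height"])
    same_codec = mask_meta["codec"] == source_meta["codec"]
    return same_duration and same_aspect and same_codec

source = {"width": 3840, "height": 2160, "duration_s": 12.5, "codec": "h264"}
good_mask = {"width": 1920, "height": 1080, "duration_s": 12.5, "codec": "h264"}
bad_mask = {"width": 1920, "height": 1080, "duration_s": 12.5, "codec": "prores"}
```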

You will notice that the mask automatically removes all areas in black, allowing for cleaner edges.


The first Refinement parameter to be activated is a crop. This allows you to crop the clip to remove excess data. When enhancing data, the crop lets you take advantage of every depth pixel to maximize quality. This is ideal when exporting for Unity, as it ensures lower pixel dimensions in your combined-per-pixel exports (a video or image sequence export format, optimized for Unity playback, that consists of the color video on top and the depth data on the bottom in a single export; this format provides performance-friendly playback of your volumetric data in the game engine).

Note that the crop will impact the final export resolution.

Cropping in allows you to optimize the frame around your subject.
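The combined-per-pixel layout described above is simply the color frame stacked on top of the depth frame, which is why the crop dimensions flow directly into the final export resolution. A minimal NumPy sketch (the frame sizes are made up for illustration):

```python
import numpy as np

# Hypothetical cropped frame size; the crop directly sets these dimensions,
# which is why a tighter crop yields a smaller combined-per-pixel export.
height, width = 1080, 960

color = np.zeros((height, width, 3), dtype=np.uint8)  # color video frame (top)
depth = np.zeros((height, width, 3), dtype=np.uint8)  # hue-encoded depth frame (bottom)

# Combined-per-pixel layout: color stacked above depth in a single image.
combined = np.vstack([color, depth])
```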


The Refinement Algorithm consists of the following parameters, which are used to refine your footage, reduce depth noise, and remove artifacts.

Enhancement parameters include:

  • Filter Size: You can think of this as the pixel size of the enhanced depth. This parameter fills holes in your depth data and reduces depth noise (noise caused by the way a sensor detects depth, by projecting an infrared pattern onto a surface). Start with a moderate value around 2-4. Increase it if you have a lot of depth noise; decrease it if you are losing depth details, which is most noticeable around facial features.

  • Sharpness: Adjusts the sharpness of the filter along edges and across depth ranges. The effect is most visible where areas of the subject overlap, helping to avoid artifacts from occlusion.

  • Color Contribution: The percentage of how much the color video influences the depth data. This is particularly noticeable when edges that are well defined in your color video are unclear in the depth data alone; increasing the color contribution lets the refinement algorithm pull more cues from the color to modify the depth. In many cases this smooths your depth data and can act almost like a Gaussian blur.

Please note that when your color input has high contrast, increasing the color contribution may be too strong a cue for the depth and may result in inaccurate depth details. For example, below is a capture of a black and white checkerboard: increasing the color contribution to 100% creates a color influence from the black/white contrast that should not actually be represented in the depth.

Depth preview at right includes edges that are only in the color view.

  • Depth Contribution: The percentage of how much the raw depth data influences the refinement algorithm. At 100%, your data will reflect the look of the raw depth data; decreasing the value softens your data, putting more weight on the other enhancement parameters.
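One way to build intuition for how these contributions interact is a simple weighted blend. This is an illustrative toy, not Depthkit's actual (unpublished) algorithm, but it shows why lowering Depth Contribution gives the other cues more weight, and how a high-contrast color cue can bend flat depth:

```python
import numpy as np

def toy_refine(raw_depth, color_cue, depth_contribution, color_contribution):
    # Normalize the two contributions into blend weights.
    total = depth_contribution + color_contribution
    w_depth = depth_contribution / total
    w_color = color_contribution / total
    # Blend the raw depth with a (hypothetical) color-derived depth cue.
    return w_depth * raw_depth + w_color * color_cue

flat_depth = np.full(4, 1.0)               # a flat surface 1 m away
checker = np.array([0.8, 1.2, 0.8, 1.2])   # false edges implied by color contrast

# At 100% Depth Contribution, the raw depth passes through unchanged.
pure = toy_refine(flat_depth, checker, depth_contribution=1.0, color_contribution=0.0)

# Raising Color Contribution lets the checkerboard contrast bend the flat depth.
leaked = toy_refine(flat_depth, checker, depth_contribution=0.5, color_contribution=0.5)
```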

Advanced Settings

  • Fill Amount: Complements the Filter Size by providing a secondary fill value. Leave at the default value of 4 unless you are dealing with holes or abrupt clipping in your depth. Decrease the value to remove these artifacts.

  • Apply Automatic Kinect Mask: Removes any background data that may still be present in your custom mask.

Which Enhancement Parameters are best for me?

Sometimes it is tricky to determine the ideal settings, especially since depth data can vary with lighting conditions, materials, and other capture factors. When getting started, stick with the moderate default values. Once you begin adjusting them, keep in mind that the parameters work with each other: for example, to reduce depth noise you may find it more effective to increase your Filter Size and Color Contribution, but only if your Depth Contribution is reduced as well.


Tool Tip

Get a close-up on the Enhancement Settings in action by "zooming in" with the crop sliders. Just remember to set the crop back to the full frame before exporting!
