Processing Studio captures

  1. Enter the Editor context by clicking the Editor tab, or by going to the View menu and selecting Editor.
  2. In the recording library, the capture from each sensor is available to review. Double-click one of these clips to see all related sensor perspectives highlighted for the complete multi-sensor capture.
  3. For each sensor perspective, we will set the optimal depth range per clip and apply the Refinement algorithm to enhance our depth data.

Depth Range

Located in the Isolate panel, the Depth Range is the range of distances from the sensor that will be preserved. It's represented as a hue-encoded depth map, which you can preview in the Depth Preview window, with red/orange as the closest preserved depth data and purple/pink as the farthest. Bringing these clipping planes as close as possible to your subject, without clipping the subject at any point in the recording, increases the spatial resolution of your asset.
Set the depth range via the slider in the Isolate panel, or by grabbing the corners of the frustum in the 3D viewport.
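Conceptually, the Depth Range acts as a near/far cull on the per-pixel depth values, and a tighter range leaves more of the depth precision for the subject. Here is a minimal NumPy sketch of that idea; the function name, the meters-based input format, and the use of 0 for "no data" are illustrative assumptions, not the application's actual internals.

```python
import numpy as np

def cull_depth_range(depth_m, near_m, far_m):
    """Keep only depth samples between the near and far clipping planes.

    depth_m: 2D array of per-pixel depth in meters (hypothetical input format).
    Samples outside [near_m, far_m] are discarded (set to 0, i.e. "no data").
    """
    depth = depth_m.copy()
    depth[(depth < near_m) | (depth > far_m)] = 0.0
    return depth

# Why a tight range helps: the preserved range is encoded with a fixed number
# of depth steps, so a 1 m range yields roughly 4x finer steps than a 4 m range.
frame = np.array([[0.3, 1.2],
                  [2.5, 6.0]])
culled = cull_depth_range(frame, near_m=0.5, far_m=3.0)
# culled → [[0.0, 1.2], [2.5, 0.0]]
```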


Cinema workflow

If you have recorded a Cinema clip for any of your sensors, you can process it in the same fashion as a single-sensor capture. Follow the steps here to do so.



Refinement is enabled on a per-sensor basis.

To get the most out of the Refinement workflow, enable Refinement and configure its settings appropriately for each sensor.

At this point you can export your clip for Unity. However, for best results, we recommend using the Refinement workflow. This step applies a robust algorithm that refines the depth data, allowing for:

  • High-resolution exports
  • Reduction of depth noise & artifacts
  • Recovery of lost depth information, often caused by materials, lighting, and other capture conditions

There are two methods of applying the Refinement algorithm:

  • Masked Refinement
  • Maskless Refinement


For the highest-quality results, create a Refinement Mask for each of the color videos recorded from each sensor. The source videos from which to generate these mattes can be found at the following location: <Project Folder>/<Take Folder>/sensor<#>/sensor.mp4

The Refinement Mask tells the Refinement process to keep areas of the frame filled with white and discard areas filled with black. This helps tools like Hole Filling reconstruct missing depth data more accurately. For this reason, the quality of the key, rotoscope, or other form of background removal, and how accurately it traces your subject, directly affects the quality of the end result.

The Refinement Mask video should be the same resolution, frame rate, and duration as the source color video it corresponds to. See our guide for creating a Refinement Mask.
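The mask semantics above (white = keep, black = discard, frame-for-frame with the source video) can be sketched as a simple per-frame operation. This is a conceptual NumPy illustration under assumed conventions: a grayscale uint8 mask frame, a 128 threshold, and 0 as the "discarded" depth value are all hypothetical choices, not the application's documented behavior.

```python
import numpy as np

def apply_refinement_mask(depth, mask):
    """Keep depth where the matte is white, discard where it is black.

    depth: 2D float array of depth values for one frame.
    mask:  2D uint8 grayscale frame from the matte video (255 = keep, 0 = discard).
    """
    # The mask video must match the source video's resolution (and frame rate
    # and duration), so each depth frame pairs with one same-sized mask frame.
    assert depth.shape == mask.shape, "mask resolution must match the source"
    out = depth.copy()
    out[mask < 128] = 0.0  # treat dark matte pixels as "discard"
    return out

depth = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
mask = np.array([[255,   0],
                 [255, 255]], dtype=np.uint8)
refined_input = apply_refinement_mask(depth, mask)
# refined_input → [[1.0, 0.0], [3.0, 4.0]]
```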

In the Isolate panel, link the masks to each corresponding recording.



Select Enable Refinement in the Refine panel and go straight to exporting. In this case you won't leverage the full power of the Refinement algorithm, but you can still achieve high-resolution exports with little post-production.


The Refinement algorithm exposes the following parameters, which are used to refine your footage, reduce depth noise, and remove artifacts.


Filter Size and Fill Amount recommendation

If you are not applying a Refinement Mask as an input, we recommend reducing the Filter Size and Fill Amount. While higher values can repair depth artifacts, they work best when a Refinement Mask is applied, and may add stray geometry to your asset without one.

  • Filter Size: You can think of this as the pixel size of the enhanced depth. This parameter will fill holes in your depth data and reduce depth noise. Start with a moderate value around 2-4. Increase if you have a lot of depth noise. Decrease if you are losing depth details, most noticeable around facial details.
  • Sharpness: Adjusts the sharpness of the filter along the edges of your capture, with a low value creating a gradual transition between elements at different depths and a high value severing these transitions. This is most visible where areas of the subject overlap.
  • Color Contribution: The percentage by which the color video influences the depth data. This is particularly noticeable when edges that are well defined in your color video are not clear in the depth data alone. Increasing the color contribution in this case allows the refinement algorithm to pull more cues from the color in order to modify the depth.
  • Depth Contribution: The percentage by which the depth data influences the refinement algorithm. At 100% contribution, your data will reflect the look of the raw depth data. Decreasing the value will soften your data, putting more weight onto the other enhancement parameters.
  • Fill Amount: Complements the Filter Size by providing a secondary fill value. Leave at the default value of 4 unless you are dealing with holes or abrupt clipping in your depth. Decrease the value to remove these artifacts.
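The parameter guidance above can be summed up as a small settings record. The class below is a hypothetical sketch, not part of any real API: the field names mirror the panel labels, the defaults follow the text (a moderate Filter Size in the 2-4 range, Fill Amount defaulting to 4), and the maskless preset reflects the recommendation to lower Filter Size and Fill Amount when no mask is linked.

```python
from dataclasses import dataclass

@dataclass
class RefinementSettings:
    """Hypothetical container mirroring the Refine panel parameters."""
    filter_size: int = 3          # pixel size of the enhanced depth; raise for noisy depth
    sharpness: float = 0.5        # low = gradual depth transitions, high = hard edges
    color_contribution: float = 50.0   # % influence of the color video on the depth
    depth_contribution: float = 100.0  # % weight of the raw depth data
    fill_amount: int = 4          # secondary hole-fill value; lower to remove stray fill

    def validate(self):
        """Check that percentages and sizes are in sensible ranges."""
        assert self.filter_size > 0 and self.fill_amount >= 0
        assert 0.0 <= self.color_contribution <= 100.0
        assert 0.0 <= self.depth_contribution <= 100.0

# Maskless workflow: reduce Filter Size and Fill Amount, per the note above.
maskless = RefinementSettings(filter_size=2, fill_amount=2)
maskless.validate()
```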


Depth Range allows you to cull depth data closer than the Near Plane or farther than the Far Plane.

If you have enabled Refinement in the Refinement panel, you'll have access to additional parameters.

Link Refinement Mask will allow you to select a matte video for the Masked Refinement workflow.

Crop allows you to remove unused portions of your frame. As with the Depth Range, the closer you can bring the crop parameters to your subject without clipping the subject at any point in the recording, the higher the resolution of your output will be.

Set the crop with the horizontal and vertical crop sliders, or by dragging the interactive handles in the Color Preview window.
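In effect, cropping trims away border pixels so the export resolution is spent on the subject. A minimal NumPy sketch of that operation follows; the pixel-margin arguments are an illustrative assumption (the app exposes crops as slider values), and the 1920x1080 frame is just example data.

```python
import numpy as np

def crop_frame(frame, left, right, top, bottom):
    """Trim the given pixel margins, mirroring the horizontal/vertical crop sliders."""
    h, w = frame.shape[:2]
    return frame[top:h - bottom, left:w - right]

frame = np.zeros((1080, 1920))  # example HD frame, height x width
cropped = crop_frame(frame, left=400, right=400, top=100, bottom=100)
# cropped.shape → (880, 1120): the same subject now fills more of the frame
```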