Welcome to the Depthkit Documentation Portal!

Depthkit is the first volumetric filmmaking tool designed to empower creators of all experience levels to participate in the cutting edge of immersive storytelling.

Getting Started

What equipment do I need to get started?

All you need is a depth sensor and a Windows PC to get up and running with Depthkit.

What are the recommended computer system specs?

See Depthkit's minimum computer specifications.

What depth sensors are supported?

Depthkit supports several depth sensors. Our recommendation is the Microsoft Azure Kinect, which is available with the Depthkit Pro license. We also support the Kinect for Windows v2 and the Kinect for Xbox One (with the addition of the Kinect Adapter for Windows). Note that the Kinect has been discontinued and replaced by the Azure Kinect, but it is still available second hand. Support for the RealSense D400 series depth cameras (D415, D435, and D435i) is experimental, meaning we do not recommend them for production use for quality and stability reasons.

What depth sensor is right for me?

We strongly recommend the Azure Kinect due to its stability during capture and the quality of its depth data. Read more about these sensors in our Depth Sensors guide.

What is the difference between the Kinect for Windows v2 & the Kinect for Xbox One?

These two Kinect models are identical hardware but are distributed differently. Since the Kinect has been discontinued, the Kinect for Xbox One is the model that is still readily available to purchase. You will also need the Kinect Adapter for Windows, which is usually sold separately. Read more about where to purchase the Kinect.

Is Depthkit Mac compatible?

Not yet. Mac support is currently in development and will be available for Depthkit very soon. For the moment, you can capture on a Mac by running Windows via Boot Camp.

How can I verify that Depthkit will run on my computer before using it?

The best way to ensure that Depthkit will run well is to buy one of the recommended computers listed in our minimum computer specifications.

The Azure Kinect has the ability to capture at 2K and 4K color resolutions, but requires a more powerful computer to do so smoothly. See the Azure Kinect guide for more on the device's hardware requirements.

For the Kinect for Windows v2, we recommend downloading the Kinect Configuration Verifier Tool and running it while your Kinect is plugged into your computer. See details on computer performance.

Do I need to shoot with an external camera in addition to the Depth sensor, like a DSLR?

No! Depthkit captures the color streams provided by the Kinect and RealSense cameras so you can begin capturing without any other equipment. Depthkit does have experimental support for calibration and capture with a video camera alongside the depth sensor. Please get in touch with [email protected] to inquire about access to this feature.

Can I use Depthkit to capture a scene with multiple sensors simultaneously?

Not yet! Depthkit multicamera capture methods are still in development and will be released with the Studio product tier. If you are interested in learning more, feel free to join the Depthkit Studio waiting list on our website.

Capturing Data

What is the export format for Depthkit?

Depthkit is optimized for Unity and exports a video file (mp4) in Depthkit's combined-per-pixel format: a video or image sequence export, optimized for Unity playback, that consists of the color video (top) and the depth data (bottom) in a single export. This format provides performance-friendly playback of your volumetric data in the game engine with the addition of the Depthkit Unity Plugin.
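To illustrate the layout, a combined-per-pixel frame stacks the color video on top of the encoded depth data, so the two halves can be recovered by splitting the frame vertically. A minimal sketch of that idea (the function name and the list-of-rows frame representation are hypothetical, for illustration only):

```python
def split_combined_per_pixel(frame):
    """Split a combined-per-pixel frame (here, a list of pixel rows)
    into its color (top) and depth (bottom) halves."""
    height = len(frame)
    color = frame[: height // 2]   # top half: color video
    depth = frame[height // 2:]    # bottom half: encoded depth data
    return color, depth

# Dummy 8-row "frame": rows 0-3 stand in for color, rows 4-7 for depth.
frame = [f"row{i}" for i in range(8)]
color, depth = split_combined_per_pixel(frame)
```

In practice the Depthkit Unity Plugin performs this separation on the GPU during playback; this sketch only shows the spatial layout of the export.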

How much hard drive space will Depthkit footage consume?

If shooting with the Azure Kinect at 1080p, keep in mind that a ten minute clip will take up approximately 4GB.
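The figure above works out to roughly 0.4 GB per minute. A quick helper for budgeting disk space ahead of a shoot, assuming that rate (the rate is derived from the 4 GB per ten minutes figure quoted here, not an official specification, and will vary with resolution and content):

```python
GB_PER_TEN_MINUTES = 4  # approximate rate for Azure Kinect at 1080p

def estimate_storage_gb(minutes):
    """Estimate disk usage in GB for a capture of the given length,
    assuming roughly 4 GB per 10 minutes of footage."""
    return minutes * GB_PER_TEN_MINUTES / 10

# A one-hour shoot at this rate:
print(estimate_storage_gb(60))  # → 24.0
```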

Do I need to record with a green screen?

Depthkit supports workflows for shooting on location, as well as in a professional production setting with a green screen. Capturing on a greenscreen enables the Depthkit Enhancement workflow to dramatically improve the captured data quality. Read more about capture locations and best practices.

Can I shoot with Depthkit on the go?

Depthkit has been developed with increased capture stability and performance, allowing you to capture on portable machines. Check out the system specifications for more details on what is possible!

Why does my depth data appear noisy?

The current generation of depth sensors all suffer from noise and artifacts; these cameras were simply not originally designed for photographic purposes. However, Depthkit's software processes the data to improve the quality of your depth capture. Check out these best practices to avoid excess depth noise. If you are interested in the depth data enhancement systems, please check out our Beta program for more information.

Export and Publishing

What can I do with my combined-per-pixel footage once it is exported?

Once exported, this video format is ready for Unity with the addition of the Depthkit Unity Plugin. The export includes a corresponding metadata file (a text file that holds the camera data of your capture and is required to play your recording in Unity) and a poster image.

What game engines or other realtime environments does Depthkit publish to?

Depthkit is compatible and optimized for Unity and Three.js.

What visual effects or other rendering environments can Depthkit publish to?

The Depthkit Pro license supports the export of geometry sequences that integrate with all of the leading visual effects tools. To learn more, check out Depthkit for Visual Effects.

How long does it take to process my depth footage?

The capture to export process is extremely fast, with depth data exporting happening faster than real time on most machines. You can get up and running and into Unity in minutes with Depthkit's streamlined workflow.

Publishing in Unity

Is there a limit to the amount of Depthkit footage I can play in a Unity scene before I run into performance issues?

This will vary based on your scene and the performance of your computer. That being said, there are several things you can do to increase performance.

  1. Lower the mesh density (the quality of your asset) of your clips. This setting can be found in the Depthkit clip Inspector.
  2. Lower the resolution of your Depthkit videos. This can be helpful if you are building to Android or iOS.
  3. Use AVPro as your video player. This can often help performance by offering hardware decoding. It can also accept alternate video codecs like HAP, a performance-friendly codec designed for low CPU utilization.
