Welcome to the Depthkit Documentation Portal!

Depthkit is the first volumetric filmmaking tool designed to empower creators of all experience levels to participate in the cutting edge of immersive storytelling.

Getting Started

What are the recommended system specs?

See Depthkit's minimum computer specifications.

What equipment do I need to get started?

All you need to get up and running with Depthkit is a Kinect for Windows v2 or Kinect for Xbox One (plus the Kinect Adapter for Windows) and a Windows PC.

What depth sensors are supported?

Depthkit supports the Kinect for Windows v2 / Kinect for Xbox One (with the addition of the Kinect Adapter for Windows). Support for RealSense depth cameras (D415 and D435) is experimental: we support both cameras, but we do not recommend them for production use.

What is the difference between the Kinect for Windows v2 & the Kinect for Xbox One?

These two Kinects are identical in how they function. Since the Kinect has been discontinued, the Kinect for Xbox One is the model that is still readily available to purchase; you will just need to buy the Kinect Adapter for Windows, which is usually sold separately. Read more about where to purchase the Kinect.

What depth sensor is right for me?

We strongly recommend the Kinect due to its stability during capture and the quality of its depth data. We support Intel RealSense, but it is currently in an experimental state with some quality and stability issues. Read more about these sensors in our Equipment guide.

Is Depthkit Mac compatible?

Not yet. Mac support is currently in development and will be available for Depthkit very soon. For the moment, you can capture on a Mac by running Windows via Boot Camp.

How can I verify that Depthkit will run on my computer before using it?

We recommend downloading the Kinect Configuration Verifier tool and running it while your Kinect is plugged into your computer. See details on computer performance.

Do I need to shoot with an external camera in addition to the Depth sensor, like a DSLR?

No! Depthkit captures the color streams provided by the Kinect and RealSense cameras, so you can begin capturing without any other equipment. Depthkit does have experimental support for calibration and capture with a video camera alongside the depth sensor. Please get in touch with support@depthkit.tv to inquire about access to this feature.

Can I shoot with multiple sensors?

Depthkit multi-camera capture methods are still in development and will be released with the Studio product tier. If you are interested in learning more, feel free to join the Depthkit Studio waiting list on our website.

Capturing Data

What is the export format for Depthkit?

Depthkit is optimized for Unity and exports a video file (mp4) in Depthkit's combined-per-pixel format: the color video (top) and the depth data (bottom) combined into a single frame. This format provides performance-friendly playback of your volumetric data in Unity with the addition of the Depthkit Unity Plugin.
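To illustrate the combined-per-pixel layout described above, here is a minimal sketch (in Python with NumPy; the function name and library choice are assumptions for illustration, as Depthkit itself handles this inside the Unity Plugin) that splits a decoded frame into its color (top) and depth (bottom) halves:

```python
import numpy as np

def split_combined_per_pixel(frame):
    """Split a combined-per-pixel frame into its color (top) and
    depth (bottom) halves. `frame` is an H x W x 3 array; the two
    halves are stacked vertically, so each is H // 2 rows tall."""
    height = frame.shape[0]
    color = frame[: height // 2]   # top half: color video
    depth = frame[height // 2 :]   # bottom half: encoded depth data
    return color, depth

# Dummy frame at the Kinect export resolution of 512x848:
frame = np.zeros((848, 512, 3), dtype=np.uint8)
color, depth = split_combined_per_pixel(frame)
print(color.shape, depth.shape)
```

In Unity, the Depthkit Unity Plugin performs the equivalent split (and depth decoding) on the GPU at playback time; this sketch only shows the frame layout.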

What is the export resolution for Depthkit?

The export resolution when using the Kinect is 512x848.

How much hard drive space will Depthkit footage consume?

If shooting with a Kinect, keep in mind that a ten minute clip will take up approximately 4GB.
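As a rough sketch of that rule of thumb, here is a small, hypothetical helper for estimating disk usage for a shoot. The ~4 GB per ten-minute figure comes from above; actual file sizes will vary with scene content:

```python
def estimated_storage_gb(minutes, gb_per_ten_minutes=4.0):
    """Estimate disk space for Kinect Depthkit footage, using the
    rule of thumb of roughly 4 GB per ten-minute clip."""
    return minutes / 10.0 * gb_per_ten_minutes

# A 45-minute shoot day at the default rate:
print(estimated_storage_gb(45))
```

Budgeting a comfortable margin beyond the estimate is wise, since capture sessions often run longer than planned.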

Do I need to record with a green screen?

Depthkit has been optimized to shoot on location as well as in a professional production setting with a green screen. The latter is optional, but encouraged in order to process footage with the Depthkit Enhancement workflow! Read more about capture locations and best practices.

Can I shoot with Depthkit on the go?

Depthkit has been developed with increased capture stability and performance, allowing you to capture on portable machines. Check out the system specifications for more details on what is possible!

Export and Play

What can I do with my combined-per-pixel footage once it is exported?

Once exported, this video format is ready for Unity with the addition of the Depthkit Unity Plugin, along with its corresponding metadata file (a text file that holds the camera data of your capture, required to play your recording in Unity) and poster image.

What game engines can I publish to?

Depthkit is compatible with and optimized for Unity.

Why does my depth data appear noisy?

The current generation of depth sensors all suffer from noise and artifacts; these cameras were simply not originally designed for photographic purposes. However, Depthkit includes software processing to improve the quality of captured depth data. Check out these best practices to avoid excess depth noise. If you are interested in the depth data enhancement systems, please check out our Beta program for more information.

How long does it take to process my depth footage?

The capture to export process is extremely fast, with depth data exporting happening faster than real time on most machines. You can get up and running and into Unity in minutes with Depthkit's streamlined workflow.

Publishing in Unity

Is there a limit to the amount of Depthkit footage I can play in a Unity scene before I run into performance issues?

This will vary based on your scene and the performance of your computer. That being said, there are several things you can do to increase performance.

1) Lower the mesh density (the geometric quality of your asset) of your clips. This setting can be found in the Depthkit clip Inspector.
2) Lower the resolution of your Depthkit videos. This can be helpful if you are building to Android or iOS.
3) Use AVPro as your video player. This can often help performance by offering hardware decoding. It can also accept alternate video codecs like HAP, a performance-friendly codec designed for low CPU utilization.
