All you need to get up and running with Depthkit is a depth sensor and a Windows PC.
See Depthkit's minimum computer specifications.
Depthkit supports several depth sensors. Our recommendation is the Microsoft Azure Kinect, which is available with the Depthkit Pro license. We also support the Kinect for Windows v2 and the Kinect for Xbox One with the addition of the Kinect Adapter for Windows. Note that this device has been discontinued and replaced by the Azure Kinect, but it is still available second-hand. Support for the RealSense D400 series depth cameras (D415, D435, and D435i) is experimental, meaning we do not recommend them for production use for quality and stability reasons.
We strongly recommend the Azure Kinect for its stability during capture and its depth data quality. Read more about these sensors in our Depth Sensors guide.
These two Kinect models are identical hardware, distributed differently. Since the Kinect has been discontinued, the Kinect for Xbox One is the model that is still readily available to purchase. You will also need the Kinect Adapter for Windows, which is sold separately. Read more about where to purchase the Kinect.
Not yet. Mac support is currently in development and will be available for Depthkit very soon. For the moment, you can capture on a Mac running Windows via Boot Camp.
The best way to ensure that Depthkit will run well is to buy one of the recommended computers listed in our minimum computer specifications.
The Azure Kinect has the ability to capture at 2K and 4K color resolutions, but requires a more powerful computer to do so smoothly. See the Azure Kinect guide for more on the device's hardware requirements.
No! Depthkit captures the color streams provided by the Kinect and RealSense cameras, so you can begin capturing without any other equipment. Depthkit does have experimental support for calibration and capture with a video camera alongside the depth sensor. Please get in touch with [email protected] to inquire about access to this feature.
Not yet! Depthkit multicamera capture methods are still in development and will be released with the Studio product tier. If you are interested in learning more, feel free to join the Depthkit Studio waiting list on our website.
Depthkit is optimized for Unity and exports a video file (mp4) in Depthkit's combined-per-pixel format: a single export consisting of the color video (top) and depth data (bottom), formatted for performance-friendly playback of your volumetric data in the game engine with the addition of the Depthkit Unity Plugin.
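As a rough illustration of the layout (not the Depthkit plugin's actual API), a combined-per-pixel frame can be split back into its color and depth halves by cutting the image in two vertically. A minimal NumPy sketch, where the frame array and its dimensions are placeholder assumptions:

```python
import numpy as np

# Hypothetical combined-per-pixel frame: color video stacked on top,
# encoded depth on the bottom, so the image is twice the capture height.
frame = np.zeros((1440, 1280, 3), dtype=np.uint8)  # placeholder pixel data

half = frame.shape[0] // 2
color = frame[:half]          # top half: the color video
depth_encoded = frame[half:]  # bottom half: depth data encoded as color

# Both halves cover the same capture area, pixel for pixel.
assert color.shape == depth_encoded.shape
```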
If shooting with the Azure Kinect at 1080p, keep in mind that a ten minute clip will take up approximately 4GB.
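Using the figure above (roughly 4GB per ten minutes at 1080p), you can budget disk space for a shoot ahead of time. A quick sketch; the helper function is illustrative, not part of Depthkit:

```python
GB_PER_10_MIN = 4.0  # approximate figure for Azure Kinect capture at 1080p

def estimate_storage_gb(minutes: float) -> float:
    """Rough disk-space estimate (in GB) for a capture of the given length."""
    return minutes / 10.0 * GB_PER_10_MIN

# An hour-long session would need on the order of 24GB of free space.
print(estimate_storage_gb(60))
```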
Depthkit supports workflows for shooting on location, as well as in a professional production setting with a green screen. Capturing on a green screen enables the Depthkit Enhancement workflow to dramatically improve the captured data quality. Read more about capture locations and best practices.
Depthkit has been developed with increased capture stability and performance, allowing you to capture on portable machines. Check out the system specifications for more details on what is possible!
The current generation of depth sensors all suffer from noise and artifacts; these cameras were simply not originally designed for photographic purposes. However, Depthkit's software processing improves the quality of the captured depth data. Check out these best practices to avoid excess depth noise. If you are interested in the depth data enhancement systems, please check out our Beta program for more information.
What can I do with my combined-per-pixel footage once it is exported?
Once exported, this video format (with the corresponding metadata file, a text file holding the camera data required to play your recording in Unity, and a poster image) is ready for Unity with the addition of the Depthkit Unity Plugin.
Depthkit is compatible with and optimized for Unity and Three.js.
The Depthkit Pro license supports the export of geometry sequences that integrate with all of the leading visual effects tools. To learn more, check out Depthkit for Visual Effects.
The capture-to-export process is extremely fast, with depth data exports completing faster than real time on most machines. With Depthkit's streamlined workflow, you can get up and running in Unity in minutes.
Is there a limit to the amount of Depthkit footage I can play in a Unity scene before I run into performance issues?
This will vary based on your scene and the performance of your computer. That being said, there are several things you can do to increase performance.
- Lower the mesh density (the quality of your asset) of your clips. This setting can be found in the Depthkit clip Inspector.
- Lower the resolution of your Depthkit videos. This can be helpful if you are building to Android or iOS.
- Use AVPro as your video player. This can often help performance by offering hardware decoding. It also can accept alternate video codecs like HAP, a performance friendly codec designed for low CPU utilization.