Processing Cinema captures

How to process Cinema takes

Once your cinema captures are transferred onto your computer, you are ready to process your volumetric footage.

🚧

Re-encoding

If your camera uses a codec unsupported by Windows Media Foundation, you may need to re-encode these files. Re-encode to H.264 (mp4) for Depthkit.
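One way to re-encode is with FFmpeg (assuming it is installed and on your PATH; the file names below are placeholders), driven from a short Python script:

```python
import subprocess

# Re-encode a cinema clip to H.264 in an MP4 container so that Windows Media
# Foundation, and therefore Depthkit, can decode it.
# "cinema_take.mov" and "cinema_take_h264.mp4" are placeholder file names.
subprocess.run(
    [
        "ffmpeg",
        "-i", "cinema_take.mov",   # source clip from your cinema camera
        "-c:v", "libx264",         # H.264 video
        "-pix_fmt", "yuv420p",     # widely compatible pixel format
        "-c:a", "aac",             # re-encode audio to a compatible codec
        "cinema_take_h264.mp4",
    ],
    check=True,
)
```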

In the Edit window Library, select the clip that you would like to process. By default, the take is already set up with the sensor video in the 3D viewport.

  1. Expand the Cinema Capture panel and click the Enable Cinema Capture button to enable linking the cinema captures to the sensor captures.
  2. Click Link Cinema Capture Video to import the video into Depthkit.

Enabling Cinema Capture previews footage from your video camera as opposed to the sensor color.

Once the video is linked, you will see that the 3D viewport now previews the video from your camera instead of the sensor. In addition, the video file path is linked within the Cinema Capture panel.


Synchronizing footage

Next, synchronize the cinema and sensor video, which will in turn synchronize your depth data as well.

A quick way to do this is to bring both the cinema and sensor video into a video editor like Adobe Premiere and find the clap, or slate, to use as the sync point. Note the timecode for both videos.

📘

Use Clip-Relative Timecode, Not Time of Day

Gather the time from the start of the video (the numbers will be relatively low), not the timecode representing time of day. Some professional cameras produce timecode against a 24-hour clock, which will look like 01:23:45:01.

As seen in the example below, the Cinema Capture has a sync point of 00;00;04;02. The sync point of the Sensor Capture is 00:00:05:00.


Sync point of the Cinema Capture


Sync point of the Sensor Capture
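For reference, here is a minimal sketch of how two clip-relative sync points translate into a frame offset, assuming a 30 fps timebase and ignoring drop-frame counting for simplicity. Depthkit performs this alignment for you once both timecodes are entered; the snippet is only an illustration.

```python
FPS = 30  # assumed frame rate of both clips

def timecode_to_frames(tc: str, fps: int = FPS) -> int:
    """Convert HH:MM:SS:FF (or HH;MM;SS;FF) to a total frame count."""
    hours, minutes, seconds, frames = (int(part) for part in tc.replace(";", ":").split(":"))
    return ((hours * 60 + minutes) * 60 + seconds) * fps + frames

cinema_sync = timecode_to_frames("00;00;04;02")  # sync point of the Cinema Capture
sensor_sync = timecode_to_frames("00:00:05:00")  # sync point of the Sensor Capture

# The clap lands 122 frames into the cinema clip and 150 frames into the
# sensor clip, so the two streams are offset by 28 frames.
print(sensor_sync - cinema_sync)  # 28
```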

Paste these timecodes under Synchronization in the Cinema Capture panel.


Paste the timecodes from your videos into the Synchronization panel.

Once applied, the playhead jumps to the sync point on the timeline and in the 3D viewport, and your footage is now synchronized.

Depth & Color alignment

🚧

Depth & Color Alignment

On some Azure Kinects, we have noticed a slight misalignment between the sensor depth and color; more information is available on Microsoft's Azure Kinect SDK GitHub. In the meantime, we have provided an Alignment panel in Depthkit Cinema to solve for this potential issue.

Once your Cinema footage is linked and synchronized, if you notice a slight offset between the color and depth in your take, you can solve this with the Alignment panel, located within the Cinema Capture panel. With this functionality, you can tweak the depth and color alignment to correct any discrepancy that your sensor may have introduced. Translation, rotation, and scale can all be used to adjust as needed.

For best results, apply any translation, rotation, or scale prior to adding a refinement mask to your clip.
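Conceptually, these controls combine into a single adjustment that nudges the depth back into registration with the color image. The sketch below only illustrates how translation, rotation, and scale compose into one transform; it is not Depthkit's implementation, and the numbers are placeholders.

```python
import math

def alignment_matrix(tx: float, ty: float, angle_deg: float, scale: float):
    """Compose translate, rotate, and scale into one 3x3 matrix (row-major)."""
    c = math.cos(math.radians(angle_deg)) * scale
    s = math.sin(math.radians(angle_deg)) * scale
    return [
        [c, -s, tx],
        [s,  c, ty],
        [0.0, 0.0, 1.0],
    ]

def apply(matrix, x: float, y: float):
    """Apply the adjustment to a single depth-image coordinate."""
    return (
        matrix[0][0] * x + matrix[0][1] * y + matrix[0][2],
        matrix[1][0] * x + matrix[1][1] * y + matrix[1][2],
    )

# Placeholder values: shift 2 px right and 1 px down, rotate 0.5 degrees,
# and scale up by 1%, then see where a sample pixel lands.
m = alignment_matrix(tx=2.0, ty=1.0, angle_deg=0.5, scale=1.01)
print(apply(m, 640.0, 360.0))
```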

If your sensor introduced this misalignment, you may need to apply these tweaks to all of your Cinema Captures. Simply click the Copy to Cinema Captures button to do this. This action will only copy the alignment values to takes with enabled Cinema Captures that share the same Camera Pairing data.

Please note that this is not recommended as a fix for a bad or mediocre Camera Pairing. To process your takes successfully, you must start your project with a good Camera Pairing.


At this stage, your volumetric footage is ready to edit, isolate, and export just as you would with a Sensor Capture. For highest quality results with the Cinema Capture workflow, we recommend using the Refinement Workflow.