Sensor-only texture alignment
Due to a rare known hardware issue affecting some Azure Kinects, the color texture of a Depthkit recording may not align perfectly with the geometry.
In the Depthkit Cinema workflow, this can be addressed with the Cinema > Alignment tools found in the Edit context. If no Cinema camera was used for the recording, you can still expose these tools by editing the project JSON.
Whenever making edits to your project JSON, be sure to create a backup first.
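One way to make that backup repeatable is a small script. This is a minimal sketch; the function name and the idea of a timestamped `.bak` copy are illustrative, not part of Depthkit:

```python
import shutil
from datetime import datetime

def backup_project_json(path):
    """Copy the project JSON next to itself with a timestamped .bak suffix."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    backup_path = f"{path}.{stamp}.bak"
    shutil.copy2(path, backup_path)  # copy2 also preserves file timestamps
    return backup_path
```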
What you'll need
- Depthkit Project with a recording
- Depthkit Cinema or Depthkit Studio Pilot License (does not work for Depthkit Core)
1. Copy Source Extrinsics and Intrinsics
To preserve the sensor's intrinsics and extrinsics, we need to copy the factory calibration of its color camera.
- Open the project JSON in a text editor. Code editors like Atom and Sublime Text make the file easier to read.
- Search for `"deviceConfigurations": {`
- This object contains the calibration data for all sensors.
- Within it, scroll down to the serial number of the sensor you want to enable Cinema on. The serial number is printed on the underside of the sensor itself, and also appears in the name of the take in the Edit context's Library. It is formatted as follows: `"(Azure Kinect Serial Number)": {`
- This object contains all of the calibration data for that specific sensor.
- Within that serial number is an array named `"sources": [`
- This array holds the calibration data for each of the sensor's sources, including the color camera, IR camera, and depth camera. Each object in the array, enclosed in curly braces `{}`, represents one source.
- The first object, representing the color camera, contains a `"calibrations": {` object which stores all of the sensor's calibrations, and ends with `"hwid": ...` and `"name": "Azure Kinect Color Camera"`.
- Within the `"calibrations": {` object, locate the entry for your recording's sensor resolution, for example `"auto-calibration-2560x1440": {`. If you are unsure of your recording resolution, navigate to your project folder > take folder for the recording > sensor folder, and check the dimensions in the properties of the `sensor.mp4` file.
- Copy everything within that object, including `"extrinsics": {...}` and `"intrinsics": {...}`:
"extrinsics": {
"rotation": [
-0.09482414275407791,
-0.002256935928016901,
-0.003377950517460704
],
"translation": [
-0.03216085210442543,
-0.002425259444862604,
0.003952352330088615
]
},
"intrinsics": {
"distortionRadial": [
0.4293920397758484,
-2.4632327556610107,
1.3982385396957397,
0.31188586354255676,
-2.294495105743408,
1.3303236961364746
],
"distortionTangential": [
0.00103277456946671,
-0.000685523496940732
],
"focalLength": [
1218.191162109375,
1218.0323486328125
],
"imageSize": [
2560.0,
1440.0
],
"principalPoint": [
1278.05078125,
740.3385620117188
]
}
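The copy step above can also be scripted. Below is a minimal sketch assuming the JSON layout described above (`deviceConfigurations` > serial number > `sources` > `calibrations` > resolution key); the function name, file path, serial number, and resolution key are placeholders you would substitute with your own:

```python
import json

def copy_color_calibration(project_json_path, serial, resolution_key):
    """Return the extrinsics/intrinsics of a sensor's color camera.

    Assumes the structure described above:
    deviceConfigurations -> <serial> -> sources[] -> calibrations -> <resolution>.
    """
    with open(project_json_path) as f:
        project = json.load(f)

    sensor = project["deviceConfigurations"][serial]
    # The color camera is the source named "Azure Kinect Color Camera".
    color = next(s for s in sensor["sources"]
                 if s.get("name") == "Azure Kinect Color Camera")
    cal = color["calibrations"][resolution_key]
    return {"extrinsics": cal["extrinsics"], "intrinsics": cal["intrinsics"]}
```

For example, `copy_color_calibration("MyProject/project.json", "000123456789", "auto-calibration-2560x1440")` would return just the two objects you need to paste in step 2.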
2. Create Cinema Object
Next, we'll create a new Cinema source so that Depthkit treats the sensor as having a Cinema pairing, then paste in the extrinsics and intrinsics copied from the sensor so that they match its internal calibration.
- In the `"sources": [` array described above, locate the `]` at the end of the array.
- Just before that closing bracket `]`, place a comma `,` after the last curly brace `}`, and then paste in the following Cinema source:
{
"calibrationErrorMetrics": {
"external-calibration": {
"extrinsic": {
"meanSquaredError": 0.0,
"spatialCoverageVolume": 0.0
},
"lens": {
"meanSquaredError": 0.0,
"reprojectionError": 0.0,
"spatialCoverageArea": 0.0
}
}
},
"calibrations": {
"external-calibration": {
"extrinsics": {
"rotation": [
-0.10022589564323425,
-0.0042062075808644295,
0.0011628884822130203
],
"translation": [
-0.031951893121004105,
-0.002534630009904504,
0.0037099765613675117
]
},
"intrinsics": {
"distortionRadial": [
0.5108205676078796,
-2.5549936294555664,
1.4210169315338135,
0.38889163732528687,
-2.381080150604248,
1.3518937826156616
],
"distortionTangential": [
0.0006913636461831629,
-0.00013841036707162857
],
"focalLength": [
1223.1407470703125,
1222.8065185546875
],
"imageSize": [
2560.0,
1440.0
],
"principalPoint": [
1281.5313720703125,
734.4553833007813
]
}
}
},
"hwid": "external-camera",
"name": "Azure Kinect External Camera",
"ransacThreshold": 0.5,
"type": "color"
}
This creates a structure that Depthkit recognizes as a Cinema calibration.
- Within this Cinema source object, replace the contents of the `"external-calibration": {` object with the sensor's internal calibration extrinsics and intrinsics you copied before.
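The paste-and-replace step can be sketched in code as well. The template keys below mirror the Cinema source shown above; the function name and the placeholder calibration values are illustrative, and the real values come from the calibration you copied in step 1:

```python
import copy

# Template keys mirror the Cinema source shown above. The
# "external-calibration" contents are left empty here; they get filled
# with the extrinsics/intrinsics copied from the sensor.
CINEMA_SOURCE_TEMPLATE = {
    "calibrationErrorMetrics": {
        "external-calibration": {
            "extrinsic": {"meanSquaredError": 0.0, "spatialCoverageVolume": 0.0},
            "lens": {"meanSquaredError": 0.0, "reprojectionError": 0.0,
                     "spatialCoverageArea": 0.0},
        }
    },
    "calibrations": {"external-calibration": {}},
    "hwid": "external-camera",
    "name": "Azure Kinect External Camera",
    "ransacThreshold": 0.5,
    "type": "color",
}

def append_cinema_source(sensor_config, extrinsics, intrinsics):
    """Append a Cinema source to the sensor's "sources" array, using the
    copied extrinsics/intrinsics as its external calibration."""
    source = copy.deepcopy(CINEMA_SOURCE_TEMPLATE)  # keep the template pristine
    source["calibrations"]["external-calibration"] = {
        "extrinsics": extrinsics,
        "intrinsics": intrinsics,
    }
    sensor_config["sources"].append(source)
    return source
```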
3. Enable 'fixExternalIntrinsics'
To get Depthkit to honor the intrinsics in the newly created object:
- Scroll back up to the object named after the Azure Kinect serial number, and within it, change `"fixExternalIntrinsics"` from `false` to `true`.
- Save the JSON.
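If you are scripting the edits, flipping the flag and saving looks like this. A minimal sketch; the function name, file path, and serial number are placeholders:

```python
import json

def enable_fixed_intrinsics(project_json_path, serial):
    """Set fixExternalIntrinsics to true for one sensor and save the JSON."""
    with open(project_json_path) as f:
        project = json.load(f)
    project["deviceConfigurations"][serial]["fixExternalIntrinsics"] = True
    with open(project_json_path, "w") as f:
        json.dump(project, f, indent=2)  # indent keeps the file readable
```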
4. Link Sensor Video as Cinema Clip
- Open your Depthkit project.
- In the Edit context, select the desired clip and angle.
- In the Cinema panel, the Enable Cinema button is now available. Click it to enable Cinema.
- In the dialog that appears, navigate to the take folder, then the sensor folder, of the original Depthkit sensor color recording, and select the `sensor.mp4` file that corresponds to the take and angle.
- Now that the clip is linked, the Alignment sliders are available to adjust the texture-geometry alignment. Use the Alignment > Rotation tools to compensate for any misalignment from the factory calibration.