Livestreaming

The following guides provide quick, step-by-step instructions for getting up and running with Depthkit local livestreaming, either with a single sensor or with multiple calibrated sensors.

For a more in-depth tutorial, explore Depthkit Holoportation via Webcam →

In this page

Single-sensor Livestreaming →
Local Livestreaming [Multi-Perspective] →
Remote Livestreaming →
Remote Livestreaming with WebRTC →


Single-sensor Livestreaming

This guide will walk you through setting up local livestreaming from Depthkit into Unity with a single sensor.

Livestreaming from Depthkit requires both the Depthkit application and the Depthkit Expansion Package for Unity.

Getting started in Depthkit

  1. Launch Depthkit and ensure your license reads Depthkit Pilot. Create a new project.
  2. By default, the application will launch in the Pair Camera workspace. Head over to the Record workspace tab or navigate there by going to the View menu and selecting Record.
  3. Under Edit → Preferences, select Enable Record Context Live Stream under Live streaming settings. Make note of the Discoverable Live Stream Name.

📘

Please note that Body Index in the live stream is currently only available for the Kinect for Windows v2.

  4. Once enabled, you should see a notification in the upper left corner of the 3D viewport confirming that livestreaming is enabled.
  5. In your Depthkit project, go to your Exports folder and locate the livestream_meta text file. You will need this file for Unity.

Configuring a Unity project for Livestreaming

  1. Launch Unity and create a new project. For this guide, we will get started in the Built-in Render Pipeline with Unity 2019.4 LTS.
  2. Under Edit → Project Settings → Package Manager, expand Scoped Registries. Add a new registry.
  3. Enter the following information for the New Scoped Registry (see the manifest.json sketch below for the equivalent entry):

Save the registry.
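
If you prefer editing files directly, adding a registry through this UI is equivalent to adding a scopedRegistries entry to your project's Packages/manifest.json, alongside the existing dependencies block. The actual Name, URL, and Scope values come from the Depthkit Expansion Package documentation; the placeholders below only show where they go.

    "scopedRegistries": [
      {
        "name": "<registry name from Depthkit>",
        "url": "<registry URL from Depthkit>",
        "scopes": [ "<package scope from Depthkit>" ]
      }
    ]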

  4. In the Package Manager, click the add icon and select Add package from disk.
  5. Locate the depthkit.core package folder. Within this folder, select the package.json file and click Open.

  6. Repeat this step for the Depthkit.Live package.

  7. Once imported, you will see the addition of Depthkit Core and Depthkit Live within the Package Manager. Select Depthkit Core and expand the Samples toggle. Import the available Depthkit Core Built-in RP Prefab. This will import a Depthkit clip prefab that is preconfigured for your project.

  8. Locate this prefab under Assets → Samples → Depthkit Core → 0.8.0 → Prefabs Studio Built-in RP. Drag the Depthkit Clip + Core Built-in RP Look into the Hierarchy. This will add an empty Depthkit Core clip to your scene.

  9. Return to your Depthkit livestream metadata text file in your Depthkit project folder. Drag or copy/paste it into your Unity project, under Assets.

  10. Back in the Hierarchy, select your Depthkit clip and view the Inspector. In the Depthkit Clip component, open the video player dropdown and select Depthkit Live Player.
  11. Drag and link your livestream_meta file to the Meta Data File field, just below the video player.
  12. In the Depthkit Clip component, under Advanced Settings, disable the poster image.

Once linked, the Depthkit Clip component will report that your clip is set up.

  13. In the Spout Receiver component, select Depthkit as the Source Name.

Press Play and you will see the live stream from Depthkit in your Unity scene.

❗️

Livestreaming performance

Your stream may run slowly in Unity until you enter Play mode, or if the Depthkit application is in the background. For best performance, enter Play mode, then click on the Depthkit application window to bring it to the front.

Local Livestreaming [Multi-Perspective]

When setting up your project for multi-perspective livestreaming, follow our Calibration guide →.

Watch our Local Livestreaming video walkthrough.

  1. Once your sensors are calibrated, in the Multicam workspace, click the Start Streaming button.
  2. Under Edit → Preferences, select Enable Record Context Live Stream under Live streaming settings.
    Once enabled, you should see a notification in the upper left corner of the 3D viewport confirming that livestreaming is enabled.

📘

When you have Livestreaming enabled, you cannot edit the depth range in Depthkit.

To modify the depth range, disable Livestreaming, make the adjustments, then re-enable it. After modifying the depth range, you'll need to replace your metadata file in Unity for correct rendering.

  3. In your Depthkit project, go to your Exports folder and locate the livestream_multicam_meta.txt file. Open this file in a text editor, scroll to the bottom, and make note of the textureHeight and textureWidth values (an illustrative excerpt appears below).

  4. Bring this file into your Unity project.

  5. Under Edit → Project Settings → Package Manager, expand Scoped Registries. Add a new registry.

  6. Enter the following information for the New Scoped Registry:

Save the registry.
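
For reference, the textureWidth and textureHeight values mentioned in step 3 appear near the end of the metadata file. The exact contents of the file vary from project to project, but the entries you are looking for are along these lines (the numbers below are purely illustrative):

    "textureWidth": 2256,
    "textureHeight": 1184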

  7. In the Package Manager, click the add icon and select Add package from disk. Locate the Depthkit.Core package folder. Within this folder, select the package.json file and click Open.

  8. Repeat this step for the Depthkit.Studio and Depthkit.Live packages in your Unity project. Import the Depthkit Studio Built-in RP prefab.

  9. Locate this prefab under Assets → Samples → Depthkit Studio → 0.5.0 → Prefabs Studio Built-in RP. Drag the Depthkit Clip + Studio Built-in RP Look into the Hierarchy.

  10. Return to your Depthkit livestream metadata text file in your Depthkit project folder. Drag or copy/paste it into your Unity project, under Assets.

  11. Back in the Hierarchy, select your Depthkit clip and view the Inspector. In the Depthkit Clip component, open the video player dropdown and select Depthkit Live Player.

  12. Drag and link your livestream_multicam_meta.txt file to the Meta Data File field, just below the video player.

  13. In the Depthkit Clip component, under Advanced Settings, disable the poster image.

Once linked, the Depthkit Clip component will report that your clip is set up.

  14. In the Spout Receiver component, select Depthkit as the Source Name.

Press Play to start multi-camera local livestreaming.

Remote Livestreaming

Watch our Remote Livestreaming video walkthrough.

Remote Livestreaming with WebRTC

While the ultimate solution for livestreaming will be a more direct WebRTC integration within Depthkit and Unity, in the meantime you can use the prototype workflow below. It enables low-latency streaming over WebRTC, facilitated by Spout, OBS, and a web browser.

Workflow

This workflow is designed for a unidirectional stream of video, but could be extended to be bidirectional by duplicating the workflow for the opposite direction. At a high level, the workflow is to leverage existing technologies and services built on WebRTC to transport a Depthkit live stream over the internet or a local network very quickly and with very low latency.

The main components of this workflow are the Depthkit capture app, OBS, Spout, OBS Ninja, Unity, and the Depthkit Studio Expansion packages for Unity.

OBS.Ninja

OBS.Ninja is a web application created specifically to facilitate low-latency, high-quality, peer-to-peer video streaming using WebRTC and to make those streams available within software like OBS.

At its core, OBS.Ninja is like a Zoom or Google Meet built for video engineers. It supports video chat rooms, but more importantly for our purposes it supports unidirectional video streaming with lots of options to control quality, and is easy to integrate into OBS.

Sending peer setup

To set up the sending side of the workflow, follow these steps:

  1. Installation.
    • Install OBS.
    • Install the Spout 2 OBS plugin.
  2. Launch Depthkit and start the livestream.
    • Send the livestream_multicam_meta.txt file to the receiver.
  3. Launch OBS and create a new scene.
    • Go to Settings → Video.
    • Set the Base (Canvas) Resolution to the same resolution as the Spout stream being sent out of Depthkit. This can be found within the livestream_multicam_meta.txt file that Depthkit produces in the export folder.
    • Set the Output (Scaled) Resolution to the same as the Base (Canvas) Resolution.
    • Add a Spout2 Capture source for the Depthkit livestream.
    • Start the virtual camera.
    • Go to obs.ninja and click on Add Your Camera to OBS. Choose the OBS virtual camera as the video source.
    • Click Start, and send the generated link to the receiver.
    Note: the default settings on OBS Ninja have a max resolution of 1920x1080. To use a custom resolution, append the following string to the URL and reload the page: &w=2256&h=1184, where 2256 and 1184 are the width and height of the actual source video being sent (see the example below). If you do not do this, the receiver may have a cropped or distorted image which will not reconstruct in Unity.
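
As a concrete sketch (the exact link obs.ninja generates will differ), appending the resolution override to whatever URL is in the browser's address bar looks like this, assuming a 2256x1184 Depthkit stream:

    https://obs.ninja/?<existing parameters>&w=2256&h=1184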

Receiving peer setup

To set up the receiving side of the workflow, follow these steps:

  1. Installation: as with the sending peer, install OBS and the Spout 2 OBS plugin.
  2. Launch OBS and create a new scene.
    • Go to Settings → Video.
      • Set the Base (Canvas) Resolution to the same resolution as the spout stream being sent out of Depthkit.
      • Set the Output (Scaled) Resolution to the same as the Base (Canvas) Resolution.
    • Add a Browser source.
      • Enter the same resolution as the Video Base (Canvas) Resolution.
      • Use the link generated by the sender as the URL for the Browser source.
      • Click OK.
      • The texture should appear in OBS momentarily as long as the sender is still active.
    • Go to Tools → Spout Output Settings and configure the spout output to be used by Unity.
    • Obtain the metadata file from the sender and configure a Unity clip to use the OBS spout output via the Live player.

OBS.Ninja advanced configuration options

To further automate and customize this workflow, additional URL parameters may be added.

Sender:

  • w=<width>, h=<height> Use this to set the max video resolution to be sent. Importantly, this also controls the aspect ratio of any scaled resolution that may be sent due to bandwidth constraints, so it is very important to set accurately.
  • mfr=30 Sets the max frame rate to 30 FPS, which is the maximum Depthkit will ever provide. You can set this lower if you are bandwidth constrained.
  • ad=0 Disables audio, eliminating the need to choose a microphone.
  • vd=OBS Attempts to find a video camera device that contains the string 'OBS' in the name, eliminating the need to choose a camera.
  • push=<id> Uses a custom stream ID, 1 to 49 alphanumeric characters; case sensitive.
  • pw=<password> Optionally define a password that is used to encrypt the stream.
  • autostart Skips some setup options to get to streaming faster.

Receiver:

  • view=<id> Must match the sender's push parameter.
  • pw=<password> Must match the sender's password.
  • vb=5000 Video bitrate in kbps. This is the maximum bit rate that will be used for video. Lower rates will be used if necessary to maintain a good connection.
  • codec=h264 Defines the video codec to use. Available options are h264, vp8, vp9, and av1, though on Windows it appears that only h264 has hardware-accelerated encode and decode. vp8 is the default, so set this to h264 if you want to use HW acceleration.

OBS Ninja allows you to craft URLs to be used and re-used without any registration step. Using unique push/view parameters for each use case allows you to set up a configuration in OBS and leave it that way, so next time it is needed there are no configuration steps.

For example:
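
The following sender/receiver pair is only illustrative; the stream ID, password, and resolution are placeholders you would replace with your own values, combining the parameters documented above:

    Sender:   https://obs.ninja/?push=DepthkitStudio1&pw=example123&w=2256&h=1184&mfr=30&ad=0&vd=OBS&autostart
    Receiver: https://obs.ninja/?view=DepthkitStudio1&pw=example123&vb=5000&codec=h264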

Utilizing a Local Area Network for higher video bandwidth and quality

WebRTC does not technically require any internet connection, although in the examples above there are aspects of this system that are hosted on the publicly available internet. An important part of this system is a STUN server, which each peer uses to discover the addresses used to route the video stream from the sender to the receiver. If the STUN server is not hosted on your LAN, then the IP addresses it gives to each peer are going to be public IP addresses, meaning your video will be going over the internet, even if your computers are on the same LAN.

To allow WebRTC to discover local network IP addresses of each peer, a locally hosted STUN server must be used. The open source STUN server STUNTMAN is simple to set up and run:

  • Download the zip file for Windows, extract it, and open a command prompt (cmd.exe) inside the directory where stunserver.exe is located.
  • To start the server, use the following command:
    stunserver.exe --verbosity 3 --protocol udp
  • To use this stun server with OBS Ninja, add the following parameters to the URL of both the sender and receiver:
    &stun=stun:<IP address or hostname of local STUN server>:3478
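
For example, if the machine running STUNTMAN has the LAN address 192.168.1.50 (an arbitrary example address), both the sender and receiver URLs would have the following appended:

    &stun=stun:192.168.1.50:3478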

If you've successfully used the local STUN server, you should see some output in the command prompt window once you establish a connection between the sender and receiver.

At this point you should be able to set the vb option significantly higher (up to 60000) to take advantage of the increased bandwidth available on the LAN.

Even when running a local STUN server, there are still other parts of this system (OBS Ninja itself) that are using the internet. Luckily OBS Ninja is also open source, and can be locally hosted as well for a more secure and reliable setup, but that is outside the scope of this document for now.

Resources

Running two instances of OBS is necessary for receiving a local stream as well as a remote one.
How to run multiple instances of OBS on Windows 10 →