Sunday 12 February 2012

Tutorial 4: Depth Stream

In the last installment we got the video stream working, which was pretty cool, but now we are going to get the depth stream. It looks pretty cool when you see the image, so let's get started.
First up we need a texture to store our depth stream, similar to the color one, so we shall add another Texture2D object alongside the existing colorVideo:
Texture2D colorVideo, depthVideo;
Next we need to enable the depth stream like we did the color stream.  The depth stream can be 80x60, 320x240 or 640x480, all at 30fps, which is quite nice.  I personally chose 320x240 as the higher 640x480 caused some stuttering on my machine.
kinect.DepthStream.Enable(DepthImageFormat.Resolution320x240Fps30);
At the same time we need to change the event handler we added last time to use AllFramesReady rather than the individual frame ready events.  The AllFramesReady event means the color and depth frames are delivered to the handler together, so there is no lag between the images:
kinect.AllFramesReady += new EventHandler<AllFramesReadyEventArgs>(kinect_AllFramesReady);
If you have not removed the old ColorFrameReady event handler assignment, do that now. Our initialisation of the Kinect sensor should now look like this:
kinect = KinectSensor.KinectSensors[0];
kinect.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
kinect.DepthStream.Enable(DepthImageFormat.Resolution320x240Fps30);
kinect.AllFramesReady += new EventHandler<AllFramesReadyEventArgs>(kinect_AllFramesReady);

kinect.Start();
Null Image Fix
Recently I have noticed that the game will start drawing before the textures have been filled, leading to a null reference exception.  There are two ways to handle this: check if the textures are null before drawing them, or initialise the textures before they are drawn (a quick sketch of the first option is shown after the snippet below).  I went with the second option and have added these two lines just after starting the Kinect:
colorVideo = new Texture2D(graphics.GraphicsDevice, kinect.ColorStream.FrameWidth, kinect.ColorStream.FrameHeight);
depthVideo = new Texture2D(graphics.GraphicsDevice, kinect.DepthStream.FrameWidth, kinect.DepthStream.FrameHeight);
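For completeness, here is a rough sketch of the first option mentioned above. The spriteBatch calls and rectangle positions are just placeholders for however you are drawing the textures in your own Draw method:
protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.CornflowerBlue);

    spriteBatch.Begin();

    // Only draw the textures once the Kinect has filled them.
    if (colorVideo != null)
        spriteBatch.Draw(colorVideo, new Rectangle(0, 0, 640, 480), Color.White);

    if (depthVideo != null)
        spriteBatch.Draw(depthVideo, new Rectangle(640, 0, 320, 240), Color.White);

    spriteBatch.End();

    base.Draw(gameTime);
}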

All Frames Ready Event Handler
Next we can copy the existing kinect_ColorFrameReady handler and rename it to kinect_AllFramesReady. We also need to change the arguments, so the signature becomes:
void kinect_AllFramesReady(object sender, AllFramesReadyEventArgs imageFrames)
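If it helps to see the shape of the renamed handler, here is a minimal sketch. The color handling is assumed to carry over from the last tutorial (the usual byte-array copy into colorVideo); the depth handling is what we add next:
void kinect_AllFramesReady(object sender, AllFramesReadyEventArgs imageFrames)
{
    // Color handling carried over from the previous tutorial (assumed here).
    ColorImageFrame colorVideoFrame = imageFrames.OpenColorImageFrame();

    if (colorVideoFrame != null)
    {
        byte[] pixelData = new byte[colorVideoFrame.PixelDataLength];
        colorVideoFrame.CopyPixelDataTo(pixelData);

        colorVideo = new Texture2D(graphics.GraphicsDevice, colorVideoFrame.Width, colorVideoFrame.Height);
        colorVideo.SetData(pixelData);
    }

    // Depth handling goes here - added below.
}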

Now we can deal with getting the depth data and converting it to an image.  We need to open the Depth Image Frame and check if it is null:
DepthImageFrame depthVideoFrame = imageFrames.OpenDepthImageFrame();

if (depthVideoFrame != null)
{
}
The ColorStream returns a nice 32-bit image but the DepthStream returns a 16-bit image, so we need to convert it to 32-bit. Before doing that we need to copy the pixel data into a short array. Add the following lines so we have this:
if (depthVideoFrame != null)
{
    short[] pixelData = new short[depthVideoFrame.PixelDataLength];
    depthVideoFrame.CopyPixelDataTo(pixelData);
    depthVideo = new Texture2D(graphics.GraphicsDevice, depthVideoFrame.Width, depthVideoFrame.Height);
    depthVideo.SetData(ConvertDepthFrame(pixelData, kinect.DepthStream));
}

We have taken the pixel data and stored it in our short array, initialised the texture, and then set the data of that texture with a call to ConvertDepthFrame.  Now would be a good time to create that method!

Converting PixelData To An Image
private byte[] ConvertDepthFrame(short[] depthFrame, DepthImageStream depthStream)
{
    int RedIndex = 0, GreenIndex = 1, BlueIndex = 2, AlphaIndex = 3;

    byte[] depthFrame32 = new byte[depthStream.FrameWidth * depthStream.FrameHeight * 4];

    for (int i16 = 0, i32 = 0; i16 < depthFrame.Length && i32 < depthFrame32.Length; i16++, i32 += 4)
    {
        int player = depthFrame[i16] & DepthImageFrame.PlayerIndexBitmask;
        int realDepth = depthFrame[i16] >> DepthImageFrame.PlayerIndexBitmaskWidth;

        // transform 13-bit depth information into an 8-bit intensity appropriate
        // for display (we disregard information in most significant bit)
        byte intensity = (byte)(~(realDepth >> 4));

        depthFrame32[i32 + RedIndex] = (byte)(intensity);
        depthFrame32[i32 + GreenIndex] = (byte)(intensity);
        depthFrame32[i32 + BlueIndex] = (byte)(intensity);
        depthFrame32[i32 + AlphaIndex] = 255;
    }
    return depthFrame32;
}
We now have our method to convert the depth frame data into an image.  It may look complex but it is quite simple: we need a byte array to return our data in 32-bit format, so we calculate its size from the width and height of the depth frame.  Looping through the frame, we take each 16-bit sample, shift away the player index bits to get the real depth, shift again and invert to squeeze it into an 8-bit intensity, apply that intensity to each of the color channels (RGB) and make the alpha channel opaque.  Finally we return our byte array containing the transformed pixel data. Simples ;)
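As a quick sanity check of the intensity maths (the depth values below are made up purely for illustration), a near object comes out brighter than a far one:
// Hypothetical raw samples: depth in millimetres shifted past the 3 player index bits.
short nearSample = (short)(800 << DepthImageFrame.PlayerIndexBitmaskWidth);   // ~0.8m away
short farSample = (short)(3500 << DepthImageFrame.PlayerIndexBitmaskWidth);   // ~3.5m away

byte nearIntensity = (byte)(~((nearSample >> DepthImageFrame.PlayerIndexBitmaskWidth) >> 4)); // 255 - 50  = 205 (bright)
byte farIntensity = (byte)(~((farSample >> DepthImageFrame.PlayerIndexBitmaskWidth) >> 4));   // 255 - 218 = 37  (dark)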

The Result
There we go, a lovely noisy black and white image. We will try to reduce the noise in a later tutorial, although generally we won't be seeing the image.

Wrapping Up
In this tutorial we have covered setting up the depth stream, retrieving the data using the AllFramesReady event handler, and finally converting the pixel data to a nice black and white image.  Next time we will get Skeleton Tracking working.

Solution: Tutorial 4 Solution

7 comments:

  1. Just checked the blog again and lovely, these tutorials are definitely helping for my project. Skeleton Tracking will be a great one, I'm sure!
    Thanks for the hard work putting all these.

  2. heu great tutorial....but i have a question...why do i keep getting overload for kinect_AllframesReady at the event handler?

    1. What version of the SDK are you using, I wrote these to work with version 1.0 and have not tried with any newer versions of the SDK.

      I would guess they have changed the method as it changed between 0.9 and 1.0 and broke my code.

  3. getting the same error, and was making great progress. darn!

  4. Make sure you throw a using statement around the DepthImageFrame, it must be disposed.

  5. very good tutorial that help me in the process of working with Kinect sensor for robotic applications. Also, I add a link to this tutorial into an article with many more tutorial related to Kinect sensor http://www.intorobotics.com/working-with-kinect-3d-sensor-in-robotics-setup-tutorials-applications/
