Wednesday 8 February 2012

Converting Beta 2 to V1.0: RGB Video

I imagine many of you suddenly found that the V1.0 release of the Kinect SDK does not work with code written for the Beta 2 version. I have started converting my own code and thought I would share the key changes I have made.


Here is what the image frame ready event handler method looked like in Beta 2:
void kinect_VideoFrameReady(object sender, ImageFrameReadyEventArgs e)
{
    //Get raw image
    PlanarImage videoFrame = e.ImageFrame.Image;
    
    //Create array for pixel data and copy it from the image frame
    Color[] color = new Color[videoFrame.Height * videoFrame.Width];
    kinectRGBVideo = new Texture2D(graphics.GraphicsDevice, videoFrame.Width, videoFrame.Height);

    //Read image pixel by pixel and convert RGBA to BGRA
    int index = 0;
    for (int y = 0; y < videoFrame.Height; y++)
    {
        for (int x = 0; x < videoFrame.Width; x++, index += 4)
        {
            color[y * videoFrame.Width + x] = new Color(videoFrame.Bits[index + 2], videoFrame.Bits[index + 1], videoFrame.Bits[index + 0]);
        }
    }

    kinectRGBVideo.SetData(color);
}

In V1.0 the handler now looks something like this:
void kinect_ColorFrameReady(object sender, ColorImageFrameReadyEventArgs e)
{
    //Get the raw image; the frame is disposable, so wrap it in a using block
    using (ColorImageFrame colorVideoFrame = e.OpenColorImageFrame())
    {
        //OpenColorImageFrame returns null if the frame is no longer available
        if (colorVideoFrame == null)
            return;

        //Create an array for the pixel data and copy it from the image frame
        byte[] pixelData = new byte[colorVideoFrame.PixelDataLength];
        colorVideoFrame.CopyPixelDataTo(pixelData);

        //Re-order the bytes from the sensor's pixel format to the one XNA expects
        byte[] bgraPixelData = new byte[colorVideoFrame.PixelDataLength];
        for (int i = 0; i < pixelData.Length; i += 4)
        {
            bgraPixelData[i] = pixelData[i + 2];
            bgraPixelData[i + 1] = pixelData[i + 1];
            bgraPixelData[i + 2] = pixelData[i];
            bgraPixelData[i + 3] = 255; //The video comes with 0 alpha, which would render fully transparent
        }

        //Create a texture and assign the re-ordered pixels
        colorVideo = new Texture2D(graphics.GraphicsDevice, colorVideoFrame.Width, colorVideoFrame.Height);
        colorVideo.SetData(bgraPixelData);
    }
}
The key differences should be quite obvious, but to recap:
  • We use a byte array instead of a Color array
  • The frame from the sensor is stored in a ColorImageFrame rather than a PlanarImage
  • We copy the pixel data out of the frame and then re-order the bytes from RGBA to BGRA
If XNA used the same byte order as the sensor, the code to get the image out would be a lot shorter; it is a shame that we have to adjust the pixel order. I will be working on a neater solution, but for now the above works.
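One slightly neater option, sketched below and not tested against the SDK itself, is to swap the red and blue bytes in place rather than building a second array. The `SwapRedBlue` helper and the sample pixel values are my own illustration; only the swap logic mirrors the loop in the handler above.

```csharp
using System;

class SwapDemo
{
    //Swap the red and blue channels of 4-byte pixels in place and force alpha to 255
    public static void SwapRedBlue(byte[] pixels)
    {
        for (int i = 0; i < pixels.Length; i += 4)
        {
            byte tmp = pixels[i];
            pixels[i] = pixels[i + 2];
            pixels[i + 2] = tmp;
            pixels[i + 3] = 255;
        }
    }

    static void Main()
    {
        //One pixel as it comes off the sensor: B=10, G=20, R=30, A=0
        byte[] pixels = { 10, 20, 30, 0 };
        SwapRedBlue(pixels);
        Console.WriteLine(string.Join(",", pixels)); //30,20,10,255
    }
}
```

The copied `pixelData` array could then be handed straight to `SetData` after the in-place swap, saving one allocation per frame.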

Hopefully this will be useful for anyone scratching their head over the changes from Beta 2 to 1.0 final. Soon I will post what is needed for the depth sensor and skeletal tracking, but I am still working on those.
