11 Feb 2013, 19:52

Playing with Kinect and ZeroMQ Part 2

In the previous post I briefly outlined writing a simple tool to pull data off of a kinect, and pass the buffers into zeromq frames for transmission.  In this post I’m going to go over the other side of the equation - a simple receiver that subscribes to the data stream and displays it. I’ll be using opencv for image creation and display:

#include <stdio.h>
#include <stdlib.h>
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <czmq.h>

So, first I create my zeromq SUB socket, subscribe (in this instance I’m not using a topic so I subscribe to “”), and connect to the broadcaster:

zctx_t *ctx = zctx_new ();
void *receive = zsocket_new ( ctx, ZMQ_SUB );
zsocket_set_subscribe ( receive, "" );
int rc = zsocket_connect ( receive, "tcp://192.168.1.113:9999" );
assert ( rc == 0 );

Next, I’ll create an image header using opencv’s cvCreateImageHeader call:

IplImage *image = cvCreateImageHeader ( cvSize (640, 480), 8, 3 );

Within my while loop, I will first do a blocking zmsg_recv call. I could instead use the zmq poller, or czmq’s zloop, but I kept it simple for this example. After I receive the message, I then pop off the depth and rgb frames and copy the rgb frame to a buffer:

/* receive message */
zmsg_t *msg = zmsg_recv ( receive );

/* pop frames and then copy rgb data */
zframe_t *depth_frame = zmsg_pop ( msg );
zframe_t *rgb_frame = zmsg_pop ( msg );

char *rgb_buffer = zframe_strdup ( rgb_frame );

Note that I could use zero copy techniques instead of duplicating the frame, but once again for this example I kept things simple.

Next is the code to display the image:

cvSetData ( image, rgb_buffer, 640*3 );
cvCvtColor ( image, image, CV_RGB2BGR );
cvShowImage ( "RGB", image );
cvWaitKey ( 1 );    /* give highgui a chance to actually draw the window */

Then all that’s left is cleanup:

free ( rgb_buffer );    /* zframe_strdup allocated a fresh copy */
zframe_destroy ( &depth_frame );
zframe_destroy ( &rgb_frame );
zmsg_destroy ( &msg );

Here’s a little video showing 6 receivers running, all pulling from the same broadcaster. Oh, the music in the background is a track that happened to be up on spotify when I shot the video - it’s Marley Carroll’s “Meaning Leaving”, and you might want to check him out, he’s fantastic!

Have Fun!

10 Feb 2013, 14:19

Playing with Kinect and ZeroMQ Part 1


I was looking for something fun to play with in order to start experimenting with sending “binary” (non string) data over zeromq.  I realized I had a Microsoft Kinect lying around that no one was really using anymore, so I grabbed it and spent a day reading up on the available open source libraries for accessing it.

The Kinect is a really nifty little device.  You can pull data streams off of it containing rgb frame info (the video camera), depth information (the ir camera), four audio streams, and accelerometer data.  In addition, you can adjust the tilt of the device using the motorized base.

I now have some working test code that pulls both the rgb and depth data from the kinect and broadcasts it over zeromq using a pub socket, and a small receiver program that receives the data, parses out the rgb frame and displays it.

To accomplish this I’m using the following libraries: libfreenect (via its sync wrapper), czmq, and opencv.

First, we’ll look at the broadcast code.  The includes are simply the libfreenect_sync wrapper, the czmq library, and stdlib / stdio. I’m using the sync wrapper for libfreenect to start because it is a simpler interface than the asynchronous interface. I plan to move to the asynchronous interface soon, as its event driven / callback model would be a nice fit with czmq’s zloop.

#include <stdlib.h>
#include <stdio.h>
#include <libfreenect_sync.h>
#include <czmq.h>

So first I set up a zeromq publish socket. I’ve set a high water mark of 1000, as I’d rather drop frames than run my laptop out of RAM if the receivers can’t process fast enough:

    /*  set up zmq pub socket */
    zctx_t *ctx = zctx_new ();
    void *broadcast = zsocket_new (ctx, ZMQ_PUB );
    zsocket_set_sndhwm ( broadcast, 1000 );
    zsocket_bind ( broadcast, "tcp://192.168.1.113:9999" );

Since I want to send both the rgb and depth buffers, the next thing I do is get the sizes I will need for each buffer. To do this, I use freenect_find_video_mode and freenect_find_depth_mode, which are part of the openkinect “low level” API ( see http://openkinect.org/wiki/Low_Level ):

    size_t rgb_buffer_size = freenect_find_video_mode(
        FREENECT_RESOLUTION_MEDIUM, FREENECT_VIDEO_RGB).bytes;
    size_t depth_buffer_size = freenect_find_depth_mode(
        FREENECT_RESOLUTION_MEDIUM, FREENECT_DEPTH_11BIT).bytes;

Next, I’ll create an empty zeromq message using czmq’s zmsg api ( http://czmq.zeromq.org/manual:zmsg ):

    zmsg_t *msg = zmsg_new ();

Now, I’ll get the rgb data, put it into a buffer, put that buffer into a zeromq frame, and push the frame into my empty message. Note that the freenect_sync_get_video call also takes a pointer to an unsigned int, into which it will place the timestamp for the frame. I’m currently not doing anything with the timestamp, but it would be easy enough to include in the message as well.

    /*  get rgb frame and timestamp
     *  and add rgb frame to msg */
    char *rgb_buffer;
    unsigned int rgb_timestamp;

    freenect_sync_get_video (
        (void**) (&rgb_buffer), &rgb_timestamp,
        0, FREENECT_VIDEO_RGB );
    
    zframe_t *rgb_frame = zframe_new ( rgb_buffer, rgb_buffer_size );
    zmsg_push ( msg, rgb_frame );

Now, I’ll do the same thing for the depth buffer:

    /*  get depth frame and timestamp
     *  and add depth frame to msg */
    char *depth_buffer;
    unsigned int depth_timestamp;

    freenect_sync_get_depth (
        (void**) (&depth_buffer), &depth_timestamp,
        0, FREENECT_DEPTH_11BIT );
    
    zframe_t *depth_frame = zframe_new ( depth_buffer, depth_buffer_size );
    zmsg_push ( msg, depth_frame );

All that’s left to do at this point is send the message and clean up. Note that zmsg_send destroys the message after sending and nulls the pointer, so the zmsg_destroy that follows is just a safety net:

    int rc = zmsg_send ( &msg, broadcast );
    assert ( rc == 0 );

    /*  cleanup */
    zmsg_destroy ( &msg );

I’ve been using czmq for quite a while now. I’m pleased by the balance the library strikes between providing some nice higher-level abstractions and still allowing low-level control.  Hopefully this post demonstrates how simple it is to create multi frame messages from buffers using the library.

I’ll post about the receiver in a follow up post.  It currently receives the messages over zeromq, pulls out the frame with the rgb buffer, and uses opencv to construct and display the images as a video.

26 May 2011, 17:56

ZeroMQ Input / Output Plugins for Rsyslog

Just dropping a quick note that Aggregate Knowledge, where I work as Service Delivery Data Architect, made its first open source release this week - zeromq input and output plugins for rsyslog. Give them a spin if it’s something you’re interested in! You can find them at the Aggregate Knowledge ZeroMQ Rsyslog Plugin repository on github.