Pykinect - depth map / video to numpy array

Feb 16, 2012 at 9:42 AM

Hi everyone,

I just started exploring the possibilities of the Kinect. I want to do some post-processing with OpenCV on the video streams.

I was able to read the video stream by using the pygame surface's ability to export its contents as a numpy array.

Then I converted that array to an OpenCV matrix...
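
For reference, the round trip I'm doing now looks roughly like this (just a sketch; it assumes the cv2 bindings, where images are plain numpy arrays; with the older cv module you would call cv.fromarray instead):

import pygame
import cv2

# `screen` is the pygame surface the Kinect video frame is drawn to
rgb = pygame.surfarray.array3d(screen)       # shape (width, height, 3)
rgb = rgb.transpose(1, 0, 2)                 # -> (rows, cols, 3) for OpenCV
bgr = cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR)   # OpenCV expects BGR channel order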

Is there an easier / more straightforward way to access the video data from the Kinect with OpenCV in Python?

Thank you in advance! 

Coordinator
Feb 16, 2012 at 4:52 PM

I think you can skip the pygame surface by constructing a numpy array and then using its ctypes attribute to copy the frame memory straight into it, something like:

import numpy

# 16-bit 320x240 resolution for the depth stream; change appropriately for the video stream
arr = numpy.empty(320*240, numpy.uint16)

def depth_frame_ready(frame):
    frame.image.copy_bits(arr.ctypes.data)   # copy the raw frame into the array's buffer


I haven't tested it but I think something like that should work.
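
Fleshing that out a bit, here's an untested sketch of how it might be wired up with the pykinect.nui Runtime, following the stock PyKinect samples (the stream-open arguments and the reshape for OpenCV are my assumptions):

import numpy
from pykinect import nui

# one 16-bit buffer per 320x240 depth frame, shaped the way OpenCV expects
depth = numpy.empty((240, 320), numpy.uint16)

def depth_frame_ready(frame):
    # copy the raw depth bits straight into the numpy buffer
    frame.image.copy_bits(depth.ctypes.data)

kinect = nui.Runtime()
kinect.depth_frame_ready += depth_frame_ready
kinect.depth_stream.open(nui.ImageStreamType.Depth, 2,
                         nui.ImageResolution.Resolution320x240,
                         nui.ImageType.Depth)

# `depth` can then be handed to OpenCV functions directly (e.g. scale it down
# to 8 bits with cv2.convertScaleAbs before cv2.imshow).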