kinect anyone?

Hello,
I’m going to build a rudimentary Kinect human-machine interface to use in a Panda3D project.
Does anyone have any kind of help to offer?
(Suggestions, links, a Python Kinect API, a computer-vision Python API, or anything else?)

Hi nkint!

I’ve been working very recently on two related Panda 3D projects which receive motion and video input from a Kinect.

In my case, I handled Kinect data input with OpenNI, the open-source C++ “semi-official” framework from PrimeSense (the company behind the Kinect hardware), which was the best option available when I started my project. However, Microsoft has since released its truly official Kinect SDK, which you can download from http://research.microsoft.com/en-us/um/redmond/projects/kinectsdk/. I think both frameworks have essentially the same functionality, so I couldn’t say which one is currently the better option to start with.

Anyway, in case it helps, I’m writing below the guide for developing Kinect-based applications using OpenNI and Panda 3D as the rendering engine, just as I did it (and it turned out to perform really well):

Steps for using Kinect:

  1. Download the following four installation components:

a) From http://www.openni.org/downloadfiles/opennimodules:

  • OpenNI binaries (the main OpenNI framework API)
  • OpenNI compliant middleware binaries
  • OpenNI compliant hardware binaries
    (I recommend downloading the “development edition” in all three cases, so you can use the bundled sample projects as the starting point for your application)

b) From https://github.com/avin2/SensorKinect download:

  • Kinect Drivers
  2. Then install in the following order:

  • OpenNI binaries
  • Kinect Drivers
  • middleware binaries and hardware binaries

  3. Plug in your Kinect.

  4. Execute some of the sample applications to check that the installation so far went OK.

  5. Create your own Visual Studio project which includes the installed OpenNI assemblies and libraries (or just start from any existing sample project and edit it, which is much easier!)

Then, the steps I followed for passing OpenNI Kinect data to Panda 3D:

(From the Visual C++ application):

  1. I used WinSock2 sockets (on Windows) to send one UDP message for each Kinect frame captured (it captures at 60 fps). The data I put into each datagram are all the skeleton joints that the OpenNI API provides.

  2. I used shared-memory file mapping to pass the captured VGA video data as a pointer to Panda 3D.
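To make the one-datagram-per-frame idea concrete, here is a rough Python stand-in for the sending side (my real code is C++/WinSock2; the 15-joint count, the port, and the little-endian float32 layout are illustrative assumptions, not my exact wire format):

```python
import socket
import struct

NUM_JOINTS = 15  # assumption: OpenNI's full-body skeleton exposes 15 joints

def pack_skeleton(joints):
    """Flatten [(x, y, z), ...] into one little-endian float32 payload."""
    flat = [coord for joint in joints for coord in joint]
    return struct.pack("<%df" % len(flat), *flat)

def send_frame(sock, joints, addr=("127.0.0.1", 9000)):
    """Send one UDP datagram per captured Kinect frame."""
    sock.sendto(pack_skeleton(joints), addr)

# Usage inside the capture loop: one call per frame, e.g.
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   send_frame(sock, joints_from_openni)
```

UDP fits well here because a dropped or reordered skeleton frame simply gets replaced by the next one; no retransmission is wanted.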

(And from your final Panda 3D application):

  1. Just receive the UDP datagrams with standard Panda3D UDP sockets, and read the shared memory with Python’s mmap module.
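A plain-Python sketch of that receiving step (I use a raw stdlib socket here where Panda3D’s QueuedConnectionManager classes would go; the 15-joint little-endian float32 layout and the mapping name "KinectVideo" are assumptions that must match whatever the sender uses):

```python
import socket
import struct

NUM_JOINTS = 15  # assumed joint count; must match what the sender packs
JOINT_FMT = "<%df" % (NUM_JOINTS * 3)  # little-endian float32 x, y, z per joint

def unpack_skeleton(datagram):
    """Turn one UDP datagram back into a list of (x, y, z) joint tuples."""
    values = struct.unpack(JOINT_FMT, datagram)
    return [values[i:i + 3] for i in range(0, len(values), 3)]

def receive_frames(port=9000):
    """Yield skeleton frames as they arrive on a plain UDP socket."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", port))
    while True:
        data, _ = sock.recvfrom(4096)
        yield unpack_skeleton(data)

def open_video_mapping(size=640 * 480 * 3):
    """Open the shared-memory region the C++ side writes VGA frames to.
    On Windows, mmap accepts tagname= to open a named file mapping;
    "KinectVideo" is a hypothetical name for illustration."""
    import mmap
    return mmap.mmap(-1, size, tagname="KinectVideo")
```

In the actual application you would poll `receive_frames()` from a Panda3D task each frame instead of blocking the main loop.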

As you can see, this is quite an intricate approach… although I think it is easier to implement than it may seem. Anyway, I think there are now some Python wrappers for OpenNI and other high-level Python bindings for Kinect development, which were not available when I needed them… :confused:

So, just google to see if there is already some easier solution which encapsulates everything in a single Python module…

Good luck :wink:

thank you very much for the answer!

Actually, I’m using Python under Ubuntu… so I’m trying (so far without success) a Python wrapper for OpenNI.

So the better way is to have two applications: one for the Panda3D side and one for the Kinect stuff.

ok, thanks!

Hi nkint,

I highly recommend using OpenNI instead of the Microsoft SDK:

  • It is more complete.
  • It is open-source.
  • It is multi-platform (I also use Ubuntu, so there is no other choice…)
  • Works with more devices (not only Kinect).

And, for the Python wrappers, I’ve made a more complete fork of the Python wrapper for OpenNI. It has more features, examples, and support for Linux, especially Ubuntu. It’s called PyOpenNI.

[size=150]Check out PyOpenNI at https://github.com/jmendeth/PyOpenNI
See http://bit.ly/install-pyopenni-ubuntu for how to install. It’s easy![/size]

Hey!
Yeah, I’m on Ubuntu too.
So far I’m using ofxOpenNI, which is a wrapper for openFrameworks (a C++ OpenGL framework for creative coding), and communicating via OSC (Open Sound Control, a protocol on top of UDP).
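OSC messages are simple enough to decode by hand on the Panda3D side if you don’t want an extra dependency: a null-padded address string, a null-padded type-tag string, then big-endian arguments. A minimal sketch for float/int messages (the "/hand/left" address below is only an example of mine, not ofxOpenNI’s actual naming):

```python
import struct

def _read_padded_string(data, offset):
    """Read a null-terminated OSC string padded to a 4-byte boundary."""
    end = data.index(b"\x00", offset)
    s = data[offset:end].decode("ascii")
    next_off = (end + 4) & ~3  # skip terminator, round up to multiple of 4
    return s, next_off

def parse_osc(data):
    """Parse one OSC message (float32/int32 arguments only)."""
    address, off = _read_padded_string(data, 0)
    tags, off = _read_padded_string(data, off)
    args = []
    for tag in tags.lstrip(","):
        if tag == "f":
            args.append(struct.unpack(">f", data[off:off + 4])[0])
            off += 4
        elif tag == "i":
            args.append(struct.unpack(">i", data[off:off + 4])[0])
            off += 4
    return address, args
```

You would call `parse_osc` on each datagram read from the UDP socket and dispatch on the address pattern.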

So far it’s OK: I only use the left hand (for translation), the right hand (for rotation) and the neck position (for the neck-hand distance), and skeleton recognition works quite well in ofxOpenNI.
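That mapping (left hand → translation, right hand → rotation, neck as the reference point) can be sketched as plain Python before touching any Panda3D nodes — the angle convention and the use of neck-hand distance here are my own guesses at the idea, not the actual ofxOpenNI values:

```python
import math

def hands_to_transform(left, right, neck):
    """Map Kinect joint positions (x, y, z) to a translation, a heading
    angle in degrees, and a neck-to-hand reach. Illustrative conventions."""
    # Left hand drives translation, measured relative to the neck.
    translation = (left[0] - neck[0], left[1] - neck[1], left[2] - neck[2])
    # Right hand drives rotation: heading from its horizontal offset.
    heading = math.degrees(math.atan2(right[0] - neck[0], right[2] - neck[2]))
    # Neck-hand distance, e.g. to scale movement sensitivity.
    reach = math.dist(right, neck)
    return translation, heading, reach
```

In Panda3D the translation and heading could then feed `setPos` and `setH` on a node each frame.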

I’ll check out your example!
Do you have any ideas on how to control actors via gestures or similar?