Add input stream for the Kinect.
We need to address:
- Find a Kinect.
- Kinect on Mac, or openFrameworks for Windows.
And the priority is low, I assume?
Accidentally closed it...
I think the Kinect in particular is not a high priority, although I think we need a third sensor source to go with the color sensor and accelerometer... A reliable audio example would be good.
Some notes. I was able to get skeleton data from a Kinect (model no. 1414) using ofxOpenNI, which is a (largely unmaintained and undocumented) openFrameworks wrapper around an old version of OpenNI.
This required:
- Placing ofxOpenNI in `third-party/openFrameworks/addons`.
- Renaming `ofGetGLTypeFromPixelFormat` to `ofGetGLFormatFromPixelFormat`, as the function has been renamed in recent versions of openFrameworks.
- Adding the ofxOpenNI source files to the ESP Xcode project.
- Adding the sub-directories of `ofxOpenNI/include` to the Xcode target's header search path.
- Copying the `lib` directory from `ofxOpenNI/mac/copy_to_data_openni_path` to the `Xcode/ESP/bin/data/openni/` directory, adding the latter directory to the project's/target's library search path, and adding the contained .dylib files to the Xcode project.
- Copying the `config` directory from `ofxOpenNI/examples/openNI-SimpleExamples/bin/data/openni` to `Xcode/ESP/bin/data/openni/`.
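For reference, here's roughly what using the addon looks like once those steps are done — a minimal sketch based on ofxOpenNI's user-tracking example. Since the addon is largely undocumented, treat the exact method and enum names (`getTrackedUser`, `JOINT_RIGHT_HAND`, etc.) as assumptions to verify against the addon's headers:

```cpp
#include "ofMain.h"
#include "ofxOpenNI.h"

class ofApp : public ofBaseApp {
public:
    ofxOpenNI openNIDevice;

    void setup() {
        openNIDevice.setup();            // expects config/ and lib/ under bin/data/openni/
        openNIDevice.addDepthGenerator();
        openNIDevice.addImageGenerator();
        openNIDevice.addUserGenerator(); // enables skeleton tracking
        openNIDevice.setMaxNumUsers(1);
        openNIDevice.start();
    }

    void update() {
        openNIDevice.update();
        for (int i = 0; i < openNIDevice.getNumTrackedUsers(); i++) {
            ofxOpenNIUser & user = openNIDevice.getTrackedUser(i);
            // World-space position of one joint; a 3-vector like this is the
            // kind of sample we'd feed into the ESP input stream.
            ofPoint rightHand = user.getJoint(JOINT_RIGHT_HAND).getWorldPosition();
            // TODO: push rightHand.x/y/z into the pipeline.
        }
    }

    void draw() {
        openNIDevice.drawDepth(0, 0, 640, 480); // so you can see if you're in frame
    }
};
```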
To make the Kinect data actually usable, we'd probably want to:
- draw the Kinect camera image / depth map (e.g. so you can see whether or not you're in the frame)
- support depth thresholding (see the sketch after this list)
- support selection of joints and the coordinate space for them (e.g. world vs. body)
- find a way of recording training samples that doesn't require the user to have their hands on the keyboard. (Note that it can take a while for OpenNI to find the user's skeleton, so even a fixed delay at the start of a gesture may not work.)
- have a way to draw joint data as a skeleton / wire-frame
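On the depth-thresholding point: the idea is just to mask out everything outside a near/far band in the raw depth frame so that only the user remains. A minimal, addon-agnostic sketch, assuming a single-channel 16-bit depth buffer in millimetres (which is what OpenNI reports); feeding it from something like ofxOpenNI's `getDepthRawPixels()` is an assumption to check against the addon:

```cpp
#include "ofMain.h"

// Build a binary mask from a raw 16-bit depth frame (single channel, values
// in millimetres): keep pixels within [nearMM, farMM], zero everything else.
ofPixels thresholdDepth(const ofShortPixels & raw, int nearMM, int farMM) {
    ofPixels mask;
    mask.allocate(raw.getWidth(), raw.getHeight(), OF_PIXELS_GRAY);
    for (size_t i = 0; i < raw.size(); i++) {
        unsigned short d = raw[i];
        mask[i] = (d >= nearMM && d <= farMM) ? 255 : 0;
    }
    return mask;
}
```

Drawing that mask (or using it to ignore joints when the user is out of range) would cover both the thresholding item and part of the "am I in frame" item above.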
Alternatively, some of this is taken care of by Synapse, and there's even a Synapse example for the GRT. This seems kind of silly, since Synapse is itself built on ofxOpenNI, but it might be easier than using ofxOpenNI directly.
Also, ofxKinectFeatures looks useful.
If we use Synapse, we don't need to show the depth / RGB image ourselves.
We can rely on the user to trim each training sample. We'll need a way to toggle recording of training samples, rather than pressing and holding a key while recording.
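The toggle itself is trivial on the openFrameworks side — a sketch, where the sample-recording hook it would drive is left as a TODO since that API doesn't exist yet:

```cpp
// In ofApp.h:
bool isRecording = false;

// In ofApp.cpp: press 'r' once to start a training sample, press again to
// end it, instead of holding a key down for the whole gesture.
void ofApp::keyPressed(int key) {
    if (key == 'r') {
        isRecording = !isRecording;
        ofLogNotice() << (isRecording ? "started" : "ended") << " recording sample";
        // TODO: begin/commit the training sample in the ESP pipeline here.
    }
}
```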
Use Nick's OSC receiving code.
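I don't know exactly what Nick's code looks like, but receiving Synapse joint data over OSC with ofxOsc is roughly the following. The ports and address patterns (joint data arriving on 12345 as e.g. `/righthand_pos_body`, plus the periodic `/righthand_trackjointpos` keep-alive sent to 12346) follow Synapse's documented convention, but should be verified against Nick's code and the Synapse docs:

```cpp
#include "ofMain.h"
#include "ofxOsc.h"

class SynapseInput {
public:
    ofxOscReceiver receiver;
    ofxOscSender sender;
    float lastKeepAlive = 0;
    ofPoint rightHand; // body-relative right-hand position from Synapse

    void setup() {
        receiver.setup(12345);            // Synapse streams joint data here
        sender.setup("localhost", 12346); // Synapse listens for requests here
    }

    void update() {
        // Synapse only streams a joint while it's periodically re-requested,
        // so re-send the request every couple of seconds.
        if (ofGetElapsedTimef() - lastKeepAlive > 2.0) {
            ofxOscMessage req;
            req.setAddress("/righthand_trackjointpos");
            req.addIntArg(1); // 1 = body-relative coordinates
            sender.sendMessage(req, false);
            lastKeepAlive = ofGetElapsedTimef();
        }
        while (receiver.hasWaitingMessages()) {
            ofxOscMessage m;
            receiver.getNextMessage(m);
            if (m.getAddress() == "/righthand_pos_body") {
                rightHand.set(m.getArgAsFloat(0), m.getArgAsFloat(1), m.getArgAsFloat(2));
            }
        }
    }
};
```

If that works, the ESP Kinect input stream would only need ofxOsc, not OpenNI itself.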