opentrack
Request: OSC output compatibility
Currently the UDP over network option is not directly compatible with OSC (Open Sound Control). OSC, I think, expects a text string (an address) and a number (argument) to be received over the UDP port.
I figure that it wouldn't be too hard to convert the headpose data to this format and send it to the UDP port as another output option. But this is beyond my meager C++ skills.
Compatibility with OSC would allow audio developers to move around sound fields, making virtual spaces more realistic since OSC is a standard in virtual reality and surround audio development.
+1 for the OSC support!
I was asked recently if the head tracker developed by me is capable of sending data to opentrack. The easiest way to integrate both would be by using OSC. I'm happy to discuss this further if anyone is interested on connecting both projects.
Link to the thread: https://github.com/trsonic/nvsonic-head-tracker/issues/12
Cheers!
I'm not adding another RS232-based tracker. Hatire is already a pain as it is with quirks on each platform.
Actually, I wasn't originally asking for Opentrack to receive OSC INPUT compatibility. I was requesting Opentrack OSC OUTPUT compatibility. This would open up a world of headtracking options to music developers, because if Opentrack could send OSC-compatible messages out over UDP, it could be used to control a crapload of music production software.
I was thinking about attempting this myself, but my coding skills are not good enough - I can freely admit it. I think I can see where to manipulate the data, but getting the SDK and modules going on my computer is a big challenge for a newbie like me.
@dorem-midi You're right, sorry for hijacking your thread.
I thought that adding OSC input/output might be something that @sthalik would want to consider. OSC is one of the most universal and elegant protocols for exchanging motion tracking data.
@sthalik There is no need to add support for any specific headtracker or software. You could define your own OSC input/output data format and people like me or @dorem-midi would take it from there. Cheers.
What is it specifically that you need? Your software wants to receive raw/mapped pose data, is that correct? Is there any other data that you wish to receive? What exactly is the OSC API/ABI, got any docs?
@sthalik In my case I would like to send OSC data to your app. Head rotation only, either in the form of Euler angles or quaternions.
E.g. "/quaternion", float Qw, float Qx, float Qy, float Qz
or "/ypr", float yaw, float pitch, float roll
@dorem-midi probably wants your app to output the following OSC messages to a predefined IP/port, something like:
"/ypr", float yaw, float pitch, float roll
"/xyz", float x, float y, float z
Can't really speak for @dorem-midi, though.
The "/address" string is the so-called OSC address. It allows messages to be routed neatly across different apps and their functionalities/callbacks.
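For anyone unfamiliar with the wire format: an OSC message is just the address string, a type tag string such as ",fff", and the arguments, each padded to 4-byte boundaries, with numbers in big-endian order, all inside a single UDP datagram. Below is a minimal hand-rolled packer, purely to illustrate the layout (a real implementation would use a library instead):

```cpp
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Append a string padded with NULs to a 4-byte boundary, as OSC requires.
static void pad_string(std::vector<char>& buf, const std::string& s)
{
    buf.insert(buf.end(), s.begin(), s.end());
    buf.push_back('\0');
    while (buf.size() % 4 != 0)
        buf.push_back('\0');
}

// Append a 32-bit float in big-endian byte order.
static void append_float(std::vector<char>& buf, float f)
{
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);
    for (int shift = 24; shift >= 0; shift -= 8)
        buf.push_back(char((bits >> shift) & 0xff));
}

// Build e.g. "/ypr" with three float arguments -- the resulting bytes
// are what would be sent in a single UDP datagram.
std::vector<char> make_ypr_message(float yaw, float pitch, float roll)
{
    std::vector<char> buf;
    pad_string(buf, "/ypr");   // OSC address
    pad_string(buf, ",fff");   // type tag string: three floats
    append_float(buf, yaw);
    append_float(buf, pitch);
    append_float(buf, roll);
    return buf;
}
```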
Some links regarding OSC: https://en.wikipedia.org/wiki/Open_Sound_Control https://web.archive.org/web/20030914224904/http://cnmat.berkeley.edu/OSC/OSC-spec.html http://cnmat.org/OpenSoundControl/
I usually use JUCE OSC module for my C++ projects. I also had some success with this library: http://www.rossbencina.com/code/oscpack
There are some Qt wrappers available but I have no experience using them.
Regarding my request about OSC output, @trsonic is correct. I think the best implementation is to allow the user to input the "/address" string, because different sound design software will respond to different addresses. The UDP ports are usually configurable, but the addresses are not, at least as far as I've seen. In my particular case, I'm interested in the addresses defined here: https://plugins.iem.at/docs/osc/#osc-messages which would allow head tracking data to control sound directions in real time for that particular software.
Can you write some stub code that does this? I can't write something I can't test.
Receive data on a separate thread, blocking with possibility of cancellation. Have an initializer and a destructor. Settings can be stubbed. Get data in -180->180 range and position coordinates in centimeters, all as a float array. Uses this Euler (or properly, Tait-Bryan) angle rotation order: <https://github.com/opentrack/opentrack/blob/master/compat/euler.cpp>.
The main thread will copy data once every 4 ms or so, and the callback for receiving it shouldn't spend too much time or too many CPU cycles. Ideally it should only copy the data under a lock.
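For reference, here is a bare-bones sketch of that contract (the class and names are made up, this isn't opentrack's actual module interface): a worker thread stands in for the blocking receive, and the main thread only copies the latest pose under a lock.

```cpp
#include <array>
#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>

// Hypothetical skeleton of the receiver described above -- just the
// threading/locking pattern, not opentrack's real plugin API.
// pose = yaw, pitch, roll in degrees (-180..180) and x, y, z in centimeters.
class osc_receiver
{
public:
    osc_receiver() { worker = std::thread([this] { run(); }); }

    ~osc_receiver()
    {
        stop = true;            // in a real module, also unblock the socket here
        worker.join();
    }

    // Called from the main thread every few milliseconds; only copies under a lock.
    std::array<float, 6> get_pose()
    {
        std::lock_guard<std::mutex> l(mtx);
        return pose;
    }

private:
    void run()
    {
        while (!stop)
        {
            // Stand-in for a blocking receive on the UDP socket / OSC library.
            std::this_thread::sleep_for(std::chrono::milliseconds(4));
            std::array<float, 6> incoming {};  // parsed yaw/pitch/roll/x/y/z would go here

            std::lock_guard<std::mutex> l(mtx);
            pose = incoming;
        }
    }

    std::array<float, 6> pose {};
    std::mutex mtx;
    std::atomic<bool> stop { false };
    std::thread worker;
};
```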
I'm not an expert at this, but I can try. Probably the entire OSC standard is too much to absorb as we don't need to have every option available. The way I see it, the user would be given the following options:
- IP address and UDP port (OSC receiver)
- Up to 3 address strings that the user can input - let's call them STRING1, STRING2, STRING3 - for example, "/screenrotator/yaw" or "/screenrotator/pitch". The user should be able to enter the strings because each sound software will respond to different addresses that are called out in its respective documentation.
- Choice of parameter to output, such as FLOAT YAW, FLOAT PITCH, or FLOAT ROLL
- Choice of output units (degrees or radians)
- Possibly a data transmission rate, but I'm not sure if this is defined elsewhere in the code as it also relates to data smoothing (?).
(Note, my settings example above is tailored for only the angles and not the XYZ position. Some people may want this, but I currently don't have a need for it. Implementation for XYZ would be similar, however.)
So when the OSC output is enabled, the data going out would have the format:
STRING1 FLOAT YAW
An example of the output would be something like: /screenrotator/yaw -5.6379972 /screenrotator/pitch 4.3954344
Additionally, there might be some cases where the output parameters may need to be combined in the same line:
STRING1 FLOAT YAW, FLOAT PITCH, FLOAT ROLL
for example: /ypr -5.6479972, 4.3954344, 2.53422
I hope this example helps, and sorry if I didn't answer the question well enough. Possibly @trsonic can weigh in.
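To sketch how those settings could drive the output (using the oscpack library linked above; the settings struct, function name, and default addresses here are only placeholders for the example, not an actual opentrack API):

```cpp
#include <string>
#include <utility>
#include <vector>
#include "osc/OscOutboundPacketStream.h"
#include "ip/UdpSocket.h"

// Hypothetical user settings: one OSC address per value to send.
struct osc_output_settings
{
    std::string host = "127.0.0.1";
    int port = 8000;
    std::vector<std::pair<std::string, int>> channels = {
        { "/screenrotator/yaw",   0 },  // index into the pose array
        { "/screenrotator/pitch", 1 },
        { "/screenrotator/roll",  2 },
    };
    bool radians = false;
};

// pose = yaw, pitch, roll (degrees), x, y, z.
void send_pose(const osc_output_settings& s, const float pose[6])
{
    // a real module would create the socket once and reuse it
    UdpTransmitSocket sock(IpEndpointName(s.host.c_str(), s.port));
    char buf[1024];
    osc::OutboundPacketStream p(buf, sizeof buf);

    p << osc::BeginBundleImmediate;
    for (const auto& ch : s.channels)
    {
        float v = pose[ch.second];
        if (s.radians)
            v *= 3.14159265f / 180;
        p << osc::BeginMessage(ch.first.c_str()) << v << osc::EndMessage;
    }
    p << osc::EndBundle;

    sock.Send(p.Data(), p.Size());
}
```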
+1 for adding OSC support so it can be used with all the music & binaural headtrackers set to receive yaw, pitch, and roll. I've not found a single piece of headtracking software that works; the only one I can get working is the following browser-based one, but it's very CPU-intensive. https://demos.mach1.tech/Facetracking-SpatialPlayer.html
Orientation OSC data: for custom orientation transmission we expect the following Euler YPR angles with the address /orientation:
- float[0] = Yaw | 0.0 -> 360.0 | left -> right
- float[1] = Pitch | -90.0 -> 90.0 | down -> up
- float[2] = Roll | -90.0 -> 90.0 | left-ear-down -> right-ear-down
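If it helps anyone, mapping opentrack-style angles onto that convention is mostly a matter of shifting yaw into the 0..360 range; whether pitch and roll need sign flips depends on the tracker, so treat that part as something to verify:

```cpp
// Sketch: map yaw from -180..180 degrees to the 0..360 range quoted above.
// Pitch/roll sign conventions differ between tools, so any flips are
// assumptions to check against the receiving end.
struct m1_orientation { float yaw, pitch, roll; };

m1_orientation to_mach1(float yaw, float pitch, float roll)
{
    float y = yaw < 0 ? yaw + 360.f : yaw;   // 0..360, left -> right
    return { y, pitch, roll };
}
```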
There are some other people who have made similar solutions since I opened this. Not sure if it helps, but I've connected the facepose app with an OSC translator as one example: https://youtu.be/yHo0byXcMPo And this guy did it with open source face tracking code that he has shared: https://youtu.be/rOM5t3045Jg
Hope it helps someone, and full OSC support would be even better.
Do you have any sample code for it?
Hi @sthalik !
The easiest way to implement OSC is by using this library: https://github.com/CINPLA/oscpack
for OSC output:
// add headers
#include "osc/OscOutboundPacketStream.h"
#include "ip/UdpSocket.h"
// configure ip and port
#define ADDRESS "127.0.0.1"
#define PORT 8888
#define OUTPUT_BUFFER_SIZE 1024
// call this every time new data is available (in practice the socket below would be created once and reused)
UdpTransmitSocket transmitSocket(IpEndpointName(ADDRESS, PORT));
char buffer[OUTPUT_BUFFER_SIZE];
osc::OutboundPacketStream p(buffer, OUTPUT_BUFFER_SIZE);
// example msg format (this one can be received with my osc bridge app and passed to various plugins/apps according to predefined presets)
// https://github.com/trsonic/nvsonic-head-tracker
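// (q below is assumed to be the current head-pose rotation as a quaternion;
// how it is obtained from the tracker's yaw/pitch/roll is not shown here)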
p << osc::BeginMessage("/bridge/quat") << (float)q.w << (float)q.x << (float)q.y << (float)q.z << osc::EndMessage;
// if you want to send separate quaternion or euler angle values, create a bundle, like this:
// p << osc::BeginBundleImmediate
// << osc::BeginMessage("/qW") << (float)q.w << osc::EndMessage
// << osc::BeginMessage("/qX") << (float)q.x << osc::EndMessage
// << osc::BeginMessage("/qY") << (float)q.y << osc::EndMessage
// << osc::BeginMessage("/qZ") << (float)q.z << osc::EndMessage
// << osc::EndBundle;
transmitSocket.Send(p.Data(), p.Size());
OSC input is a bit more complicated; here is an example: https://github.com/CINPLA/oscpack/blob/master/examples/SimpleReceive.cpp
I hope this helps, cheers!
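For completeness, the receiving side with oscpack looks roughly like this (a sketch modeled on the SimpleReceive example above, using the /bridge/quat message from this thread; the port is a placeholder and error handling is minimal):

```cpp
#include <cstring>
#include "osc/OscReceivedElements.h"
#include "osc/OscPacketListener.h"
#include "ip/UdpSocket.h"

class quat_listener : public osc::OscPacketListener
{
protected:
    void ProcessMessage(const osc::ReceivedMessage& m,
                        const IpEndpointName& /*remote*/) override
    {
        try
        {
            if (std::strcmp(m.AddressPattern(), "/bridge/quat") == 0)
            {
                osc::ReceivedMessage::const_iterator arg = m.ArgumentsBegin();
                float w = (arg++)->AsFloat();
                float x = (arg++)->AsFloat();
                float y = (arg++)->AsFloat();
                float z = (arg++)->AsFloat();
                // hand the quaternion over to the tracker/pose filter here
                (void)w; (void)x; (void)y; (void)z;
            }
        }
        catch (const osc::Exception&)
        {
            // malformed message or unexpected argument types; ignore
        }
    }
};

int main()
{
    quat_listener listener;
    // Listen on UDP port 8888 until interrupted; a module would instead run this
    // on its own thread and cancel it with UdpListeningReceiveSocket::AsynchronousBreak().
    UdpListeningReceiveSocket sock(
        IpEndpointName(IpEndpointName::ANY_ADDRESS, 8888), &listener);
    sock.RunUntilSigInt();
}
```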
+1 for OSC output to support headtracking with SPARTA Binauraliser and individual HRTFs: https://sourceforge.net/p/mesh2hrtf-tools/wiki/virtual_surround_with_headtracking_tutorial
thank you.
Don't know if it helps, but someone built a headtracker specifically for my VR piano library, and the control is OSC:
https://github.com/blah1898/the-experience-face-tracker
Thanks! I personally have an OSC-compatible headtracker right now, but it is not very reliable. OpenTrack has all the input compatibility to use alternative devices, which I am very interested in. So the webcam alternative is awesome for people who have nothing, but OpenTrack would open up the possibility of using many more trackers with even higher performance.
Testers needed! Drop the library into the `modules` directory. The message is `/bridge/quat` with 4 float values.
@sthalik First of all, thanks for the work - steps in the right direction. I did as directed - opened my music app and monitored the port that OpenTrack was sending to. The stream of data looks really good, and is exactly as you mentioned: /bridge/quat with 4 float values.
However, because OSC is a pretty open framework, every application will need a different type of string + data as mentioned in the earlier part of this thread. The music software I am using (Reaper + IEM plugin with OSC support) is looking for a different string so the two don't work together (yet).
So let's say you get to choose whether to send a bundle with each value being a separate message or just a single message with ypr/quat values. For ypr you probably want radians and degrees as well. Would that exhaust all expected formats?
I think that's right. For my application, I need a separate string for each value, yet other apps might require one string with multiple values.
Hi, for https://github.com/leomccormack/SPARTA the OSC format is /ypr on port 9000, where the 3 values are in degrees, with zero on all axes when looking straight ahead:
- Yaw angle (positive anticlockwise)
- Pitch angle (positive down)
- Roll angle (positive anticlockwise)
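A minimal send helper for that format could look like this (oscpack again; the socket would be created once for the target host on port 9000, and any sign flips needed relative to opentrack's own output conventions are left as assumptions to check):

```cpp
#include "osc/OscOutboundPacketStream.h"
#include "ip/UdpSocket.h"

// Sketch: the SPARTA "/ypr" message, in degrees, zero when looking straight ahead.
// Whether yaw/pitch/roll need a sign flip relative to opentrack's output depends
// on its conventions, so verify before relying on this.
void send_sparta_ypr(UdpTransmitSocket& sock, float yaw, float pitch, float roll)
{
    char buf[256];
    osc::OutboundPacketStream p(buf, sizeof buf);
    p << osc::BeginMessage("/ypr")
      << yaw      // positive anticlockwise
      << pitch    // positive down
      << roll     // positive anticlockwise
      << osc::EndMessage;
    sock.Send(p.Data(), p.Size());
}
```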
There are actually quite a lot of different OSC input conventions for headtracking. Here is an explanation of all the options supported by one headtracker dedicated to OSC and MIDI: https://supperware.net/headtracker/profile-txt/ But it is not necessary to support all this flexibility right away.
Thank you for working on this! This will be huge for https://sourceforge.net/p/mesh2hrtf-tools/wiki/Everyday_use_of_SOFA_HRTF/#headtracking
> Testers needed! Drop the library into the `modules` directory. The message is `/bridge/quat` with 4 float values.
Your module works, but I have to use another bridge to format the message (I used OSCII-bot).
I would like to create one with a "head_pose" protocol (Kinect, used in the Steinberg Cubase DAW):
/head_pose UserId, X, Y, Z, pitch, yaw, roll
and get this output (UserId set to 0 and all XYZ set to 0):
/head_pose 0, 0, 0, 0, pitch, yaw, roll
But I also need to set the rate to 25 Hz.
Great work! While a bit more niche, I would like it if it also sent XYZ over OSC, so that things like the IEM RoomEncoder could have body tracking, if it isn't too much extra work.
@ABoredBunny I think the issue is writing the module so that the OSC output stream is configurable for the many output formats. The current version has all of the data there; it just needs to be reformatted. @sletonqu used OSCII-bot to translate each OSC line into something that his software could take as input. The problem is that OSCII-bot is a really old scripting utility and is not supported on some modern OSes.
Anyway, if your OS supports OSCII-bot, you could try it. IEM plugin compatibility is what I'm after too.
@dorem-midi It doesn't send XYZ values; it sends headtracking as quaternions. I'm not worried about the headtracking values - I can make those work using Reaktor together with Reaper's routing ability. But XYZ would be nice.
@sthalik Also, regarding how it should work: the ability to change the message "header" (address) and polarity, but also the send format - the ability to send yaw, pitch, and roll, and the option to select the address per value. Of course the best option would probably be to code an implementation per software and just have a dropdown, but that sounds like more work, while this should work with most things that use OSC.
But that is just my opinion.
Thanks @ABoredBunny, I'd forgotten about that UI element - it's nearly the same, yes.
Maybe this will help someone. This is my Reaktor setup that converts the OSC output of opentrack to IEM SceneRotator OSC messages. You have to go to OSC Receive and set the port to what opentrack has, or vice versa (10000 in Reaktor), then copy your send port to OSC Receive in SceneRotator (usually 10001).
I had to convert it weirdly, so it was something like W, Y, Z, -X or the opposite of that. Don't ask how much pain that took to figure out.
Here's the Reaktor ensemble Opentrack to scene rotator.zip
@sthalik I want to thank you personally for saving me like 50 Euro
@ABoredBunny
Thanks for this! I'm trying to find a way to connect opentrack to an app that supports headtracking data in IEM SceneRotator format, and this seems like it might be the solution I've been looking for. Unfortunately, I only own Reaktor 5 and can't open your .ens file (created in Reaktor 6). I've been trying to recreate your patch and am not sure I have everything correct. Here's my screenshot... I managed to create and connect all of the modules (matching all of the options in the "Function" tab for all of them), but I'm wondering if I'm missing something, as I don't see the same icons in the lower right of each box (the "musical notes"). Any tips? Thx!
UPDATE: Just ran a test and I am getting OSC Receive data from opentrack, but I'm having trouble getting the OSC Send connected to SceneRotator (VST, latest version):
I got this all working today. Turns out the "Router 1->M" needed to be set to 'mono' in the Properties "Function" tab. Once I checked that box, signal started flowing.
Thx!!
While I do have this working, it would be amazing to be able to avoid the OSC>Reaktor translation. Would love any updates on getting this all implemented directly inside OpenTrack.
I'm still trying to wrap my head around it all, and due to some existing bugs in some of the software I'm mapping to, I'm not 100% dialed in. It's been difficult for me to know what is causing some of the issues I'm having because I'm not sure if Reaktor is not configured correctly or if the bugs are affecting things. FWIW, I'm using this setup to create a headtracking solution for immersive (binauralized) audio monitoring, and some of the audio signals are snapping around to different positions with small head movements (?).