Decoupling the UI from the audio engine.
Still interested in doing this? I think it'd make it much easier to port the app to multiple platforms, including mobile (with a web-based UI). I've been making progress over at http://github.com/infamous/infamous. No docs or official demos yet. Here's a short video preview of part of a UI I was building with infamous:
https://youtu.be/gaUA00ggb9Y
We can imagine those cards in a slightly different arrangement, like the machines in Buzztrax.
Seems like GStreamer has good support for compiling on all the target platforms, so making the UI web-based might make cross-platform builds easier overall.
When infamous (name subject to change) is a little further along, I'd like to make a proof-of-concept UI similar to the one Buzztrax currently has.
Totally. We need some RPC layer between the UI and the 'engine'. For a long time I've been thinking of using OSC; I'm currently playing with OSC in a hardware project. It will work, but might be a bit inconvenient since it is async. E.g. you would send a message to the engine: /buzztrax/load s "/path/to/song" and it will load the song. Once the song is loaded, the engine would broadcast /buzztrax/current-song s "song-name", and that tells your UI that the song was loaded.
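A minimal sketch of that async exchange from the UI side, assuming liblo as the OSC library (the thread doesn't name one) and made-up port numbers: the UI fires off /buzztrax/load and gets no return value; the confirmation arrives later as a /buzztrax/current-song message handled by a callback.

```c
/* Sketch only: liblo is an assumption, ports are arbitrary. */
#include <stdio.h>
#include <lo/lo.h>

/* Called when the engine broadcasts /buzztrax/current-song s "song-name". */
static int
on_current_song (const char *path, const char *types, lo_arg **argv,
                 int argc, lo_message msg, void *user_data)
{
  printf ("engine reports current song: %s\n", &argv[0]->s);
  return 0;
}

int
main (void)
{
  lo_address engine = lo_address_new ("localhost", "7770");
  lo_server_thread st = lo_server_thread_new ("7771", NULL);

  lo_server_thread_add_method (st, "/buzztrax/current-song", "s",
                               on_current_song, NULL);
  lo_server_thread_start (st);

  /* Fire-and-forget request; the confirmation arrives asynchronously above. */
  lo_send (engine, "/buzztrax/load", "s", "/path/to/song");

  getchar ();  /* keep the listener alive for the demo */
  lo_server_thread_free (st);
  lo_address_free (engine);
  return 0;
}
```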
So in order to get there, we need:
- [ ] build Buzztrax for Android
- [ ] add a new front-end, e.g. buzztrax-server, that when called with --osc-port starts an OSC server on that port (see the sketch after this list)
- [ ] write a front-end that launches the server and, e.g., loads a song and plays it
- [ ] design the full OSC message protocol we need
Some of those things can be done in parallel.
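Here's a rough sketch of what the buzztrax-server front-end could look like: parse --osc-port with GOption (Buzztrax already uses GLib) and dispatch OSC methods with liblo. The liblo usage, the default port, and the stubbed-out song loading are all assumptions for illustration; only the /buzztrax/load and /buzztrax/current-song paths come from this thread.

```c
/* Sketch of a headless buzztrax-server: not the real engine code. */
#include <glib.h>
#include <lo/lo.h>

static gint osc_port = 7770;   /* default port is an arbitrary choice */

static GOptionEntry entries[] = {
  { "osc-port", 0, 0, G_OPTION_ARG_INT, &osc_port, "OSC server port", "PORT" },
  { NULL }
};

/* /buzztrax/load s "/path/to/song": load the song, then notify the caller. */
static int
on_load (const char *path, const char *types, lo_arg **argv, int argc,
         lo_message msg, void *user_data)
{
  const char *song_path = &argv[0]->s;
  g_message ("loading %s", song_path);
  /* ... hand the path to the real engine here ... */

  /* Async confirmation: reply to the sender with the current song.
     A real implementation would broadcast to all registered UIs. */
  lo_send (lo_message_get_source (msg), "/buzztrax/current-song", "s",
           song_path);
  return 0;
}

int
main (int argc, char *argv[])
{
  GError *error = NULL;
  GOptionContext *ctx = g_option_context_new ("- headless Buzztrax engine");
  g_option_context_add_main_entries (ctx, entries, NULL);
  if (!g_option_context_parse (ctx, &argc, &argv, &error)) {
    g_printerr ("option parsing failed: %s\n", error->message);
    return 1;
  }

  gchar *port = g_strdup_printf ("%d", osc_port);
  lo_server_thread st = lo_server_thread_new (port, NULL);
  g_free (port);

  lo_server_thread_add_method (st, "/buzztrax/load", "s", on_load, NULL);
  lo_server_thread_start (st);

  g_main_loop_run (g_main_loop_new (NULL, FALSE));
  return 0;
}
```

A UI like the one in the first post would then only need to speak this OSC protocol over a socket, which keeps the web-based front-end completely independent of how the engine is compiled on each platform.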