hydrogen
Humanizer
Yet another attempt to start a discussion about the humanizer within Hydrogen.
I reworked the humanizer, removed the swing
knob and its counterparts in the core backend, and introduced another color for the noise.
This is still a work in progress, but the next thing I'm going to do will come with more invasive changes to the audio engine, which will most probably not be merged.
Changes:
- Better algorithm for drawing uniformly distributed white noise
- Better algorithm for drawing Gaussian white noise
- Introducing a pink noise source
- Putting all noise-generating functions and their states into a single object called `Randomizer`
- This object will (hopefully) be properly created and accessed within Hydrogen using the well-known `create_instance()` and `get_instance()` routines
- The user can change between the two noise colors using a drop-down menu in the `MixerLine`
- The background figure of the latter was reworked to reflect the changes
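A rough sketch of what such a `Randomizer` object might look like (purely illustrative: the class name and noise colors follow the list above, everything else is an assumption, not Hydrogen's actual API). The pink source here uses the Voss-McCartney algorithm, one common way to generate 1/f noise:

```cpp
#include <array>
#include <cstdint>
#include <random>

// Hypothetical sketch of a Randomizer bundling the noise sources
// described above; names are illustrative, not Hydrogen's real code.
class Randomizer {
public:
    enum class Color { White, Pink };

    explicit Randomizer(uint32_t seed = std::random_device{}())
        : m_engine(seed), m_uniform(-1.0f, 1.0f), m_gaussian(0.0f, 1.0f) {}

    // Uniformly distributed white noise in [-1, 1).
    float uniformWhite() { return m_uniform(m_engine); }

    // Gaussian white noise, zero mean, unit variance.
    float gaussianWhite() { return m_gaussian(m_engine); }

    // Pink (1/f) noise via the Voss-McCartney algorithm: several
    // white-noise rows updated at halving rates and averaged, so
    // consecutive samples stay correlated.
    float pink() {
        ++m_counter;
        for (std::size_t i = 0; i < m_rows.size(); ++i) {
            if (m_counter % (1u << i) == 0) {
                m_rows[i] = uniformWhite();
            }
        }
        float sum = 0.0f;
        for (float v : m_rows) sum += v;
        return sum / static_cast<float>(m_rows.size());
    }

    // What the MixerLine drop-down would select between.
    float draw(Color color) {
        return color == Color::Pink ? pink() : gaussianWhite();
    }

private:
    std::mt19937 m_engine;
    std::uniform_real_distribution<float> m_uniform;
    std::normal_distribution<float> m_gaussian;
    std::array<float, 8> m_rows{};
    uint32_t m_counter = 0;
};
```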
Hi!
Concerning the missing feedback: there are not many developers contributing to Hydrogen, and all of the "original" developers who wrote the code you want to discuss are no longer active. For my part, when I have time to work on Hydrogen, I mostly spend it on bug fixes and keeping things alive :-/ About the swing knob: have you checked whether it has any effect in older versions like 0.9.4 or 0.9.5?
In general I do like the idea of giving the user more control over the randomization techniques, but I haven't checked the user-interface part of your branch yet.
About the implementation of the Randomizer: the singleton pattern is quite overused in Hydrogen :) I'm trying to avoid it for new classes where possible, since it is hard to tell which instances rely on each other. If the randomizer is only used in the Hydrogen class, there is nothing against making it a private member of that class and managing that object in the engine's constructor/destructor.
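The alternative suggested here could look roughly like this (a minimal sketch with made-up names; `Engine` stands in for the owning class and the seed is arbitrary, none of this is Hydrogen's real code):

```cpp
#include <memory>

// A bare-bones Randomizer placeholder for the sketch.
class Randomizer {
public:
    explicit Randomizer(unsigned seed) : m_seed(seed) {}
    unsigned seed() const { return m_seed; }
private:
    unsigned m_seed;
};

// Instead of a singleton, the engine owns the Randomizer as a
// private member; it is created and destroyed with the engine,
// and there is no global state to reason about.
class Engine {
public:
    Engine() : m_pRandomizer(std::make_unique<Randomizer>(1234)) {}

    // The engine hands out access only where it is needed.
    Randomizer& randomizer() { return *m_pRandomizer; }

private:
    std::unique_ptr<Randomizer> m_pRandomizer;
};
```

This keeps the ownership explicit: anything that needs randomization must be handed a reference by the engine, so the dependencies stay visible.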
Ah, I forgot to add one thing: it might help to put some screenshots and descriptions of the randomization functionality in the pull request, to discuss the ideas with users and other developers. I have no idea what those noise colors affect and where the difference lies :smile:
Hey,
Yes, I'm aware that Hydrogen is maintained but not under active development. That's no problem, since I intend to do all the work on the humanizer myself instead of handing some unfinished code to others ;). But as someone who has not yet taken part in the development of the software, I don't feel entitled to change core parts of the code and ask for their adoption without discussing the changes first. In addition, this is the first code I have ever written in C++. As someone used to interpreted languages like R, Python, or Lua, implementing all these features in C++ is a bit complicated. So I'm glad to receive some guidance and have no problem changing the current implementation. E.g. the singleton pattern was heavily inspired by the Looper object, just to make my code work at all.
I'm not quite sure what you mean by an effect in older versions, but the reading and writing of the new options specifying the noise color, as well as the removal of the swing parameter, do not cause problems. Earlier versions simply ignore the noise colors when loading a more recent .h2song file, and the swing parameter falls back to its default values since it is no longer present in the .h2song file. Importing older .h2song files with the updated humanizer works the same way.
The main thing I want to improve with my contribution is the correlations in the fluctuations generated by the humanizer. Sounds quite abstract and theoretical, I know :). Let me phrase it like this: using Gaussian white noise, as Hydrogen did beforehand, the audio engine draws a new random number for every single note to slightly alter both its velocity/pitch and its onset. But this implies that all the fluctuations are completely independent of each other. This would simulate a drummer who is not perfectly on time and immediately forgets about the note she played as soon as the stick hits its target. At one point she is slightly behind the beat in the bass drum, at the next beat she is in front of it, and so on. Everything completely uncorrelated.
This is most probably not how humans operate and, indeed, some researchers already found in the 1980s that such fluctuations behave like pink noise. Now if the drummer is behind the beat, there is a tendency that she will be behind the beat at the next one as well. But just a tendency. It's not as if we store the fluctuation added during the last beat and add a random number to it. The dependency is more complex and "fractal".
The latter is what I implemented. The user can now choose between white and pink noise via a drop-down menu. At least in my perception, the humanizer indeed produces more human-sounding results.
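The difference can be made concrete with a small standalone experiment (illustrative C++, not taken from Hydrogen's code base): white noise shows essentially no correlation between consecutive samples, while a pink-noise-like sequence generated with the Voss-McCartney scheme stays strongly correlated from one sample to the next.

```cpp
#include <array>
#include <cstddef>
#include <numeric>
#include <random>
#include <vector>

// Lag-1 autocorrelation of a sequence: near 0 for independent
// (white) samples, clearly positive for correlated (pink) ones.
double lag1Autocorrelation(const std::vector<double>& x) {
    const double mean =
        std::accumulate(x.begin(), x.end(), 0.0) / x.size();
    double num = 0.0, den = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i) {
        den += (x[i] - mean) * (x[i] - mean);
        if (i + 1 < x.size()) num += (x[i] - mean) * (x[i + 1] - mean);
    }
    return num / den;
}

// White noise: every sample is an independent Gaussian draw.
std::vector<double> whiteNoise(std::size_t n, unsigned seed) {
    std::mt19937 engine(seed);
    std::normal_distribution<double> gaussian(0.0, 1.0);
    std::vector<double> out(n);
    for (auto& v : out) v = gaussian(engine);
    return out;
}

// Pink-like noise via the Voss-McCartney scheme: rows of white
// noise updated at halving rates and summed, so consecutive
// samples share most of their rows and remain correlated.
std::vector<double> pinkNoise(std::size_t n, unsigned seed) {
    std::mt19937 engine(seed);
    std::normal_distribution<double> gaussian(0.0, 1.0);
    std::array<double, 8> rows{};
    std::vector<double> out(n);
    for (std::size_t t = 0; t < n; ++t) {
        for (std::size_t i = 0; i < rows.size(); ++i) {
            if ((t + 1) % (1u << i) == 0) rows[i] = gaussian(engine);
        }
        out[t] = std::accumulate(rows.begin(), rows.end(), 0.0);
    }
    return out;
}
```

With a few thousand samples, the white sequence's lag-1 autocorrelation hovers around zero while the pink one sits well above it, which is exactly the "memory" the text describes.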
That said, I'm not really done yet. On my side everything is working and it might be safe to merge it into master, but I will change a lot of things again, including the graphics of the mixer line.
What I intend to do:
- Introduce yet another noise-generating algorithm producing random numbers with arbitrary correlations. This can be controlled via the Hurst exponent and will include both pink and white noise as special cases.
- Since the previous point definitely carries far too much theoretical overhead for the general user, it has to be hidden in some way. Maybe there will still be some rotary knobs to control the scaling factor, and thus the contribution of the noise, for a couple of well-tested correlation settings. The rest might be accessible via an advanced button.
- The overall randomization of the velocity via the `MixerLine` and the randomization of the individual instruments via the random pitch knob in the instrument interface are completely separate in the code base. Although I really like the idea of tuning the fluctuations for every instrument separately, I think keeping them separate in the code is a bad idea. I would rather generate the fluctuation for each note as `randomNumber * overallHumanizationVelocityFactor * individualInstrumentPitchFactor`. But I haven't fully understood the audio engine and have to dig further into its implementation before applying any changes.
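As a sketch, that combined per-note fluctuation could be computed like this (the factor names are taken from the formula above; the helper itself is hypothetical, not Hydrogen's actual code):

```cpp
#include <random>

// Hypothetical helper: one random draw per note, scaled by both the
// song-wide humanization amount and the per-instrument amount, so
// the two controls combine multiplicatively instead of living in
// separate code paths.
float noteFluctuation(std::mt19937& engine,
                      float overallHumanizationVelocityFactor,
                      float individualInstrumentPitchFactor) {
    std::normal_distribution<float> gaussian(0.0f, 1.0f);
    return gaussian(engine)
           * overallHumanizationVelocityFactor
           * individualInstrumentPitchFactor;
}
```

One consequence of the multiplicative form: turning either knob to zero silences the fluctuation entirely for that note.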
Hi, I did a quick review. For me it looks good - I have suggested some cosmetic changes.
I'll try to incorporate all the suggested changes by the end of the week and also do some squashing of the commits I made along the way.
Hi all, although I think most audio techies will understand the concepts of white/pink noise, I'm not sure everyone will. It should be documented in the manual, and the mouse-over info could perhaps also give some hints. Not sure if this is something we should even worry about (I'm probably just being overcautious again), but you never know ...
Hi @thijz ,
I perfectly agree. I'm a huge fan of big and comprehensive documentation. But this is just the first draft of my changes to the humanizer.
E.g. in comparison to white noise, the pink one has some 'memory' of its past. But for now this memory is shared by all instruments, since there is a single state of the noise-generating object for all of them. Maybe this is how it works and a human keeps track of the rhythm as a whole. Or maybe our hands and feet are decoupled and it would be more appropriate to use separate states for each of them. Or some shared (toms) and some separate (snare, kick, hi-hat). Actually, since I'm not a drummer, I don't have an intuition for how it might work. In addition, all the papers I have studied so far only investigate a particular part of the drum kit (which would probably favor the separate approach). In any case, I'll try to get my hands on some drum recordings, look for the correlations myself, and adjust the implementation according to what I find.
So, since there are still a number of such open questions it would imply adjusting the documentation at every step.
What's the general policy for documentation in your project? Do you incorporate all changes immediately into the manual and wiki, or do you wait until the next release?
I've added a "manual" tag so we can track this.
BTW, it sounds like a great and fun improvement, keep up the good work @theGreatWhiteShark !