
Humanization / randomness

Open sjaehn opened this issue 4 years ago • 6 comments

Addition of some randomness / humanization for velocity and timing has been requested @ linuxmusicians.com: https://linuxmusicians.com/viewtopic.php?f=24&t=21670

sjaehn avatar Jun 24 '20 15:06 sjaehn

I think true humanization would require groove templates: http://www.numericalsound.com/uploads/3/2/1/6/32166601/dna-groove-template-user-manual.pdf

One way would be to add support for groove templates like above, but another would be to add a sidechain MIDI input whose timing information would be transferred to the main input. Not sure if an LV2 plugin can even have two MIDI inputs, but for now, let's assume it can, or will soon ;)

The sidechain MIDI would be quantized to the grid of BSchaffl, and the difference between the quantized and non-quantized versions would determine the timing of the output of BSchaffl.

You could get the sidechain MIDI from an audio track with a drum-trigger plugin that converts it to MIDI. That way the output of BSchaffl gets the groove of the audio, creating something like this!
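Not part of the original thread: a minimal Python sketch of the quantize-and-diff idea described above, with hypothetical function names, assuming note times are expressed in beats. Each sidechain note is snapped to the grid, the residual (original minus quantized) is stored per grid slot, and those residuals are then applied to the grid-aligned main input.

```python
def groove_offsets(sidechain_times, grid):
    """Quantize each sidechain note to the grid and record the
    residual (original - quantized). The residuals are the 'groove'."""
    offsets = {}
    for t in sidechain_times:
        q = round(t / grid) * grid
        offsets[q] = t - q
    return offsets

def apply_groove(main_times, grid, offsets):
    """Shift each (already grid-aligned) main-input note by the
    residual captured at the same grid position, if any."""
    return [t + offsets.get(round(t / grid) * grid, 0.0) for t in main_times]
```

For example, a sidechain playing slightly ahead of or behind the grid would push the quantized main pattern by the same amounts, reproducing its feel.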

magnetophon avatar Jun 24 '20 18:06 magnetophon

I'm thinking about an easier way. Not true "humanization", only "simulating" the human error rate by randomization, as many other drum machines do (including Hydrogen).

FYI, LV2 can handle multiple MIDI input and MIDI output ports. However, hosts usually have a problem with that.

Edit: But you don't need multiple MIDI ports, as you have up to 16 MIDI channels.
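Not part of the original thread: a minimal Python sketch of the simple randomization described above, with hypothetical function and parameter names. Each event gets a bounded random offset on velocity (clamped to the MIDI 1-127 range) and on its note-on time, roughly the way simple drum machines "humanize" a pattern.

```python
import random

def humanize(events, vel_amount=10, time_jitter=0.01, seed=None):
    """Apply bounded random offsets to velocity and note-on time.
    events: list of (time_in_beats, note, velocity) tuples."""
    rng = random.Random(seed)  # seedable for reproducible tests
    out = []
    for time, note, vel in events:
        v = max(1, min(127, vel + rng.randint(-vel_amount, vel_amount)))
        t = max(0.0, time + rng.uniform(-time_jitter, time_jitter))
        out.append((t, note, v))
    return out
```

In a real-time LV2 plugin the same idea would be applied per event in the run() callback rather than over a list, but the clamping and bounded-jitter logic is the same.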

sjaehn avatar Jun 24 '20 19:06 sjaehn

There are several ways to humanize quantized MIDI-pattern music. You can use algorithms, AI (as mentioned @ linuxmusicians.com), and, of course, humans.

I can't help with AI as I don't have any experience with it. I'm very skeptical that AI (or a natural intelligence) can produce useful humanization of real-time MIDI signals without pre-listening to the whole track (or at least parts of it).

No doubt, you can humanize a MIDI track using human groove patterns, human MIDI sidechaining, or a human-generated MIDI track.

But we are already going the algorithmic way: with the amp swing, the step swing, the amp sliders, and the str markers. This way you can roughly simulate a playing style, and you can spice it up with some randomness. You may call it "humanized", but (you are right) it isn't human.
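Not part of the original thread: a minimal Python sketch of one piece of that algorithmic approach, step swing, with hypothetical names. Offbeat steps are pushed late by a fraction of the step length; combined with per-step amplitude scaling and a little randomness, this roughly simulates a playing style.

```python
def swing_step_time(step, steps_per_beat=2, swing=0.2):
    """Return the onset time (in beats) of a step, delaying every
    second step by a fraction of the step length."""
    step_len = 1.0 / steps_per_beat
    t = step * step_len
    if step % 2 == 1:  # offbeat steps get pushed late
        t += swing * step_len
    return t
```

With swing=0.2 and two steps per beat, steps 0-3 land at 0.0, 0.6, 1.0, and 1.6 beats instead of the rigid 0.0, 0.5, 1.0, 1.5.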

sjaehn avatar Jun 25 '20 17:06 sjaehn

Agreed. As mentioned in #12, I think we don't even need the randomness, at least when we have layers.

magnetophon avatar Jun 25 '20 18:06 magnetophon

Amp randomization added in 09ad274985d13bf0fd44554e333602c79a412ef9.

Timing randomization added in fa25a00b50c25d1076e92cb660083fda1fbb0605. With some TODOs (latency, values > 0.5).

sjaehn avatar Jun 26 '20 13:06 sjaehn

Latency and values > 0.5 fixed too in 8a25852bbd78abaa1d9041c23b7d6f1db8db6a0e and bd40abf957376b5a4f8fd6fce491623e1e7d9e4f, respectively.

sjaehn avatar Jun 26 '20 20:06 sjaehn