Voice isolation and volume normalisation
Hi
I use this extension to listen to local lectures. I have observed two problems. First, the lecturer sometimes moves further from or closer to the mic, so the volume is not constant. Second, there is sometimes background noise (people talking) that makes it hard to hear the lecturer's voice. I did some research, and there are a few tricks to isolate a voice:
Using an equalizer to apply a rough "triangle" shape: a band-pass from about 200 Hz to 3000 Hz, with peaking boosts in the middle of that range (see the code and the wiring sketch below):
// Peaking boosts to emphasise the 2–3 kHz range, where most speech intelligibility lives.
const eqBoost1 = audioContext.createBiquadFilter();
eqBoost1.type = "peaking";
eqBoost1.frequency.value = 2000;
eqBoost1.gain.value = 6;
eqBoost1.Q.value = 1;

const eqBoost2 = audioContext.createBiquadFilter();
eqBoost2.type = "peaking";
eqBoost2.frequency.value = 3000;
eqBoost2.gain.value = 6;
eqBoost2.Q.value = 1;

// High-pass + low-pass together act as a band-pass of roughly 200 Hz to 3 kHz,
// which covers most of the energy of a speaking voice.
const highPass = audioContext.createBiquadFilter();
highPass.type = "highpass";
highPass.frequency.value = 200;
highPass.Q.value = 1;

const lowPass = audioContext.createBiquadFilter();
lowPass.type = "lowpass";
lowPass.frequency.value = 3000;
lowPass.Q.value = 1;
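To show what I mean by the wiring, here is a minimal sketch continuing the snippet above; the source node and the connection order are just my assumptions:

// Hypothetical wiring: media element -> band-pass -> peaking boosts -> speakers.
const videoElement = document.querySelector("video"); // assumption: the lecture's <video> element
if (videoElement) {
  const source = audioContext.createMediaElementSource(videoElement);
  source
    .connect(highPass)
    .connect(lowPass)
    .connect(eqBoost1)
    .connect(eqBoost2)
    .connect(audioContext.destination);
}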
When isolating the voice like this, it also helps to boost the volume of the output signal. I did some tests and this seems to work well. Thanks!
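For the volume part, one way I can think of to both boost and even out the level with the Web Audio API is a DynamicsCompressorNode followed by a make-up GainNode. The settings below are guesses on my part, not values I have tested with this extension:

// Hypothetical loudness-evening stage: compress the dynamic range,
// then apply make-up gain so quiet passages come up in volume.
const compressor = audioContext.createDynamicsCompressor();
compressor.threshold.value = -40; // dB; start compressing fairly early
compressor.knee.value = 30;
compressor.ratio.value = 8;
compressor.attack.value = 0.003;
compressor.release.value = 0.25;

const makeUpGain = audioContext.createGain();
makeUpGain.gain.value = 2; // rough make-up gain, to taste

// This stage would go at the end of the chain, e.g.
// ... -> eqBoost2 -> compressor -> makeUpGain -> audioContext.destination,
// instead of connecting eqBoost2 straight to the destination.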
Hey, thanks for the feedback! It's nice to see some code as well.
This is related to https://github.com/WofWca/jumpcutter/issues/46.
Are you suggesting applying the filters only to the silence-detection processor, or to the user-facing output as well? I am on board with the former; as for the latter, I am not sure whether it is something this extension is supposed to be doing (after all, it's only about skipping silence).
Would you like to make an MR? I suppose we'd just want to put those filters between the input and the volumeFilter, here:
https://github.com/WofWca/jumpcutter/blob/7252cf39ed09fa658977a7decfed6dc37c1b1de2/src/entry-points/content/ElementPlaybackControllerStretching/ElementPlaybackControllerStretching.ts#L321
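To illustrate the idea (none of the names below come from that file, they are just placeholders for the nodes around that line): instead of connecting the source straight to the volume filter, the band-pass would sit in between, so the silence detection only "hears" the speech band. Something along these lines:

// Hypothetical sketch; the real code around that line may look different.
function connectThroughVoiceBandPass(
  audioContext: AudioContext,
  source: AudioNode,
  volumeFilter: AudioNode,
): void {
  const highPass = audioContext.createBiquadFilter();
  highPass.type = "highpass";
  highPass.frequency.value = 200;

  const lowPass = audioContext.createBiquadFilter();
  lowPass.type = "lowpass";
  lowPass.frequency.value = 3000;

  // source -> highPass -> lowPass -> volumeFilter, instead of source -> volumeFilter.
  source.connect(highPass).connect(lowPass).connect(volumeFilter);
}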