Noise suppression fine tune
Hi Rikorose,
I'm trying to fine tune some effects, do you have any suggestions for these points?
- In harsh (low-SNR) environments: since the dataset only covers -5 to 45 dB SNR, the resulting spectrogram has little energy above 5 kHz. Can this be improved?
- I want to enhance the 8 kHz to 14 kHz range and increase the brightness of the human voice. Can this be done through post-processing?
- PercepNet applies a global gain on top of the warped per-band gains. Do we need to do the same thing here?
Thanks, Aaron
- Most probably. One could think about adding a connection from a later stage of the DF decoder to the ERB decoder. However, this would no longer allow running the ERB decoder on its own, without the DF decoder.
- This idea is partly implemented via the air absorption distortion, but it has not been properly tested.
- Not sure what you mean here. Which formula are you referring to?
Hi,
I found the results are good when using your website. I re-trained the model in Keras, but Keras does not support grouped Conv2DTranspose layers. I will try to figure out the difference between Keras and Torch.
Best regards, Aaron
Hi,
I am checking the model inputs and found some differences. Using numpy.rfft, a vorbis window, and stft_norm, I can reproduce the values of this stft function:
stft_norm = 1 / (n_fft ** 2 / (2 * hop))
spec = torch.stft(
    audio, n_fft=n_fft, hop_length=hop, window=torch.Tensor(vorbis_window(n_fft)),
    return_complex=True, normalized=False, center=False,
).transpose(1, 2) * stft_norm
But when I send the same signal to df.analysis or df_features in enhance.py, I get a different spec than from this stft function. Is there any difference between them?
Another question: is the dB rescale important for the ERB features?
Thanks,
The code looks good; I'm not sure where the differences come from. dB scaling is important since raw amplitude does not correlate well with human loudness perception and is therefore not a good feature.
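To illustrate that point, here is a generic log-power mapping for ERB-band features (a minimal sketch of the idea only; the exact constants and normalization DeepFilterNet uses may differ):

```python
import numpy as np

def erb_db(erb_power, eps=1e-10, db_min=-80.0):
    """Map linear ERB-band power to a roughly perceptual dB scale.

    Raw amplitudes span several orders of magnitude; taking the log
    compresses them into a range that correlates better with loudness.
    The floor at db_min keeps silence from producing -inf.
    """
    db = 10.0 * np.log10(np.maximum(erb_power, eps))
    return np.maximum(db, db_min)

# Two bands that differ by a factor of 1000 in power differ by ~30 dB,
# matching the logarithmic nature of loudness perception.
bands_db = erb_db(np.array([1.0, 1e-3, 0.0]))
```

Without this compression, a single loud band dominates the input range and quiet-but-audible bands are numerically almost invisible to the network.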
Hi,
I tried this command in enhance.py:
spec, erb_feat, spec_feat = df_features(audio, df_state, device=get_device())
and saved spec as a npy file.
I also used
spec = torch.stft(
    audio, n_fft=n_fft, hop_length=hop, window=torch.Tensor(vorbis_window(n_fft)),
    return_complex=True, normalized=False, center=False,
).transpose(1, 2) * stft_norm
But these two functions produce different values of spec.
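When debugging a mismatch like this, a self-contained NumPy reference can help pin down which side differs. The sketch below (my own, not from this repo) frames without padding, matching center=False, and applies the stft_norm from the snippet above; vorbis_window here is the standard Vorbis slope window, which may not match the repo's exact implementation:

```python
import numpy as np

def vorbis_window(n):
    # Standard Vorbis slope window: sin(pi/2 * sin^2(pi * (k + 0.5) / n))
    k = np.arange(n)
    return np.sin(0.5 * np.pi * np.sin(np.pi * (k + 0.5) / n) ** 2)

def stft_np(audio, n_fft=960, hop=480):
    # Frame the signal without padding (equivalent to center=False),
    # window each frame, and normalize like stft_norm above.
    win = vorbis_window(n_fft)
    norm = 1.0 / (n_fft ** 2 / (2 * hop))
    n_frames = 1 + (len(audio) - n_fft) // hop
    frames = np.stack(
        [audio[i * hop:i * hop + n_fft] * win for i in range(n_frames)]
    )
    return np.fft.rfft(frames, axis=-1) * norm
```

Comparing df.analysis and torch.stft each against this reference (e.g. with np.allclose) should show whether the gap comes from padding/alignment, the window, or the normalization factor.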
I have the same question. Have you figured out the answer?
In stream-processing mode, each step I only have 480 samples (48 kHz sample rate, 10 ms of data). Even with a 480-sample delay and 480-sample overlap, neither the vorbis window with np.fft nor torch.fft gives the same result as the spec from df.analysis. It confuses me...
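One common source of confusion in the streaming case is frame alignment. The sketch below (my own illustration, not this repo's implementation) keeps a rolling n_fft buffer and shifts it by one hop per 10 ms chunk, which reproduces the offline center=False framing after a one-hop startup delay:

```python
import numpy as np

def stream_frames(audio, n_fft=960, hop=480):
    """Emit the same frames as offline center=False framing, one hop at a time.

    A rolling buffer of n_fft samples is shifted by hop (480 samples = 10 ms
    at 48 kHz) for each incoming chunk, so consecutive frames overlap by
    n_fft - hop = 480 samples. The buffer starts zero-filled, so the first
    emitted frame is half zeros: that is the one-hop algorithmic delay.
    """
    buf = np.zeros(n_fft)
    for start in range(0, len(audio) - hop + 1, hop):
        buf = np.concatenate([buf[hop:], audio[start:start + hop]])
        yield buf.copy()
```

Applying the vorbis window and np.fft.rfft (plus the stft_norm factor) to each yielded frame should then match the offline STFT bin-for-bin from the second frame onward; if it doesn't, the offset between streamed and offline frames is the first thing to check.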