iSTFTNet-pytorch
Single frequency line problem
Thanks for the implementation of iSTFT. It has better inference speed than HiFi-GAN V1. However, I found that there is a single frequency line in the output which causes a little noise. I use a 16 kHz dataset for training, and the line is always exactly at 4 kHz, the middle of the full frequency range. I'm trying to fix this problem; do you have the same problem?
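For anyone who wants to reproduce the observation, here is a minimal sketch (not part of this repo; the file name and STFT parameters are placeholders, and a mono wav is assumed) that plots the magnitude spectrogram of a synthesized wav so the constant line at sr/4 (4 kHz for 16 kHz audio) is easy to spot:

```python
import numpy as np
import soundfile as sf
import matplotlib.pyplot as plt
from scipy.signal import stft

# Hypothetical synthesized file; replace with your own output.
wav, sr = sf.read("generated.wav")

# Magnitude spectrogram in dB.
f, t, Z = stft(wav, fs=sr, nperseg=1024, noverlap=768)
plt.pcolormesh(t, f, 20 * np.log10(np.abs(Z) + 1e-8), shading="auto")
plt.xlabel("Time [s]")
plt.ylabel("Frequency [Hz]")
plt.title("Check for a constant horizontal line at sr/4")
plt.colorbar(label="dB")
plt.show()
```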
Yes, I also see that line, but it doesn't affect quality for me.
Hi @mayfool, can you show the mel-spectrogram image which has a single frequency line ?
All synthesised wavs have the single frequency line, not just a few of them, so I think it has nothing to do with the input mels.
@mayfool have you solved this problem now?
@xiaoyangnihao Nope..
I implemented the code for an arbitrary number of upsampling stages, and it seems that the single-frequency-line problem (which, by the way, lies around 5500 Hz, i.e. half of fmax at a 22050 Hz sample rate) is observed specifically for the C8C8I configuration.
Below is a comparison of C8C8I and C8C8C2I
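For reference, the two configurations can be expressed with the HiFi-GAN-style config fields this repo uses (field names and exact values are my assumption; please check them against the repo's config file). The product of the upsample rates times the iSTFT hop has to equal the mel hop size (256 here):

```python
# C8C8I: two 8x upsampling blocks, then iSTFT with a 16-point frame.
c8c8i = {
    "upsample_rates": [8, 8],            # 8 * 8 = 64x in the conv stack
    "upsample_kernel_sizes": [16, 16],
    "gen_istft_n_fft": 16,               # iSTFT frame length
    "gen_istft_hop_size": 4,             # 64 * 4 = 256 = mel hop size
}

# C8C8C2I: an extra 2x block, so the iSTFT works on smaller frames.
c8c8c2i = {
    "upsample_rates": [8, 8, 2],         # 8 * 8 * 2 = 128x in the conv stack
    "upsample_kernel_sizes": [16, 16, 4],
    "gen_istft_n_fft": 8,
    "gen_istft_hop_size": 2,             # 128 * 2 = 256 = mel hop size
}
```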
I also encountered this problem. I set the sample rate to 48 kHz, and the horizontal line appears at 12 kHz (half of the fmax of 24 kHz).
I encountered this problem too.
I have trained a model for 100k steps. It sounds good, but looking into the generated spec, it seems to make little sense.
Looking at the generated spec, we can see highlights at 2 kHz and 6 kHz which do not exist in the input mel, and not even in the mel re-computed from the generated audio. Comparing the two specs, we can say that even though it can be converted to audio by the iSTFT, the model output is actually NOT a spectrum.
Moreover, running the iFFT and de-windowing on the generated spec produces audio frames of length 16. For silence, the first ~100 frames look almost identical, so the overlap-and-add step keeps adding the same signal shifted by 4 samples each time. The result is a signal with a 4-sample period, i.e. the 4 kHz line at a 16 kHz sample rate.
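A toy demonstration of that argument (my own illustration, assuming 16 kHz audio, frame length 16 and hop 4; this is not the model's code): overlap-adding the same frame over and over produces a signal whose energy sits only at multiples of sr/4.

```python
import numpy as np

sr, n_fft, hop = 16000, 16, 4
rng = np.random.default_rng(0)
frame = rng.standard_normal(n_fft) * np.hanning(n_fft)  # one arbitrary windowed frame

# Overlap-add the SAME frame 200 times with a hop of 4 samples.
out = np.zeros(hop * 200 + n_fft)
for i in range(200):
    out[i * hop : i * hop + n_fft] += frame

# Inside the fully overlapped region the result is exactly periodic with a
# period of `hop` = 4 samples, so its spectrum only contains multiples of sr/4.
interior = out[n_fft:-n_fft]
interior = interior - interior.mean()        # drop the DC component
spec = np.abs(np.fft.rfft(interior))
freqs = np.fft.rfftfreq(len(interior), d=1 / sr)
print(freqs[spec > 1e-6 * spec.max()])       # only multiples of sr/4 (4000, 8000 Hz) remain
```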
I think adding a spec loss directly on the generated spec may be a good way to improve the quality and to fix this issue.
I will train a new model to see what happens.
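A minimal sketch of that idea (my reading of it, not the exact code used in this thread): compute a ground-truth magnitude spectrogram from the target wav with the same n_fft/hop as the generator's iSTFT (e.g. 16 and 4 for C8C8I), and penalise the L1 difference against the magnitude the generator feeds into the iSTFT. Variable names and config values below are assumptions.

```python
import torch
import torch.nn.functional as F

def linear_spec(wav, n_fft=16, hop_size=4):
    """Magnitude STFT of the target wav, framed as torch.istft expects
    (center=True), with n_fft/hop matching the generator's iSTFT settings."""
    window = torch.hann_window(n_fft, device=wav.device)
    spec = torch.stft(wav, n_fft, hop_length=hop_size, win_length=n_fft,
                      window=window, center=True, return_complex=True)
    return spec.abs()

# Inside the training step (hypothetical names; frame counts may need trimming
# by one frame so the two tensors line up):
#   y_spec_hat, y_phase_hat = generator(mel)   # magnitude + phase before iSTFT
#   spec = linear_spec(y)                      # y: target waveform, shape (B, T)
#   loss_spec = F.l1_loss(y_spec_hat, spec) * 45
#   loss_gen_all = loss_gen_all + loss_spec
```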
Now I have trained a model with a linear spec loss for 100k steps, and the 4 kHz signal is gone.
Since I randomly selected the sample to synthesize, I cannot find the exact sample used for the plot above, so I re-plotted the figure with a new sample.
In the figure with the spec loss, the generated spec makes sense, the 4 kHz line is gone, and the waveform for the beginning silence is no longer always the same. Now everything looks all right.
@ease-zh what changes have you made to this repo to achieve that?
Just an L1 loss on the generated linear spec: `loss_spec = F.l1_loss(y_spec, spec) * 45`. However, after careful comparison, I found that the spec loss harms the audio quality. Maybe changing the loss formula or tuning the loss weight could further improve the quality. Do you have more suggestions?
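One possible direction for "changing the loss formula" (a suggestion, not something tested in this thread) is to compare log magnitudes and add a spectral-convergence term, in the style of multi-resolution STFT losses such as the one used in Parallel WaveGAN:

```python
import torch
import torch.nn.functional as F

def log_stft_magnitude_loss(y_spec_hat, spec, eps=1e-7):
    # L1 distance between log magnitudes; de-emphasises the loudest bins.
    return F.l1_loss(torch.log(y_spec_hat + eps), torch.log(spec + eps))

def spectral_convergence_loss(y_spec_hat, spec, eps=1e-7):
    # Relative Frobenius-norm error between the two magnitude spectrograms.
    return torch.norm(spec - y_spec_hat, p="fro") / (torch.norm(spec, p="fro") + eps)

# e.g. (the weight is a hyperparameter to tune):
#   loss_spec = (log_stft_magnitude_loss(y_spec_hat, spec)
#                + spectral_convergence_loss(y_spec_hat, spec)) * weight
```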
By the way, I think using `reflection_pad` before `conv_post` makes less sense, although it does work. I guess it is there to adjust the length so that `torch.istft` returns exactly the same number of samples as were used to calculate the mel? But the mismatch is caused by `torch.istft` only supporting `center=True` mode, while when calculating the mel we manually pad the wav and set `center=False` for `torch.stft`. This may not affect the final synthesis, and I will give it a try later.
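To make the mismatch concrete, here is a toy length check (my own sketch with assumed values n_fft=16, hop=4; not code from this repo). The `center=False` framing used for the mels produces one frame fewer than the `center=True` framing that `torch.istft` assumes, which is exactly the one extra frame the `reflection_pad` adds back:

```python
import torch

n_fft, hop = 16, 4
wav = torch.randn(1, 8192)
window = torch.hann_window(n_fft)

# HiFi-GAN-style mel framing: pad the wav manually and use center=False.
pad = (n_fft - hop) // 2
wav_padded = torch.nn.functional.pad(wav.unsqueeze(1), (pad, pad), mode="reflect").squeeze(1)
spec_no_center = torch.stft(wav_padded, n_fft, hop_length=hop, window=window,
                            center=False, return_complex=True)

# torch.istft assumes center=True framing; a center=True STFT round-trips exactly.
spec_center = torch.stft(wav, n_fft, hop_length=hop, window=window,
                         center=True, return_complex=True)
wav_rec = torch.istft(spec_center, n_fft, hop_length=hop, window=window, center=True)

print(spec_no_center.shape[-1], spec_center.shape[-1])  # 2048 vs 2049: off by one frame
print(wav_rec.shape[-1] == wav.shape[-1])               # True: lengths match exactly
```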
And then? Have you tried it?
@a897456 Not yet. I've been busy with something else.