Yugesh AV

Results: 9 comments of Yugesh AV

> Hi, you need to downsample to 16K first. Does your model have any option to resample the audio data?
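
For reference, a minimal sketch of one way to do that downsampling before inference, assuming librosa and soundfile are available; the file names are placeholders:

```python
import librosa
import soundfile as sf

# librosa resamples to the requested rate on load
noisy_16k, sr = librosa.load("noisy_48k.wav", sr=16000)

# write the 16 kHz version out for the enhancement step
sf.write("noisy_16k.wav", noisy_16k, sr)
```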

> Sorry, I think you can directly use the FullSubNet model to enhance the 48K wav file at inference time.
>
> Check this [line](https://github.com/haoxiangsnr/FullSubNet/blob/main/src/dataset/DNS_INTERSPEECH_inference.py#L38) of the project. When loading,...

> Could you please send me the wav file and the inference config?

Input file uploaded in this link: https://drive.google.com/file/d/1UVejws8QuAtDWuA3cyCU6nMNp1Gv2E-L/view?usp=sharing

Code changes are in config/inference/fullsubnet.toml:

inherit = "config/common/fullsubnet_inference.toml"
[dataset]
path...

> You will get the correct result by changing `sr = 48000` to `sr = 16000` in the `inference/fullsubnet.toml`, I presume?
>
> Considering that `sr = 48000`, Librosa will...
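
As a quick illustration of why that `sr` value matters (a sketch with a placeholder file name, assuming a mono wav), `librosa.load` resamples the audio to whatever rate is passed, so `sr = 48000` and `sr = 16000` yield signals of different lengths and bandwidths:

```python
import librosa

y_48k, _ = librosa.load("input.wav", sr=48000)  # keeps (or forces) 48 kHz
y_16k, _ = librosa.load("input.wav", sr=16000)  # resamples down to 16 kHz

# the 48 kHz signal has ~3x the samples of the 16 kHz one,
# so a model trained on 16 kHz data will see mismatched frames
print(len(y_48k), len(y_16k))
```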

The pre-trained model is here: https://github.com/haoxiangsnr/FullSubNet/releases

On Wed, Mar 10, 2021, 2:08 PM ahmedbahaaeldin wrote:
> @yugeshav can you share the pretrained model?

As per the author, it is fullsubnet.

On Wed, Mar 10, 2021, 5:19 PM ahmedbahaaeldin wrote:
> @yugeshav which one from the archive/data file should I pick for the...

> Ah, the internet provided a fix/workaround - for my problem, at least. Woohoo!
>
> > edit ...etc\pulse\default.pa
> > change the line where waveout module is loaded and...

> > > Ah, the internet provided a fix/workaround - for my problem, at least. Woohoo!
> > > > edit ...etc\pulse\default.pa
> > > > change the line where...

OK, any timeline you are thinking of?