ZeroSpeech
I want to train this model on the VCTK dataset, but I don't know how to generate the json files you provide.
Hi @Georgehappy1. I'll add some instructions to the README soon. The format of the json file is:
[
    [
        in_path,
        offset,
        duration,
        out_path
    ],
    ...
]
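For concreteness, each entry is just a list of those four fields. Assuming the standard VCTK wav48/&lt;speaker&gt;/&lt;utterance&gt;.wav layout and the relative paths built by the script below, a single entry might look like this (the specific paths and duration are made up for illustration):

[
    "wav48/p225/p225_001",    # in_path: wav relative to the corpus root, without the extension
    0,                        # offset (the script below always uses 0)
    3.52,                     # duration in seconds, from librosa.get_duration
    "train/p225/p225_001"     # out_path: train/<speaker>/<utterance>, built by the script below
]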
The following steps should get you most of the way to generating your own:
from pathlib import Path
import librosa
from tqdm import tqdm
import json

metadata = []
data_path = Path("path/to/VCTK-Corpus/")
for path in tqdm(list(data_path.rglob("*.wav"))):
    # path of the wav relative to the corpus root, without the .wav extension
    in_path = path.relative_to(data_path).with_suffix("")
    # clip duration in seconds, rounded to 2 decimal places
    duration = round(librosa.get_duration(filename=path), 2)
    # output path of the form train/<speaker>/<utterance>
    out_path = Path("train") / in_path.parts[-2] / in_path.stem
    metadata.append([str(in_path), 0, duration, str(out_path)])
That'll take a little time to run. Then you can optionally split metadata into train and test sets (there's a rough sketch of one way to do that after the next snippet). Finally, create the folders datasets/VCTK in the repo root directory and dump the json:
train_path = Path("datasets/VCTK")
train_path.mkdir(parents=True, exist_ok=True)
with open(train_path / "train.json", "w") as file:
    json.dump(metadata, file, indent=4)
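If you do split the metadata, here's a minimal sketch of one way to do it, continuing from the snippets above. It assumes a simple random utterance-level split; the 5% test fraction and the test.json filename are just placeholders, not something the repo prescribes:

import random

random.seed(42)  # make the split reproducible
random.shuffle(metadata)

n_test = int(0.05 * len(metadata))  # hold out ~5% of utterances (arbitrary choice)
test_metadata = metadata[:n_test]
train_metadata = metadata[n_test:]

with open(train_path / "train.json", "w") as file:
    json.dump(train_metadata, file, indent=4)
with open(train_path / "test.json", "w") as file:  # test.json name is just a guess
    json.dump(test_metadata, file, indent=4)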
Let me know if that works for you. Also, please share your results if you get the training working!
@bshall Thank you for your reply! I will follow your instructions to train the model on VCTK and share the results here once they are ready.
No problem @Georgehappy1. Also, I forgot to mention that you'll have to add a new config file VCTK.yaml under config/dataset. The format is:
dataset:
  dataset: VCTK
  language: english
  path: VCTK
  n_speakers: 109
Then when you run any of the scripts you'll use the flag dataset=VCTK. I think that should cover everything.
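For example, something along these lines (train.py here is just a placeholder for whichever script you're running; the important part is the dataset=VCTK override):

python train.py dataset=VCTK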
@Georgehappy1, just checking if you ever managed to get the training on VCTK working?
Yes, I have got the results. Thank you for your help. I will upload the demo link here later.
@Georgehappy1, fantastic! Looking forward to hearing the results.
If you'd like to contribute your model and dataset splits, I'd be very happy to take a look at a pull request.
@bshall Hi, the demo link is here: https://drive.google.com/drive/folders/1c1VQFzC2zf25OXZPkNTiwlaHHZOREBGe?usp=sharing. I used the speech from the first 30 speakers in VCTK to train the model, except that the first 30 utterances of p225, p226, p227, and p228 were held out as the test set. The model was trained for 240k iterations. The demos are in the 225-226 (female-to-male) and 227-228 (male-to-female) folders. I also put the test-set utterances for p225, p226, p227, and p228 in the link for reference.