
I want to train this model on the VCTK dataset, but I don't know how to generate the JSON files you provided.


Georgehappy1 avatar May 23 '20 11:05 Georgehappy1

Hi @Georgehappy1. I'll add some instructions to the README soon. The format of the JSON file is:

[
    [
        in_path,
        offset,
        duration,
        out_path
    ],
    ...
]
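For example, a single entry might look like this (the paths and the duration value here are hypothetical; offset appears to be a start time in seconds, with 0 meaning the clip starts at the beginning of the file):

[
    [
        "wav48/p225/p225_001",
        0,
        2.15,
        "train/p225/p225_001"
    ]
]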

The following steps should get you most of the way to generating your own:

from pathlib import Path
import librosa
from tqdm import tqdm
import json

metadata = []
data_path = Path("path/to/VCTK-Corpus/")
for path in tqdm(list(data_path.rglob("*.wav"))):
    # source wav path relative to the corpus root, without the extension
    in_path = path.relative_to(data_path).with_suffix("")
    # clip length in seconds, rounded to 2 decimal places
    duration = round(librosa.get_duration(filename=path), 2)
    # write outputs under train/<speaker>/<utterance>
    out_path = Path("train") / in_path.parts[-2] / in_path.stem
    # offset is 0 since the whole file is used
    metadata.append([str(in_path), 0, duration, str(out_path)])

That'll take a little time to run. Then you can optionally split metadata into train and test sets (a sketch of one possible split follows the snippet below). Finally, create the folder datasets/VCTK in the repo root directory and dump the JSON:

train_path = Path("datasets/VCTK")
train_path.mkdir(parents=True, exist_ok=True)
with open(train_path / "train.json", "w") as file:
    json.dump(metadata, file, indent=4)
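If you want the optional test split, a minimal sketch is below. Holding out whole speakers (p225 and p226 here) is just an illustrative assumption; any split of metadata works:

held_out = {"p225", "p226"}  # hypothetical choice of test speakers
train_metadata = [m for m in metadata if Path(m[0]).parts[-2] not in held_out]
test_metadata = [m for m in metadata if Path(m[0]).parts[-2] in held_out]

# dump the split instead of the full metadata; you may also want to change
# the "train" prefix in out_path for the test entries
with open(train_path / "train.json", "w") as file:
    json.dump(train_metadata, file, indent=4)
with open(train_path / "test.json", "w") as file:
    json.dump(test_metadata, file, indent=4)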

Let me know if that works for you. Also, please share your results if you get the training working!

bshall avatar May 25 '20 08:05 bshall

@bshall Thank you for your reply! I will follow your instructions to train the model on VCTK and will post the results here once they are ready.

Georgehappy1 avatar May 26 '20 05:05 Georgehappy1

No problem @Georgehappy1. Also, I forgot to mention that you'll have to add a new config file VCTK.yaml under config/dataset. The format is:

dataset:
  dataset: VCTK
  language: english
  path: VCTK
  n_speakers: 109

Then when you run any of the scripts you'll use the flag dataset=VCTK. I think that should cover everything.
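For reference, an invocation would look something like this (the script name is an assumption; check the repo README for the actual entry points):

python train.py dataset=VCTK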

bshall avatar May 26 '20 12:05 bshall

@Georgehappy1, just checking if you ever managed to get the training on VCTK working?

bshall avatar Jun 26 '20 16:06 bshall

> @Georgehappy1, just checking if you ever managed to get the training on VCTK working?

Yes, I have got the results. Thank you for your help. I will upload the demo link here later.

Georgehappy1 avatar Jun 29 '20 08:06 Georgehappy1

@Georgehappy1, fantastic! Looking forward to hearing the results.

If you'd like to contribute your model and dataset splits, I'd be very happy to take a look at a pull request.

bshall avatar Jun 29 '20 17:06 bshall

@bshall Hi, the demo link is here: https://drive.google.com/drive/folders/1c1VQFzC2zf25OXZPkNTiwlaHHZOREBGe?usp=sharing. I used the speeches of the first 30 speakers in VCTK to train the model, except that the first 30 utterances of p225, p226, p227, and p228 were held out as the test set. The model was trained for 240k iterations. The demos are in the 225-226 (female-to-male) and 227-228 (male-to-female) folders. I also put the test sets for p225, p226, p227, and p228 in the link for reference.

Georgehappy1 avatar Jun 30 '20 05:06 Georgehappy1