
Implementation status and planned TODOs

r9y9 opened this issue 5 years ago • 35 comments

This is an umbrella issue to track progress and discuss priority items. Comments and requests are always welcome.

Milestones

  • [x] ~ 4/26 (Sun): Refactor my Jupyter-based code into Python scripts and push them to the repo
  • [x] Achieve comparable quality to sinsy
  • [ ] Achieve comparable quality to NEUTRINO

Fundamental components

  • [x] Music context extraction (by sinsy)
  • [x] Acoustic model (music context to vocoder parameter prediction)
  • [x] Relative pitch modeling
  • [x] Time-lag & duration model
  • [x] Multi-stream modeling
  • ~~Quantized F0 modeling~~
  • [x] Autoregressive modeling [3] #31
  • [x] Mixture density networks #20
  • [x] Explicit vibrato modeling (low priority, as I believe autoregressive models implicitly model vibrato)
  • ~~HMM (or similar)-based unsupervised phone-level alignment.~~ https://github.com/DYVAUX/SHIRO-Models-Japanese
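As a rough illustration of the relative pitch modeling item above: the idea, under my reading, is to predict the deviation of the sung log-F0 from the pitch implied by the score's note, rather than absolute F0. A minimal numpy sketch with made-up numbers (not the repo's implementation):

```python
import numpy as np

def midi_to_hz(note):
    # Standard MIDI note -> Hz conversion (A4 = note 69 = 440 Hz)
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

note = np.full(100, 60)  # score: a constant C4 note
# Synthetic "sung" F0: the score pitch with random detuning (in semitones)
detune = np.random.default_rng(0).normal(0, 0.3, 100)
f0 = midi_to_hz(note) * 2.0 ** (detune / 12.0)

# Training target: residual between sung log-F0 and score log-F0
lf0_residual = np.log(f0) - np.log(midi_to_hz(note))
# At synthesis time: predicted residual + score log-F0 -> absolute F0
f0_hat = np.exp(lf0_residual + np.log(midi_to_hz(note)))
assert np.allclose(f0_hat, f0)
```

The benefit is that the model only has to learn singer-style deviations (overshoot, preparation, vibrato), not the full pitch range of the score.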

Demo

  • [x] Add a Jupyter notebook to demonstrate how to use pretrained models
  • [x] Add demo page

Dataset

  • [x] Kiritan singing https://zunko.jp/kiridev/login.php
  • [x] nit-song070
  • [x] jsut-song

Frontend

MusicXML -> context features

  • [x] Japanese language support https://github.com/r9y9/pysinsy
  • [x] English language support
  • ~~Chinese language support~~ #105
  • ~~Pure python implementation for musicxml parsing~~ We can use https://github.com/oatsu-gh/utaupy for converting UST to HTS labels
  • ~~Frontend implementation for MIDI files~~ Frontend can be done by external tools

DSP

  • [x] Implement Nakano's vibrato parameter estimation (I have C++ implementation locally. Will port it to python) [2]
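This is not Nakano's method [2]; just a crude numpy sketch of what vibrato rate/depth estimation means, using detrending and zero-crossing counting on a synthetic F0 contour:

```python
import numpy as np

fs_frame = 200.0                      # frames per second (5 ms hop, assumed)
t = np.arange(400) / fs_frame
rate_hz, depth_cent = 6.0, 50.0       # synthetic ground-truth vibrato parameters
f0 = 440.0 * 2.0 ** (depth_cent / 1200 * np.sin(2 * np.pi * rate_hz * t))

cents = 1200 * np.log2(f0 / 440.0)    # deviation from the note pitch, in cents
cents = cents - cents.mean()          # crude detrend
# Two zero crossings per vibrato period
crossings = np.sum(np.diff(np.signbit(cents).astype(int)) != 0)
est_rate = crossings / 2 / t[-1]
est_depth = np.sqrt(2) * cents.std()  # peak amplitude of a sinusoid from its std
```

A real estimator would track a time-varying F0 trend (the note sequence) and time-varying rate/depth, but the two parameters have the same meaning.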

Acoustic model

Context features -> acoustic features

  • [x] Net + MLPG
  • [x] (Fixed width) autoregressive models [3]
  • [x] WaveNet-like model
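For the "Net + MLPG" item, the parameter-generation step can be sketched as follows. This is a simplified single-stream illustration with an assumed delta window [-0.5, 0, 0.5]; a real implementation (e.g. an MLPG routine from a library like nnmnkwii) handles multiple streams and uses banded solvers:

```python
import numpy as np

T = 5
# Network outputs per frame: means and variances of [static, delta] features
mu = np.column_stack([np.linspace(0.0, 1.0, T),   # static log-F0/mgc means
                      np.full(T, 0.25)])          # delta means
var = np.ones((T, 2))                             # predicted variances

# W maps the static trajectory c (T,) to [static; delta] observations (2T,)
W = np.zeros((2 * T, T))
for t in range(T):
    W[2 * t, t] = 1.0                             # static row
    if 0 < t < T - 1:                             # delta row: 0.5*(c[t+1]-c[t-1])
        W[2 * t + 1, t - 1] = -0.5
        W[2 * t + 1, t + 1] = 0.5

# Maximum likelihood solution: (W' P W) c = W' P mu, with P the precision
P = np.diag(1.0 / var.reshape(-1))
c = np.linalg.solve(W.T @ P @ W, W.T @ P @ mu.reshape(-1))
assert c.shape == (T,)
```

The solve smooths the static trajectory so that its deltas agree with the predicted delta means, which removes frame-wise discontinuities.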

Timing model & duration model

  • [x] Time-lag model [1]
  • [x] Phoneme duration prediction [1]

Vocoder

Acoustic features -> raw waveform

  • [x] WORLD vocoder
  • [x] Parallel WaveGAN
  • ~~LPCNet~~
  • [x] NSF

Command-line tools

  • [x] Feature extraction
  • [x] Mean-var / min-max statistics calculation
  • [x] Mean-var / min-max normalization
  • [x] Training
  • [x] Prediction
  • [x] Inference
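The two normalization schemes above can be sketched as follows; the [0.01, 0.99] min-max target range is a common convention assumed here, not necessarily the repo's setting:

```python
import numpy as np

X = np.random.default_rng(0).normal(3.0, 2.0, size=(100, 4))  # dummy features

# Mean-variance normalization (typically for output/acoustic features)
mean, scale = X.mean(axis=0), X.std(axis=0)
X_mv = (X - mean) / scale

# Min-max normalization to [0.01, 0.99] (typically for input/linguistic features)
xmin, xmax = X.min(axis=0), X.max(axis=0)
a, b = 0.01, 0.99
X_mm = a + (b - a) * (X - xmin) / (xmax - xmin)

assert np.allclose(X_mv.mean(axis=0), 0.0, atol=1e-8)
assert X_mm.min() >= a - 1e-8 and X_mm.max() <= b + 1e-8
```

In practice the statistics (mean/scale or xmin/xmax) are computed once over the training set by the stats tool and reused at inference time.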

Data loader

  • ~~Phrase-based mini-batch creation~~

Design TODOs

  • ~~Think and write software design~~
  • [x] Think about the recipe design

Software quality

  • [x] Add tests
  • [x] Enable Github actions
  • [x] Write documents

Recipes

  • [x] Think about recipe design
  • [ ] https://arxiv.org/abs/1910.09989

Misc

  • [x] Waiting for https://github.com/facebookresearch/hydra/issues/386 to provide more flexible control over configs
  • ~~Write a paper for this, perhaps?~~ Will do it as part of my PhD research

References

  • [1] Y. Hono et al., "Recent Development of the DNN-based Singing Voice Synthesis System — Sinsy," Proc. of APSIPA, 2017. PDF
  • [2] Vibrato estimation in Sinsy: "Vibrato modeling for HMM-based singing voice synthesis" (HMMに基づく歌声合成のためのビブラートモデル化), MUS80, 2009.
  • [3] Wang, Xin, Shinji Takaki, and Junichi Yamagishi. "Autoregressive neural f0 model for statistical parametric speech synthesis." IEEE/ACM Transactions on Audio, Speech, and Language Processing 26.8 (2018): 1406-1419.

r9y9 avatar Apr 19 '20 16:04 r9y9

Hi, I am one of your followers. I am glad and excited to see this great project grow, and I would like to contribute. I am familiar with Chinese, and I can help with the Chinese frontend if needed.

About the vocoder: I think the LPCNet vocoder may fit the need. It takes a spectrogram and pitch and generates audio signals.

About recipes: Kaldi wants to get rid of heavyweight pieces like shell scripts and C++ interfaces. Could we consider offering some recipes written in Python or another language?

Thank you, I am at your service!

Yablon avatar Apr 20 '20 05:04 Yablon

Hi, @Yablon, many thanks for your comments! Your help with Chinese frontend support is definitely welcome! Let me first make a Japanese version of the entire system, and then let's discuss how to extend it to other languages.

About vocoder: Yes, LPCNet is also a good candidate. I will add it to the TODO list.

About recipe: At the moment I am thinking that a recipe would look like https://github.com/r9y9/wavenet_vocoder/blob/master/egs/mol/run.sh. It consists of a single shell script that invokes the core Python implementations (e.g., train.py). A recipe can be written in Python (if you want). I may consider C++ for performance-heavy things, but for simplicity and maintainability, I will implement most core features in Python. Does that sound okay to you?
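A stage-gated run.sh in that style might look like the following sketch; the stage names are illustrative placeholders, not the actual recipe's contents:

```shell
#!/bin/bash
# Minimal stage-based recipe sketch (hypothetical stages).
# Usage: ./run.sh [start_stage] [stop_stage]
set -e
stage=${1:-0}
stop_stage=${2:-3}

if [ "${stage}" -le 0 ] && [ "${stop_stage}" -ge 0 ]; then
  echo "stage 0: data preparation"          # e.g. fetch corpus, segment songs
fi
if [ "${stage}" -le 1 ] && [ "${stop_stage}" -ge 1 ]; then
  echo "stage 1: feature extraction"        # e.g. python extract_features.py ...
fi
if [ "${stage}" -le 2 ] && [ "${stop_stage}" -ge 2 ]; then
  echo "stage 2: training"                  # e.g. python train.py ...
fi
if [ "${stage}" -le 3 ] && [ "${stop_stage}" -ge 3 ]; then
  echo "stage 3: synthesis"                 # e.g. python synthesis.py ...
fi
```

The stage gating lets users rerun from any intermediate step without redoing earlier ones.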

FYI, I don't want to add a Kaldi requirement to the repo. I suspect it would cause installation issues for users...

r9y9 avatar Apr 20 '20 06:04 r9y9

@r9y9 I agree with you and hope to see the entire system.

Yablon avatar Apr 20 '20 07:04 Yablon

Hi @r9y9, really excited to see where this project goes!

For training acoustic models that have a WORLD vocoder target, perhaps it's a good idea to take a look at WGANSing? In addition to the actual model used, I think their preprocessing gives some insight into how to predict WORLD vocoder features efficiently.

apeguero1 avatar Apr 21 '20 23:04 apeguero1

Hi @apeguero1, thanks for sharing your thoughts! I will look into their paper and code to find something useful.

Seems like they used https://smcnus.comp.nus.edu.sg/nus-48e-sung-and-spoken-lyrics-corpus/ for singing voice synthesis, but unfortunately there are no musicxml files and MIDI files available, which makes the task quite difficult. I guess the dataset was designed for speech-to-singing voice conversion.

r9y9 avatar Apr 22 '20 03:04 r9y9

I can help with the DSP part too. Once you publish your data processing pipeline, I can help build the LPCNet vocoder for the specific spectrogram and pitch features, or whatever else is needed.

Yablon avatar Apr 22 '20 07:04 Yablon

That would be great! I am now working on refactoring data processing code for Kiritan database. After that, I will make a simple time-lag model and a duration model (those described in https://ieeexplore.ieee.org/document/8659797).

Once we complete

  1. data preprocessing (musicxml to feature vectors, acoustic feature extraction, etc)
  2. time-lag model
  3. duration model
  4. acoustic model (this is done already),

we can start experimenting with advanced ideas, including neural vocoder integration, explicit vibrato modeling, end-to-end approaches, GANs, Transformers, etc. I will keep posting progress here. I hope to finish building the whole system in one or two weeks.
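The four steps compose roughly like the following sketch. All function names, shapes, and numbers are hypothetical placeholders standing in for trained models, not the repo's actual API:

```python
import numpy as np

def predict_time_lag(n_notes):
    # 2. time-lag model: how early/late each note onset is actually sung (frames)
    return np.zeros(n_notes, dtype=int)

def predict_durations(phonemes):
    # 3. duration model: number of frames per phoneme
    return np.full(len(phonemes), 20)

def predict_acoustic(frame_feats):
    # 4. acoustic model: vocoder parameters for every frame
    T = len(frame_feats)
    return {"mgc": np.zeros((T, 60)), "lf0": np.full(T, 5.5), "bap": np.zeros((T, 5))}

phonemes = ["k", "i", "r", "i", "t", "a", "N"]
score_onsets = np.array([0, 40, 80])                # note onsets from the score
onsets = score_onsets + predict_time_lag(3)         # 2. shift by predicted time-lag
dur = predict_durations(phonemes)                   # 3. per-phoneme durations
frame_feats = np.repeat(np.arange(len(phonemes)), dur)  # 1. expand to frame level
params = predict_acoustic(frame_feats)              # 4. frame-level acoustic features
# Finally, a vocoder (e.g. WORLD) converts params into a waveform.
```

Each stage consumes the previous stage's output, which is why the preprocessing and the time-lag/duration models have to exist before end-to-end experiments make sense.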

r9y9 avatar Apr 22 '20 07:04 r9y9

Great!

Yablon avatar Apr 22 '20 08:04 Yablon

A new paper on Chinese singing voice synthesis has come up on arXiv! It was submitted to INTERSPEECH 2020. Looks very interesting.

"ByteSing: A Chinese Singing Voice Synthesis System Using Duration Allocated Encoder-Decoder Acoustic Models and WaveRNN Vocoders"

  • arxiv: https://arxiv.org/abs/2004.11012
  • samples: https://bytesings.github.io/paper1.html

r9y9 avatar Apr 24 '20 02:04 r9y9

Yes, it is. The Tacotron (2) structure can be used everywhere and performs well. Does your implementation perform as well?

I think the Tacotron structure may need more data, while a DNN-based approach may need less and perform more stably. What's your opinion?

Yablon avatar Apr 24 '20 07:04 Yablon

In TTS, we typically need more than 10 hours of data to build attention-based seq2seq models. However, in contrast to TTS, SVS is highly constrained by a musical score (e.g. pitch, note duration, tempo, etc), so I suppose that we can build Tacotron-like models even on a small dataset. For example, see https://arxiv.org/abs/1910.09989.

There are pros and cons in traditional parametric-based approaches and end-to-end approaches. I want to try the traditional one first, since it is simple and enables us to perform fast iterations of experiments, which I think is important at the early stage of prototyping.

As for the Tacotron implementation, I implemented it before (https://github.com/r9y9/tacotron_pytorch), but it is now outdated. I would use https://github.com/espnet/espnet for Tacotron 2 or Transformer implementations. The toolkit is a little bit complicated, but it is well tested and worth reusing its components.

r9y9 avatar Apr 24 '20 15:04 r9y9

I pushed the data preparation scripts for kiritan database: https://github.com/r9y9/kiritan_singing. I suppose I will finish making the entire system this weekend. Please wait for a few days!

r9y9 avatar Apr 24 '20 15:04 r9y9

That is exciting!

Yablon avatar Apr 24 '20 16:04 Yablon

That's awesome can't wait to test it! :D

apeguero1 avatar Apr 24 '20 16:04 apeguero1

I have implemented the time-lag model and duration model as well as the acoustic model. Now we can generate a singing voice from a MusicXML file. A generated sample can be found at https://soundcloud.com/r9y9/kiritan-01-test-svs-7?in=r9y9/sets/dnn-based-singing-voice. The quality is not great, but not bad.

I pushed lots of code, including feature extraction, normalization, training, and inference. The inference script is too messy at the moment and needs to be refactored. I plan to do it tomorrow.

Also, I pushed a recipe so that anyone can (ideally) reproduce my experiments: https://github.com/r9y9/dnnsvs/tree/master/egs/kiritan_singing Note that this is still WIP and may be subject to change.

r9y9 avatar Apr 25 '20 16:04 r9y9

I think the recipe is helpful for researchers, but not very friendly for those who are not familiar with the internals of singing voice synthesis systems. I plan to make a Jupyter notebook to demonstrate the usage and how it works.

r9y9 avatar Apr 25 '20 16:04 r9y9

I realized that SVS systems are more complicated than I initially thought. There are lots of things we need to do!

r9y9 avatar Apr 25 '20 16:04 r9y9

Hi, just noticed the project. It's awesome! There aren't any other open-source toolkits for singing voice out there.

I'm not sure, but it seems some systems train directly on the singing audio and alignments (e.g. https://github.com/seaniezhao/torch_npss). A possible direction might be pre-training on raw data (maybe with some alignment) and then fine-tuning on data with MusicXML (after all, strictly aligned data is much harder to obtain).

BTW, do you have any intention of making the project a more general framework, not confined to synthesis? ESPnet, for example, also covers tasks including ASR, speech translation, and speech enhancement.

ftshijt avatar Apr 30 '20 02:04 ftshijt

Hi @ftshijt. Thanks :)

The paper "A Neural Parametric Singing Synthesizer" is very interesting. They propose a multi-stream autoregressive model for vocoder parameters; that's what I planned to do next! I was inspired by the paper "Autoregressive Neural F0 Model for Statistical Parametric Speech Synthesis" https://ieeexplore.ieee.org/abstract/document/8341752/.

As for alignment, yes, it is sometimes hard to obtain. The Japanese Kiritan database provides annotated alignments, so I am using them (with small corrections). If there are no manual alignments, we can take a learning-based approach. For example, similar to what the authors of the above paper did, we can use an HMM to obtain alignments in an unsupervised manner.

For this project's direction, I want to focus on singing voice synthesis. ESPnet is an excellent tool for many speech tasks (I am one of the authors of the ESPnet-TTS paper). However, it comes with complexity; some of my friends in the TTS community told me it was difficult to use. To keep the codebase simple, hackable, and extensible, I want to focus on SVS. That said, I want to make a generic tool that supports a broad range of models, from parametric to end-to-end.

r9y9 avatar Apr 30 '20 11:04 r9y9

Not planned yet, but the speech-to-singing voice conversion task may fit in ESPnet's unified approach.

r9y9 avatar Apr 30 '20 12:04 r9y9

Whoa! Seems like OpenAI just released the GPT-2 of music! I wonder how hard it would be to reproduce this without a million songs or hundreds of GPUs. And if it works for songs with instrumentation, then maybe it would be easier to train on a purely vocal dataset? The paper doesn't mention much about finetuning, but perhaps there are some transfer learning opportunities here?

apeguero1 avatar May 01 '20 17:05 apeguero1

I was so surprised that OpenAI's model is able to generate singing voices and instrumentals simultaneously. It would be easier to train on a vocal dataset, and transfer learning is definitely worth trying.

r9y9 avatar May 02 '20 13:05 r9y9

As a minor issue, let me rename the repo from dnnsvs to nnsvs.

r9y9 avatar May 02 '20 13:05 r9y9

I have created a Jupyter notebook to demonstrate how we can use pre-trained models to generate singing voice samples.

Neural network-based singing voice synthesis demo using kiritan_singing database (Japanese)

  • Open In Colab
  • Nbviewer: https://nbviewer.jupyter.org/gist/r9y9/79705665ed5a94f0028839ca40992751

Here it is if any of you are interested. If you just want to see the demo, check the pre-rendered nbviewer page. If you want an interactive demo, use the Google Colab one.

r9y9 avatar May 03 '20 02:05 r9y9

I pushed all the code for feature extraction, training, and inference as well. Models used in the above demo can be reproduced by running the following recipe:

https://github.com/r9y9/nnsvs/tree/master/egs/kiritan_singing/00-svs-world

r9y9 avatar May 03 '20 03:05 r9y9

The notebook is great! The step-by-step approach makes it easier to follow (: The voice sounds good so far!

apeguero1 avatar May 03 '20 20:05 apeguero1

I made a new recipe for nit-song070, a singing voice dataset provided by the HTS working group. The dataset contains 31 songs recorded by a female Japanese singer. It is not huge, but it is good for testing.

  • Sample: https://soundcloud.com/r9y9/20200522-haru-ga-kita-3-nit-song070
  • Recipe: https://github.com/r9y9/nnsvs/tree/master/egs/nit-song070/00-svs-world

r9y9 avatar May 21 '20 15:05 r9y9

I have added another recipe for jsut-song dataset.

  • Sample: https://soundcloud.com/r9y9/20200525-haru-ga-kita-5-jsut-song?in=r9y9/sets/dnn-based-singing-voice
  • Recipe: https://github.com/r9y9/nnsvs/tree/master/egs/jsut-song/00-svs-world

r9y9 avatar May 25 '20 14:05 r9y9

Good news: the author of NSF published a PyTorch implementation of NSF: https://github.com/nii-yamagishilab/project-NN-Pytorch-scripts

It should be easy to integrate it with our codebase.

r9y9 avatar Jun 03 '20 23:06 r9y9

The out_acoustic directory contains 1) acoustic features (*-feats.npy) and 2) waveforms (*-wave.npy), which can be used for training neural vocoders.

ls -l dump/kiritan/norm/train_no_dev/out_acoustic/ | head
total 1254736
-rw-rw-r-- 1 ryuichi ryuichi 2315692  5月 27 00:28 03_seg0-feats.npy
-rw-rw-r-- 1 ryuichi ryuichi 2792768  5月 27 00:28 03_seg0-wave.npy
-rw-rw-r-- 1 ryuichi ryuichi 1161492  5月 27 00:28 03_seg1-feats.npy
-rw-rw-r-- 1 ryuichi ryuichi 1400768  5月 27 00:28 03_seg1-wave.npy
-rw-rw-r-- 1 ryuichi ryuichi 1567452  5月 27 00:28 03_seg2-feats.npy
-rw-rw-r-- 1 ryuichi ryuichi 1890368  5月 27 00:28 03_seg2-wave.npy
-rw-rw-r-- 1 ryuichi ryuichi 1624764  5月 27 00:28 03_seg3-feats.npy
-rw-rw-r-- 1 ryuichi ryuichi 1959488  5月 27 00:28 03_seg3-wave.npy
-rw-rw-r-- 1 ryuichi ryuichi 2060972  5月 27 00:28 03_seg4-feats.npy
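A vocoder training script consuming these pairs might sanity-check frame/sample alignment like this; the array shapes and the 5 ms hop size are assumptions for illustration, not the repo's actual values:

```python
import os
import tempfile
import numpy as np

hop = 240  # e.g. 48 kHz sample rate * 5 ms frame shift (assumed)

# Create random stand-ins for a *-feats.npy / *-wave.npy pair
d = tempfile.mkdtemp()
np.save(os.path.join(d, "seg0-feats.npy"),
        np.random.randn(120, 67).astype(np.float32))
np.save(os.path.join(d, "seg0-wave.npy"),
        np.random.randn(120 * hop).astype(np.float32))

feats = np.load(os.path.join(d, "seg0-feats.npy"))  # (frames, feat_dim)
wave = np.load(os.path.join(d, "seg0-wave.npy"))    # (frames * hop,)

# Each feature frame must correspond to exactly `hop` waveform samples,
# otherwise conditioning and target drift apart during training.
assert len(wave) == len(feats) * hop
```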

r9y9 avatar Jun 03 '20 23:06 r9y9