Timothy Mak

Results: 12 issues by Timothy Mak

Hi, I'm encountering the following error. The setup is identical to #359, except that I'm setting the `subset` option to 0.

```
Number of jobs starting: 1
DEBUG - Set...
```

Would it be possible to separate the source code (i.e. what's in this repository) from the dependencies (Kaldi, etc.) in the conda MFA package? In this way, it will be...

I think if you want your data to be non-commercial, the license should be CC-BY-NC (https://creativecommons.org/licenses/by-nc/4.0/) rather than CC-BY (https://creativecommons.org/licenses/by/4.0/).

I'm training the flowtron model from scratch on the LJSpeech dataset. It seems to run ok. However, after nearly three days, the attention matrix still has the following form and...

Hi, I am trying to understand the maths behind Praat. Would you be able to explain where the algorithm [here](https://github.com/praat/praat/blob/382c64e43c64bf73b93fcec32ebfd788b5970a8d/fon/PitchTier_to_PointProcess.cpp#L39) comes from? Many thanks! Tim

I've upgraded my MFA to version v2.2.12. However, for my downstream task (TTS), my old MFA version (v2.0.0a23) seems to perform better. What are the major changes since v2.0.0a23 that...

This is more of a question. Is there an option to specify the sub-directory used within ? Currently, it seems to default to /. However, I'd like to be able...


I see that on line https://github.com/DigitalPhonetics/IMS-Toucan/blob/v2.5/Utility/path_to_transcript_dicts.py#L478, you are treating the "%" characters in the AISHELL3 transcript as if they are commas. However, they are actually "word delimiters" and do not...
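A minimal sketch of the cleanup being suggested, assuming the AISHELL3 transcripts are plain strings in which "%" only marks word boundaries (the function name `clean_aishell3_transcript` is hypothetical, not part of IMS-Toucan):

```python
def clean_aishell3_transcript(text: str) -> str:
    """Drop '%' word-delimiter marks from an AISHELL3-style transcript.

    Since '%' marks a word boundary rather than a pause, it is deleted
    outright instead of being mapped to a comma.
    """
    return text.replace("%", "").strip()

print(clean_aishell3_transcript("广州%女大学生%登山失联"))  # 广州女大学生登山失联
```

If word boundaries are needed downstream (e.g. for a word-level tokenizer), splitting on "%" before deletion would preserve them.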

Hi, I found that some characters, e.g. 凹, have duplicate entries in the dictionary:

```
凹 凹 au3/nap1
凹 凹 waa1
```

leading to incomplete results when we try to...
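A minimal sketch of how such duplicates could be merged so that a lookup returns every pronunciation, assuming entries are whitespace-separated `traditional simplified pronunciations` triples with alternatives joined by "/" (the `merge_entries` helper is hypothetical, not part of the dictionary's tooling):

```python
from collections import OrderedDict

def merge_entries(lines):
    """Merge duplicate headwords, unioning their '/'-joined pronunciations."""
    merged = OrderedDict()
    for line in lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed entries
        key = (parts[0], parts[1])  # (traditional, simplified)
        merged.setdefault(key, [])
        for pron in parts[2].split("/"):
            if pron not in merged[key]:
                merged[key].append(pron)
    return ["{} {} {}".format(t, s, "/".join(ps))
            for (t, s), ps in merged.items()]

entries = ["凹 凹 au3/nap1", "凹 凹 waa1"]
print(merge_entries(entries))  # ['凹 凹 au3/nap1/waa1']
```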

### System Info
- `transformers` version: 4.29.2
- Platform: Linux-5.15.0-92-generic-x86_64-with-glibc2.35
- Python version: 3.11.8
- Huggingface_hub version: 0.22.2
- Safetensors version: 0.4.2
- PyTorch version (GPU?): 2.2.2 (True)
- Tensorflow...
