Mathias Gatti
Hey! No, but it's a good idea, I just added that feature :) Please try it and let me know how that goes
Oh, sorry, and thanks for reviewing it! I just pushed some possible fixes. I'll test them thoroughly tomorrow
Ah right, I don't know much about Japanese and Mandarin. If you can send me a text example and how it should be split into notes, that would be great....
Do you have some text to try? I added basic tokenization: given some text like 曲项向天歌, it assigns each character to a different note. "曲项向天歌" is converted into...
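A minimal sketch of what that per-character splitting could look like (the note names and function name here are made up for illustration, not taken from the actual code):

```python
def split_per_character(text, notes):
    """Assign each character its own note, cycling through the
    given note sequence (illustrative only)."""
    return [(char, notes[i % len(notes)]) for i, char in enumerate(text)]

pairs = split_per_character("曲项向天歌", ["C4", "D4", "E4"])
# each of the five characters gets exactly one note
```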
Awesome! Do you want to share how you fixed it?
Is there an easy fix for this? I'm trying to train aitextgen on a big Spanish corpus (20 GB) and I'm checking if it's possible to do so by preparing...
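This isn't aitextgen-specific, but one generic way to prepare a corpus that large is to stream it in chunks so it never has to fit in memory at once (the path and chunk size below are placeholders):

```python
def iter_chunks(path, chunk_chars=64 * 1024 * 1024):
    """Yield successive text chunks from a large file so a 20 GB
    corpus is never loaded in memory all at once.
    chunk_chars is an arbitrary example size."""
    with open(path, encoding="utf-8") as f:
        while True:
            chunk = f.read(chunk_chars)
            if not chunk:
                break
            yield chunk
```

Each chunk could then be fed to whatever preprocessing or tokenization step comes next.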
I receive the same error trying to load [this](https://huggingface.co/flax-community/gpt-2-spanish/) GPT-2 Spanish model from Hugging Face.
```python
ai = aitextgen(model_folder="trained_model",
               config="trained_model/config.json",
               tokenizer_file="trained_model/tokenizer.json",
               to_gpu=True)
generated_text = ai.generate_one(max_length=30, prompt="Esto...
```
Are there any updates on this issue? It would be super useful
Check the new code example; you should import `lsd` like this:
```python
from pylsd.lsd import lsd
```
Hi! I also failed trying MBROLA in the past, unfortunately, and I'm too lazy to debug C code right now. About the notes per syllable: it should be doing that....