cnlinxi
@MorganCZY This completely failed. Could you share a sample of your training corpus and the alignment plots from training?
@CathyW77 Huh, why? That's really strange. That said, I haven't actually tried turning off this dropout.
> I trained this model with thch30s.
> [alignment.zip](https://github.com/cnlinxi/style-token_tacotron2/files/5243754/alignment.zip)
> Here are the latest three alignment graphs, corresponding to 60k, 65k, and 70k steps.

@MorganCZY This is a bit strange. I'm...
This audio is missing now :( Could you provide a copy? Thanks a lot @superbock @instr3 @Aaaaaaada
> Setting **save_safetensors=False** in Seq2SeqTrainingArguments can help.

After setting `save_safetensors=False`, I cannot run `infer_tfs.py`: the keys in pytorch_model.bin now carry `_orig_mod.model.` and `_orig_mod.` prefixes. How can I fix this?
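If the prefixes come from `torch.compile` wrapping the model (which prepends `_orig_mod.` to every parameter key), a common workaround is to strip that prefix from the state dict before loading. A minimal sketch (file names are placeholders, not from the original post):

```python
def strip_compile_prefix(state_dict):
    """Remove the "_orig_mod." prefix that torch.compile adds to
    parameter keys, so the checkpoint loads into an uncompiled model."""
    prefix = "_orig_mod."
    return {
        (k[len(prefix):] if k.startswith(prefix) else k): v
        for k, v in state_dict.items()
    }

# Usage (hypothetical paths):
# import torch
# sd = torch.load("pytorch_model.bin", map_location="cpu")
# torch.save(strip_compile_prefix(sd), "pytorch_model.bin")
```

Note this turns `_orig_mod.model.weight` into `model.weight`; whether the remaining `model.` prefix is expected depends on how `infer_tfs.py` builds the model.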
Same problem. Do you have any solution?
I followed [supervised_fine_tuning/LLaMA-MoE-v2.md](https://github.com/OpenSparseLLMs/LLaMA-MoE-v2/blob/main/docs/supervised_fine_tuning/LLaMA-MoE-v2.md), and the datasets are:

### First stage
- [OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5)
- [SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca)
- [sharegpt_gpt4](https://huggingface.co/datasets/shibing624/sharegpt_gpt4)
- [lima](https://huggingface.co/datasets/GAIR/lima)
- [Infinity-Instruct](https://huggingface.co/datasets/BAAI/Infinity-Instruct)

### Second stage
- [Infinity-Instruct](https://huggingface.co/datasets/BAAI/Infinity-Instruct)
- [MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA)
By the way, could you share the training loss curve? In my runs the loss stayed around 7.8 throughout the second stage and would not decrease. Thank you for...
Any progress? I encountered the same error when using `whisper-tiny`. Thank you.