foreverhell
In llava/__init__.py, I changed the import from `from .model import LlavaLlamaForCausalLM` to `from .model.language_model.llava_llama import LlavaLlamaForCausalLM`, and that fixed it for me. I'm not sure whether this helps anyone else, but I...
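For reference, the change in llava/__init__.py amounts to the following (only the import line is touched):

```python
# llava/__init__.py

# Before (fails to import in this setup):
# from .model import LlavaLlamaForCausalLM

# After: import the class directly from the module that defines it.
from .model.language_model.llava_llama import LlavaLlamaForCausalLM
```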
> @20191864218 You should make the change as follows:

Thanks a lot! It's a great help.
I have tried splitting the long audio files naively and concatenating the EnCodec tokens, but the results are not consistent except for the first clip. I do not know...
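To be concrete, this is a minimal sketch of the naive chunk-and-concatenate approach I mean, using the public `encodec` package; the file name and the 30-second chunk length are placeholders, not values from any particular setup:

```python
import torch
import torchaudio
from encodec import EncodecModel
from encodec.utils import convert_audio

model = EncodecModel.encodec_model_24khz()
model.set_target_bandwidth(6.0)

wav, sr = torchaudio.load("long_audio.wav")  # placeholder path
wav = convert_audio(wav, sr, model.sample_rate, model.channels)

chunk_len = model.sample_rate * 30  # arbitrary 30-second clips
all_codes = []
with torch.no_grad():
    for start in range(0, wav.shape[-1], chunk_len):
        clip = wav[..., start:start + chunk_len].unsqueeze(0)  # [B, C, T]
        frames = model.encode(clip)
        # each frame is (codes, scale); codes has shape [B, n_q, T]
        codes = torch.cat([f[0] for f in frames], dim=-1)
        all_codes.append(codes)

# Naive concatenation of the token streams along the time axis;
# each clip is encoded independently, which is where the output
# stops being consistent after the first clip.
tokens = torch.cat(all_codes, dim=-1)
```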
same problem
> > > with tp=4, CPU RAM usage continuously increases.
> >
> > Can you create a separate issue for this?
>
> Regarding the issue itself, ...