Wang Binluo
> @wangbluo Could you please help me solve this issue? Thanks

Hi, could you please tell us the model size you are using?
@lyzKF Hello, I got the same error as you. When I tried to run `colossalai run --nproc_per_node 8 --host 10.90.5.14,10.90.8.153 --master_addr 10.90.5.14 auto_parallel_with_gpt.py`, I got the error: Error: failed...
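For reference, the multi-node launch pattern in the command above looks like this; the hostnames and script name are taken from that command and should be adjusted for your own cluster:

```bash
# Launch one process per GPU on each listed host.
# --nproc_per_node: number of GPUs per node
# --host: comma-separated list of node IPs
# --master_addr: IP of the rendezvous node (must be one of the hosts)
colossalai run --nproc_per_node 8 \
    --host 10.90.5.14,10.90.8.153 \
    --master_addr 10.90.5.14 \
    auto_parallel_with_gpt.py
```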
> Hi! Thanks for this work. I am not a part of hpcaitech, but just wondering: are you planning to keep bumping the version up? Since HF transformers version is...
Hi, Colossal-LLaMA is not for the Qwen model, as they use different prompt formats. You can use ColossalChat for SFT, RM, and PPO, but not for PT (pre-training). If your GPU resources are limited, we recommend you...
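A minimal sketch of how one of these stages might be launched, assuming a hypothetical SFT entry script named `train_sft.py` with illustrative flags; check the ColossalChat examples for the actual script names and arguments:

```bash
# Hypothetical SFT launch; the script name and flags below are
# illustrative placeholders, not the verified ColossalChat CLI.
colossalai run --nproc_per_node 8 train_sft.py \
    --pretrain /path/to/base_model \
    --dataset /path/to/sft_data
```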
Hi, of course it's possible, as FasterMoE also uses Expert Parallelism. Thank you for your interest in ColossalAI. First, fork the ColossalAI repository, then you can...
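For reference, the usual fork-and-branch workflow looks like this; `<your-username>` is a placeholder for your GitHub account and the branch name is just an example:

```bash
# Clone your fork and register the upstream repository
git clone https://github.com/<your-username>/ColossalAI.git
cd ColossalAI
git remote add upstream https://github.com/hpcaitech/ColossalAI.git

# Create a feature branch for your changes
git checkout -b my-feature

# Commit your work, push the branch, then open a pull request
# against hpcaitech/ColossalAI on GitHub
git push -u origin my-feature
```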