Wenxuan Tan
Thanks for your issue. Could you try pulling the latest version of the repo? I fixed this last week.
Thanks for your issue. Could you share which script you ran, or a minimal reproducible example?
Hi, yes, I believe that, based on the README, you need torch 1.12 to run it. In fact, some of these legacy APIs are under migration and are not guaranteed to...
Sorry, I think the current auto parallel is less performant and less widely used, so we didn't adapt it to the newest version. Do you have a compelling reason to use it?...
Other demos should work on torch 2.0.
Could you try examples/language/gpt/gemini and examples/language/gpt/hybridparallelism?
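Roughly, both of those examples boil down to wrapping the model and optimizer with a Booster and a plugin. Here is a minimal sketch of that shape, assuming a recent ColossalAI release with the Booster API; the model, optimizer, and launch call are illustrative stand-ins, not the exact ones in the scripts:

```python
import torch
import colossalai
from colossalai.booster import Booster
from colossalai.booster.plugin import GeminiPlugin
from colossalai.nn.optimizer import HybridAdam

# Initialize the distributed environment (run the script with torchrun).
# Older releases may require colossalai.launch_from_torch(config={}).
colossalai.launch_from_torch()

# Illustrative stand-in for the GPT model built in the example.
model = torch.nn.Linear(1024, 1024).cuda()
optimizer = HybridAdam(model.parameters(), lr=1e-4)

plugin = GeminiPlugin()  # the other example uses HybridParallelPlugin instead
booster = Booster(plugin=plugin)
model, optimizer, *_ = booster.boost(model, optimizer)
```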
I have fixed this, so pulling from the newest main branch should work.
Could you either install apex from source or set enable_all_optimization=False? Thanks.
You'll need to either set enable_all_optimization=False or pip install flash-attn.
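For reference, here is a minimal sketch of where that flag usually lives, assuming you are constructing a HybridParallelPlugin; the parallel sizes and any other arguments are illustrative and should match your own script:

```python
from colossalai.booster import Booster
from colossalai.booster.plugin import HybridParallelPlugin

# Disable the optional fused kernels (flash attention, fused layernorm, JIT)
# so the run does not depend on apex or flash-attn being installed.
plugin = HybridParallelPlugin(
    tp_size=2,                       # illustrative parallel sizes
    pp_size=1,
    enable_all_optimization=False,
)
booster = Booster(plugin=plugin)
```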