Open-Assistant
Use Cerebras-GPT for fine-tuning.
Cerebras (via its Hugging Face models) just released a fully open-source model family, trained compute-optimally and licensed under Apache 2.0. This could be a good candidate for fine-tuning.
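For context on "trained compute-optimally": the Cerebras-GPT family was trained in the Chinchilla sense, with roughly 20 training tokens per model parameter. A minimal sketch of that ratio (parameter counts are the approximate published sizes of the family; the 20:1 ratio is the headline figure, not an exact per-model value):

```python
# Chinchilla-style compute-optimal training uses ~20 tokens per parameter.
TOKENS_PER_PARAM = 20

def optimal_tokens(n_params: int) -> int:
    """Approximate compute-optimal number of training tokens for a model size."""
    return n_params * TOKENS_PER_PARAM

# Approximate Cerebras-GPT model sizes, from 111M up to 13B parameters.
for n_params in (111_000_000, 1_300_000_000, 6_700_000_000, 13_000_000_000):
    print(f"{n_params / 1e9:5.2f}B params -> ~{optimal_tokens(n_params) / 1e9:.1f}B tokens")
```

So the 13B model, for example, would target on the order of 260B training tokens under this rule of thumb.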
With #2276 merged, we now have the code changes and basic configs ready for anyone who would like to take this issue on and run Cerebras-GPT experiments.
Hi, @olliestanley and @LuposX, I have access to some A100s and would be willing to take this on as my first issue! Should I just follow the instructions under Open-Assistant/model/model_training/ and train an SFT model with the newly added Cerebras-GPT configs?
Hi @alif-munim, we would be happy for you to run some experiments. If you would like to discuss further with the ML team, it would be great if you could join our Discord and give me a ping. We can also help get you set up on Weights & Biases.
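For anyone picking this up, the run would presumably combine the repo's default SFT settings with the newly added Cerebras-GPT config. A rough sketch of what such a config section might look like (the section name, model id, and every hyperparameter below are illustrative assumptions, not the exact values merged in #2276):

```yaml
# Hypothetical section for model_training/configs/config.yaml.
# All values are illustrative assumptions, not the merged settings.
cerebras-gpt-sft:
  model_name: cerebras/Cerebras-GPT-1.3B  # Hugging Face model id
  dtype: bf16
  learning_rate: 8e-6
  gradient_checkpointing: true
  gradient_accumulation_steps: 2
  per_device_train_batch_size: 4
```

Training would then be launched from `model/model_training/` with something along the lines of `python trainer_sft.py --configs defaults cerebras-gpt-sft` (command hedged; check that directory's README for the exact invocation and config names).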
Did you start training Cerebras-GPT? Are there any checkpoints you can share?
@djaym7 I just checked our Weights & Biases project and did not find any runs with Cerebras-GPT. Is testing this model still something we are considering, @andreaskoepf?
AFAIK the core team did not work on this. If someone has compute, please give it a try.