flan-alpaca
Any plan to support trl-peft load_in_8bit for training.py ?
Hello,
I am fairly new to LLMs in general (I only started studying them two weeks ago), so please excuse me if I say or ask something silly.

I stumbled upon this blog post from HuggingFace: https://huggingface.co/blog/trl-peft

After a quick check, it seems that training.py currently does not support load_in_8bit, and I am wondering whether there is a specific reason for that.

(I would also like to try adding such support to flan-alpaca.)