NickyDark1
model_id = "h2oai/h2o-danube-1.8b-chat"
Is it possible to train this model on multiple GPUs?
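One common way to get data-parallel training across GPUs is to write an ordinary `Trainer` script and launch it with `torchrun` or `accelerate launch`; the Trainer then replicates the model on each GPU. The following is a minimal sketch, not a full fine-tuning recipe: the toy two-sentence dataset, the padding length of 32, and the output directory are all placeholders.

```python
# Sketch: with the Hugging Face Trainer, data-parallel training across GPUs
# works when this script is launched with `torchrun` or `accelerate launch`.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

model_id = "h2oai/h2o-danube-1.8b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Llama-style tokenizers often have no pad token; reuse EOS for padding.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# Toy dataset just to make the sketch self-contained: pad to a fixed
# length so the default collator can stack the examples into a batch.
texts = ["Hello world.", "Training on multiple GPUs."]
train_dataset = [
    {**(enc := tokenizer(t, padding="max_length", max_length=32,
                         truncation=True)),
     "labels": enc["input_ids"]}
    for t in texts
]

args = TrainingArguments(
    output_dir="out",                # placeholder path
    per_device_train_batch_size=1,   # per GPU; effective batch = n_gpus * 1
    num_train_epochs=1,
)

Trainer(model=model, args=args, train_dataset=train_dataset).train()
```

Launched as `torchrun --nproc_per_node=2 train.py` (or `accelerate launch train.py`), the same script runs on two GPUs with no code changes.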
Is it possible to add new tokens and special tokens so that they get trained? What would the code look like?
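Yes: add the tokens to the tokenizer, then resize the model's embedding matrix so the new rows exist. A minimal sketch follows; the token strings (`<new_word>`, `<|tool|>`, `<|/tool|>`) are made-up examples.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "h2oai/h2o-danube-1.8b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Add regular tokens and special tokens (example strings, not required names).
num_added = tokenizer.add_tokens(["<new_word>"])
num_special = tokenizer.add_special_tokens(
    {"additional_special_tokens": ["<|tool|>", "<|/tool|>"]}
)
print(f"added {num_added} tokens and {num_special} special tokens")

# Resize the embedding matrix so the new rows exist and can be trained.
model.resize_token_embeddings(len(tokenizer))
```

The new embedding rows are randomly initialized and are updated during full fine-tuning. If you train with LoRA instead, keep in mind that adapters alone do not touch the embeddings; one common option is to mark the embedding and output layers as fully trainable (e.g. via peft's `modules_to_save`), though the exact module names depend on the architecture.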
### Feature request
How can a LoRA adapter be fine-tuned on top of an HQQ-quantized model?

### Motivation
The same question: support for fine-tuning LoRA on an HQQ-quantized model.

### Your contribution
Asking how LoRA fine-tuning with HQQ would work.
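For what it's worth, recent transformers versions expose an `HqqConfig` quantization config (the `hqq` package must be installed), and recent peft versions can wrap HQQ-quantized linear layers. The following is a minimal sketch under those assumptions; the `nbits`/`group_size` values, LoRA hyperparameters, and `target_modules` names are guesses for a Llama-style model, not verified settings for this one.

```python
# Sketch: LoRA on top of an HQQ-quantized model, assuming a transformers
# version with HqqConfig and a peft version with HQQ support.
from transformers import AutoModelForCausalLM, HqqConfig
from peft import LoraConfig, get_peft_model

model_id = "h2oai/h2o-danube-1.8b-chat"  # reusing the model above

quant_config = HqqConfig(nbits=4, group_size=64)  # example settings
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # assumed names; check the model
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

From here the wrapped model can go into a normal Trainer loop; the quantized base weights stay frozen and only the LoRA adapter is trained.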
model: gemma-2b-it-function. Response: my idea is that you could basically have a mini agent that makes the calls to the functions, and inside those functions you could delegate to a more capable LLM...
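To make the routing idea concrete, here is a purely hypothetical sketch of that architecture: a small function-calling model picks the tool, and the tool itself may delegate to a larger model. Every function here (`small_agent_pick_tool`, `call_big_llm`, the tools) is a placeholder standing in for real model calls, not an existing API.

```python
# Hypothetical sketch of the "mini agent" idea: a small model routes the
# request to a function, and the function may call a more capable LLM.

def small_agent_pick_tool(user_message: str) -> str:
    """Placeholder for the small router (e.g. gemma-2b-it fine-tuned for
    function calling); here it just keyword-matches for illustration."""
    return "summarize" if "summarize" in user_message.lower() else "chat"

def call_big_llm(prompt: str) -> str:
    """Placeholder for the larger, more capable LLM."""
    return f"[big-LLM answer to: {prompt[:40]}...]"

def summarize(text: str) -> str:
    # Inside the tool, delegate the hard part to the stronger model.
    return call_big_llm(f"Summarize:\n{text}")

def chat(text: str) -> str:
    return call_big_llm(text)

TOOLS = {"summarize": summarize, "chat": chat}

def respond(user_message: str) -> str:
    tool_name = small_agent_pick_tool(user_message)
    return TOOLS[tool_name](user_message)

print(respond("Please summarize this article about HQQ quantization."))
```

The appeal of this split is cost: the cheap model only has to decide *which* function to call, while the expensive model is invoked only inside the functions that actually need it.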