tensimixt
@patrickvonplaten Hi, do you know if mistral-inference works for LoRA + Mixtral-8x7B-Instruct-v0.1? It does work for LoRA + Mistral-7B-v0.3, but I'm getting an error about the loaded LoRA weights file missing an expected...
> Nice catch! We should fix this indeed

Thank you! Do you think mistral-finetune is creating bad LoRAs when fine-tuning Mixtral-8x7B-Instruct-v0.1? Is there a place...
@ErikKaum Once NeMo is able to run through TGI, do you know if TGI will work with NeMo + a LoRA adapter, even though the vocab size is > 130k? Thank you!