Alexandre Strube
This is because you have been using the vicuna-13b-delta model. It works just fine with vicuna-13b-v1.3 and 13b-v1.5. I will close this one. Please reopen if you feel that we...
This is fixed. I add and remove models all the time and never need to reload the model list, the controller, or the web interface. We can close this one.
@ZYHowell any progress on this? There was also an issue about using Slurm that I've seen around...
@Burgeon0110 this is an old issue; the transformers library fixed it a while ago. Mind if we close this one? Do you still need help with it?
I had this error with older versions of the libraries and when there was not enough GPU memory. Can you try with a clean new virtual environment with the latest versions...
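If it helps, here is a quick sanity check (just a sketch, nothing FastChat-specific) that prints the library versions and the free GPU memory, the two usual suspects here:

```python
# Sanity check: print library versions and free GPU memory,
# the two usual causes of this error.
import torch
import transformers

print("torch:", torch.__version__)
print("transformers:", transformers.__version__)

if torch.cuda.is_available():
    free, total = torch.cuda.mem_get_info()  # values in bytes
    print(f"GPU memory: {free / 1e9:.1f} GB free of {total / 1e9:.1f} GB")
else:
    print("No CUDA device visible")
```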
16GB is very little to train a model. I am not sure you can without some quantization. In any case, did you manage it? Should we still look into this...
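For reference, a minimal sketch of what I mean by quantization: loading the weights in 8-bit via bitsandbytes so the model fits in less memory. The model name is just an example, and this assumes transformers, accelerate, and bitsandbytes are installed:

```python
# A sketch of 8-bit loading to squeeze a 13B model into less GPU memory.
# The model name is illustrative; needs transformers + accelerate + bitsandbytes.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "lmsys/vicuna-13b-v1.5"  # example model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",  # let accelerate place the layers
)
```

Even then, full fine-tuning won't fit in 16GB; you would pair this with a parameter-efficient method such as LoRA.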
@samarthsarin it's been a while. Did you manage to do so on the small-RAM GPUs?
@ch930410 this looks like a non-issue: it was a problem with the model version, not with FastChat, and even less so now with the latest Vicuna models based on Llama 2. Let's...
@vince62s it seems like this one has stalled a bit. Do you still have this question? :-)
@ch930410 you need to download the model again. This is not a problem with FastChat, but with the model. Shall we close this one?
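A sketch of forcing a fresh download, assuming the weights come from the Hugging Face Hub (the repo id is just an example):

```python
# Force a clean re-download of the model weights from the Hub,
# overwriting any corrupted cached files. The repo id is an example.
from huggingface_hub import snapshot_download

snapshot_download("lmsys/vicuna-13b-v1.3", force_download=True)
```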