Hao Zhang
Related to #211. Has this issue been resolved?
For now, we have migrated our evaluation to [MT-bench](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge). In the short term, we do not have the plan or capacity to investigate the model's performance on untruthful,...
Unfortunately, I cannot read Japanese. Could you explain why it is strange? Also, note that Vicuna's Japanese capability hasn't been tested.
@ycat3 thanks for the answer!
@andy-yang-1 Great work, thanks!
We now use `lmsys/vicuna-7b-v1.3` and `lmsys/vicuna-13b-v1.3`. See the instructions here: https://github.com/lm-sys/FastChat#vicuna-weights No more delta weights; things should work well!
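Since the v1.3 weights are released directly (no delta-weight merging step), you can load them straight from the Hugging Face Hub. A minimal sketch using the FastChat CLI, assuming FastChat is installed and you have enough GPU memory for the 7B model:

```shell
# Install FastChat (pulls in transformers and other dependencies)
pip3 install "fschat[model_worker]"

# Chat with Vicuna 7B v1.3 directly -- the weights are downloaded
# from the Hub on first use; no delta merging required.
python3 -m fastchat.serve.cli --model-path lmsys/vicuna-7b-v1.3
```

Swap in `lmsys/vicuna-13b-v1.3` for the 13B model if you have the memory for it; add `--load-8bit` to reduce the footprint on smaller GPUs.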
How many GPUs do you use?
@Michaelvll might be the right person to clarify the training config
@HaniItani is your problem solved?
Yes, @ZYHowell is looking into this. But we first need to investigate whether 30B with LoRA can improve chatbot performance compared to 13B without LoRA; otherwise, it does not...