Hao Zhang
Closing as it is not planned.
Meanwhile, we have added support for Dolly-v2 and Koala, and we're considering Open Assistant and probably StabilityLM.
This can be resolved by raising your OS's ulimit to 65535.
Maybe refer to this? https://askubuntu.com/questions/875173/nmi-watchdog-bug-soft-lockup-cpu2-stuck-for-23s-plymouthd305
Did you try setting `ulimit -l`?
It seems the registration failed: when a model worker registers successfully, a log line is printed on the controller side. Please double-check your ports and make sure they are not restricted.
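One quick way to rule out a port problem is a plain TCP reachability check. A minimal sketch, where `localhost` and `21001` are placeholders for your actual controller address and port:

```python
import socket


def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Check the controller address you passed to the worker with
# --controller-address (host/port here are placeholders)
print(port_open("localhost", 21001))
```

If this prints `False` from the machine running the worker, the worker cannot reach the controller, so registration will fail regardless of the FastChat configuration.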
Please pull the latest version of FastChat and the latest version of our weights.
You can replace `flash_attn` with standard attention in PyTorch. Things will still work, although training will be much slower.
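For illustration, a minimal sketch of standard (non-flash) attention in plain PyTorch that could stand in for a flash-attention call. The function name and tensor layout are assumptions, not FastChat's actual code:

```python
import math

import torch


def standard_attention(q, k, v, mask=None):
    """Plain softmax attention in PyTorch, a slower drop-in for flash attention.

    q, k, v: tensors of shape (batch, heads, seq_len, head_dim).
    mask: optional boolean tensor; positions where mask == 0 are ignored.
    """
    # Scaled dot-product scores: (batch, heads, seq_len, seq_len)
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = torch.softmax(scores, dim=-1)
    # Weighted sum of values: same shape as v
    return weights @ v
```

This materializes the full (seq_len x seq_len) score matrix, which is exactly the memory and speed cost that flash attention avoids, so expect the slowdown mentioned above.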
@suquark seems like a bug in the CLI? It currently treats Enter (an empty input) as an exit signal.
This should solve the problem:
1. Use the latest Vicuna v1.1 weights.
2. Update `transformers` to the latest version.
3. Update FastChat to the latest version.