niansa/tuxifan

303 comments of niansa/tuxifan

Seeing this as a "no".

Closing, because the new editor doesn't have icons.

> Yes, the weights are fp16. You can convert and run 4-bit using https://github.com/ggerganov/llama.cpp. I think 30B with full precision might be at least on par to 65B 4-bit in...
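The quoted workflow (fp16 weights converted and run at 4-bit via llama.cpp) can be sketched roughly as below. This is a hedged sketch, not an exact recipe: the script and binary names (`convert.py`, `quantize`, `main`) and the model file names have changed across llama.cpp versions, so check the repository's current README for the equivalents in your checkout.

```shell
# Sketch of the llama.cpp 4-bit workflow; names vary by llama.cpp version.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Convert the fp16 weights into llama.cpp's model format
# (path and script name are placeholders for your setup).
python3 convert.py /path/to/fp16-model/

# Quantize the converted model to 4-bit (q4_0 here).
./quantize ./models/model-f16.bin ./models/model-q4_0.bin q4_0

# Run inference on the quantized model.
./main -m ./models/model-q4_0.bin -p "Hello"
```

Quantizing to 4-bit roughly quarters memory use relative to fp16, which is what makes a larger model at 4-bit competitive with a smaller one at full precision.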

OpenAPI is now integrated into the chat client.

This should be solved by now.

Stale, please open a new issue if this is still relevant.

Stale, please open a new issue if this still occurs.

This model was suggested: https://github.com/ymcui/Chinese-LLaMA-Alpaca