llama2-webui
Run any Llama 2 locally with gradio UI on GPU or CPU from anywhere (Linux/Windows/Mac). Use `llama2-wrapper` as your local llama2 backend for Generative Agents/Apps.
How to load .bin weight files? The files are in /mnt/Llama-2-13b-chat-hf/
Why does Llama understand Chinese so poorly, and why can't it reply directly in Chinese? I tested Llama-2-7b-chat-hf again today on a GPU platform (matpool.com). Memory usage: with 8-bit loading enabled it occupies 8 GB+, GPU...
Support [https://github.com/karpathy/llama2.c](https://github.com/karpathy/llama2.c) to run small llama2 models.
I installed with `pip install llama2-wrapper` and ran `python -m llama2_wrapper.server --model_path /home/wcc/codellama/CodeLlama-7b`, but it causes this error: `/home/wcc/miniconda3/envs/codellama/lib/python3.10/site-packages/pydantic/_internal/_fields.py:127: UserWarning: Field "model_path" has conflict with protected namespace "model_". You may be...`
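That warning comes from Pydantic v2, which reserves the `model_` namespace for its own methods, so any field named `model_path` trips the check. A minimal sketch of how a settings model can disable the check via `protected_namespaces` (the `ServerSettings` class below is illustrative, not the package's actual settings class):

```python
from pydantic import BaseModel, ConfigDict


class ServerSettings(BaseModel):
    # Empty the protected-namespace list so a field named "model_path"
    # no longer conflicts with Pydantic's reserved "model_" prefix.
    model_config = ConfigDict(protected_namespaces=())

    model_path: str


settings = ServerSettings(model_path="/home/wcc/codellama/CodeLlama-7b")
print(settings.model_path)
```

Alternatively, the warning is harmless and can simply be ignored; it does not stop the server from starting.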