llm-vscode-inference-server

Running out of memory with TheBloke/CodeLlama-7B-AWQ

Open bonuschild opened this issue 2 years ago • 2 comments

  • Already posted on https://github.com/vllm-project/vllm/issues/1479
  • My GPU is RTX 3060 with 12GB VRAM
  • My target model is CodeLlama-7B-AWQ, whose size is <= 4GB

Looking for help from both communities 😄 thanks!

bonuschild avatar Oct 26 '23 05:10 bonuschild

I've re-tested this on an A100 instead of the RTX 3060, and it turns out to occupy about 20+ GB of VRAM! Why is that? I used this command:

python api_server.py --model path/to/7b-awq/model --port 8000 -q awq --dtype half --trust-remote-code

That is so weird...
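If I had to guess, vLLM pre-allocates most of the GPU for weights plus KV cache up front (gpu_memory_utilization defaults to 0.9), so the footprint reflects that reservation rather than the ~4 GB of AWQ weights. A minimal sketch to test that theory with the offline Python API, assuming the same local model path as above and the parameter names from the vLLM docs of this version:

# Minimal sketch (my assumption, not the server code): load the model through
# vLLM's offline API with the default memory setting and watch nvidia-smi.
from vllm import LLM, SamplingParams

llm = LLM(
    model="path/to/7b-awq/model",  # same local path as in the command above
    quantization="awq",
    dtype="half",
    trust_remote_code=True,
    # Default is 0.9: vLLM reserves ~90% of total VRAM for weights + KV cache,
    # which on an A100 is far more than the AWQ weights alone need.
    gpu_memory_utilization=0.9,
)

outputs = llm.generate(["def fib(n):"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)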

bonuschild avatar Oct 29 '23 10:10 bonuschild

I had success running TheBloke's Mistral-7B-v0.1-AWQ and CodeLlama-7B-AWQ on an A6000 with 48 GB VRAM, restricted to ~8 GB VRAM with the following parameters:

python api_server.py --model path/to/model --port 8000 --quantization awq --dtype float16 --gpu-memory-utilization 0.167 --max-model-len 4096 --max-num-batched-tokens 4096

nvidia-smi then shows around 8 GB of memory consumed by the Python process. I hope it should run on the 3060 as well (you would need to omit --gpu-memory-utilization there, obviously).
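The 0.167 is simply the target budget divided by the card's total memory (8 GB / 48 GB ≈ 0.167). A rough helper to derive the fraction for a different card, assuming torch is available (it is a vLLM dependency); the 8 GB budget is just an example value:

# Rough sketch: compute a --gpu-memory-utilization value for a target VRAM budget.
import torch

def utilization_for_budget(target_gib: float, device: int = 0) -> float:
    # Total VRAM of the given GPU in GiB, as reported by torch.
    total_gib = torch.cuda.get_device_properties(device).total_memory / (1024 ** 3)
    return round(target_gib / total_gib, 3)

# Roughly 0.167 on a 48 GB A6000; on a 12 GB 3060 this would come out around
# 0.67, though as said above you can simply omit the flag there.
print(utilization_for_budget(8.0))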

jkrauss82 avatar Nov 27 '23 10:11 jkrauss82