
Inference on GPU

Open sarmientoj24 opened this issue 1 year ago • 15 comments

Is it possible to host this locally on an RTX3XXX or 4XXX with 8GB just to test?

sarmientoj24 avatar Feb 24 '23 19:02 sarmientoj24

According to my napkin math, even the smallest model with 7B parameters will probably take close to 30GB of memory. 8GB is unlikely to suffice. But I don't have access to the weights yet, so this is just a rough guess.

dizys avatar Feb 24 '23 20:02 dizys
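For reference on that estimate, the napkin math is just parameter count times bytes per parameter. A quick weights-only sketch in Python (real usage is higher once activations, the KV cache, and framework overhead are included):

# Weights-only memory estimate: parameter count x bytes per parameter.
params = 7e9
print(f"fp32: {params * 4 / 1e9:.0f} GB")   # ~28 GB
print(f"fp16: {params * 2 / 1e9:.0f} GB")   # ~14 GB
print(f"int8: {params * 1 / 1e9:.0f} GB")   # ~7 GB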

Could be possible with https://github.com/FMInference/FlexGen

ekiwi111 avatar Feb 24 '23 22:02 ekiwi111

Could be possible with https://github.com/FMInference/FlexGen

This project looks amazing 🤩. However, in its example, it seems like a 6.7B OPT model would still need at least 15GB of GPU memory. So the chances are slim 🥲. I would so wanna run it on my 3080 10GB.

dizys avatar Feb 24 '23 23:02 dizys

FlexGen only supports OPT models.

kir152 avatar Feb 25 '23 09:02 kir152

With KoboldAI I was able to run GPT-J 6B on my 8GB 3070 Ti by offloading the model to my RAM.

CyberTimon avatar Feb 25 '23 10:02 CyberTimon
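KoboldAI handles the offloading internally, but the same idea (keeping the layers that don't fit on the GPU in system RAM) can be sketched with Hugging Face Transformers and Accelerate; the memory caps below are illustrative, not tuned:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Split GPT-J 6B between a small GPU and system RAM; layers that don't fit
# under the GPU cap are placed on the CPU automatically by Accelerate.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B",
    torch_dtype=torch.float16,
    device_map="auto",
    max_memory={0: "7GiB", "cpu": "24GiB"},  # illustrative limits
)

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(0)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))

The trade-off is speed: offloaded layers have to stream their weights from RAM to the GPU on every forward pass, so generation is noticeably slower than running fully on-device.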

7B in float16 will be 14GB, and if quantized to uint8 it could be as low as 7GB. But on graphics cards, from what I've tried with other models, it can take 2x the VRAM.

My guess is that 32GB would be the minimum but some clever person may be able to run it with 16GB VRAM.

But the question is, how fast would it be? If it is one character per second then it would not be that useful!

elephantpanda avatar Feb 25 '23 19:02 elephantpanda
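To illustrate where the one-byte-per-weight figure comes from: naive per-tensor absmax quantization stores a single signed byte per weight instead of two or four, at some accuracy cost. A toy PyTorch sketch (not the scheme any particular loader actually uses):

import torch

w = torch.randn(4096, 4096)                   # stand-in fp32 weight matrix
scale = w.abs().max() / 127.0                 # per-tensor absmax scale
w_q = torch.round(w / scale).to(torch.int8)   # 1 byte per weight
w_deq = w_q.float() * scale                   # approximate weights used at matmul time

print(w.nelement() * w.element_size() / 1e6, "MB as fp32")    # ~67 MB
print(w_q.nelement() * w_q.element_size() / 1e6, "MB as int8")  # ~17 MB

Real int8 loaders use finer-grained (per-channel or per-group) scales to limit the accuracy loss, but the storage arithmetic is the same.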

Can I use the model on an Intel Iris Xe graphics card?

If possible, I would also appreciate a recommendation on which libraries to use.

tmgthb avatar Feb 25 '23 19:02 tmgthb

With KoboldAI I was able to run GPT-J 6B on my 8GB 3070 Ti by offloading the model to my RAM.

How fast was it?

elephantpanda avatar Feb 27 '23 05:02 elephantpanda

7B in float16 will be 14GB, and if quantized to uint8 it could be as low as 7GB. But on graphics cards, from what I've tried with other models, it can take 2x the VRAM.

My guess is that 32GB would be the minimum but some clever person may be able to run it with 16GB VRAM.

But the question is, how fast would it be? If it is one character per second then it would not be that useful!

The 7B model generates quickly on a 3090 Ti (~30 seconds for ~500 tokens, i.e. ~17 tokens/s), much faster than the ChatGPT interface. It uses ~14GB of VRAM during generation. This is also with batch_size=1, meaning theoretical throughput is higher than this.

https://user-images.githubusercontent.com/32109055/222568019-44f590ae-724f-4b51-848e-5273fdfe16e2.mp4

See my fork for the code for rolling generation and the Gradio interface.

bjoernpl avatar Mar 02 '23 22:03 bjoernpl
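The tokens/s figure is just generated tokens divided by wall-clock time; a minimal way to measure it, with generator standing in for the LLaMA generator object built by the repo's example.py (call shape as I recall it, so treat it as a sketch):

import time

prompts = ["The capital of France is"]
max_gen_len = 512

start = time.time()
results = generator.generate(prompts, max_gen_len=max_gen_len, temperature=0.8, top_p=0.95)
elapsed = time.time() - start

# Assumes the full max_gen_len was actually produced; if generation stops
# early this overestimates the rate.
print(f"~{max_gen_len / elapsed:.1f} tokens/s")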

Trying to run the 7B model in Colab with a 15GB GPU is failing. Is there a way to configure this to use fp16, or is that already baked into the existing model? Update: using batch_size=2 seems to make it work in Colab+ with a GPU.

doublebishop avatar Mar 02 '23 22:03 doublebishop
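As far as I recall, the reference example.py already constructs the model with half-precision defaults, so fp16 should already be in effect. For completeness, the generic PyTorch pattern for fp16 inference looks like this (tiny stand-in model, not the repo's code):

import torch
import torch.nn as nn

# Casting to half precision roughly halves weight memory (fp32 -> fp16);
# the small Linear layer here is just a stand-in for a real model.
model = nn.Linear(4096, 4096).half().cuda()
x = torch.randn(1, 4096, dtype=torch.float16, device="cuda")
with torch.no_grad():
    y = model(x)
print(y.dtype)   # torch.float16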

I was able to run 7B on two 1080 Tis (inference only). Next, I'll try 13B and 33B. It still needs refining, but it works! I forked LLaMA here:

https://github.com/modular-ml/wrapyfi-examples_llama

and have a readme with the instructions on how to do it:

LLaMA with Wrapyfi

Wrapyfi enables distributing LLaMA (inference only) across multiple GPUs/machines, each with less than 16GB VRAM.

It currently distributes across two cards only, using ZeroMQ. Flexible distribution will be supported soon!

This approach has only been tested on the 7B model for now, using Ubuntu 20.04 with two 1080 Tis. Testing the 13B/30B models soon! UPDATE: Tested on two 3080 Tis as well!

How to?

  1. Replace all instances of <YOUR_IP> and <YOUR CHECKPOINT DIRECTORY> before running the scripts

  2. Download the LLaMA weights using the official request form and install this wrapyfi-examples_llama repo inside a conda or virtual env:

git clone https://github.com/modular-ml/wrapyfi-examples_llama.git
cd wrapyfi-examples_llama
pip install -r requirements.txt
pip install -e .
  3. Install Wrapyfi in the same environment:
git clone https://github.com/fabawi/wrapyfi.git
cd wrapyfi
pip install .[pyzmq]
  4. Start the Wrapyfi ZeroMQ broker from within the Wrapyfi repo:
cd wrapyfi/standalone 
python zeromq_proxy_broker.py --comm_type pubsubpoll
  5. Start the first instance of the Wrapyfi-wrapped LLaMA from within this repo and env (order is important; don't start wrapyfi_device_idx=0 before wrapyfi_device_idx=1):
CUDA_VISIBLE_DEVICES="0" OMP_NUM_THREADS=1 torchrun --nproc_per_node 1 example.py --ckpt_dir <YOUR CHECKPOINT DIRECTORY>/checkpoints/7B --tokenizer_path <YOUR CHECKPOINT DIRECTORY>/checkpoints/tokenizer.model --wrapyfi_device_idx 1
  6. Now start the second instance (within this repo and env):
CUDA_VISIBLE_DEVICES="1" OMP_NUM_THREADS=1 torchrun --master_port=29503 --nproc_per_node 1 example.py --ckpt_dir <YOUR CHECKPOINT DIRECTORY>/checkpoints/7B --tokenizer_path <YOUR CHECKPOINT DIRECTORY>/checkpoints/tokenizer.model --wrapyfi_device_idx 0
  7. You will now see the output on both terminals.

  8. EXTRA: To run on different machines, the broker must be running on a specific IP in step 4. Start the ZeroMQ broker with that IP set, and provide the environment variables for steps 5 and 6, e.g.:

### (replace 10.0.0.101 with <YOUR_IP>) ###

# step 4 modification 
python zeromq_proxy_broker.py --socket_ip 10.0.0.101 --comm_type pubsubpoll

# step 5 modification
CUDA_VISIBLE_DEVICES="0" OMP_NUM_THREADS=1 WRAPYFI_ZEROMQ_SOCKET_IP='10.0.0.101' torchrun --nproc_per_node 1 example.py --ckpt_dir <YOUR CHECKPOINT DIRECTORY>/checkpoints/7B --tokenizer_path <YOUR CHECKPOINT DIRECTORY>/checkpoints/tokenizer.model --wrapyfi_device_idx 1

# step 6 modification
CUDA_VISIBLE_DEVICES="1" OMP_NUM_THREADS=1 WRAPYFI_ZEROMQ_SOCKET_IP='10.0.0.101' torchrun --master_port=29503 --nproc_per_node 1 example.py --ckpt_dir <YOUR CHECKPOINT DIRECTORY>/checkpoints/7B --tokenizer_path <YOUR CHECKPOINT DIRECTORY>/checkpoints/tokenizer.model --wrapyfi_device_idx 0

fabawi avatar Mar 03 '23 23:03 fabawi
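For intuition about what the two-process setup above achieves: roughly, the first half of the transformer blocks lives on one GPU and the rest on the other, with activations handed across between them (Wrapyfi does the hand-off between separate processes over ZeroMQ). A toy single-process sketch of that kind of split, not Wrapyfi's actual mechanism, requiring two visible GPUs:

import torch
import torch.nn as nn

class TwoGPUStack(nn.Module):
    def __init__(self, dim=512, n_layers=8):
        super().__init__()
        half = n_layers // 2
        # First half of the "blocks" on cuda:0, second half on cuda:1.
        self.first = nn.Sequential(*[nn.Linear(dim, dim) for _ in range(half)]).to("cuda:0")
        self.second = nn.Sequential(*[nn.Linear(dim, dim) for _ in range(n_layers - half)]).to("cuda:1")

    def forward(self, x):
        x = self.first(x.to("cuda:0"))
        x = self.second(x.to("cuda:1"))   # activations cross over to the second GPU
        return x

model = TwoGPUStack()
out = model(torch.randn(1, 512))
print(out.device)   # cuda:1

Each device only ever holds its own half of the weights, which is why two cards with less than 16GB can jointly serve a model that would not fit on either one alone.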

@fabawi Good work. 👍

elephantpanda avatar Mar 03 '23 23:03 elephantpanda

See my fork for the code for rolling generation and the Gradio interface.

@bjoernpl Works great, thanks!

Have you tried changing the gradio interface to use the gradio chatbot component?

neuhaus avatar Mar 04 '23 10:03 neuhaus

Thank you! Works great.

robertavram-md avatar Mar 04 '23 14:03 robertavram-md

Have you tried changing the gradio interface to use the gradio chatbot component?

I think this doesn't quite fit, since LLaMA is not fine-tuned for chatbot-like capabilities. I think it would definitely be possible (even if it probably doesn't work too well) to use it as a chatbot with some clever prompting. Might be worth a try; thanks for the idea and the feedback.

bjoernpl avatar Mar 04 '23 17:03 bjoernpl
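For anyone who wants to try it, wiring the model into Gradio's Chatbot component mostly amounts to maintaining a running transcript as the prompt. A minimal sketch, with llama_generate as a hypothetical stand-in for the fork's generation function:

import gradio as gr

def respond(message, history):
    # The "clever prompting": the base model only continues text, so we frame
    # the conversation as a transcript and let it complete the next turn.
    prompt = "A transcript of a chat between a curious user and a helpful assistant.\n"
    for user_msg, bot_msg in history:
        prompt += f"User: {user_msg}\nAssistant: {bot_msg}\n"
    prompt += f"User: {message}\nAssistant:"
    reply = llama_generate(prompt)   # hypothetical call into the model
    return "", history + [(message, reply)]

with gr.Blocks() as demo:
    chatbot = gr.Chatbot()
    msg = gr.Textbox()
    msg.submit(respond, [msg, chatbot], [msg, chatbot])

demo.launch()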

Closing this issue - great work, @fabawi!

jspisak avatar Aug 24 '23 04:08 jspisak