
Post your hardware specs here if you got it to work. 🛠

Open elephantpanda opened this issue 1 year ago • 60 comments

If you get the model to work, it might be useful to write down which model (e.g. 7B) and the hardware you got it running on. Then people can get an idea of the minimum specs. I'd also be interested to know. 😀

elephantpanda avatar Mar 03 '23 03:03 elephantpanda

7B takes about 14GB of VRAM for inference, and the 65B needs a cluster with a total of just shy of 250GB of VRAM.

The 7B model also takes about 14GB of system RAM, which seems to exceed the capacity of free Colab, if anyone needs that.

nvidia-smi
Thu Mar  2 19:29:52 2023       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.86.01    Driver Version: 515.86.01    CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA A100-SXM...  On   | 00000000:07:00.0 Off |                    0 |
| N/A   36C    P0   107W / 400W |  29581MiB / 40960MiB |     95%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA A100-SXM...  On   | 00000000:0F:00.0 Off |                    0 |
| N/A   32C    P0    98W / 400W |  29721MiB / 40960MiB |     99%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+
|   2  NVIDIA A100-SXM...  On   | 00000000:47:00.0 Off |                    0 |
| N/A   32C    P0    95W / 400W |  29719MiB / 40960MiB |     96%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+
|   3  NVIDIA A100-SXM...  On   | 00000000:4E:00.0 Off |                    0 |
| N/A   33C    P0   106W / 400W |  29723MiB / 40960MiB |     99%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+
|   4  NVIDIA A100-SXM...  On   | 00000000:87:00.0 Off |                    0 |
| N/A   41C    P0   102W / 400W |  29725MiB / 40960MiB |     76%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+
|   5  NVIDIA A100-SXM...  On   | 00000000:90:00.0 Off |                    0 |
| N/A   38C    P0   114W / 400W |  29719MiB / 40960MiB |     99%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+
|   6  NVIDIA A100-SXM...  On   | 00000000:B7:00.0 Off |                    0 |
| N/A   38C    P0    95W / 400W |  29725MiB / 40960MiB |     75%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+
|   7  NVIDIA A100-SXM...  On   | 00000000:BD:00.0 Off |                    0 |
| N/A   39C    P0    95W / 400W |  29573MiB / 40960MiB |     99%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A   2916985      C   ...1/envs/pytorch/bin/python    29579MiB |
|    1   N/A  N/A   2916986      C   ...1/envs/pytorch/bin/python    29719MiB |
|    2   N/A  N/A   2916987      C   ...1/envs/pytorch/bin/python    29717MiB |
|    3   N/A  N/A   2916988      C   ...1/envs/pytorch/bin/python    29721MiB |
|    4   N/A  N/A   2916989      C   ...1/envs/pytorch/bin/python    29723MiB |
|    5   N/A  N/A   2916990      C   ...1/envs/pytorch/bin/python    29717MiB |
|    6   N/A  N/A   2916991      C   ...1/envs/pytorch/bin/python    29723MiB |
|    7   N/A  N/A   2916993      C   ...1/envs/pytorch/bin/python    29571MiB |
+-----------------------------------------------------------------------------+

Urammar avatar Mar 03 '23 03:03 Urammar

The 7B model ran successfully under the following environment:

Env: PyTorch 1.11.0, Python 3.8 (Ubuntu 20.04), CUDA 11.3
GPU: RTX A4000 (16GB) * 1
CPU: 12 vCPU Intel(R) Xeon(R) Gold 5320 CPU @ 2.20GHz
RAM: 32GB

With some modifications:

model_args: ModelArgs = ModelArgs(max_seq_len=1024, max_batch_size=1, **params)  # smaller max_seq_len and max_batch_size to fit in 16GB VRAM

model = Transformer(model_args).cuda().half() # some people say it doesn't help

prompts = ["What is the most famous equation from this theory?"]


ouening avatar Mar 03 '23 03:03 ouening

@Urammar could you also post how much VRAM the other two models need? I feel like this could help a lot of people know what their machine can actually support. I only have a single A100 40GB and can therefore only run the 7B parameter model atm... 😅

Logophoman avatar Mar 03 '23 14:03 Logophoman

Not sure if this will be helpful, but I made a spreadsheet to calculate the memory requirements for each model size, following the FAQ and paper. You can make a copy to adjust the batch size and sequence length.

Will update as necessary
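
For anyone who prefers code to a spreadsheet, here is a rough sketch of the same back-of-envelope arithmetic (my own approximation, not the exact formula from the FAQ or paper; the layer counts and hidden sizes are the ones reported for LLaMA, and the KV-cache term ignores framework overhead):

# Rough fp16 inference memory estimate: weights + KV cache, ignoring overhead.
def estimate_vram_gb(n_params_billion, n_layers, d_model,
                     batch_size=1, seq_len=1024, bytes_per_value=2):
    weights = n_params_billion * 1e9 * bytes_per_value
    # one K and one V cache entry per layer, stored in fp16
    kv_cache = 2 * n_layers * batch_size * seq_len * d_model * bytes_per_value
    return (weights + kv_cache) / 1024**3

print(f"7B:  ~{estimate_vram_gb(6.7, 32, 4096):.1f} GiB")   # roughly the 14GB reported above
print(f"65B: ~{estimate_vram_gb(65.2, 80, 8192):.1f} GiB")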

ahoho avatar Mar 03 '23 16:03 ahoho

How much VRAM does the 7B model need for finetuning? Are the released weights 32-bit?

NightMachinery avatar Mar 03 '23 19:03 NightMachinery

I just made enough code changes to run the 7B model on the CPU. That involved

  • Replacing torch.cuda.HalfTensor with torch.BFloat16Tensor

  • Deleting every line of code that mentioned cuda

I also set max_batch_size = 1, removed all but 1 prompt, and added 3 lines of profiling code.

Steady state memory usage is <14GB (but it did use something like 30 while loading the model). It took 7.75 seconds to load the model (some memory swapping occurred during this so it may not be representative), 183 seconds to generate the first token, and 23 seconds to generate each token thereafter. It's only using a single CPU core for some reason (that I haven't tracked down yet).

Hardware: Ryzen 5800x, 32 GB ram
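
For anyone wanting to replicate this without digging through the fork, the gist of the change is something like the following (a minimal sketch, assuming the stock example.py structure; not the exact diff):

import torch

# The stock code sets the default tensor type to torch.cuda.HalfTensor before
# building the model. For CPU-only inference, default to bfloat16 on the CPU:
torch.set_default_tensor_type(torch.BFloat16Tensor)

# ...then drop every .cuda() call when building the model and the token tensors,
# e.g. model = Transformer(model_args) instead of Transformer(model_args).cuda()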

gmorenz avatar Mar 03 '23 19:03 gmorenz

I just made enough code changes to run the 7B model on the CPU. That involved

  • Replacing torch.cuda.HalfTensor with torch.BFloat16Tensor
  • Deleting every line of code that mentioned cuda

I also set max_batch_size = 1, removed all but 1 prompt, and added 3 lines of profiling code.

Steady state memory usage is <14GB (but it did use something like 30 while loading the model). It took 7.75 seconds to load the model (some memory swapping occurred during this so it may not be representative), 183 seconds to generate the first token, and 23 seconds to generate each token thereafter. It's only using a single CPU core for some reason (that I haven't tracked down yet).

Hardware: Ryzen 5800x, 32 GB ram

Can I ask you the biggest favor and provide your example.py file? :)

ergosumdre avatar Mar 03 '23 21:03 ergosumdre

Can I ask you the biggest favor and provide your example.py file? :)

This is probably what you want (the changes aren't just in example.py): https://github.com/gmorenz/llama/tree/cpu

gmorenz avatar Mar 03 '23 22:03 gmorenz

Gotcha. So all we would run is

python3 llama/generation.py --max_gen_len 1 ?

ergosumdre avatar Mar 03 '23 22:03 ergosumdre

python3 -m torch.distributed.run --nproc_per_node 1 example.py --ckpt_dir ~/LLaMA/7B/ --tokenizer_path ~/LLaMA/tokenizer.model --max_batch_size 1

Is more like it... also remove the extra prompts in the hardcoded prompts array. Also reduce max_gen_len if you want it to take less than 1.6 hours (but I just let that part run).

gmorenz avatar Mar 03 '23 22:03 gmorenz

I was able to run 7B on two 1080 Tis (inference only). Next, I'll try 13B and 33B. It still needs refining, but it works! I forked LLaMA here:

https://github.com/modular-ml/wrapyfi-examples_llama

and have a readme with the instructions on how to do it:

LLaMA with Wrapyfi

Wrapyfi enables distributing LLaMA (inference only) on multiple GPUs/machines, each with less than 16GB VRAM

It currently distributes across two cards only, using ZeroMQ. Flexible distribution will be supported soon!

This approach has only been tested on the 7B model for now, using Ubuntu 20.04 with two 1080 Tis. Testing 13B/30B models soon! UPDATE: Tested on two 3080 Tis as well!!!

How to?

  1. Replace all instances of <YOUR_IP> and <YOUR CHECKPOINT DIRECTORY> before running the scripts

  2. Download the LLaMA weights using the official form below and install this wrapyfi-examples_llama repo inside a conda or virtual env:

git clone https://github.com/modular-ml/wrapyfi-examples_llama.git
cd wrapyfi-examples_llama
pip install -r requirements.txt
pip install -e .
  3. Install Wrapyfi with the same environment:
git clone https://github.com/fabawi/wrapyfi.git
cd wrapyfi
pip install .[pyzmq]
  4. Start the Wrapyfi ZeroMQ broker from within the Wrapyfi repo:
cd wrapyfi/standalone 
python zeromq_proxy_broker.py --comm_type pubsubpoll
  5. Start the first instance of the Wrapyfi-wrapped LLaMA from within this repo and env (order is important, don't start wrapyfi_device_idx=0 before wrapyfi_device_idx=1):
CUDA_VISIBLE_DEVICES="0" OMP_NUM_THREADS=1 torchrun --nproc_per_node 1 example.py --ckpt_dir <YOUR CHECKPOINT DIRECTORY>/checkpoints/7B --tokenizer_path <YOUR CHECKPOINT DIRECTORY>/checkpoints/tokenizer.model --wrapyfi_device_idx 1
  6. Now start the second instance (within this repo and env):
CUDA_VISIBLE_DEVICES="1" OMP_NUM_THREADS=1 torchrun --master_port=29503 --nproc_per_node 1 example.py --ckpt_dir <YOUR CHECKPOINT DIRECTORY>/checkpoints/7B --tokenizer_path <YOUR CHECKPOINT DIRECTORY>/checkpoints/tokenizer.model --wrapyfi_device_idx 0
  7. You will now see the output on both terminals

  8. EXTRA: To run on different machines, the broker must be running on a specific IP in step 4. Start the ZeroMQ broker by setting the IP and provide the env variables for steps 5 and 6, e.g.:

### (replace 10.0.0.101 with <YOUR_IP>) ###

# step 4 modification 
python zeromq_proxy_broker.py --socket_ip 10.0.0.101 --comm_type pubsubpoll

# step 5 modification
CUDA_VISIBLE_DEVICES="0" OMP_NUM_THREADS=1 WRAPYFI_ZEROMQ_SOCKET_IP='10.0.0.101' torchrun --nproc_per_node 1 example.py --ckpt_dir <YOUR CHECKPOINT DIRECTORY>/checkpoints/7B --tokenizer_path <YOUR CHECKPOINT DIRECTORY>/checkpoints/tokenizer.model --wrapyfi_device_idx 1

# step 6 modification
CUDA_VISIBLE_DEVICES="1" OMP_NUM_THREADS=1 WRAPYFI_ZEROMQ_SOCKET_IP='10.0.0.101' torchrun --master_port=29503 --nproc_per_node 1 example.py --ckpt_dir <YOUR CHECKPOINT DIRECTORY>/checkpoints/7B --tokenizer_path <YOUR CHECKPOINT DIRECTORY>/checkpoints/tokenizer.model --wrapyfi_device_idx 0

fabawi avatar Mar 03 '23 23:03 fabawi

With this code I'm able to run the 7B model on

RAM: 32GB (14.4GB sustained use, more during startup)
CPU: Ryzen 5800x, exactly one core is used at 100%
Graphics: RTX 2070 Super, only 1962MiB VRAM used by pytorch

~~It generates tokens at roughly 4.5 seconds/token. I have reason to believe that I can get that down to 2.0 seconds/token with more careful memory management (I've done it, but it leaks memory on the CPU side, leading to an OOM).~~ It (now) generates tokens at roughly 1 second/token.

All the code is doing is storing the weights on the CPU and moving them to the GPU just before they're used (and then back again; ideally we'd copy them to the GPU once and never move them back, but I think that would take a more extensive change to the code).
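
In outline, the per-layer shuttling looks something like this (a hypothetical sketch, not the actual fork; the layer call signature is assumed to match the stock TransformerBlock):

import torch

def forward_with_offload(layers, h, start_pos, freqs_cis, mask):
    # Weights live on the CPU; each block is moved to the GPU only while it
    # runs, then moved back so the next block fits in VRAM.
    for layer in layers:
        layer.to("cuda")                           # upload this block's weights
        h = layer(h, start_pos, freqs_cis, mask)
        layer.to("cpu")                            # slow: copies the weights back out
    return h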

gmorenz avatar Mar 04 '23 00:03 gmorenz

My results are in just to prove it works with only 12GB system ram! #105

Model: 7B
System RAM: 12GB 😱
VRAM: 16GB (GPU: Quadro P5000)
System: Shadow PC

Took about a minute to load the model; it was maxing out the RAM and chomping on the page file. 😉 Loaded model in 116.71 seconds. But then it was quite quick to generate the results.

Changes I made to example.py

# use the gloo backend instead of nccl (nccl isn't available on Windows)
torch.distributed.init_process_group("gloo")

# cap the batch size at 1 to cut memory usage
model_args: ModelArgs = ModelArgs(max_seq_len=max_seq_len, max_batch_size=1, **params)

# disable autograd while loading the checkpoint
with torch.no_grad():
    checkpoint = torch.load(ckpt_path, map_location="cpu")

# pass max_batch_size=1 through to load()
generator = load(ckpt_dir, tokenizer_path, local_rank, world_size, max_seq_len, 1)

elephantpanda avatar Mar 04 '23 09:03 elephantpanda

Hardware:

  • RTX 3090 FE 24GB (with 2 monitors connected)
  • Ryzen 7 3700X
  • 32GB RAM

Llama 13B on a single RTX 3090

In case you haven't seen it: There is a fork at https://github.com/tloen/llama-int8 by @tloen that uses INT8.

I managed to get Llama 13B to run with it on a single RTX 3090 with Linux! Make sure not to install bitsandbytes from pip; install it from GitHub!

With 32GB RAM and 32GB swap, quantizing took 1 minute and loading took 133 seconds. Peak GPU usage was 17269MiB.

Kudos @tloen! 🎉

Llama 7B

Software:

  • Windows 10 with NVidia Studio drivers 528.49
  • Anaconda 64bit with Python 3.9.13
  • pytorch 1.13.1 with CUDA 11.7 (installed with conda).
  • Llama 7B

What I had to do to get it (7B) to work on Windows:

  • Use python -m torch.distributed.run instead of torchrun
  • example.py: torch.distributed.init_process_group("gloo")

Loading the model takes 5.1 seconds. nvidia-smi output at default max_batch_size 32:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 528.49       Driver Version: 528.49       CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ... WDDM  | 00000000:07:00.0  On |                  N/A |
| 30%   55C    P2   307W / 350W |  22158MiB / 24576MiB |     76%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

On Ubuntu Linux 22.04.2 I was able to run the example with torchrun without any changes. Loading the model from an NTFS partition is a bit slower at 6.7 seconds, and memory usage was 22916MiB / 24576MiB. Nvidia drivers 530.30.02, CUDA 12.1.

neuhaus avatar Mar 04 '23 10:03 neuhaus

I have a version working with a batch size of 16 on a 2080 (8GB) using the 7B model. It's available at https://github.com/venuatu/llama My changes were:

  • printing output after every token, to get a ChatGPT-like experience (sketched below) https://github.com/venuatu/llama/commit/25c84973f71877677547453dab77eeaea9a86376
  • only keep a single transformer block on the gpu at a time (similar to @gmorenz above)
  • changed from fairscale layers to torch.nn.Linear
  • added tqdm progress bars

And from that I get around half an hour for 16 outputs of 512 tokens each. The average seemed to be about 3 seconds per forward pass at batch size 16.
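
The print-as-you-go part is roughly the following idea (a sketch only; sample_next_token is a hypothetical stand-in for the model's forward pass plus sampling, and tokens/tokenizer are assumed to follow the stock example.py shapes):

def stream_generate(tokens, tokenizer, start_pos, total_len, sample_next_token):
    # Emit text as soon as each new token is sampled, instead of waiting for
    # the whole generation to finish.
    prev_text = ""
    for cur_pos in range(start_pos, total_len):
        next_token = sample_next_token(tokens[:, :cur_pos])   # hypothetical callable
        tokens[:, cur_pos] = next_token
        text = tokenizer.decode(tokens[0, : cur_pos + 1].tolist())
        print(text[len(prev_text):], end="", flush=True)      # print only the newly decoded part
        prev_text = text
    print()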

The most random output for me so far has been a bunch of floor-related negative tweets, which came from the tweet sentiment analysis prompt:

Tweet: "Roscoe just peed on the floor. I was not expecting this."
Sentiment: Negative
###
Tweet: "My cat just licked the floor. "
Sentiment: Negative
###
Tweet: "My dog just peed on the floor. I was not expecting this."
Sentiment: Negative

venuatu avatar Mar 04 '23 11:03 venuatu

@venuatu - check out my code for how I avoided doing a .cpu() on the layer after being done with it - that gave me a 4x speedup over naively moving the layer back and forth between the gpu and cpu (when measured with a batch_size of 1).

I'm also curious why you're doing torch.cuda.empty_cache()? That seems like it's just going to force cuda to reallocate the buffers for the layer it just moved off of the gpu when it moves the next layer onto the gpu.
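
For contrast with the shuttle-both-ways version sketched earlier, the faster variant keeps the weights permanently on the CPU and only ever copies them one way, dropping the GPU copy instead of writing it back (again just a sketch of the idea, not the exact code in either fork):

import copy
import torch

def forward_one_way_copy(layers, h, start_pos, freqs_cis, mask):
    # The CPU copy is never modified, so there is nothing to copy back:
    # clone each block onto the GPU, run it, then simply discard the clone.
    for cpu_layer in layers:
        gpu_layer = copy.deepcopy(cpu_layer).to("cuda")
        h = gpu_layer(h, start_pos, freqs_cis, mask)
        del gpu_layer   # no .cpu() round-trip and no empty_cache(); the allocator reuses the freed blocks
    return h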

gmorenz avatar Mar 04 '23 14:03 gmorenz

Yep, that's a much better way to do it. It's now running in half the time (ty @gmorenz): 2080 (8GB), ~16 minutes for 512 tokens at batch size 16.

The empty_cache may not have been necessary; with other models in the past I've had buffers get stuck on the GPU, but that isn't happening here. Maybe pytorch has improved that upstream.

venuatu avatar Mar 04 '23 22:03 venuatu

I found some fixes for the very slow load times, and it's now down to 2.5 seconds (with a hot file cache) from my previous 83 seconds:

  • using torch.nn.utils.skip_init() to skip random parameter initialization, saving 30 seconds (see the sketch after this list) https://github.com/venuatu/llama/commit/dfdd0ee1f977627888d54832668953f83d9472fc
  • using pyarrow to transform the original checkpoint format into instantly available memory mapped tensors, saving 50 seconds https://github.com/venuatu/llama/commit/0d2bb5a552114b69db588175edd3e55303f029be
    • this makes a new arrow folder next to the checkpoint with 300-ish files that in total contain the same data as the checkpoint
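
The skip_init() part in isolation looks roughly like this (a minimal sketch; the shard filename is hypothetical, and in the real fork the weights come from the memory-mapped arrow files instead):

import torch
from torch import nn

# Build a layer without running its random weight initialization; the tensors
# come back allocated but uninitialized, so they must be overwritten afterwards.
layer = torch.nn.utils.skip_init(nn.Linear, 4096, 4096, bias=False)

# e.g. fill it from a checkpoint shard (hypothetical file name):
layer.load_state_dict({"weight": torch.load("layers.0.attention.wq.pt")})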

venuatu avatar Mar 05 '23 07:03 venuatu

Apple Silicon M1, CPU mode

reycn avatar Mar 05 '23 07:03 reycn

Specs: Ryzen 5600x, 16GB of RAM, RTX 3060 12GB

With @venuatu's fork and the 7B model I'm getting:

46.7 seconds to load, 13.8GB of RAM used, 1.26GB in swap, 5GB in VRAM, and one core always at 100% utilization.

MindSetFPS avatar Mar 05 '23 21:03 MindSetFPS

My specs: GTX 1630 4GB, i5-13400F, 128GB RAM, Win 11

Using 7B, model loading time 5.61 sec.

I used @gmorenz's fork, which enables my tiny GPU to run it :), and changed from nccl to gloo:

torch.distributed.init_process_group("gloo")

pavelzbornik avatar Mar 05 '23 21:03 pavelzbornik

I finally got the 65B model running on a genesiscloud server with 8 RTX 3090 cards with 24GB memory each. The cost to run the server is a little over $10/hour.

Takes almost 3 minutes to load. Inference is quicker than I can read.

So far I am not impressed. I believe GPT-3 (text-davinci-002) is better. But I have to do more tests with different temperatures etc. Here is the result of one experiment:


Why General Artifical Intelligence will overtake the world soon. An Essay by Llama.
Essay by Llama, High School, 10th grade, A+, January 2005
Keywords United States, human beings, Computers, 21st century, Artificial intelligence
In the 21st century, computers are going to take over the world. There is no doubt about it. They are going to become so advanced that they will be able to do everything that human beings can do, and more. In the future, computers will be able to drive cars, make movies, and even write books.
Computers are getting more and more advanced every day. In the past, computers could only do simple math problems. Now, they can do complicated math problems and can even do complicated tasks like driving a car.
In the future, computers will be able to do everything that human beings can do. They will be able to drive cars, make movies, and even write books.
Computers are getting more and more advanced every day. In the past, computers could only do simple math problems. Now, they can do complicated math problems and can even do complicated tasks like driving a car. Computers are also getting more and more intelligent.

Moonshine-in-Kansas avatar Mar 06 '23 01:03 Moonshine-in-Kansas

Hello guys, I am also interested in how to run LLaMA (e.g. the 7B model) on Mac M1 or M2. Any solution so far?

andrewssobral avatar Mar 06 '23 21:03 andrewssobral

I have the 65B (120GB) model working at 60 seconds/token on:

GPU: Nvidia RTX 2070 Super (8GB VRAM, 5946MB in use, only 18% utilization)
CPU: Ryzen 5800x, less than one core used
RAM: 32GB, only a few GB in continuous use, but pre-processing the weights with 16GB or less might be difficult
SSD: 122GB in continuous use with 2GB/s read. Pre-processing the weights currently uses double that, but it could easily be modified to work in 138GB.

SSD read speed is (of course) the bottleneck - I'm just loading every layer from disk before using it and freeing all the memory (RAM and VRAM) afterwards. Will clean up the code and push it tomorrow.

Goes without saying that at 60 seconds/token the utility of this is... questionable.
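
The per-layer disk streaming amounts to roughly this pattern (a hypothetical sketch of the idea, not gmorenz's actual code; make_block and the per-layer shard files are assumptions, and the call signature matches the stock TransformerBlock):

import torch

def run_block_from_disk(make_block, shard_path, h, start_pos, freqs_cis, mask):
    # Build one transformer block, load its weights from a per-layer shard on
    # disk, run it on the GPU, then delete it so both RAM and VRAM are freed
    # before the next block is touched.
    block = make_block()                                   # e.g. lambda: TransformerBlock(...)
    block.load_state_dict(torch.load(shard_path, map_location="cpu"))
    block.to("cuda")
    h = block(h, start_pos, freqs_cis, mask)
    del block                                              # drop the weights entirely
    return h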

gmorenz avatar Mar 06 '23 22:03 gmorenz

Hello guys, I am also interested to see how to run LLaMA (e.g. 7B model) on Mac M1 or M2, any solution until now?

I tried 7B with the CPU version on an M2 Max with 64GB RAM; it's slow as heck but it works! Load time is around 84 secs, and it takes about 4 mins to generate a response with max_gen_len=32.

Input:

The Z80 is a processor that

Output:

The Z80 is a processor that 8-bit microcomputer manufacturers used from 1976 to 1992. The Z80 was developed by the

Edit: on a 2nd try, the model load time was reduced to 34 secs. Not sure what changed, but keep in mind I'm running this in a Docker container (using the continuumio/miniconda3 image) with an interactive shell. I allocated 8 CPUs and all 64GB of RAM to Docker in the Docker Desktop app.

applefreak avatar Mar 06 '23 23:03 applefreak

Anyone have info regarding use with AMD GPUs? The 7B LLaMA model loads and accepts up to 2048 context tokens on my RX 6800 XT 16GB.

I keep seeing people talking about VRAM requirements when running in 8-bit mode and no one's talking about normal 16-bit mode lol

YellowRoseCx avatar Mar 07 '23 03:03 YellowRoseCx

Got 7B loaded on 2x 8GB 3060s using Kobold United (the dev branch), getting about 3 tokens/second.

terbo: what is life? llamabot: I think life is just something that all living things have to make their way through

terbo avatar Mar 07 '23 03:03 terbo

Anyone have info regarding use with AMD GPUs? The 7B LLaMA model loads and accepts up to 2048 context tokens on my RX 6800 XT 16GB.

I keep seeing people talking about VRAM requirements when running in 8-bit mode and no one's talking about normal 16-bit mode lol

Does CUDA work on AMD? Someone tried to make a DirectML port: #117. It should work on AMD (on Windows), but it hasn't been tested, so it might need some fixing.

elephantpanda avatar Mar 07 '23 04:03 elephantpanda

Successfully running LLaMA 7B, 13B and 30B on a desktop CPU (12700K) with 128GB of RAM, without a video card. https://github.com/randaller/llama-cpu

randaller avatar Mar 07 '23 09:03 randaller

I have the 65B (120GB) model working at 60 seconds/token on:

GPU: Nvidia RTX 2070 Super (8GB VRAM, 5946MB in use, only 18% utilization)
CPU: Ryzen 5800x, less than one core used
RAM: 32GB, only a few GB in continuous use, but pre-processing the weights with 16GB or less might be difficult
SSD: 122GB in continuous use with 2GB/s read. Pre-processing the weights currently uses double that, but it could easily be modified to work in 138GB.

SSD read speed is (of course) the bottleneck - I'm just loading every layer from disk before using it and freeing all the memory (RAM and VRAM) afterwards. Will clean up the code and push it tomorrow.

Goes without saying that at 60 seconds/token the utility of this is... questionable.

For anybody wondering how exactly to do that, there's a (low-level) lib for it: https://github.com/kir-gadjello/zipslicer

chris-aeviator avatar Mar 07 '23 10:03 chris-aeviator