Is it possible and efficient to load layers on demand?

Open · fahadh4ilyas opened this issue 2 years ago • 2 comments

I have a GPU that I want to load multiple models onto. Your ExLlama model loads all weights to the GPU as soon as ExLlama is instantiated. Would it be possible to keep every decoder layer in CPU memory first and only move it to the GPU when that layer's forward pass is called (then move it back to the CPU once the forward pass is done)? It seems possible, but I'm not sure how generation time would be affected.
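(For reference, a minimal PyTorch-style sketch of the idea being described. exllama uses its own quantized kernels rather than plain nn.Module layers, so the hooks and the attach_offload_hooks helper below are purely hypothetical, not exllama's API.)

```python
# Hypothetical sketch: swap a decoder layer onto the GPU just for its forward pass.
import torch.nn as nn

def attach_offload_hooks(layer: nn.Module, device: str = "cuda"):
    # Copy the layer's weights to the GPU right before its forward pass runs...
    def pre_hook(module, inputs):
        module.to(device, non_blocking=True)

    # ...and move them back to system RAM as soon as the pass is done.
    def post_hook(module, inputs, output):
        module.to("cpu")
        return output

    layer.register_forward_pre_hook(pre_hook)
    layer.register_forward_hook(post_hook)

# Usage (assuming a hypothetical model object with a .layers list of decoder blocks):
# for block in model.layers:
#     attach_offload_hooks(block)
```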

fahadh4ilyas avatar Aug 30 '23 11:08 fahadh4ilyas

I experimented with this early on, but I couldn't find a way to make it even remotely usable. The bottleneck during text generation is largely memory bandwidth, since every parameter of the model* is read at least once during a forward pass. If you're streaming layers from system RAM, even if you can get it running completely asynchronously, your inference speed will be limited by PCIe bandwidth. So you can expect a slowdown roughly equal to the ratio between your VRAM and PCIe bandwidths, likely on the order of 30x.

*) The exception is the token embedding layer, which ExLlama already keeps in system RAM.
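As a rough back-of-the-envelope check of that ratio (the bandwidth figures below are assumed typical values, not numbers from this thread):

```python
# Assumed bandwidths for illustration only.
pcie_gb_s = 32.0    # ~PCIe 4.0 x16
vram_gb_s = 936.0   # ~RTX 3090 memory bandwidth
print(f"expected slowdown: ~{vram_gb_s / pcie_gb_s:.0f}x")  # -> ~29x
```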

turboderp avatar Aug 30 '23 12:08 turboderp

Yeah, I've been testing this by copying the tensors variable from your exllama to the GPU whenever the q4 property of Ex4bitLinear is accessed (I skip the copy to the GPU in the ExLlama.__init__ method). The generation process takes a really long time. I was thinking that moving the whole model to the GPU every time generation is called, and then moving it back to the CPU when idle, might work better. But then again, why bother, since the number of models resident on the GPU at once wouldn't actually increase.
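A minimal sketch of that whole-model swapping pattern, again assuming a generic PyTorch-style model object and a caller-supplied generate function rather than exllama's actual classes:

```python
import torch

def generate_with_swap(model, generate_fn, *args, **kwargs):
    # Hypothetical helper: load the whole model for one request, then free VRAM.
    model.to("cuda")                      # bulk copy of all weights over PCIe
    try:
        with torch.inference_mode():
            return generate_fn(*args, **kwargs)
    finally:
        model.to("cpu")                   # park the weights in system RAM while idle
        torch.cuda.empty_cache()
```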

fahadh4ilyas avatar Aug 30 '23 15:08 fahadh4ilyas