Modify the current PyTorch model to C++
Expected gain: For 13B models, we should see a 20%-30% latency improvement on a single GPU and a 2-3x improvement on 4 GPUs. For smaller models, the gain should be even higher.
Having a single iteration's computation run entirely in C++ should be enough for high performance. This way, we can keep most of the complicated scheduling logic in Python, including weight loading.
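A minimal sketch of the intended split, assuming a hypothetical `execute_iteration` boundary and a toy stand-in model (neither name comes from the CacheFlow codebase): the scheduling loop and weight loading stay in Python, and everything inside the single-iteration call is the part that would move to C++.

```python
import torch
import torch.nn as nn

# Toy stand-in for the real decoder; weight loading stays in Python.
class ToyDecoder(nn.Module):
    def __init__(self, hidden: int = 64, vocab: int = 1000):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.proj = nn.Linear(hidden, vocab)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # One decoding iteration: embed the last tokens and produce logits.
        return self.proj(self.embed(token_ids))

def execute_iteration(model: nn.Module, token_ids: torch.Tensor) -> torch.Tensor:
    # Everything inside this call is what would be moved to C++.
    with torch.no_grad():
        logits = model(token_ids)
    return logits.argmax(dim=-1)

model = ToyDecoder()

# Iteration-level scheduling stays in Python: decide which sequences run this step.
sequences = {0: [1, 2, 3], 1: [4, 5]}
for _ in range(4):
    batch_ids = list(sequences.keys())
    last_tokens = torch.tensor([sequences[i][-1] for i in batch_ids])
    next_tokens = execute_iteration(model, last_tokens)
    for seq_id, tok in zip(batch_ids, next_tokens.tolist()):
        sequences[seq_id].append(tok)
```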
Potential sources of overhead:
- Python vs. C++ (overhead 1).
- PyTorch (even in C++) vs. FasterTransformer (overhead 2).
How to implement a C++ version:
- (Fake C++) Torch compiler (torch.jit); see the sketch after this list.
- LibTorch, the C++ version of PyTorch (easier to implement and extend, but can only solve overhead 1).
- Prune out the useful single-model code from FasterTransformer and port it to CacheFlow. This solves both overheads but is harder to implement.
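For option 1, a minimal sketch of what scripting the per-iteration module could look like, again using a hypothetical toy model rather than the actual CacheFlow code. The serialized module is also what a LibTorch program could load back in C++ with `torch::jit::load` for option 2.

```python
import torch
import torch.nn as nn

class ToyDecoder(nn.Module):
    def __init__(self, hidden: int = 64, vocab: int = 1000):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.proj = nn.Linear(hidden, vocab)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return self.proj(self.embed(token_ids))

# Option 1 ("fake C++"): compile the per-iteration module with TorchScript.
scripted = torch.jit.script(ToyDecoder())
logits = scripted(torch.tensor([1, 2, 3]))

# The same serialized module could be loaded from LibTorch in C++
# via torch::jit::load("toy_decoder.pt") for option 2.
scripted.save("toy_decoder.pt")
```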
Specific optimizations for smaller models (~100M parameters):
- Improve sampling efficiency (see the sketch below).
- We may need to merge more models.
This should not be prioritized, because the core technique of CacheFlow (memory saving) is not helpful for small models at all; still, they may benefit from iteration-level scheduling.
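On the sampling point above, one possible interpretation of improving sampling efficiency (an assumption on my part, not something specified here) is to sample next tokens for all sequences in one batched call instead of looping over sequences in Python:

```python
import torch

def sample_batched(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    # logits: [num_seqs, vocab]; one multinomial call for the whole batch
    # instead of a per-sequence Python loop.
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1).squeeze(-1)

logits = torch.randn(8, 1000)          # 8 sequences, vocab size 1000
next_tokens = sample_batched(logits)   # shape: [8]
```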
After the C++ version, we might need to rerun all the experiments with the new implementation.
@zhuohan123 can this work be considered complete?
If interested: I've been building a C++-native deep learning framework for the past few years that I want to open-source soon. The framework aims for optimal performance; here it is training AlexNet, with most of the kernels coming from cuBLASLt and cuDNN:
I'd certainly like for it to be part of vLLM. Is this something there'd be interest in? If so, I can make sure I support (or add support for) all of the pieces you need and get it connected. I can provide access to the private repo if requested.