
[Usage]: How to use vLLM with `Tensor` input (customized tokenizer).

Open · keli-wen opened this issue · 2 comments

Your current environment

Collecting environment information...
PyTorch version: 2.1.2+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35

Python version: 3.9.18 (main, Sep 11 2023, 13:41:44)  [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-1036-azure-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.3.107
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100 80GB PCIe
Nvidia driver version: 545.23.08
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Address sizes:                   48 bits physical, 48 bits virtual
Byte Order:                      Little Endian
CPU(s):                          24
On-line CPU(s) list:             0-23
Vendor ID:                       AuthenticAMD
Model name:                      AMD EPYC 7V13 64-Core Processor
CPU family:                      25
Model:                           1
Thread(s) per core:              1
Core(s) per socket:              24
Socket(s):                       1
Stepping:                        1
BogoMIPS:                        4890.89
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core invpcid_single vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr rdpru arat umip vaes vpclmulqdq rdpid fsrm
Hypervisor vendor:               Microsoft
Virtualization type:             full
L1d cache:                       768 KiB (24 instances)
L1i cache:                       768 KiB (24 instances)
L2 cache:                        12 MiB (24 instances)
L3 cache:                        96 MiB (3 instances)
NUMA node(s):                    1
NUMA node0 CPU(s):               0-23
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Mmio stale data:   Not affected
Vulnerability Retbleed:          Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected

Versions of relevant libraries:
[pip3] flake8==7.0.0
[pip3] flake8-bugbear==24.1.17
[pip3] mypy==1.8.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.25.2
[pip3] nvidia-pytriton==0.5.3
[pip3] pytorch-lightning==2.1.4
[pip3] torch==2.1.2
[pip3] torchaudio==2.2.0
[pip3] torchmetrics==1.3.1
[pip3] torchvision==0.17.0
[pip3] triton==2.1.0
[pip3] tritonclient==2.43.0
[conda] numpy                     1.25.2                   pypi_0    pypi
[conda] nvidia-pytriton           0.5.3                    pypi_0    pypi
[conda] pytorch-lightning         2.1.4                    pypi_0    pypi
[conda] torch                     2.1.2                    pypi_0    pypi
[conda] torchaudio                2.2.0                    pypi_0    pypi
[conda] torchmetrics              1.3.1                    pypi_0    pypi
[conda] torchvision               0.17.0                   pypi_0    pypi
[conda] triton                    2.1.0                    pypi_0    pypi
[conda] tritonclient              2.43.0                   pypi_0    pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.3.3
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      0-23    0               N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

How would you like to use vllm

Hello,

I am currently working on a finance-related Large Language Model (LLM) project. In this project, I'm using a customized tokenizer that inherits from nn.Module instead of transformers.PreTrainedTokenizer.

from torch import nn


class BaseOrderTokenizer(nn.Module):
    """Tokenizer for order info."""

    def __init__(
        self,
        *args,
        **kwargs,
    ) -> None:
        super().__init__()
        ...

Our model employs the Llama2 architecture for its decoder. However, I am uncertain how to integrate vLLM with a model that relies on this customized tokenizer.

I would like to know whether the following pipeline is feasible with the current version of vLLM: run our tokenize method first, then pass the result to LLM.generate.

# pseudocode
tokens = tokenize(input)
output = llm.generate(tokens)
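
For concreteness, here is a rough sketch of the pipeline I have in mind, assuming LLM.generate accepts pre-tokenized input through its prompt_token_ids argument and that a Tensor must first be converted to plain Python ints (my_tokenize and raw_order_features are placeholders for our own code):

from vllm import LLM, SamplingParams

llm = LLM(model="path/to/llama2-checkpoint")  # hypothetical local checkpoint
sampling_params = SamplingParams(temperature=0.0, max_tokens=64)

# Our custom tokenizer (an nn.Module) returns a Tensor of token ids.
token_tensor = my_tokenize(raw_order_features)

# Convert to a list of lists of ints, the shape prompt_token_ids expects
# for a single prompt.
prompt_token_ids = [token_tensor.tolist()]

outputs = llm.generate(
    prompts=None,
    sampling_params=sampling_params,
    prompt_token_ids=prompt_token_ids,
)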

More specifically:

  • Does vLLM currently support Tensor input?
  • Is it possible to bypass providing a tokenizer, or to only provide a dummy tokenizer without actually employing it in the process?

Thank you for your patience and assistance. I eagerly await a response from the vLLM team.

keli-wen · Mar 27 '24

I believe that if you implement a tokenizer class conforming to the API at https://github.com/vllm-project/vllm/blob/3492859b687ba18db47720bcf6f07289999a2df5/vllm/transformers_utils/tokenizer_group/tokenizer_group.py#L42, you can then use https://github.com/vllm-project/vllm/blob/3492859b687ba18db47720bcf6f07289999a2df5/vllm/entrypoints/llm.py#L118 to set the tokenizer.
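
An untested sketch of what that could look like; the exact set of methods and attributes the engine touches is an assumption here, so please check the linked code (CustomTokenizerShim, my_order_tokenizer, and the eos id are all placeholders):

from typing import List

from vllm import LLM


class CustomTokenizerShim:
    """Duck-typed stand-in for a HF tokenizer (assumed interface)."""

    def __init__(self, order_tokenizer, eos_token_id: int):
        self._inner = order_tokenizer
        # vLLM reads eos_token_id to decide when a sequence is finished.
        self.eos_token_id = eos_token_id

    def encode(self, prompt) -> List[int]:
        # Delegate to your custom logic; must return plain Python ints.
        return self._inner.encode(prompt)

    def decode(self, token_ids: List[int]) -> str:
        # Called during detokenization of generated ids.
        raise NotImplementedError


llm = LLM(model="path/to/model")
llm.set_tokenizer(CustomTokenizerShim(my_order_tokenizer, eos_token_id=2))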

rkooo567 · Mar 28 '24

✨Thanks for your reply!

It appears that my issue aligns closely with the following discussions:

Our tokenizer is actually a simple nn.Module and is quite different from PreTrainedTokenizer.

from torch import Tensor, nn


class BaseOrderTokenizer(nn.Module):
    """Tokenizer for order info."""

    def __init__(
        self,
        max_order_index: int,
        emb_dim: int,
        num_max_orders: int,
    ) -> None:
        super().__init__()
        self.max_order_index = max_order_index
        self.num_max_orders = num_max_orders
        self.emb_dim = emb_dim

    def forward(self, features: Tensor) -> Tensor:
        raise NotImplementedError()

Essentially, it's an embedding layer. While I can implement a tokenizer.encode by adapting the forward function (see the sketch below), implementing the functions needed for detokenization (e.g., the linked code) is not feasible.
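
To make the asymmetry concrete, here is a purely illustrative standalone encode that could become BaseOrderTokenizer.encode; the feature layout is an assumption, not our real format. The reverse direction, reconstructing order features or readable text from token ids, has no natural implementation, which is why detokenize is the blocker:

from typing import List

from torch import Tensor


def order_encode(features: Tensor) -> List[int]:
    """Hypothetical encode(): map each order-feature row to a token id.

    Our "vocabulary" is the discrete order index, so encoding reduces to
    extracting those indices; no text is involved at any point.
    """
    order_ids = features[:, 0].long()  # assumed layout: order index in column 0
    return order_ids.tolist()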

Moreover, initializing LLM requires providing a tokenizer. From an engineering perspective, decoupling generate from the tokenizer might make vLLM more flexible to use. Currently, there seem to be limitations when applying vLLM to non-NLP tasks: I can't directly use a Tensor and a custom tokenizer as input, even though I'm working with the Llama2 architecture, which vLLM supports.

keli-wen · Mar 28 '24