wlwqq

4 comments by wlwqq

@youkaichao I also get this error on a Tesla T4 running the model gemma-2-2b-it:

```
INFO 08-12 14:54:48 selector.py:79] Using Flashinfer backend.
WARNING 08-12 14:54:48 selector.py:80] Flashinfer will be stuck on llama-2-7b,...
```
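For context, a minimal offline-inference call like the sketch below should be enough to hit the backend-selection log shown above. The model id and dtype are assumptions based on the comment (gemma-2-2b-it on a Tesla T4, which has no bf16 support), not the exact command from the thread.

```python
# Hedged reproduction sketch, assuming the model and GPU from the comment.
from vllm import LLM, SamplingParams

# T4 lacks bfloat16, so half precision is assumed here.
llm = LLM(model="google/gemma-2-2b-it", dtype="half")
outputs = llm.generate(["hello"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```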

```text
2024-08-23 09:29:43,409 - lmdeploy - INFO - prompt='user\nhello\nmodel\n', gen_config=EngineGenerationConfig(n=1, max_new_tokens=256, top_p=0.1, top_k=40, temperature=0.0, repetition_penalty=1.0, ignore_eos=False, random_seed=10490577554887956612, stop_words=[107], bad_words=None, min_new_tokens=None, skip_special_tokens=True, logprobs=None), prompt_token_id=[2, 106, 1645, 108, 17534, 107, 108, 106,...
```
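For reference, the same generation settings from that log can be expressed through lmdeploy's high-level pipeline API. The sketch below is an assumption of how such a request might be issued (model path and prompt are placeholders), not the exact script behind the log.

```python
# Hedged sketch: rebuilds the gen_config values from the log above using
# lmdeploy's pipeline API; the model path is an assumption.
from lmdeploy import pipeline, GenerationConfig

pipe = pipeline("google/gemma-2-2b-it")
gen_config = GenerationConfig(
    max_new_tokens=256,
    top_p=0.1,
    top_k=40,
    temperature=0.0,
    repetition_penalty=1.0,
)
print(pipe(["hello"], gen_config=gen_config))
```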

> I just commented out lines 688-694 in /vllm/worker/model_runner.py so Flashinfer is not used, and saw no performance difference. Which backend do you use, FLASH_ATTN?

@HeegonJin I set VLLM_ATTENTION_BACKEND=FLASH_ATTN, but it does not work...
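One note on the override, sketched below under assumptions from this thread: the environment variable has to be set before vLLM is imported (or exported before launching the server) for it to take effect. Also, the FLASH_ATTN backend relies on FlashAttention-2, which generally needs an Ampere-or-newer GPU, so on a T4 it is likely rejected regardless; XFORMERS is shown as the fallback choice, and the model id is assumed from the earlier comment.

```python
import os

# Set the override before vLLM is imported, otherwise the backend has
# already been selected. On a T4 (compute capability 7.5) FLASH_ATTN is
# typically unavailable, so XFORMERS is used in this sketch.
os.environ["VLLM_ATTENTION_BACKEND"] = "XFORMERS"

from vllm import LLM, SamplingParams

llm = LLM(model="google/gemma-2-2b-it", dtype="half")
print(llm.generate(["hello"], SamplingParams(max_tokens=32))[0].outputs[0].text)
```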