
GEMV_fast error

Open SinanAkkoyun opened this issue 1 year ago • 6 comments

Hi! Running an old AWQ quant (for example, the DeepSeek 1.3B coder) works fine. However, trying to run casperhansen/mistral-instruct-v0.2-gemvfast-awq fails:

❯ python examples/benchmark.py --model_path ~/ml/llm/models/mistral/small-instruct-v0.2/awq/gemv_fast/
 -- Loading model...
Replacing layers...: 100%|█████████████████████████████████████████████████████████████████████████████| 32/32 [00:01<00:00, 31.45it/s]
Fusing layers...: 100%|███████████████████████████████████████████████████████████████████████████████| 32/32 [00:00<00:00, 578.52it/s]
 -- Warming up...
 -- Generating 32 tokens, 32 in context...
Traceback (most recent call last):
  File "/home/ai/ml/llm/inference/autoawq/AutoAWQ/examples/benchmark.py", line 111, in run_round
    context_time, generate_time = generator(model, input_ids, n_generate)
  File "/home/ai/ml/llm/inference/autoawq/AutoAWQ/examples/benchmark.py", line 54, in generate_torch
    out = model(inputs, use_cache=True)
  File "/home/ai/.mconda3/envs/autoawq/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/ai/.mconda3/envs/autoawq/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ai/ml/llm/inference/autoawq/AutoAWQ/awq/models/base.py", line 108, in forward
    return self.model(*args, **kwargs)
  File "/home/ai/.mconda3/envs/autoawq/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/ai/.mconda3/envs/autoawq/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ai/.mconda3/envs/autoawq/lib/python3.10/site-packages/accelerate/hooks.py", line 166, in new_forward
    return module._hf_hook.post_forward(module, output)
  File "/home/ai/.mconda3/envs/autoawq/lib/python3.10/site-packages/accelerate/hooks.py", line 315, in post_forward
    output = send_to_device(output, self.input_device, skip_keys=self.skip_keys)
  File "/home/ai/.mconda3/envs/autoawq/lib/python3.10/site-packages/accelerate/utils/operations.py", line 161, in send_to_device
    {
  File "/home/ai/.mconda3/envs/autoawq/lib/python3.10/site-packages/accelerate/utils/operations.py", line 162, in <dictcomp>
    k: t if k in skip_keys else send_to_device(t, device, non_blocking=non_blocking, skip_keys=skip_keys)
  File "/home/ai/.mconda3/envs/autoawq/lib/python3.10/site-packages/accelerate/utils/operations.py", line 168, in send_to_device
    return tensor.to(device, non_blocking=non_blocking)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ai/ml/llm/inference/autoawq/AutoAWQ/examples/benchmark.py", line 210, in <module>
    main(args)
  File "/home/ai/ml/llm/inference/autoawq/AutoAWQ/examples/benchmark.py", line 178, in main
    stats, model_version = run_round(
  File "/home/ai/ml/llm/inference/autoawq/AutoAWQ/examples/benchmark.py", line 117, in run_round
    raise RuntimeError(ex)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
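
As the error text itself suggests, rerunning with CUDA_LAUNCH_BLOCKING=1 makes the failing kernel report at its real call site. A minimal sketch of the repro, roughly what examples/benchmark.py does (the random input is illustrative; the env var must be set before torch initializes CUDA):

import os

# Must be set before torch touches CUDA so kernel errors are
# reported synchronously at the actual call site.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

import torch
from awq import AutoAWQForCausalLM

# Load the GEMVFast quant with fused layers, as benchmark.py does.
model = AutoAWQForCausalLM.from_quantized(
    "casperhansen/mistral-instruct-v0.2-gemvfast-awq",
    fuse_layers=True,
)

# 32 tokens of context, mirroring the benchmark setting above.
input_ids = torch.randint(0, 32000, (1, 32), device="cuda")
with torch.inference_mode():
    model(input_ids, use_cache=True)  # should now fault at the offending kernel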

SinanAkkoyun avatar Apr 09 '24 18:04 SinanAkkoyun

I haven’t seen this before with 32 tokens, which is odd. I used the same benchmark script. Can you try to install from the main branch?

casper-hansen avatar Apr 09 '24 19:04 casper-hansen

I did. I installed AutoAWQ today after pulling, with pip install -e ., and updated transformers etc. I am running with CUDA 12.2; could that be the culprit?

SinanAkkoyun avatar Apr 09 '24 19:04 SinanAkkoyun

@casper-hansen I also tried it with 12.1 now (docker: pytorch/pytorch:2.2.2-cuda12.1-cudnn8-devel):

Installation:

cd AutoAWQ
pip install transformers
pip install -e .

Same error:

# python examples/benchmark.py --model_path /models/mistral/small-instruct-v0.2/awq/gemv_fast/
 -- Loading model...
Replacing layers...: 100%|████████████████████████████████████████████████████████████████████████████████████| 32/32 [00:01<00:00, 28.65it/s]
We've detected an older driver with an RTX 4000 series GPU. These drivers have issues with P2P. This can affect the multi-gpu inference when using accelerate device_map.Please make sure to update your driver to the latest version which resolves this.
Fusing layers...: 100%|██████████████████████████████████████████████████████████████████████████████████████| 32/32 [00:00<00:00, 548.60it/s]
 -- Warming up...
 -- Generating 32 tokens, 32 in context...
Traceback (most recent call last):
  File "/workspace/AutoAWQ/examples/benchmark.py", line 111, in run_round
    context_time, generate_time = generator(model, input_ids, n_generate)
  File "/workspace/AutoAWQ/examples/benchmark.py", line 54, in generate_torch
    out = model(inputs, use_cache=True)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/workspace/AutoAWQ/awq/models/base.py", line 108, in forward
    return self.model(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/accelerate/hooks.py", line 166, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/transformers/models/mistral/modeling_mistral.py", line 1157, in forward
    outputs = self.model(
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/workspace/AutoAWQ/awq/modules/fused/model.py", line 127, in forward
    h, _, _ = layer(
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/workspace/AutoAWQ/awq/modules/fused/block.py", line 130, in forward
    out = h + self.mlp.forward(self.norm_2(h))
  File "/opt/conda/lib/python3.10/site-packages/accelerate/hooks.py", line 166, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/transformers/models/mistral/modeling_mistral.py", line 179, in forward
    return self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x))
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/activation.py", line 393, in forward
    return F.silu(input, inplace=self.inplace)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/functional.py", line 2075, in silu
    return torch._C._nn.silu(input)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/workspace/AutoAWQ/examples/benchmark.py", line 210, in <module>
    main(args)
  File "/workspace/AutoAWQ/examples/benchmark.py", line 178, in main
    stats, model_version = run_round(
  File "/workspace/AutoAWQ/examples/benchmark.py", line 117, in run_round
    raise RuntimeError(ex)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
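
Note that the stack bottoming out in F.silu does not necessarily implicate SiLU itself: with asynchronous CUDA error reporting, the illegal access most likely originates in the preceding GEMVFast projection inside the fused MLP. One way to narrow it down (a diagnostic sketch, not a fix, using the same loading API) is to load the quant with fuse_layers=False and check whether the unfused GEMVFast path also crashes:

import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"  # synchronous error reporting

import torch
from awq import AutoAWQForCausalLM

# Skip the fused-block path to separate awq/modules/fused from
# the GEMVFast kernel itself.
model = AutoAWQForCausalLM.from_quantized(
    "/models/mistral/small-instruct-v0.2/awq/gemv_fast/",
    fuse_layers=False,
)
input_ids = torch.randint(0, 32000, (1, 32), device="cuda")
with torch.inference_mode():
    out = model(input_ids, use_cache=True)
print(out.logits.shape)  # reaching this line clears the unfused path

If the unfused run succeeds, the bug is in the fused block; if it also faults, the GEMVFast kernel is the prime suspect.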

SinanAkkoyun avatar Apr 10 '24 16:04 SinanAkkoyun

> I haven’t seen this before with 32 tokens, which is odd. I used the same benchmark script. Can you try to install from the main branch?

Can you show your env? Which transformers version?
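
For example, something like this dumps the relevant bits (a minimal sketch using only standard APIs):

from importlib.metadata import version
import torch

print("torch        :", torch.__version__)
print("cuda (torch) :", torch.version.cuda)
print("transformers :", version("transformers"))
print("autoawq      :", version("autoawq"))
print("gpus         :", [torch.cuda.get_device_name(i)
                         for i in range(torch.cuda.device_count())])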

MichoChan avatar Apr 30 '24 03:04 MichoChan

I guess it is a multi-GPU problem.
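
An easy way to test that hypothesis is to pin the process to a single device before torch initializes CUDA, so accelerate has nothing to shard across (a sketch reusing the paths from this thread):

import os

# Hide all but one GPU *before* importing torch; accelerate's
# device_map then cannot place layers on a second device.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch
from awq import AutoAWQForCausalLM

model = AutoAWQForCausalLM.from_quantized(
    "/models/mistral/small-instruct-v0.2/awq/gemv_fast/",
    fuse_layers=True,
)
input_ids = torch.randint(0, 32000, (1, 32), device="cuda")
with torch.inference_mode():
    model(input_ids, use_cache=True)

If this single-GPU run still hits the illegal access, multi-GPU P2P (and the driver warning above) is a red herring.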

MichoChan avatar Apr 30 '24 03:04 MichoChan

Got the same error after training and quantizing a Llama 3.1 8B. The model loaded fine, but this error appeared when generating.
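
For reference, the minimal load-then-generate path that triggers it looks like this (a sketch with standard AutoAWQ/transformers APIs; the local model path is hypothetical):

from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

path = "path/to/llama3.1-8b-awq"  # hypothetical: your quantized output dir
model = AutoAWQForCausalLM.from_quantized(path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(path)

inputs = tokenizer("Hello", return_tensors="pt").to("cuda")
# Loading succeeds; the illegal memory access only surfaces here.
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0]))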

noaebbot avatar Aug 15 '24 12:08 noaebbot