
bash scripts/forward.sh

Veluriyam opened this issue 5 months ago · 4 comments

Running `bash scripts/forward.sh` with Llama-2-7b-chat-hf produces the following:

```
Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:06<00:00,  3.10s/it]
[2024-09-23 17:22:48,125] [forward.py:111] Model name: Llama-2-7b-chat-hf
[2024-09-23 17:22:48,132] [forward.py:112] Model size: 13.543948288
[2024-09-23 17:22:48,133] [utils.py:94] GPU 0: 6.88 GB / 32.00 GB
[2024-09-23 17:22:48,133] [utils.py:94] GPU 1: 6.88 GB / 32.00 GB
[2024-09-23 17:22:48,273] [forward.py:173] Running
  0%|          | 0/100 [00:00<?, ?it/s]
../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [1,0,0], thread: [96,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [1,0,0], thread: [97,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
...
../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [31,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
  0%|          | 0/100 [00:01<?, ?it/s]
Traceback (most recent call last):
  File "forward.py", line 212, in <module>
    main()
  File "forward.py", line 177, in main
    hidden_states = forward(model, toker, messages)
  File "forward.py", line 52, in forward
    outputs = model(
  File "/opt/conda/envs/onprompt/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/conda/envs/onprompt/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/envs/onprompt/lib/python3.8/site-packages/accelerate/hooks.py", line 170, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/opt/conda/envs/onprompt/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 1183, in forward
    outputs = self.model(
  File "/opt/conda/envs/onprompt/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/conda/envs/onprompt/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/envs/onprompt/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 1070, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/onprompt/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/conda/envs/onprompt/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/envs/onprompt/lib/python3.8/site-packages/accelerate/hooks.py", line 170, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/opt/conda/envs/onprompt/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 798, in forward
    hidden_states, self_attn_weights, present_key_value = self.self_attn(
  File "/opt/conda/envs/onprompt/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/conda/envs/onprompt/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/envs/onprompt/lib/python3.8/site-packages/accelerate/hooks.py", line 170, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/opt/conda/envs/onprompt/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 706, in forward
    query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
  File "/opt/conda/envs/onprompt/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 232, in apply_rotary_pos_emb
    cos = cos[position_ids].unsqueeze(unsqueeze_dim)
RuntimeError: CUDA error: device-side assert triggered
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
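For context, the assert fires in the `cos[position_ids]` lookup inside `apply_rotary_pos_emb`, i.e. some value in `position_ids` falls outside the cached rotary table. Since CUDA reports device-side asserts asynchronously, rerunning with `CUDA_LAUNCH_BLOCKING=1 bash scripts/forward.sh` (or moving the model to CPU) yields a traceback that names the offending index. Below is a minimal sketch of the same failure mode; the shapes and the cache size are made up for illustration, not taken from the repo or the model config:

```python
import torch

# Stand-in for the rotary cos cache: one row per supported position.
# 4096 matches Llama-2's max_position_embeddings, but the number here
# is only illustrative.
max_positions = 4096
cos_cache = torch.randn(max_positions, 128)

# A position id one past the end of the cache, as would happen if the
# tokenized input grew beyond the cache length.
position_ids = torch.tensor([[0, 1, 2, max_positions]])

try:
    _ = cos_cache[position_ids]  # CPU indexing checks bounds eagerly
except IndexError as e:
    # e.g. "index 4096 is out of bounds for dimension 0 with size 4096"
    print(e)
```

On GPU the same lookup only trips the opaque `IndexKernel.cu` assert shown above, which is why a CPU run (or `CUDA_LAUNCH_BLOCKING=1`) is the quicker way to see which index is out of range.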

What is the cause of this error? Thanks.

Veluriyam · Sep 23 '24 09:09