
Slow generation compared to transformers + PyTorch

Open hugoabonizio opened this issue 1 year ago • 13 comments

I'm running the Llama example on a machine with an Nvidia T4 16GB to compare the performance with HF Transformers + PyTorch.

Here's the Python example I'm running:

import time
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load Llama-2-7B in fp16 on the GPU.
tokenizer = AutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-hf')
model = AutoModelForCausalLM.from_pretrained(
    'meta-llama/Llama-2-7b-hf',
    torch_dtype=torch.float16,
).to('cuda')

# 500-character prompt, generating up to 200 new tokens.
prompt = '1' * 500
max_tokens = 200

start_time = time.time()
tokens = tokenizer(prompt, return_tensors='pt').to('cuda')
output = model.generate(
    **tokens,
    max_new_tokens=max_tokens,
    temperature=2.0,
)
total_time = time.time() - start_time

# Throughput over the newly generated tokens only (prompt tokens excluded).
generated_tokens = len(output[0, tokens['input_ids'].shape[1]:])
print(generated_tokens / total_time, 'tokens/s')

The command I'm using to test Candle's implementation is:

$ prompt=$(python3 -c 'print("1" * 500)')
$ cargo run --example llama --release --features "cuda" -- --prompt "${prompt}" --temperature 2.0 --sample-len 200 --dtype f16
loading the model weights from meta-llama/Llama-2-7b-hf
building the model
starting the inference loop
(...)
200 tokens generated (9.116397586726794 token/s)
  • Python result: 14.3 tokens/s
  • Candle result: 9.1 tokens/s

Using a batch size of 1 for both implementations, PyTorch comes out roughly 57% faster, and I noticed the difference grows even larger when the batch size is >1.

Am I missing something? Perhaps the example includes gradient calculation by default or something similar?
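
For reference, here is a minimal sketch of how the PyTorch timing could be made a bit more defensive (it reuses the model, tokens and max_tokens objects from the snippet above; the warm-up call, torch.inference_mode() and torch.cuda.synchronize() are additions of mine, not something the numbers above relied on):

import time
import torch

# Warm-up run so one-off CUDA initialization doesn't count against the measured pass.
with torch.inference_mode():
    model.generate(**tokens, max_new_tokens=8)
torch.cuda.synchronize()

start_time = time.time()
# generate() itself already disables gradient tracking; inference_mode() just makes that explicit here.
with torch.inference_mode():
    output = model.generate(**tokens, max_new_tokens=max_tokens, temperature=2.0)
torch.cuda.synchronize()  # ensure all queued GPU work has finished before stopping the clock
total_time = time.time() - start_time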

hugoabonizio avatar Feb 08 '24 19:02 hugoabonizio

One thing you may want to try, if your GPU allows it, is enabling flash attention (which PyTorch is likely using under the hood). This can be done by passing --use-flash-attn and enabling the flash-attn feature flag.
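
To see whether the transformers run is indeed getting a fused attention kernel, a quick probe along these lines works (a sketch, assuming a recent PyTorch 2.x build where these torch.backends.cuda helpers are available):

import torch

# Report which scaled-dot-product-attention backends PyTorch may dispatch to
# on this machine (flash / memory-efficient / plain math).
print('flash SDP enabled:        ', torch.backends.cuda.flash_sdp_enabled())
print('mem-efficient SDP enabled:', torch.backends.cuda.mem_efficient_sdp_enabled())
print('math SDP enabled:         ', torch.backends.cuda.math_sdp_enabled())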

LaurentMazare avatar Feb 08 '24 19:02 LaurentMazare

@LaurentMazare I had the impression that flash attention wasn't supported on T4s, since it wouldn't compile there. However, it appears that the Python implementation of flash attention 1.x does support the Turing architecture.
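
A quick way to confirm the architecture (small sketch; the sm_80 naming of the kernels in the build log below suggests the crate builds FlashAttention-2 kernels, which target Ampere and newer):

import torch

# A T4 is Turing, i.e. compute capability 7.5 (sm_75);
# FlashAttention-2 kernels are built for sm_80 (Ampere) and newer GPUs.
print(torch.cuda.get_device_capability())  # -> (7, 5) on a T4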

I tried the command:

$ cargo run --example llama --release --features "flash-attn cuda" -- --prompt "${prompt}" --temperature 2.0 --sample-len 200 --dtype f16 --use-flash-attn
Error
   Compiling candle-flash-attn v0.4.0 (/hugo/candle/candle-flash-attn)
error: failed to run custom build command for `candle-flash-attn v0.4.0 (/hugo/candle/candle-flash-attn)`

Caused by:
  process didn't exit successfully: `/hugo/candle/target/release/build/candle-flash-attn-819164368057455d/build-script-build` (exit status: 101)
  --- stdout
  cargo:rerun-if-changed=build.rs
  cargo:rerun-if-changed=kernels/flash_api.cu
  cargo:rerun-if-changed=kernels/flash_fwd_hdim128_fp16_sm80.cu
  cargo:rerun-if-changed=kernels/flash_fwd_hdim160_fp16_sm80.cu
  cargo:rerun-if-changed=kernels/flash_fwd_hdim192_fp16_sm80.cu
  cargo:rerun-if-changed=kernels/flash_fwd_hdim224_fp16_sm80.cu
  cargo:rerun-if-changed=kernels/flash_fwd_hdim256_fp16_sm80.cu
  cargo:rerun-if-changed=kernels/flash_fwd_hdim32_fp16_sm80.cu
  cargo:rerun-if-changed=kernels/flash_fwd_hdim64_fp16_sm80.cu
  cargo:rerun-if-changed=kernels/flash_fwd_hdim96_fp16_sm80.cu
  cargo:rerun-if-changed=kernels/flash_fwd_hdim128_bf16_sm80.cu
  cargo:rerun-if-changed=kernels/flash_fwd_hdim160_bf16_sm80.cu
  cargo:rerun-if-changed=kernels/flash_fwd_hdim192_bf16_sm80.cu
  cargo:rerun-if-changed=kernels/flash_fwd_hdim224_bf16_sm80.cu
  cargo:rerun-if-changed=kernels/flash_fwd_hdim256_bf16_sm80.cu
  cargo:rerun-if-changed=kernels/flash_fwd_hdim32_bf16_sm80.cu
  cargo:rerun-if-changed=kernels/flash_fwd_hdim64_bf16_sm80.cu
  cargo:rerun-if-changed=kernels/flash_fwd_hdim96_bf16_sm80.cu
  cargo:rerun-if-changed=kernels/flash_fwd_kernel.h
  cargo:rerun-if-changed=kernels/flash_fwd_launch_template.h
  cargo:rerun-if-changed=kernels/flash.h
  cargo:rerun-if-changed=kernels/philox.cuh
  cargo:rerun-if-changed=kernels/softmax.h
  cargo:rerun-if-changed=kernels/utils.h
  cargo:rerun-if-changed=kernels/kernel_traits.h
  cargo:rerun-if-changed=kernels/block_info.h
  cargo:rerun-if-changed=kernels/static_switch.h
  cargo:info=["/usr", "/usr/local/cuda", "/opt/cuda", "/usr/lib/cuda", "C:/Program Files/NVIDIA GPU Computing Toolkit", "C:/CUDA"]
  cargo:rerun-if-env-changed=CUDA_COMPUTE_CAP
  cargo:rustc-env=CUDA_COMPUTE_CAP=75

  --- stderr
  #$ _NVVM_BRANCH_=nvvm
  #$ _SPACE_= 
  #$ _CUDART_=cudart
  #$ _HERE_=/usr/local/cuda/bin
  #$ _THERE_=/usr/local/cuda/bin
  #$ _TARGET_SIZE_=
  #$ _TARGET_DIR_=
  #$ _TARGET_DIR_=targets/x86_64-linux
  #$ _NVVM_BRANCH_=nvvm
  #$ _SPACE_= 
  #$ _CUDART_=cudart
  #$ _HERE_=/usr/local/cuda/bin
  #$ _THERE_=/usr/local/cuda/bin
  #$ _TARGET_SIZE_=
  #$ _TARGET_DIR_=
  #$ TOP=/usr/local/cuda/bin/..
  #$ _TARGET_DIR_=targets/x86_64-linux
  #$ NVVMIR_LIBRARY_DIR=/usr/local/cuda/bin/../nvvm/libdevice
  #$ LD_LIBRARY_PATH=/usr/local/cuda/bin/../lib:/hugo/candle/target/release/deps:/hugo/candle/target/release:/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib:/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
  #$ TOP=/usr/local/cuda/bin/..
  #$ NVVMIR_LIBRARY_DIR=/usr/local/cuda/bin/../nvvm/libdevice
  #$ PATH=/usr/local/cuda/bin/../nvvm/bin:/usr/local/cuda/bin:/root/.cargo/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  #$ LD_LIBRARY_PATH=/usr/local/cuda/bin/../lib:/hugo/candle/target/release/deps:/hugo/candle/target/release:/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib:/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
  #$ INCLUDES="-I/usr/local/cuda/bin/../targets/x86_64-linux/include"  
  #$ PATH=/usr/local/cuda/bin/../nvvm/bin:/usr/local/cuda/bin:/root/.cargo/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  #$ INCLUDES="-I/usr/local/cuda/bin/../targets/x86_64-linux/include"  
  #$ LIBRARIES=  "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib/stubs" "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib"
  #$ LIBRARIES=  "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib/stubs" "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib"
  #$ CUDAFE_FLAGS=
  #$ PTXAS_FLAGS=
  #$ CUDAFE_FLAGS=
  #$ PTXAS_FLAGS=
  #$ gcc -std=c++17 -D__CUDA_ARCH__=750 -D__CUDA_ARCH_LIST__=750 -E -x c++  -DCUDA_DOUBLE_MATH_FUNCTIONS -D__CUDACC__ -D__NVCC__ -D__CUDACC_EXTENDED_LAMBDA__ -D__CUDACC_RELAXED_CONSTEXPR__  -O3 -I"cutlass/include" "-I/usr/local/cuda/bin/../targets/x86_64-linux/include"    -U "__CUDA_NO_HALF_OPERATORS__" -U "__CUDA_NO_HALF_CONVERSIONS__" -U "__CUDA_NO_HALF2_OPERATORS__" -U "__CUDA_NO_BFLOAT16_CONVERSIONS__" -D__CUDACC_VER_MAJOR__=11 -D__CUDACC_VER_MINOR__=8 -D__CUDACC_VER_BUILD__=89 -D__CUDA_API_VER_MAJOR__=11 -D__CUDA_API_VER_MINOR__=8 -DCUDA_API_PER_THREAD_DEFAULT_STREAM=1 -D__NVCC_DIAG_PRAGMA_SUPPORT__=1 -include "cuda_runtime.h" -m64 "kernels/flash_fwd_hdim224_fp16_sm80.cu" -o "/tmp/tmpxft_0002a36f_00000000-7_flash_fwd_hdim224_fp16_sm80.cpp1.ii" 
  #$ gcc -std=c++17 -D__CUDA_ARCH__=750 -D__CUDA_ARCH_LIST__=750 -E -x c++  -DCUDA_DOUBLE_MATH_FUNCTIONS -D__CUDACC__ -D__NVCC__ -D__CUDACC_EXTENDED_LAMBDA__ -D__CUDACC_RELAXED_CONSTEXPR__  -O3 -I"cutlass/include" "-I/usr/local/cuda/bin/../targets/x86_64-linux/include"    -U "__CUDA_NO_HALF_OPERATORS__" -U "__CUDA_NO_HALF_CONVERSIONS__" -U "__CUDA_NO_HALF2_OPERATORS__" -U "__CUDA_NO_BFLOAT16_CONVERSIONS__" -D__CUDACC_VER_MAJOR__=11 -D__CUDACC_VER_MINOR__=8 -D__CUDACC_VER_BUILD__=89 -D__CUDA_API_VER_MAJOR__=11 -D__CUDA_API_VER_MINOR__=8 -DCUDA_API_PER_THREAD_DEFAULT_STREAM=1 -D__NVCC_DIAG_PRAGMA_SUPPORT__=1 -include "cuda_runtime.h" -m64 "kernels/flash_fwd_hdim96_fp16_sm80.cu" -o "/tmp/tmpxft_0002a36e_00000000-7_flash_fwd_hdim96_fp16_sm80.cpp1.ii" 
  #$ _NVVM_BRANCH_=nvvm
  #$ _SPACE_= 
  #$ _CUDART_=cudart
  #$ _HERE_=/usr/local/cuda/bin
  #$ _THERE_=/usr/local/cuda/bin
  #$ _TARGET_SIZE_=
  #$ _TARGET_DIR_=
  #$ _TARGET_DIR_=targets/x86_64-linux
  #$ TOP=/usr/local/cuda/bin/..
  #$ NVVMIR_LIBRARY_DIR=/usr/local/cuda/bin/../nvvm/libdevice
  #$ LD_LIBRARY_PATH=/usr/local/cuda/bin/../lib:/hugo/candle/target/release/deps:/hugo/candle/target/release:/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib:/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
  #$ PATH=/usr/local/cuda/bin/../nvvm/bin:/usr/local/cuda/bin:/root/.cargo/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  #$ INCLUDES="-I/usr/local/cuda/bin/../targets/x86_64-linux/include"  
  #$ LIBRARIES=  "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib/stubs" "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib"
  #$ CUDAFE_FLAGS=
  #$ PTXAS_FLAGS=
  #$ _NVVM_BRANCH_=nvvm
  #$ _SPACE_= 
  #$ _CUDART_=cudart
  #$ _HERE_=/usr/local/cuda/bin
  #$ _THERE_=/usr/local/cuda/bin
  #$ _TARGET_SIZE_=
  #$ _TARGET_DIR_=
  #$ _TARGET_DIR_=targets/x86_64-linux
  #$ gcc -std=c++17 -D__CUDA_ARCH__=750 -D__CUDA_ARCH_LIST__=750 -E -x c++  -DCUDA_DOUBLE_MATH_FUNCTIONS -D__CUDACC__ -D__NVCC__ -D__CUDACC_EXTENDED_LAMBDA__ -D__CUDACC_RELAXED_CONSTEXPR__  -O3 -I"cutlass/include" "-I/usr/local/cuda/bin/../targets/x86_64-linux/include"    -U "__CUDA_NO_HALF_OPERATORS__" -U "__CUDA_NO_HALF_CONVERSIONS__" -U "__CUDA_NO_HALF2_OPERATORS__" -U "__CUDA_NO_BFLOAT16_CONVERSIONS__" -D__CUDACC_VER_MAJOR__=11 -D__CUDACC_VER_MINOR__=8 -D__CUDACC_VER_BUILD__=89 -D__CUDA_API_VER_MAJOR__=11 -D__CUDA_API_VER_MINOR__=8 -DCUDA_API_PER_THREAD_DEFAULT_STREAM=1 -D__NVCC_DIAG_PRAGMA_SUPPORT__=1 -include "cuda_runtime.h" -m64 "kernels/flash_fwd_hdim32_bf16_sm80.cu" -o "/tmp/tmpxft_0002a373_00000000-7_flash_fwd_hdim32_bf16_sm80.cpp1.ii" 
  #$ TOP=/usr/local/cuda/bin/..
  #$ NVVMIR_LIBRARY_DIR=/usr/local/cuda/bin/../nvvm/libdevice
  #$ LD_LIBRARY_PATH=/usr/local/cuda/bin/../lib:/hugo/candle/target/release/deps:/hugo/candle/target/release:/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib:/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
  #$ PATH=/usr/local/cuda/bin/../nvvm/bin:/usr/local/cuda/bin:/root/.cargo/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  #$ INCLUDES="-I/usr/local/cuda/bin/../targets/x86_64-linux/include"  
  #$ LIBRARIES=  "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib/stubs" "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib"
  #$ CUDAFE_FLAGS=
  #$ PTXAS_FLAGS=
  #$ gcc -std=c++17 -D__CUDA_ARCH__=750 -D__CUDA_ARCH_LIST__=750 -E -x c++  -DCUDA_DOUBLE_MATH_FUNCTIONS -D__CUDACC__ -D__NVCC__ -D__CUDACC_EXTENDED_LAMBDA__ -D__CUDACC_RELAXED_CONSTEXPR__  -O3 -I"cutlass/include" "-I/usr/local/cuda/bin/../targets/x86_64-linux/include"    -U "__CUDA_NO_HALF_OPERATORS__" -U "__CUDA_NO_HALF_CONVERSIONS__" -U "__CUDA_NO_HALF2_OPERATORS__" -U "__CUDA_NO_BFLOAT16_CONVERSIONS__" -D__CUDACC_VER_MAJOR__=11 -D__CUDACC_VER_MINOR__=8 -D__CUDACC_VER_BUILD__=89 -D__CUDA_API_VER_MAJOR__=11 -D__CUDA_API_VER_MINOR__=8 -DCUDA_API_PER_THREAD_DEFAULT_STREAM=1 -D__NVCC_DIAG_PRAGMA_SUPPORT__=1 -include "cuda_runtime.h" -m64 "kernels/flash_fwd_hdim224_bf16_sm80.cu" -o "/tmp/tmpxft_0002a372_00000000-7_flash_fwd_hdim224_bf16_sm80.cpp1.ii" 
  #$ _NVVM_BRANCH_=nvvm
  #$ _SPACE_= 
  #$ _CUDART_=cudart
  #$ _HERE_=/usr/local/cuda/bin
  #$ _THERE_=/usr/local/cuda/bin
  #$ _TARGET_SIZE_=
  #$ _TARGET_DIR_=
  #$ _TARGET_DIR_=targets/x86_64-linux
  #$ TOP=/usr/local/cuda/bin/..
  #$ NVVMIR_LIBRARY_DIR=/usr/local/cuda/bin/../nvvm/libdevice
  #$ LD_LIBRARY_PATH=/usr/local/cuda/bin/../lib:/hugo/candle/target/release/deps:/hugo/candle/target/release:/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib:/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
  #$ PATH=/usr/local/cuda/bin/../nvvm/bin:/usr/local/cuda/bin:/root/.cargo/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  #$ INCLUDES="-I/usr/local/cuda/bin/../targets/x86_64-linux/include"  
  #$ LIBRARIES=  "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib/stubs" "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib"
  #$ CUDAFE_FLAGS=
  #$ PTXAS_FLAGS=
  #$ _NVVM_BRANCH_=nvvm
  #$ gcc -std=c++17 -D__CUDA_ARCH__=750 -D__CUDA_ARCH_LIST__=750 -E -x c++  -DCUDA_DOUBLE_MATH_FUNCTIONS -D__CUDACC__ -D__NVCC__ -D__CUDACC_EXTENDED_LAMBDA__ -D__CUDACC_RELAXED_CONSTEXPR__  -O3 -I"cutlass/include" "-I/usr/local/cuda/bin/../targets/x86_64-linux/include"    -U "__CUDA_NO_HALF_OPERATORS__" -U "__CUDA_NO_HALF_CONVERSIONS__" -U "__CUDA_NO_HALF2_OPERATORS__" -U "__CUDA_NO_BFLOAT16_CONVERSIONS__" -D__CUDACC_VER_MAJOR__=11 -D__CUDACC_VER_MINOR__=8 -D__CUDACC_VER_BUILD__=89 -D__CUDA_API_VER_MAJOR__=11 -D__CUDA_API_VER_MINOR__=8 -DCUDA_API_PER_THREAD_DEFAULT_STREAM=1 -D__NVCC_DIAG_PRAGMA_SUPPORT__=1 -include "cuda_runtime.h" -m64 "kernels/flash_fwd_hdim160_fp16_sm80.cu" -o "/tmp/tmpxft_0002a371_00000000-7_flash_fwd_hdim160_fp16_sm80.cpp1.ii" 
  #$ _SPACE_= 
  #$ _CUDART_=cudart
  #$ _HERE_=/usr/local/cuda/bin
  #$ _THERE_=/usr/local/cuda/bin
  #$ _TARGET_SIZE_=
  #$ _TARGET_DIR_=
  #$ _TARGET_DIR_=targets/x86_64-linux
  #$ TOP=/usr/local/cuda/bin/..
  #$ NVVMIR_LIBRARY_DIR=/usr/local/cuda/bin/../nvvm/libdevice
  #$ LD_LIBRARY_PATH=/usr/local/cuda/bin/../lib:/hugo/candle/target/release/deps:/hugo/candle/target/release:/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib:/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
  #$ PATH=/usr/local/cuda/bin/../nvvm/bin:/usr/local/cuda/bin:/root/.cargo/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  #$ INCLUDES="-I/usr/local/cuda/bin/../targets/x86_64-linux/include"  
  #$ LIBRARIES=  "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib/stubs" "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib"
  #$ CUDAFE_FLAGS=
  #$ PTXAS_FLAGS=
  #$ gcc -std=c++17 -D__CUDA_ARCH__=750 -D__CUDA_ARCH_LIST__=750 -E -x c++  -DCUDA_DOUBLE_MATH_FUNCTIONS -D__CUDACC__ -D__NVCC__ -D__CUDACC_EXTENDED_LAMBDA__ -D__CUDACC_RELAXED_CONSTEXPR__  -O3 -I"cutlass/include" "-I/usr/local/cuda/bin/../targets/x86_64-linux/include"    -U "__CUDA_NO_HALF_OPERATORS__" -U "__CUDA_NO_HALF_CONVERSIONS__" -U "__CUDA_NO_HALF2_OPERATORS__" -U "__CUDA_NO_BFLOAT16_CONVERSIONS__" -D__CUDACC_VER_MAJOR__=11 -D__CUDACC_VER_MINOR__=8 -D__CUDACC_VER_BUILD__=89 -D__CUDA_API_VER_MAJOR__=11 -D__CUDA_API_VER_MINOR__=8 -DCUDA_API_PER_THREAD_DEFAULT_STREAM=1 -D__NVCC_DIAG_PRAGMA_SUPPORT__=1 -include "cuda_runtime.h" -m64 "kernels/flash_api.cu" -o "/tmp/tmpxft_0002a370_00000000-7_flash_api.cpp1.ii" 
  #$ cicc --c++17 --gnu_version=90400 --display_error_number --orig_src_file_name "kernels/flash_fwd_hdim224_bf16_sm80.cu" --orig_src_path_name "/hugo/candle/candle-flash-attn/kernels/flash_fwd_hdim224_bf16_sm80.cu" --allow_managed --extended-lambda --relaxed_constexpr   -arch compute_75 -m64 --no-version-ident -ftz=1 -prec_div=0 -prec_sqrt=0 -fmad=1 -fast-math --gen_div_approx_ftz --include_file_name "tmpxft_0002a372_00000000-3_flash_fwd_hdim224_bf16_sm80.fatbin.c" -tused --gen_module_id_file --module_id_file_name "/tmp/tmpxft_0002a372_00000000-4_flash_fwd_hdim224_bf16_sm80.module_id" --gen_c_file_name "/tmp/tmpxft_0002a372_00000000-6_flash_fwd_hdim224_bf16_sm80.cudafe1.c" --stub_file_name "/tmp/tmpxft_0002a372_00000000-6_flash_fwd_hdim224_bf16_sm80.cudafe1.stub.c" --gen_device_file_name "/tmp/tmpxft_0002a372_00000000-6_flash_fwd_hdim224_bf16_sm80.cudafe1.gpu"  "/tmp/tmpxft_0002a372_00000000-7_flash_fwd_hdim224_bf16_sm80.cpp1.ii" -o "/tmp/tmpxft_0002a372_00000000-6_flash_fwd_hdim224_bf16_sm80.ptx"
  #$ cicc --c++17 --gnu_version=90400 --display_error_number --orig_src_file_name "kernels/flash_fwd_hdim224_fp16_sm80.cu" --orig_src_path_name "/hugo/candle/candle-flash-attn/kernels/flash_fwd_hdim224_fp16_sm80.cu" --allow_managed --extended-lambda --relaxed_constexpr   -arch compute_75 -m64 --no-version-ident -ftz=1 -prec_div=0 -prec_sqrt=0 -fmad=1 -fast-math --gen_div_approx_ftz --include_file_name "tmpxft_0002a36f_00000000-3_flash_fwd_hdim224_fp16_sm80.fatbin.c" -tused --gen_module_id_file --module_id_file_name "/tmp/tmpxft_0002a36f_00000000-4_flash_fwd_hdim224_fp16_sm80.module_id" --gen_c_file_name "/tmp/tmpxft_0002a36f_00000000-6_flash_fwd_hdim224_fp16_sm80.cudafe1.c" --stub_file_name "/tmp/tmpxft_0002a36f_00000000-6_flash_fwd_hdim224_fp16_sm80.cudafe1.stub.c" --gen_device_file_name "/tmp/tmpxft_0002a36f_00000000-6_flash_fwd_hdim224_fp16_sm80.cudafe1.gpu"  "/tmp/tmpxft_0002a36f_00000000-7_flash_fwd_hdim224_fp16_sm80.cpp1.ii" -o "/tmp/tmpxft_0002a36f_00000000-6_flash_fwd_hdim224_fp16_sm80.ptx"
  #$ cicc --c++17 --gnu_version=90400 --display_error_number --orig_src_file_name "kernels/flash_fwd_hdim160_fp16_sm80.cu" --orig_src_path_name "/hugo/candle/candle-flash-attn/kernels/flash_fwd_hdim160_fp16_sm80.cu" --allow_managed --extended-lambda --relaxed_constexpr   -arch compute_75 -m64 --no-version-ident -ftz=1 -prec_div=0 -prec_sqrt=0 -fmad=1 -fast-math --gen_div_approx_ftz --include_file_name "tmpxft_0002a371_00000000-3_flash_fwd_hdim160_fp16_sm80.fatbin.c" -tused --gen_module_id_file --module_id_file_name "/tmp/tmpxft_0002a371_00000000-4_flash_fwd_hdim160_fp16_sm80.module_id" --gen_c_file_name "/tmp/tmpxft_0002a371_00000000-6_flash_fwd_hdim160_fp16_sm80.cudafe1.c" --stub_file_name "/tmp/tmpxft_0002a371_00000000-6_flash_fwd_hdim160_fp16_sm80.cudafe1.stub.c" --gen_device_file_name "/tmp/tmpxft_0002a371_00000000-6_flash_fwd_hdim160_fp16_sm80.cudafe1.gpu"  "/tmp/tmpxft_0002a371_00000000-7_flash_fwd_hdim160_fp16_sm80.cpp1.ii" -o "/tmp/tmpxft_0002a371_00000000-6_flash_fwd_hdim160_fp16_sm80.ptx"
  #$ cicc --c++17 --gnu_version=90400 --display_error_number --orig_src_file_name "kernels/flash_fwd_hdim96_fp16_sm80.cu" --orig_src_path_name "/hugo/candle/candle-flash-attn/kernels/flash_fwd_hdim96_fp16_sm80.cu" --allow_managed --extended-lambda --relaxed_constexpr   -arch compute_75 -m64 --no-version-ident -ftz=1 -prec_div=0 -prec_sqrt=0 -fmad=1 -fast-math --gen_div_approx_ftz --include_file_name "tmpxft_0002a36e_00000000-3_flash_fwd_hdim96_fp16_sm80.fatbin.c" -tused --gen_module_id_file --module_id_file_name "/tmp/tmpxft_0002a36e_00000000-4_flash_fwd_hdim96_fp16_sm80.module_id" --gen_c_file_name "/tmp/tmpxft_0002a36e_00000000-6_flash_fwd_hdim96_fp16_sm80.cudafe1.c" --stub_file_name "/tmp/tmpxft_0002a36e_00000000-6_flash_fwd_hdim96_fp16_sm80.cudafe1.stub.c" --gen_device_file_name "/tmp/tmpxft_0002a36e_00000000-6_flash_fwd_hdim96_fp16_sm80.cudafe1.gpu"  "/tmp/tmpxft_0002a36e_00000000-7_flash_fwd_hdim96_fp16_sm80.cpp1.ii" -o "/tmp/tmpxft_0002a36e_00000000-6_flash_fwd_hdim96_fp16_sm80.ptx"
  #$ cicc --c++17 --gnu_version=90400 --display_error_number --orig_src_file_name "kernels/flash_fwd_hdim32_bf16_sm80.cu" --orig_src_path_name "/hugo/candle/candle-flash-attn/kernels/flash_fwd_hdim32_bf16_sm80.cu" --allow_managed --extended-lambda --relaxed_constexpr   -arch compute_75 -m64 --no-version-ident -ftz=1 -prec_div=0 -prec_sqrt=0 -fmad=1 -fast-math --gen_div_approx_ftz --include_file_name "tmpxft_0002a373_00000000-3_flash_fwd_hdim32_bf16_sm80.fatbin.c" -tused --gen_module_id_file --module_id_file_name "/tmp/tmpxft_0002a373_00000000-4_flash_fwd_hdim32_bf16_sm80.module_id" --gen_c_file_name "/tmp/tmpxft_0002a373_00000000-6_flash_fwd_hdim32_bf16_sm80.cudafe1.c" --stub_file_name "/tmp/tmpxft_0002a373_00000000-6_flash_fwd_hdim32_bf16_sm80.cudafe1.stub.c" --gen_device_file_name "/tmp/tmpxft_0002a373_00000000-6_flash_fwd_hdim32_bf16_sm80.cudafe1.gpu"  "/tmp/tmpxft_0002a373_00000000-7_flash_fwd_hdim32_bf16_sm80.cpp1.ii" -o "/tmp/tmpxft_0002a373_00000000-6_flash_fwd_hdim32_bf16_sm80.ptx"
  #$ cicc --c++17 --gnu_version=90400 --display_error_number --orig_src_file_name "kernels/flash_api.cu" --orig_src_path_name "/hugo/candle/candle-flash-attn/kernels/flash_api.cu" --allow_managed --extended-lambda --relaxed_constexpr   -arch compute_75 -m64 --no-version-ident -ftz=1 -prec_div=0 -prec_sqrt=0 -fmad=1 -fast-math --gen_div_approx_ftz --include_file_name "tmpxft_0002a370_00000000-3_flash_api.fatbin.c" -tused --gen_module_id_file --module_id_file_name "/tmp/tmpxft_0002a370_00000000-4_flash_api.module_id" --gen_c_file_name "/tmp/tmpxft_0002a370_00000000-6_flash_api.cudafe1.c" --stub_file_name "/tmp/tmpxft_0002a370_00000000-6_flash_api.cudafe1.stub.c" --gen_device_file_name "/tmp/tmpxft_0002a370_00000000-6_flash_api.cudafe1.gpu"  "/tmp/tmpxft_0002a370_00000000-7_flash_api.cpp1.ii" -o "/tmp/tmpxft_0002a370_00000000-6_flash_api.ptx"
  #$ ptxas -arch=sm_75 -m64  "/tmp/tmpxft_0002a370_00000000-6_flash_api.ptx"  -o "/tmp/tmpxft_0002a370_00000000-8_flash_api.sm_75.cubin" 
  #$ fatbinary -64 --cicc-cmdline="-ftz=1 -prec_div=0 -prec_sqrt=0 -fmad=1 " "--image3=kind=elf,sm=75,file=/tmp/tmpxft_0002a370_00000000-8_flash_api.sm_75.cubin" "--image3=kind=ptx,sm=75,file=/tmp/tmpxft_0002a370_00000000-6_flash_api.ptx" --embedded-fatbin="/tmp/tmpxft_0002a370_00000000-3_flash_api.fatbin.c" 
  #$ rm /tmp/tmpxft_0002a370_00000000-3_flash_api.fatbin
  #$ gcc -std=c++17 -D__CUDA_ARCH_LIST__=750 -E -x c++ -D__CUDACC__ -D__NVCC__ -D__CUDACC_EXTENDED_LAMBDA__ -D__CUDACC_RELAXED_CONSTEXPR__  -O3 -I"cutlass/include" "-I/usr/local/cuda/bin/../targets/x86_64-linux/include"    -U "__CUDA_NO_HALF_OPERATORS__" -U "__CUDA_NO_HALF_CONVERSIONS__" -U "__CUDA_NO_HALF2_OPERATORS__" -U "__CUDA_NO_BFLOAT16_CONVERSIONS__" -D__CUDACC_VER_MAJOR__=11 -D__CUDACC_VER_MINOR__=8 -D__CUDACC_VER_BUILD__=89 -D__CUDA_API_VER_MAJOR__=11 -D__CUDA_API_VER_MINOR__=8 -DCUDA_API_PER_THREAD_DEFAULT_STREAM=1 -D__NVCC_DIAG_PRAGMA_SUPPORT__=1 -include "cuda_runtime.h" -m64 "kernels/flash_api.cu" -o "/tmp/tmpxft_0002a370_00000000-5_flash_api.cpp4.ii" 
  #$ cudafe++ --c++17 --gnu_version=90400 --display_error_number --orig_src_file_name "kernels/flash_api.cu" --orig_src_path_name "/hugo/candle/candle-flash-attn/kernels/flash_api.cu" --allow_managed --extended-lambda --relaxed_constexpr  --m64 --parse_templates --gen_c_file_name "/tmp/tmpxft_0002a370_00000000-6_flash_api.cudafe1.cpp" --stub_file_name "tmpxft_0002a370_00000000-6_flash_api.cudafe1.stub.c" --module_id_file_name "/tmp/tmpxft_0002a370_00000000-4_flash_api.module_id" "/tmp/tmpxft_0002a370_00000000-5_flash_api.cpp4.ii" 
  #$ gcc -std=c++17 -D__CUDA_ARCH__=750 -D__CUDA_ARCH_LIST__=750 -c -x c++  -DCUDA_DOUBLE_MATH_FUNCTIONS -O3 -I"cutlass/include" "-I/usr/local/cuda/bin/../targets/x86_64-linux/include"   -m64 "/tmp/tmpxft_0002a370_00000000-6_flash_api.cudafe1.cpp" -o "/hugo/candle/target/release/build/candle-flash-attn-ef9f57105f35b8e6/out/flash_api.o" 
  #$ _NVVM_BRANCH_=nvvm
  #$ _SPACE_= 
  #$ _CUDART_=cudart
  #$ _HERE_=/usr/local/cuda/bin
  #$ _THERE_=/usr/local/cuda/bin
  #$ _TARGET_SIZE_=
  #$ _TARGET_DIR_=
  #$ _TARGET_DIR_=targets/x86_64-linux
  #$ TOP=/usr/local/cuda/bin/..
  #$ NVVMIR_LIBRARY_DIR=/usr/local/cuda/bin/../nvvm/libdevice
  #$ LD_LIBRARY_PATH=/usr/local/cuda/bin/../lib:/hugo/candle/target/release/deps:/hugo/candle/target/release:/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib:/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
  #$ PATH=/usr/local/cuda/bin/../nvvm/bin:/usr/local/cuda/bin:/root/.cargo/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  #$ INCLUDES="-I/usr/local/cuda/bin/../targets/x86_64-linux/include"  
  #$ LIBRARIES=  "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib/stubs" "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib"
  #$ CUDAFE_FLAGS=
  #$ PTXAS_FLAGS=
  #$ gcc -std=c++17 -D__CUDA_ARCH__=750 -D__CUDA_ARCH_LIST__=750 -E -x c++  -DCUDA_DOUBLE_MATH_FUNCTIONS -D__CUDACC__ -D__NVCC__ -D__CUDACC_EXTENDED_LAMBDA__ -D__CUDACC_RELAXED_CONSTEXPR__  -O3 -I"cutlass/include" "-I/usr/local/cuda/bin/../targets/x86_64-linux/include"    -U "__CUDA_NO_HALF_OPERATORS__" -U "__CUDA_NO_HALF_CONVERSIONS__" -U "__CUDA_NO_HALF2_OPERATORS__" -U "__CUDA_NO_BFLOAT16_CONVERSIONS__" -D__CUDACC_VER_MAJOR__=11 -D__CUDACC_VER_MINOR__=8 -D__CUDACC_VER_BUILD__=89 -D__CUDA_API_VER_MAJOR__=11 -D__CUDA_API_VER_MINOR__=8 -DCUDA_API_PER_THREAD_DEFAULT_STREAM=1 -D__NVCC_DIAG_PRAGMA_SUPPORT__=1 -include "cuda_runtime.h" -m64 "kernels/flash_fwd_hdim128_fp16_sm80.cu" -o "/tmp/tmpxft_0002a3ab_00000000-7_flash_fwd_hdim128_fp16_sm80.cpp1.ii" 
  #$ cicc --c++17 --gnu_version=90400 --display_error_number --orig_src_file_name "kernels/flash_fwd_hdim128_fp16_sm80.cu" --orig_src_path_name "/hugo/candle/candle-flash-attn/kernels/flash_fwd_hdim128_fp16_sm80.cu" --allow_managed --extended-lambda --relaxed_constexpr   -arch compute_75 -m64 --no-version-ident -ftz=1 -prec_div=0 -prec_sqrt=0 -fmad=1 -fast-math --gen_div_approx_ftz --include_file_name "tmpxft_0002a3ab_00000000-3_flash_fwd_hdim128_fp16_sm80.fatbin.c" -tused --gen_module_id_file --module_id_file_name "/tmp/tmpxft_0002a3ab_00000000-4_flash_fwd_hdim128_fp16_sm80.module_id" --gen_c_file_name "/tmp/tmpxft_0002a3ab_00000000-6_flash_fwd_hdim128_fp16_sm80.cudafe1.c" --stub_file_name "/tmp/tmpxft_0002a3ab_00000000-6_flash_fwd_hdim128_fp16_sm80.cudafe1.stub.c" --gen_device_file_name "/tmp/tmpxft_0002a3ab_00000000-6_flash_fwd_hdim128_fp16_sm80.cudafe1.gpu"  "/tmp/tmpxft_0002a3ab_00000000-7_flash_fwd_hdim128_fp16_sm80.cpp1.ii" -o "/tmp/tmpxft_0002a3ab_00000000-6_flash_fwd_hdim128_fp16_sm80.ptx"
  kernels/flash_fwd_kernel.h(67): error: static assertion failed
            detected during:
              instantiation of "void flash::write_softmax_to_gmem(const cute::Tensor<Engine0, Layout0> &, cute::Tensor<Engine1, Layout1> &, TiledCopy) [with Engine0=cute::array_aligned<cutlass::half_t, 128UL, 16UL>, Layout0=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2, cute::_1>, cute::_2, cute::_16>, cute::tuple<cute::tuple<cute::_1, cute::_2, cute::constant<int, 0>>, cute::_4, cute::_8>>, Engine1=cute::ViewEngine<cute::gmem_ptr<cutlass::half_t>>, Layout1=cute::Layout<cute::tuple<cute::tuple<cute::_1, cute::_8>, cute::_16, cute::_1>, cute::tuple<cute::tuple<cute::constant<int, 0>, cute::constant<int, 1>>, int, cute::constant<int, 0>>>, TiledCopy=cute::TiledCopy<cute::Copy_Atom<cute::DefaultCopy, cutlass::half_t>, cute::Layout<cute::tuple<cute::tuple<cute::_16, cute::_8>, cute::_8>, cute::tuple<cute::tuple<cute::_64, cute::_1>, cute::_8>>, cute::tuple<cute::_8, cute::_128>>]" 
  (436): here
              instantiation of "void flash::compute_attn_1rowblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &, int, int, int) [with Kernel_traits=Flash_fwd_kernel_traits<32, 128, 128, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<32, 128, 128, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  (630): here
              instantiation of "void flash::compute_attn<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &) [with Kernel_traits=Flash_fwd_kernel_traits<32, 128, 128, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<32, 128, 128, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  kernels/flash_fwd_launch_template.h(14): here
              instantiation of "void flash_fwd_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax>(Flash_fwd_params) [with Kernel_traits=Flash_fwd_kernel_traits<32, 128, 128, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<32, 128, 128, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true]" 
  kernels/flash_fwd_launch_template.h(31): here
              instantiation of "void run_flash_fwd<Kernel_traits,Is_dropout,Is_causal>(Flash_fwd_params &, cudaStream_t) [with Kernel_traits=Flash_fwd_kernel_traits<32, 128, 128, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<32, 128, 128, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true]" 
  kernels/flash_fwd_launch_template.h(61): here
              instantiation of "void run_mha_fwd_hdim32<T>(Flash_fwd_params &, cudaStream_t) [with T=cutlass::bfloat16_t]" 
  kernels/flash_fwd_hdim32_bf16_sm80.cu(9): here

  kernels/flash_fwd_kernel.h(67): error: static assertion failed
            detected during:
              instantiation of "void flash::write_softmax_to_gmem(const cute::Tensor<Engine0, Layout0> &, cute::Tensor<Engine1, Layout1> &, TiledCopy) [with Engine0=cute::array_aligned<cutlass::half_t, 32UL, 16UL>, Layout0=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2, cute::_1>, cute::_1, cute::_8>, cute::tuple<cute::tuple<cute::_1, cute::_2, cute::constant<int, 0>>, cute::constant<int, 0>, cute::_4>>, Engine1=cute::ViewEngine<cute::gmem_ptr<cutlass::half_t>>, Layout1=cute::Layout<cute::tuple<cute::tuple<cute::_1, cute::_8>, cute::constant<int, 4>, cute::_1>, cute::tuple<cute::tuple<cute::constant<int, 0>, cute::constant<int, 1>>, int, cute::constant<int, 0>>>, TiledCopy=cute::TiledCopy<cute::Copy_Atom<cute::DefaultCopy, cutlass::half_t>, cute::Layout<cute::tuple<cute::tuple<cute::constant<int, 8>, cute::constant<int, 16>>, cute::_8>, cute::tuple<cute::tuple<cute::_128, cute::_1>, cute::_16>>, cute::tuple<cute::_16, cute::constant<int, 64>>>]" 
  (436): here
              instantiation of "void flash::compute_attn_1rowblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &, int, int, int) [with Kernel_traits=Flash_fwd_kernel_traits<160, 64, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<160, 64, 64, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  (630): here
              instantiation of "void flash::compute_attn<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &) [with Kernel_traits=Flash_fwd_kernel_traits<160, 64, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<160, 64, 64, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  kernels/flash_fwd_launch_template.h(14): here
              instantiation of "void flash_fwd_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax>(Flash_fwd_params) [with Kernel_traits=Flash_fwd_kernel_traits<160, 64, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<160, 64, 64, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true]" 
  kernels/flash_fwd_launch_template.h(31): here
              instantiation of "void run_flash_fwd<Kernel_traits,Is_dropout,Is_causal>(Flash_fwd_params &, cudaStream_t) [with Kernel_traits=Flash_fwd_kernel_traits<160, 64, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<160, 64, 64, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true]" 
  kernels/flash_fwd_launch_template.h(158): here
              instantiation of "void run_mha_fwd_hdim160<T>(Flash_fwd_params &, cudaStream_t) [with T=cutlass::half_t]" 
  kernels/flash_fwd_hdim160_fp16_sm80.cu(9): here

  kernels/flash_fwd_kernel.h(67): error: static assertion failed
            detected during:
              instantiation of "void flash::write_softmax_to_gmem(const cute::Tensor<Engine0, Layout0> &, cute::Tensor<Engine1, Layout1> &, TiledCopy) [with Engine0=cute::array_aligned<cutlass::half_t, 32UL, 16UL>, Layout0=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2, cute::_1>, cute::_1, cute::_8>, cute::tuple<cute::tuple<cute::_1, cute::_2, cute::constant<int, 0>>, cute::constant<int, 0>, cute::_4>>, Engine1=cute::ViewEngine<cute::gmem_ptr<cutlass::half_t>>, Layout1=cute::Layout<cute::tuple<cute::tuple<cute::_1, cute::_8>, cute::constant<int, 4>, cute::_1>, cute::tuple<cute::tuple<cute::constant<int, 0>, cute::constant<int, 1>>, int, cute::constant<int, 0>>>, TiledCopy=cute::TiledCopy<cute::Copy_Atom<cute::DefaultCopy, cutlass::half_t>, cute::Layout<cute::tuple<cute::tuple<cute::constant<int, 8>, cute::constant<int, 32>>, cute::_8>, cute::tuple<cute::tuple<cute::_256, cute::_1>, cute::constant<int, 32>>>, cute::tuple<cute::constant<int, 32>, cute::constant<int, 64>>>]" 
  (436): here
              instantiation of "void flash::compute_attn_1rowblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &, int, int, int) [with Kernel_traits=Flash_fwd_kernel_traits<224, 128, 64, 8, false, false, cutlass::half_t, Flash_kernel_traits<224, 128, 64, 8, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  (630): here
              instantiation of "void flash::compute_attn<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &) [with Kernel_traits=Flash_fwd_kernel_traits<224, 128, 64, 8, false, false, cutlass::half_t, Flash_kernel_traits<224, 128, 64, 8, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  kernels/flash_fwd_launch_template.h(14): here
              instantiation of "void flash_fwd_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax>(Flash_fwd_params) [with Kernel_traits=Flash_fwd_kernel_traits<224, 128, 64, 8, false, false, cutlass::half_t, Flash_kernel_traits<224, 128, 64, 8, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true]" 
  kernels/flash_fwd_launch_template.h(31): here
              instantiation of "void run_flash_fwd<Kernel_traits,Is_dropout,Is_causal>(Flash_fwd_params &, cudaStream_t) [with Kernel_traits=Flash_fwd_kernel_traits<224, 128, 64, 8, false, false, cutlass::half_t, Flash_kernel_traits<224, 128, 64, 8, cutlass::half_t>>, Is_dropout=true, Is_causal=true]" 
  kernels/flash_fwd_launch_template.h(211): here
              instantiation of "void run_mha_fwd_hdim224<T>(Flash_fwd_params &, cudaStream_t) [with T=cutlass::half_t]" 
  kernels/flash_fwd_hdim224_fp16_sm80.cu(9): here

  1 error detected in the compilation of "kernels/flash_fwd_hdim32_bf16_sm80.cu".
  # --error 0x1 --
  thread '<unnamed>' panicked at /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bindgen_cuda-0.1.1/src/lib.rs:220:21:
  nvcc error while executing compiling: "nvcc" "--gpu-architecture=sm_75" "-c" "-o" "/hugo/candle/target/release/build/candle-flash-attn-ef9f57105f35b8e6/out/flash_fwd_hdim32_bf16_sm80.o" "--default-stream" "per-thread" "-std=c++17" "-O3" "-U__CUDA_NO_HALF_OPERATORS__" "-U__CUDA_NO_HALF_CONVERSIONS__" "-U__CUDA_NO_HALF2_OPERATORS__" "-U__CUDA_NO_BFLOAT16_CONVERSIONS__" "-Icutlass/include" "--expt-relaxed-constexpr" "--expt-extended-lambda" "--use_fast_math" "--verbose" "kernels/flash_fwd_hdim32_bf16_sm80.cu"

  # stdout


  # stderr

  note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
  #$ _NVVM_BRANCH_=nvvm
  #$ _SPACE_= 
  #$ _CUDART_=cudart
  #$ _HERE_=/usr/local/cuda/bin
  #$ _THERE_=/usr/local/cuda/bin
  #$ _TARGET_SIZE_=
  #$ _TARGET_DIR_=
  #$ _TARGET_DIR_=targets/x86_64-linux
  #$ TOP=/usr/local/cuda/bin/..
  #$ NVVMIR_LIBRARY_DIR=/usr/local/cuda/bin/../nvvm/libdevice
  #$ LD_LIBRARY_PATH=/usr/local/cuda/bin/../lib:/hugo/candle/target/release/deps:/hugo/candle/target/release:/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib:/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
  #$ PATH=/usr/local/cuda/bin/../nvvm/bin:/usr/local/cuda/bin:/root/.cargo/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  #$ INCLUDES="-I/usr/local/cuda/bin/../targets/x86_64-linux/include"  
  #$ LIBRARIES=  "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib/stubs" "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib"
  #$ CUDAFE_FLAGS=
  #$ PTXAS_FLAGS=
  #$ gcc -std=c++17 -D__CUDA_ARCH__=750 -D__CUDA_ARCH_LIST__=750 -E -x c++  -DCUDA_DOUBLE_MATH_FUNCTIONS -D__CUDACC__ -D__NVCC__ -D__CUDACC_EXTENDED_LAMBDA__ -D__CUDACC_RELAXED_CONSTEXPR__  -O3 -I"cutlass/include" "-I/usr/local/cuda/bin/../targets/x86_64-linux/include"    -U "__CUDA_NO_HALF_OPERATORS__" -U "__CUDA_NO_HALF_CONVERSIONS__" -U "__CUDA_NO_HALF2_OPERATORS__" -U "__CUDA_NO_BFLOAT16_CONVERSIONS__" -D__CUDACC_VER_MAJOR__=11 -D__CUDACC_VER_MINOR__=8 -D__CUDACC_VER_BUILD__=89 -D__CUDA_API_VER_MAJOR__=11 -D__CUDA_API_VER_MINOR__=8 -DCUDA_API_PER_THREAD_DEFAULT_STREAM=1 -D__NVCC_DIAG_PRAGMA_SUPPORT__=1 -include "cuda_runtime.h" -m64 "kernels/flash_fwd_hdim64_bf16_sm80.cu" -o "/tmp/tmpxft_0002a3b3_00000000-7_flash_fwd_hdim64_bf16_sm80.cpp1.ii" 
  #$ cicc --c++17 --gnu_version=90400 --display_error_number --orig_src_file_name "kernels/flash_fwd_hdim64_bf16_sm80.cu" --orig_src_path_name "/hugo/candle/candle-flash-attn/kernels/flash_fwd_hdim64_bf16_sm80.cu" --allow_managed --extended-lambda --relaxed_constexpr   -arch compute_75 -m64 --no-version-ident -ftz=1 -prec_div=0 -prec_sqrt=0 -fmad=1 -fast-math --gen_div_approx_ftz --include_file_name "tmpxft_0002a3b3_00000000-3_flash_fwd_hdim64_bf16_sm80.fatbin.c" -tused --gen_module_id_file --module_id_file_name "/tmp/tmpxft_0002a3b3_00000000-4_flash_fwd_hdim64_bf16_sm80.module_id" --gen_c_file_name "/tmp/tmpxft_0002a3b3_00000000-6_flash_fwd_hdim64_bf16_sm80.cudafe1.c" --stub_file_name "/tmp/tmpxft_0002a3b3_00000000-6_flash_fwd_hdim64_bf16_sm80.cudafe1.stub.c" --gen_device_file_name "/tmp/tmpxft_0002a3b3_00000000-6_flash_fwd_hdim64_bf16_sm80.cudafe1.gpu"  "/tmp/tmpxft_0002a3b3_00000000-7_flash_fwd_hdim64_bf16_sm80.cpp1.ii" -o "/tmp/tmpxft_0002a3b3_00000000-6_flash_fwd_hdim64_bf16_sm80.ptx"
  kernels/flash_fwd_kernel.h(67): error: static assertion failed
            detected during:
              instantiation of "void flash::write_softmax_to_gmem(const cute::Tensor<Engine0, Layout0> &, cute::Tensor<Engine1, Layout1> &, TiledCopy) [with Engine0=cute::array_aligned<cutlass::half_t, 32UL, 16UL>, Layout0=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2, cute::_1>, cute::_1, cute::_8>, cute::tuple<cute::tuple<cute::_1, cute::_2, cute::constant<int, 0>>, cute::constant<int, 0>, cute::_4>>, Engine1=cute::ViewEngine<cute::gmem_ptr<cutlass::half_t>>, Layout1=cute::Layout<cute::tuple<cute::tuple<cute::_1, cute::_8>, cute::constant<int, 4>, cute::_1>, cute::tuple<cute::tuple<cute::constant<int, 0>, cute::constant<int, 1>>, int, cute::constant<int, 0>>>, TiledCopy=cute::TiledCopy<cute::Copy_Atom<cute::DefaultCopy, cutlass::half_t>, cute::Layout<cute::tuple<cute::tuple<cute::constant<int, 8>, cute::constant<int, 16>>, cute::_8>, cute::tuple<cute::tuple<cute::_128, cute::_1>, cute::_16>>, cute::tuple<cute::constant<int, 16>, cute::constant<int, 64>>>]" 
  (436): here
              instantiation of "void flash::compute_attn_1rowblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &, int, int, int) [with Kernel_traits=Flash_fwd_kernel_traits<224, 64, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<224, 64, 64, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  (630): here
              instantiation of "void flash::compute_attn<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &) [with Kernel_traits=Flash_fwd_kernel_traits<224, 64, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<224, 64, 64, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  kernels/flash_fwd_launch_template.h(14): here
              instantiation of "void flash_fwd_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax>(Flash_fwd_params) [with Kernel_traits=Flash_fwd_kernel_traits<224, 64, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<224, 64, 64, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true]" 
  kernels/flash_fwd_launch_template.h(31): here
              instantiation of "void run_flash_fwd<Kernel_traits,Is_dropout,Is_causal>(Flash_fwd_params &, cudaStream_t) [with Kernel_traits=Flash_fwd_kernel_traits<224, 64, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<224, 64, 64, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true]" 
  kernels/flash_fwd_launch_template.h(211): here
              instantiation of "void run_mha_fwd_hdim224<T>(Flash_fwd_params &, cudaStream_t) [with T=cutlass::half_t]" 
  kernels/flash_fwd_hdim224_fp16_sm80.cu(9): here

  kernels/flash_fwd_kernel.h(67): error: static assertion failed
            detected during:
              instantiation of "void flash::write_softmax_to_gmem(const cute::Tensor<Engine0, Layout0> &, cute::Tensor<Engine1, Layout1> &, TiledCopy) [with Engine0=cute::array_aligned<cutlass::half_t, 32UL, 16UL>, Layout0=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2, cute::_1>, cute::_2, cute::constant<int, 4>>, cute::tuple<cute::tuple<cute::_1, cute::_2, cute::constant<int, 0>>, cute::_4, cute::_8>>, Engine1=cute::ViewEngine<cute::gmem_ptr<cutlass::half_t>>, Layout1=cute::Layout<cute::tuple<cute::tuple<cute::_1, cute::_8>, cute::constant<int, 4>, cute::_1>, cute::tuple<cute::tuple<cute::constant<int, 0>, cute::constant<int, 1>>, int, cute::constant<int, 0>>>, TiledCopy=cute::TiledCopy<cute::Copy_Atom<cute::DefaultCopy, cutlass::half_t>, cute::Layout<cute::tuple<cute::tuple<cute::constant<int, 4>, cute::constant<int, 32>>, cute::_8>, cute::tuple<cute::tuple<cute::_256, cute::_1>, cute::constant<int, 32>>>, cute::tuple<cute::constant<int, 32>, cute::constant<int, 32>>>]" 
  (436): here
              instantiation of "void flash::compute_attn_1rowblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &, int, int, int) [with Kernel_traits=Flash_fwd_kernel_traits<160, 128, 32, 4, false, false, cutlass::half_t, Flash_kernel_traits<160, 128, 32, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  (630): here
              instantiation of "void flash::compute_attn<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &) [with Kernel_traits=Flash_fwd_kernel_traits<160, 128, 32, 4, false, false, cutlass::half_t, Flash_kernel_traits<160, 128, 32, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  kernels/flash_fwd_launch_template.h(14): here
              instantiation of "void flash_fwd_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax>(Flash_fwd_params) [with Kernel_traits=Flash_fwd_kernel_traits<160, 128, 32, 4, false, false, cutlass::half_t, Flash_kernel_traits<160, 128, 32, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true]" 
  kernels/flash_fwd_launch_template.h(31): here
              instantiation of "void run_flash_fwd<Kernel_traits,Is_dropout,Is_causal>(Flash_fwd_params &, cudaStream_t) [with Kernel_traits=Flash_fwd_kernel_traits<160, 128, 32, 4, false, false, cutlass::half_t, Flash_kernel_traits<160, 128, 32, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true]" 
  kernels/flash_fwd_launch_template.h(158): here
              instantiation of "void run_mha_fwd_hdim160<T>(Flash_fwd_params &, cudaStream_t) [with T=cutlass::half_t]" 
  kernels/flash_fwd_hdim160_fp16_sm80.cu(9): here

  kernels/flash_fwd_kernel.h(67): error: static assertion failed
            detected during:
              instantiation of "void flash::write_softmax_to_gmem(const cute::Tensor<Engine0, Layout0> &, cute::Tensor<Engine1, Layout1> &, TiledCopy) [with Engine0=cute::array_aligned<cutlass::half_t, 32UL, 16UL>, Layout0=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2, cute::_1>, cute::_2, cute::constant<int, 4>>, cute::tuple<cute::tuple<cute::_1, cute::_2, cute::constant<int, 0>>, cute::_4, cute::_8>>, Engine1=cute::ViewEngine<cute::gmem_ptr<cutlass::half_t>>, Layout1=cute::Layout<cute::tuple<cute::tuple<cute::_1, cute::_8>, cute::constant<int, 4>, cute::_1>, cute::tuple<cute::tuple<cute::constant<int, 0>, cute::constant<int, 1>>, int, cute::constant<int, 0>>>, TiledCopy=cute::TiledCopy<cute::Copy_Atom<cute::DefaultCopy, cutlass::half_t>, cute::Layout<cute::tuple<cute::tuple<cute::constant<int, 4>, cute::constant<int, 32>>, cute::_8>, cute::tuple<cute::tuple<cute::_256, cute::_1>, cute::constant<int, 32>>>, cute::tuple<cute::constant<int, 32>, cute::constant<int, 32>>>]" 
  (436): here
              instantiation of "void flash::compute_attn_1rowblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &, int, int, int) [with Kernel_traits=Flash_fwd_kernel_traits<128, 128, 32, 4, false, false, cutlass::half_t, Flash_kernel_traits<128, 128, 32, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  (630): here
              instantiation of "void flash::compute_attn<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &) [with Kernel_traits=Flash_fwd_kernel_traits<128, 128, 32, 4, false, false, cutlass::half_t, Flash_kernel_traits<128, 128, 32, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  kernels/flash_fwd_launch_template.h(14): here
              instantiation of "void flash_fwd_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax>(Flash_fwd_params) [with Kernel_traits=Flash_fwd_kernel_traits<128, 128, 32, 4, false, false, cutlass::half_t, Flash_kernel_traits<128, 128, 32, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true]" 
  kernels/flash_fwd_launch_template.h(31): here
              instantiation of "void run_flash_fwd<Kernel_traits,Is_dropout,Is_causal>(Flash_fwd_params &, cudaStream_t) [with Kernel_traits=Flash_fwd_kernel_traits<128, 128, 32, 4, false, false, cutlass::half_t, Flash_kernel_traits<128, 128, 32, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true]" 
  kernels/flash_fwd_launch_template.h(121): here
              instantiation of "void run_mha_fwd_hdim128<T>(Flash_fwd_params &, cudaStream_t) [with T=cutlass::half_t]" 
  kernels/flash_fwd_hdim128_fp16_sm80.cu(9): here

  kernels/flash_fwd_kernel.h(67): error: static assertion failed
            detected during:
              instantiation of "void flash::write_softmax_to_gmem(const cute::Tensor<Engine0, Layout0> &, cute::Tensor<Engine1, Layout1> &, TiledCopy) [with Engine0=cute::array_aligned<cutlass::half_t, 32UL, 16UL>, Layout0=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2, cute::_1>, cute::_1, cute::_8>, cute::tuple<cute::tuple<cute::_1, cute::_2, cute::constant<int, 0>>, cute::constant<int, 0>, cute::_4>>, Engine1=cute::ViewEngine<cute::gmem_ptr<cutlass::half_t>>, Layout1=cute::Layout<cute::tuple<cute::tuple<cute::_1, cute::_8>, cute::constant<int, 4>, cute::_1>, cute::tuple<cute::tuple<cute::constant<int, 0>, cute::constant<int, 1>>, int, cute::constant<int, 0>>>, TiledCopy=cute::TiledCopy<cute::Copy_Atom<cute::DefaultCopy, cutlass::half_t>, cute::Layout<cute::tuple<cute::tuple<cute::constant<int, 8>, cute::constant<int, 16>>, cute::_8>, cute::tuple<cute::tuple<cute::_128, cute::_1>, cute::_16>>, cute::tuple<cute::_16, cute::constant<int, 64>>>]" 
  (436): here
              instantiation of "void flash::compute_attn_1rowblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &, int, int, int) [with Kernel_traits=Flash_fwd_kernel_traits<96, 64, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<96, 64, 64, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  (630): here
              instantiation of "void flash::compute_attn<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &) [with Kernel_traits=Flash_fwd_kernel_traits<96, 64, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<96, 64, 64, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  kernels/flash_fwd_launch_template.h(14): here
              instantiation of "void flash_fwd_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax>(Flash_fwd_params) [with Kernel_traits=Flash_fwd_kernel_traits<96, 64, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<96, 64, 64, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true]" 
  kernels/flash_fwd_launch_template.h(31): here
              instantiation of "void run_flash_fwd<Kernel_traits,Is_dropout,Is_causal>(Flash_fwd_params &, cudaStream_t) [with Kernel_traits=Flash_fwd_kernel_traits<96, 64, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<96, 64, 64, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true]" 
  kernels/flash_fwd_launch_template.h(95): here
              instantiation of "void run_mha_fwd_hdim96<T>(Flash_fwd_params &, cudaStream_t) [with T=cutlass::half_t]" 
  kernels/flash_fwd_hdim96_fp16_sm80.cu(9): here

  kernels/flash_fwd_kernel.h(67): error: static assertion failed
            detected during:
              instantiation of "void flash::write_softmax_to_gmem(const cute::Tensor<Engine0, Layout0> &, cute::Tensor<Engine1, Layout1> &, TiledCopy) [with Engine0=cute::array_aligned<cutlass::half_t, 32UL, 16UL>, Layout0=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2, cute::_1>, cute::_1, cute::_8>, cute::tuple<cute::tuple<cute::_1, cute::_2, cute::constant<int, 0>>, cute::constant<int, 0>, cute::_4>>, Engine1=cute::ViewEngine<cute::gmem_ptr<cutlass::half_t>>, Layout1=cute::Layout<cute::tuple<cute::tuple<cute::_1, cute::_8>, cute::constant<int, 4>, cute::_1>, cute::tuple<cute::tuple<cute::constant<int, 0>, cute::constant<int, 1>>, int, cute::constant<int, 0>>>, TiledCopy=cute::TiledCopy<cute::Copy_Atom<cute::DefaultCopy, cutlass::half_t>, cute::Layout<cute::tuple<cute::tuple<cute::constant<int, 8>, cute::constant<int, 32>>, cute::_8>, cute::tuple<cute::tuple<cute::_256, cute::_1>, cute::constant<int, 32>>>, cute::tuple<cute::constant<int, 32>, cute::constant<int, 64>>>]" 
  (436): here
              instantiation of "void flash::compute_attn_1rowblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &, int, int, int) [with Kernel_traits=Flash_fwd_kernel_traits<224, 128, 64, 8, false, false, cutlass::bfloat16_t, Flash_kernel_traits<224, 128, 64, 8, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  (630): here
              instantiation of "void flash::compute_attn<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &) [with Kernel_traits=Flash_fwd_kernel_traits<224, 128, 64, 8, false, false, cutlass::bfloat16_t, Flash_kernel_traits<224, 128, 64, 8, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  kernels/flash_fwd_launch_template.h(14): here
              instantiation of "void flash_fwd_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax>(Flash_fwd_params) [with Kernel_traits=Flash_fwd_kernel_traits<224, 128, 64, 8, false, false, cutlass::bfloat16_t, Flash_kernel_traits<224, 128, 64, 8, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true]" 
  kernels/flash_fwd_launch_template.h(31): here
              instantiation of "void run_flash_fwd<Kernel_traits,Is_dropout,Is_causal>(Flash_fwd_params &, cudaStream_t) [with Kernel_traits=Flash_fwd_kernel_traits<224, 128, 64, 8, false, false, cutlass::bfloat16_t, Flash_kernel_traits<224, 128, 64, 8, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true]" 
  kernels/flash_fwd_launch_template.h(211): here
              instantiation of "void run_mha_fwd_hdim224<T>(Flash_fwd_params &, cudaStream_t) [with T=cutlass::bfloat16_t]" 
  kernels/flash_fwd_hdim224_bf16_sm80.cu(9): here

  2 errors detected in the compilation of "kernels/flash_fwd_hdim224_fp16_sm80.cu".
  # --error 0x1 --
  thread '<unnamed>' panicked at /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bindgen_cuda-0.1.1/src/lib.rs:220:21:
  nvcc error while executing compiling: "nvcc" "--gpu-architecture=sm_75" "-c" "-o" "/hugo/candle/target/release/build/candle-flash-attn-ef9f57105f35b8e6/out/flash_fwd_hdim224_fp16_sm80.o" "--default-stream" "per-thread" "-std=c++17" "-O3" "-U__CUDA_NO_HALF_OPERATORS__" "-U__CUDA_NO_HALF_CONVERSIONS__" "-U__CUDA_NO_HALF2_OPERATORS__" "-U__CUDA_NO_BFLOAT16_CONVERSIONS__" "-Icutlass/include" "--expt-relaxed-constexpr" "--expt-extended-lambda" "--use_fast_math" "--verbose" "kernels/flash_fwd_hdim224_fp16_sm80.cu"

  # stdout


  # stderr

  #$ _NVVM_BRANCH_=nvvm
  #$ _SPACE_= 
  #$ _CUDART_=cudart
  #$ _HERE_=/usr/local/cuda/bin
  #$ _THERE_=/usr/local/cuda/bin
  #$ _TARGET_SIZE_=
  #$ _TARGET_DIR_=
  #$ _TARGET_DIR_=targets/x86_64-linux
  #$ TOP=/usr/local/cuda/bin/..
  #$ NVVMIR_LIBRARY_DIR=/usr/local/cuda/bin/../nvvm/libdevice
  #$ LD_LIBRARY_PATH=/usr/local/cuda/bin/../lib:/hugo/candle/target/release/deps:/hugo/candle/target/release:/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib:/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
  #$ PATH=/usr/local/cuda/bin/../nvvm/bin:/usr/local/cuda/bin:/root/.cargo/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  #$ INCLUDES="-I/usr/local/cuda/bin/../targets/x86_64-linux/include"  
  #$ LIBRARIES=  "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib/stubs" "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib"
  #$ CUDAFE_FLAGS=
  #$ PTXAS_FLAGS=
  #$ gcc -std=c++17 -D__CUDA_ARCH__=750 -D__CUDA_ARCH_LIST__=750 -E -x c++  -DCUDA_DOUBLE_MATH_FUNCTIONS -D__CUDACC__ -D__NVCC__ -D__CUDACC_EXTENDED_LAMBDA__ -D__CUDACC_RELAXED_CONSTEXPR__  -O3 -I"cutlass/include" "-I/usr/local/cuda/bin/../targets/x86_64-linux/include"    -U "__CUDA_NO_HALF_OPERATORS__" -U "__CUDA_NO_HALF_CONVERSIONS__" -U "__CUDA_NO_HALF2_OPERATORS__" -U "__CUDA_NO_BFLOAT16_CONVERSIONS__" -D__CUDACC_VER_MAJOR__=11 -D__CUDACC_VER_MINOR__=8 -D__CUDACC_VER_BUILD__=89 -D__CUDA_API_VER_MAJOR__=11 -D__CUDA_API_VER_MINOR__=8 -DCUDA_API_PER_THREAD_DEFAULT_STREAM=1 -D__NVCC_DIAG_PRAGMA_SUPPORT__=1 -include "cuda_runtime.h" -m64 "kernels/flash_fwd_hdim256_fp16_sm80.cu" -o "/tmp/tmpxft_0002a3bb_00000000-7_flash_fwd_hdim256_fp16_sm80.cpp1.ii" 
  #$ cicc --c++17 --gnu_version=90400 --display_error_number --orig_src_file_name "kernels/flash_fwd_hdim256_fp16_sm80.cu" --orig_src_path_name "/hugo/candle/candle-flash-attn/kernels/flash_fwd_hdim256_fp16_sm80.cu" --allow_managed --extended-lambda --relaxed_constexpr   -arch compute_75 -m64 --no-version-ident -ftz=1 -prec_div=0 -prec_sqrt=0 -fmad=1 -fast-math --gen_div_approx_ftz --include_file_name "tmpxft_0002a3bb_00000000-3_flash_fwd_hdim256_fp16_sm80.fatbin.c" -tused --gen_module_id_file --module_id_file_name "/tmp/tmpxft_0002a3bb_00000000-4_flash_fwd_hdim256_fp16_sm80.module_id" --gen_c_file_name "/tmp/tmpxft_0002a3bb_00000000-6_flash_fwd_hdim256_fp16_sm80.cudafe1.c" --stub_file_name "/tmp/tmpxft_0002a3bb_00000000-6_flash_fwd_hdim256_fp16_sm80.cudafe1.stub.c" --gen_device_file_name "/tmp/tmpxft_0002a3bb_00000000-6_flash_fwd_hdim256_fp16_sm80.cudafe1.gpu"  "/tmp/tmpxft_0002a3bb_00000000-7_flash_fwd_hdim256_fp16_sm80.cpp1.ii" -o "/tmp/tmpxft_0002a3bb_00000000-6_flash_fwd_hdim256_fp16_sm80.ptx"
  kernels/flash_fwd_kernel.h(67): error: static assertion failed
            detected during:
              instantiation of "void flash::write_softmax_to_gmem(const cute::Tensor<Engine0, Layout0> &, cute::Tensor<Engine1, Layout1> &, TiledCopy) [with Engine0=cute::array_aligned<cutlass::half_t, 32UL, 16UL>, Layout0=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2, cute::_1>, cute::_1, cute::_8>, cute::tuple<cute::tuple<cute::_1, cute::_2, cute::constant<int, 0>>, cute::constant<int, 0>, cute::_4>>, Engine1=cute::ViewEngine<cute::gmem_ptr<cutlass::half_t>>, Layout1=cute::Layout<cute::tuple<cute::tuple<cute::_1, cute::_8>, cute::constant<int, 4>, cute::_1>, cute::tuple<cute::tuple<cute::constant<int, 0>, cute::constant<int, 1>>, int, cute::constant<int, 0>>>, TiledCopy=cute::TiledCopy<cute::Copy_Atom<cute::DefaultCopy, cutlass::half_t>, cute::Layout<cute::tuple<cute::tuple<cute::constant<int, 8>, cute::constant<int, 32>>, cute::_8>, cute::tuple<cute::tuple<cute::_256, cute::_1>, cute::constant<int, 32>>>, cute::tuple<cute::constant<int, 32>, cute::constant<int, 64>>>]" 
  (436): here
              instantiation of "void flash::compute_attn_1rowblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &, int, int, int) [with Kernel_traits=Flash_fwd_kernel_traits<160, 128, 64, 8, false, false, cutlass::half_t, Flash_kernel_traits<160, 128, 64, 8, cutlass::half_t>>, Is_dropout=true, Is_causal=false, Is_local=true, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  (630): here
              instantiation of "void flash::compute_attn<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &) [with Kernel_traits=Flash_fwd_kernel_traits<160, 128, 64, 8, false, false, cutlass::half_t, Flash_kernel_traits<160, 128, 64, 8, cutlass::half_t>>, Is_dropout=true, Is_causal=false, Is_local=true, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  kernels/flash_fwd_launch_template.h(14): here
              instantiation of "void flash_fwd_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax>(Flash_fwd_params) [with Kernel_traits=Flash_fwd_kernel_traits<160, 128, 64, 8, false, false, cutlass::half_t, Flash_kernel_traits<160, 128, 64, 8, cutlass::half_t>>, Is_dropout=true, Is_causal=false, Is_local=true, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true]" 
  kernels/flash_fwd_launch_template.h(31): here
              instantiation of "void run_flash_fwd<Kernel_traits,Is_dropout,Is_causal>(Flash_fwd_params &, cudaStream_t) [with Kernel_traits=Flash_fwd_kernel_traits<160, 128, 64, 8, false, false, cutlass::half_t, Flash_kernel_traits<160, 128, 64, 8, cutlass::half_t>>, Is_dropout=true, Is_causal=false]" 
  kernels/flash_fwd_launch_template.h(158): here
              instantiation of "void run_mha_fwd_hdim160<T>(Flash_fwd_params &, cudaStream_t) [with T=cutlass::half_t]" 
  kernels/flash_fwd_hdim160_fp16_sm80.cu(9): here

  kernels/flash_fwd_kernel.h(67): error: static assertion failed
            detected during:
              instantiation of "void flash::write_softmax_to_gmem(const cute::Tensor<Engine0, Layout0> &, cute::Tensor<Engine1, Layout1> &, TiledCopy) [with Engine0=cute::array_aligned<cutlass::half_t, 32UL, 16UL>, Layout0=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2, cute::_1>, cute::_1, cute::_8>, cute::tuple<cute::tuple<cute::_1, cute::_2, cute::constant<int, 0>>, cute::constant<int, 0>, cute::_4>>, Engine1=cute::ViewEngine<cute::gmem_ptr<cutlass::half_t>>, Layout1=cute::Layout<cute::tuple<cute::tuple<cute::_1, cute::_8>, cute::constant<int, 4>, cute::_1>, cute::tuple<cute::tuple<cute::constant<int, 0>, cute::constant<int, 1>>, int, cute::constant<int, 0>>>, TiledCopy=cute::TiledCopy<cute::Copy_Atom<cute::DefaultCopy, cutlass::half_t>, cute::Layout<cute::tuple<cute::tuple<cute::constant<int, 8>, cute::constant<int, 16>>, cute::_8>, cute::tuple<cute::tuple<cute::_128, cute::_1>, cute::_16>>, cute::tuple<cute::_16, cute::constant<int, 64>>>]" 
  (436): here
              instantiation of "void flash::compute_attn_1rowblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &, int, int, int) [with Kernel_traits=Flash_fwd_kernel_traits<128, 64, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<128, 64, 64, 4, cutlass::half_t>>, Is_dropout=false, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=false, Params=Flash_fwd_params]" 
  (630): here
              instantiation of "void flash::compute_attn<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &) [with Kernel_traits=Flash_fwd_kernel_traits<128, 64, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<128, 64, 64, 4, cutlass::half_t>>, Is_dropout=false, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=false, Params=Flash_fwd_params]" 
  kernels/flash_fwd_launch_template.h(14): here
              instantiation of "void flash_fwd_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax>(Flash_fwd_params) [with Kernel_traits=Flash_fwd_kernel_traits<128, 64, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<128, 64, 64, 4, cutlass::half_t>>, Is_dropout=false, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=false]" 
  kernels/flash_fwd_launch_template.h(31): here
              instantiation of "void run_flash_fwd<Kernel_traits,Is_dropout,Is_causal>(Flash_fwd_params &, cudaStream_t) [with Kernel_traits=Flash_fwd_kernel_traits<128, 64, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<128, 64, 64, 4, cutlass::half_t>>, Is_dropout=false, Is_causal=true]" 
  kernels/flash_fwd_launch_template.h(121): here
              instantiation of "void run_mha_fwd_hdim128<T>(Flash_fwd_params &, cudaStream_t) [with T=cutlass::half_t]" 
  kernels/flash_fwd_hdim128_fp16_sm80.cu(9): here

  3 errors detected in the compilation of "kernels/flash_fwd_hdim160_fp16_sm80.cu".
  kernels/flash_fwd_kernel.h(67): error: static assertion failed
            detected during:
              instantiation of "void flash::write_softmax_to_gmem(const cute::Tensor<Engine0, Layout0> &, cute::Tensor<Engine1, Layout1> &, TiledCopy) [with Engine0=cute::array_aligned<cutlass::half_t, 64UL, 16UL>, Layout0=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2, cute::_1>, cute::_2, cute::_8>, cute::tuple<cute::tuple<cute::_1, cute::_2, cute::constant<int, 0>>, cute::_4, cute::_8>>, Engine1=cute::ViewEngine<cute::gmem_ptr<cutlass::half_t>>, Layout1=cute::Layout<cute::tuple<cute::tuple<cute::_1, cute::_8>, cute::_8, cute::_1>, cute::tuple<cute::tuple<cute::constant<int, 0>, cute::constant<int, 1>>, int, cute::constant<int, 0>>>, TiledCopy=cute::TiledCopy<cute::Copy_Atom<cute::DefaultCopy, cutlass::half_t>, cute::Layout<cute::tuple<cute::tuple<cute::constant<int, 8>, cute::constant<int, 16>>, cute::_8>, cute::tuple<cute::tuple<cute::_128, cute::_1>, cute::_16>>, cute::tuple<cute::_16, cute::constant<int, 64>>>]" 
  (436): here
              instantiation of "void flash::compute_attn_1rowblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &, int, int, int) [with Kernel_traits=Flash_fwd_kernel_traits<96, 128, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<96, 128, 64, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  (630): here
              instantiation of "void flash::compute_attn<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &) [with Kernel_traits=Flash_fwd_kernel_traits<96, 128, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<96, 128, 64, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  kernels/flash_fwd_launch_template.h(14): here
              instantiation of "void flash_fwd_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax>(Flash_fwd_params) [with Kernel_traits=Flash_fwd_kernel_traits<96, 128, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<96, 128, 64, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true]" 
  kernels/flash_fwd_launch_template.h(31): here
              instantiation of "void run_flash_fwd<Kernel_traits,Is_dropout,Is_causal>(Flash_fwd_params &, cudaStream_t) [with Kernel_traits=Flash_fwd_kernel_traits<96, 128, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<96, 128, 64, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true]" 
  kernels/flash_fwd_launch_template.h(95): here
              instantiation of "void run_mha_fwd_hdim96<T>(Flash_fwd_params &, cudaStream_t) [with T=cutlass::half_t]" 
  kernels/flash_fwd_hdim96_fp16_sm80.cu(9): here

  # --error 0x1 --
  thread '<unnamed>' panicked at /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bindgen_cuda-0.1.1/src/lib.rs:220:21:
  nvcc error while executing compiling: "nvcc" "--gpu-architecture=sm_75" "-c" "-o" "/hugo/candle/target/release/build/candle-flash-attn-ef9f57105f35b8e6/out/flash_fwd_hdim160_fp16_sm80.o" "--default-stream" "per-thread" "-std=c++17" "-O3" "-U__CUDA_NO_HALF_OPERATORS__" "-U__CUDA_NO_HALF_CONVERSIONS__" "-U__CUDA_NO_HALF2_OPERATORS__" "-U__CUDA_NO_BFLOAT16_CONVERSIONS__" "-Icutlass/include" "--expt-relaxed-constexpr" "--expt-extended-lambda" "--use_fast_math" "--verbose" "kernels/flash_fwd_hdim160_fp16_sm80.cu"

  # stdout


  # stderr

  #$ _NVVM_BRANCH_=nvvm
  #$ _SPACE_= 
  #$ _CUDART_=cudart
  #$ _HERE_=/usr/local/cuda/bin
  #$ _THERE_=/usr/local/cuda/bin
  #$ _TARGET_SIZE_=
  #$ _TARGET_DIR_=
  #$ _TARGET_DIR_=targets/x86_64-linux
  #$ TOP=/usr/local/cuda/bin/..
  #$ NVVMIR_LIBRARY_DIR=/usr/local/cuda/bin/../nvvm/libdevice
  #$ LD_LIBRARY_PATH=/usr/local/cuda/bin/../lib:/hugo/candle/target/release/deps:/hugo/candle/target/release:/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib:/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
  #$ PATH=/usr/local/cuda/bin/../nvvm/bin:/usr/local/cuda/bin:/root/.cargo/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  #$ INCLUDES="-I/usr/local/cuda/bin/../targets/x86_64-linux/include"  
  #$ LIBRARIES=  "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib/stubs" "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib"
  #$ CUDAFE_FLAGS=
  #$ PTXAS_FLAGS=
  #$ gcc -std=c++17 -D__CUDA_ARCH__=750 -D__CUDA_ARCH_LIST__=750 -E -x c++  -DCUDA_DOUBLE_MATH_FUNCTIONS -D__CUDACC__ -D__NVCC__ -D__CUDACC_EXTENDED_LAMBDA__ -D__CUDACC_RELAXED_CONSTEXPR__  -O3 -I"cutlass/include" "-I/usr/local/cuda/bin/../targets/x86_64-linux/include"    -U "__CUDA_NO_HALF_OPERATORS__" -U "__CUDA_NO_HALF_CONVERSIONS__" -U "__CUDA_NO_HALF2_OPERATORS__" -U "__CUDA_NO_BFLOAT16_CONVERSIONS__" -D__CUDACC_VER_MAJOR__=11 -D__CUDACC_VER_MINOR__=8 -D__CUDACC_VER_BUILD__=89 -D__CUDA_API_VER_MAJOR__=11 -D__CUDA_API_VER_MINOR__=8 -DCUDA_API_PER_THREAD_DEFAULT_STREAM=1 -D__NVCC_DIAG_PRAGMA_SUPPORT__=1 -include "cuda_runtime.h" -m64 "kernels/flash_fwd_hdim192_fp16_sm80.cu" -o "/tmp/tmpxft_0002a3c3_00000000-7_flash_fwd_hdim192_fp16_sm80.cpp1.ii" 
  #$ cicc --c++17 --gnu_version=90400 --display_error_number --orig_src_file_name "kernels/flash_fwd_hdim192_fp16_sm80.cu" --orig_src_path_name "/hugo/candle/candle-flash-attn/kernels/flash_fwd_hdim192_fp16_sm80.cu" --allow_managed --extended-lambda --relaxed_constexpr   -arch compute_75 -m64 --no-version-ident -ftz=1 -prec_div=0 -prec_sqrt=0 -fmad=1 -fast-math --gen_div_approx_ftz --include_file_name "tmpxft_0002a3c3_00000000-3_flash_fwd_hdim192_fp16_sm80.fatbin.c" -tused --gen_module_id_file --module_id_file_name "/tmp/tmpxft_0002a3c3_00000000-4_flash_fwd_hdim192_fp16_sm80.module_id" --gen_c_file_name "/tmp/tmpxft_0002a3c3_00000000-6_flash_fwd_hdim192_fp16_sm80.cudafe1.c" --stub_file_name "/tmp/tmpxft_0002a3c3_00000000-6_flash_fwd_hdim192_fp16_sm80.cudafe1.stub.c" --gen_device_file_name "/tmp/tmpxft_0002a3c3_00000000-6_flash_fwd_hdim192_fp16_sm80.cudafe1.gpu"  "/tmp/tmpxft_0002a3c3_00000000-7_flash_fwd_hdim192_fp16_sm80.cpp1.ii" -o "/tmp/tmpxft_0002a3c3_00000000-6_flash_fwd_hdim192_fp16_sm80.ptx"
  kernels/flash_fwd_kernel.h(67): error: static assertion failed
            detected during:
              instantiation of "void flash::write_softmax_to_gmem(const cute::Tensor<Engine0, Layout0> &, cute::Tensor<Engine1, Layout1> &, TiledCopy) [with Engine0=cute::array_aligned<cutlass::half_t, 64UL, 16UL>, Layout0=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2, cute::_1>, cute::_2, cute::_8>, cute::tuple<cute::tuple<cute::_1, cute::_2, cute::constant<int, 0>>, cute::_4, cute::_8>>, Engine1=cute::ViewEngine<cute::gmem_ptr<cutlass::half_t>>, Layout1=cute::Layout<cute::tuple<cute::tuple<cute::_1, cute::_8>, cute::_8, cute::_1>, cute::tuple<cute::tuple<cute::constant<int, 0>, cute::constant<int, 1>>, int, cute::constant<int, 0>>>, TiledCopy=cute::TiledCopy<cute::Copy_Atom<cute::DefaultCopy, cutlass::half_t>, cute::Layout<cute::tuple<cute::tuple<cute::constant<int, 8>, cute::constant<int, 16>>, cute::_8>, cute::tuple<cute::tuple<cute::_128, cute::_1>, cute::_16>>, cute::tuple<cute::_16, cute::constant<int, 64>>>]" 
  (436): here
              instantiation of "void flash::compute_attn_1rowblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &, int, int, int) [with Kernel_traits=Flash_fwd_kernel_traits<128, 128, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<128, 128, 64, 4, cutlass::half_t>>, Is_dropout=false, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=false, Params=Flash_fwd_params]" 
  (630): here
              instantiation of "void flash::compute_attn<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &) [with Kernel_traits=Flash_fwd_kernel_traits<128, 128, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<128, 128, 64, 4, cutlass::half_t>>, Is_dropout=false, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=false, Params=Flash_fwd_params]" 
  kernels/flash_fwd_launch_template.h(14): here
              instantiation of "void flash_fwd_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax>(Flash_fwd_params) [with Kernel_traits=Flash_fwd_kernel_traits<128, 128, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<128, 128, 64, 4, cutlass::half_t>>, Is_dropout=false, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=false]" 
  kernels/flash_fwd_launch_template.h(31): here
              instantiation of "void run_flash_fwd<Kernel_traits,Is_dropout,Is_causal>(Flash_fwd_params &, cudaStream_t) [with Kernel_traits=Flash_fwd_kernel_traits<128, 128, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<128, 128, 64, 4, cutlass::half_t>>, Is_dropout=false, Is_causal=true]" 
  kernels/flash_fwd_launch_template.h(121): here
              instantiation of "void run_mha_fwd_hdim128<T>(Flash_fwd_params &, cudaStream_t) [with T=cutlass::half_t]" 
  kernels/flash_fwd_hdim128_fp16_sm80.cu(9): here

  kernels/flash_fwd_kernel.h(67): error: static assertion failed
            detected during:
              instantiation of "void flash::write_softmax_to_gmem(const cute::Tensor<Engine0, Layout0> &, cute::Tensor<Engine1, Layout1> &, TiledCopy) [with Engine0=cute::array_aligned<cutlass::half_t, 64UL, 16UL>, Layout0=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2, cute::_1>, cute::_2, cute::_8>, cute::tuple<cute::tuple<cute::_1, cute::_2, cute::constant<int, 0>>, cute::_4, cute::_8>>, Engine1=cute::ViewEngine<cute::gmem_ptr<cutlass::half_t>>, Layout1=cute::Layout<cute::tuple<cute::tuple<cute::_1, cute::_8>, cute::_8, cute::_1>, cute::tuple<cute::tuple<cute::constant<int, 0>, cute::constant<int, 1>>, int, cute::constant<int, 0>>>, TiledCopy=cute::TiledCopy<cute::Copy_Atom<cute::DefaultCopy, cutlass::half_t>, cute::Layout<cute::tuple<cute::tuple<cute::constant<int, 8>, cute::constant<int, 16>>, cute::_8>, cute::tuple<cute::tuple<cute::_128, cute::_1>, cute::constant<int, 16>>>, cute::tuple<cute::_16, cute::constant<int, 64>>>]" 
  (436): here
              instantiation of "void flash::compute_attn_1rowblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &, int, int, int) [with Kernel_traits=Flash_fwd_kernel_traits<64, 128, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<64, 128, 64, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  (630): here
              instantiation of "void flash::compute_attn<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &) [with Kernel_traits=Flash_fwd_kernel_traits<64, 128, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<64, 128, 64, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  kernels/flash_fwd_launch_template.h(14): here
              instantiation of "void flash_fwd_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax>(Flash_fwd_params) [with Kernel_traits=Flash_fwd_kernel_traits<64, 128, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<64, 128, 64, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true]" 
  kernels/flash_fwd_launch_template.h(31): here
              instantiation of "void run_flash_fwd<Kernel_traits,Is_dropout,Is_causal>(Flash_fwd_params &, cudaStream_t) [with Kernel_traits=Flash_fwd_kernel_traits<64, 128, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<64, 128, 64, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true]" 
  kernels/flash_fwd_launch_template.h(71): here
              instantiation of "void run_mha_fwd_hdim64<T>(Flash_fwd_params &, cudaStream_t) [with T=cutlass::bfloat16_t]" 
  kernels/flash_fwd_hdim64_bf16_sm80.cu(9): here

  3 errors detected in the compilation of "kernels/flash_fwd_hdim128_fp16_sm80.cu".
  # --error 0x1 --
  thread '<unnamed>' panicked at /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bindgen_cuda-0.1.1/src/lib.rs:220:21:
  nvcc error while executing compiling: "nvcc" "--gpu-architecture=sm_75" "-c" "-o" "/hugo/candle/target/release/build/candle-flash-attn-ef9f57105f35b8e6/out/flash_fwd_hdim128_fp16_sm80.o" "--default-stream" "per-thread" "-std=c++17" "-O3" "-U__CUDA_NO_HALF_OPERATORS__" "-U__CUDA_NO_HALF_CONVERSIONS__" "-U__CUDA_NO_HALF2_OPERATORS__" "-U__CUDA_NO_BFLOAT16_CONVERSIONS__" "-Icutlass/include" "--expt-relaxed-constexpr" "--expt-extended-lambda" "--use_fast_math" "--verbose" "kernels/flash_fwd_hdim128_fp16_sm80.cu"

  # stdout


  # stderr

  #$ _NVVM_BRANCH_=nvvm
  #$ _SPACE_= 
  #$ _CUDART_=cudart
  #$ _HERE_=/usr/local/cuda/bin
  #$ _THERE_=/usr/local/cuda/bin
  #$ _TARGET_SIZE_=
  #$ _TARGET_DIR_=
  #$ _TARGET_DIR_=targets/x86_64-linux
  #$ TOP=/usr/local/cuda/bin/..
  #$ NVVMIR_LIBRARY_DIR=/usr/local/cuda/bin/../nvvm/libdevice
  #$ LD_LIBRARY_PATH=/usr/local/cuda/bin/../lib:/hugo/candle/target/release/deps:/hugo/candle/target/release:/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib:/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
  #$ PATH=/usr/local/cuda/bin/../nvvm/bin:/usr/local/cuda/bin:/root/.cargo/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  #$ INCLUDES="-I/usr/local/cuda/bin/../targets/x86_64-linux/include"  
  #$ LIBRARIES=  "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib/stubs" "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib"
  #$ CUDAFE_FLAGS=
  #$ PTXAS_FLAGS=
  #$ gcc -std=c++17 -D__CUDA_ARCH__=750 -D__CUDA_ARCH_LIST__=750 -E -x c++  -DCUDA_DOUBLE_MATH_FUNCTIONS -D__CUDACC__ -D__NVCC__ -D__CUDACC_EXTENDED_LAMBDA__ -D__CUDACC_RELAXED_CONSTEXPR__  -O3 -I"cutlass/include" "-I/usr/local/cuda/bin/../targets/x86_64-linux/include"    -U "__CUDA_NO_HALF_OPERATORS__" -U "__CUDA_NO_HALF_CONVERSIONS__" -U "__CUDA_NO_HALF2_OPERATORS__" -U "__CUDA_NO_BFLOAT16_CONVERSIONS__" -D__CUDACC_VER_MAJOR__=11 -D__CUDACC_VER_MINOR__=8 -D__CUDACC_VER_BUILD__=89 -D__CUDA_API_VER_MAJOR__=11 -D__CUDA_API_VER_MINOR__=8 -DCUDA_API_PER_THREAD_DEFAULT_STREAM=1 -D__NVCC_DIAG_PRAGMA_SUPPORT__=1 -include "cuda_runtime.h" -m64 "kernels/flash_fwd_hdim160_bf16_sm80.cu" -o "/tmp/tmpxft_0002a3cb_00000000-7_flash_fwd_hdim160_bf16_sm80.cpp1.ii" 
  #$ cicc --c++17 --gnu_version=90400 --display_error_number --orig_src_file_name "kernels/flash_fwd_hdim160_bf16_sm80.cu" --orig_src_path_name "/hugo/candle/candle-flash-attn/kernels/flash_fwd_hdim160_bf16_sm80.cu" --allow_managed --extended-lambda --relaxed_constexpr   -arch compute_75 -m64 --no-version-ident -ftz=1 -prec_div=0 -prec_sqrt=0 -fmad=1 -fast-math --gen_div_approx_ftz --include_file_name "tmpxft_0002a3cb_00000000-3_flash_fwd_hdim160_bf16_sm80.fatbin.c" -tused --gen_module_id_file --module_id_file_name "/tmp/tmpxft_0002a3cb_00000000-4_flash_fwd_hdim160_bf16_sm80.module_id" --gen_c_file_name "/tmp/tmpxft_0002a3cb_00000000-6_flash_fwd_hdim160_bf16_sm80.cudafe1.c" --stub_file_name "/tmp/tmpxft_0002a3cb_00000000-6_flash_fwd_hdim160_bf16_sm80.cudafe1.stub.c" --gen_device_file_name "/tmp/tmpxft_0002a3cb_00000000-6_flash_fwd_hdim160_bf16_sm80.cudafe1.gpu"  "/tmp/tmpxft_0002a3cb_00000000-7_flash_fwd_hdim160_bf16_sm80.cpp1.ii" -o "/tmp/tmpxft_0002a3cb_00000000-6_flash_fwd_hdim160_bf16_sm80.ptx"
  kernels/flash_fwd_kernel.h(67): error: static assertion failed
            detected during:
              instantiation of "void flash::write_softmax_to_gmem(const cute::Tensor<Engine0, Layout0> &, cute::Tensor<Engine1, Layout1> &, TiledCopy) [with Engine0=cute::array_aligned<cutlass::half_t, 32UL, 16UL>, Layout0=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2, cute::_1>, cute::_1, cute::_8>, cute::tuple<cute::tuple<cute::_1, cute::_2, cute::constant<int, 0>>, cute::constant<int, 0>, cute::_4>>, Engine1=cute::ViewEngine<cute::gmem_ptr<cutlass::half_t>>, Layout1=cute::Layout<cute::tuple<cute::tuple<cute::_1, cute::_8>, cute::constant<int, 4>, cute::_1>, cute::tuple<cute::tuple<cute::constant<int, 0>, cute::constant<int, 1>>, int, cute::constant<int, 0>>>, TiledCopy=cute::TiledCopy<cute::Copy_Atom<cute::DefaultCopy, cutlass::half_t>, cute::Layout<cute::tuple<cute::tuple<cute::constant<int, 8>, cute::constant<int, 16>>, cute::_8>, cute::tuple<cute::tuple<cute::_128, cute::_1>, cute::_16>>, cute::tuple<cute::constant<int, 16>, cute::constant<int, 64>>>]" 
  (436): here
              instantiation of "void flash::compute_attn_1rowblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &, int, int, int) [with Kernel_traits=Flash_fwd_kernel_traits<224, 64, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<224, 64, 64, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  (630): here
              instantiation of "void flash::compute_attn<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &) [with Kernel_traits=Flash_fwd_kernel_traits<224, 64, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<224, 64, 64, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  kernels/flash_fwd_launch_template.h(14): here
              instantiation of "void flash_fwd_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax>(Flash_fwd_params) [with Kernel_traits=Flash_fwd_kernel_traits<224, 64, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<224, 64, 64, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true]" 
  kernels/flash_fwd_launch_template.h(31): here
              instantiation of "void run_flash_fwd<Kernel_traits,Is_dropout,Is_causal>(Flash_fwd_params &, cudaStream_t) [with Kernel_traits=Flash_fwd_kernel_traits<224, 64, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<224, 64, 64, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true]" 
  kernels/flash_fwd_launch_template.h(211): here
              instantiation of "void run_mha_fwd_hdim224<T>(Flash_fwd_params &, cudaStream_t) [with T=cutlass::bfloat16_t]" 
  kernels/flash_fwd_hdim224_bf16_sm80.cu(9): here

  2 errors detected in the compilation of "kernels/flash_fwd_hdim96_fp16_sm80.cu".
  # --error 0x1 --
  thread '<unnamed>' panicked at /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bindgen_cuda-0.1.1/src/lib.rs:220:21:
  nvcc error while executing compiling: "nvcc" "--gpu-architecture=sm_75" "-c" "-o" "/hugo/candle/target/release/build/candle-flash-attn-ef9f57105f35b8e6/out/flash_fwd_hdim96_fp16_sm80.o" "--default-stream" "per-thread" "-std=c++17" "-O3" "-U__CUDA_NO_HALF_OPERATORS__" "-U__CUDA_NO_HALF_CONVERSIONS__" "-U__CUDA_NO_HALF2_OPERATORS__" "-U__CUDA_NO_BFLOAT16_CONVERSIONS__" "-Icutlass/include" "--expt-relaxed-constexpr" "--expt-extended-lambda" "--use_fast_math" "--verbose" "kernels/flash_fwd_hdim96_fp16_sm80.cu"

  # stdout


  # stderr

  #$ _NVVM_BRANCH_=nvvm
  #$ _SPACE_= 
  #$ _CUDART_=cudart
  #$ _HERE_=/usr/local/cuda/bin
  #$ _THERE_=/usr/local/cuda/bin
  #$ _TARGET_SIZE_=
  #$ _TARGET_DIR_=
  #$ _TARGET_DIR_=targets/x86_64-linux
  #$ TOP=/usr/local/cuda/bin/..
  #$ NVVMIR_LIBRARY_DIR=/usr/local/cuda/bin/../nvvm/libdevice
  #$ LD_LIBRARY_PATH=/usr/local/cuda/bin/../lib:/hugo/candle/target/release/deps:/hugo/candle/target/release:/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib:/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
  #$ PATH=/usr/local/cuda/bin/../nvvm/bin:/usr/local/cuda/bin:/root/.cargo/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  #$ INCLUDES="-I/usr/local/cuda/bin/../targets/x86_64-linux/include"  
  #$ LIBRARIES=  "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib/stubs" "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib"
  #$ CUDAFE_FLAGS=
  #$ PTXAS_FLAGS=
  #$ gcc -std=c++17 -D__CUDA_ARCH__=750 -D__CUDA_ARCH_LIST__=750 -E -x c++  -DCUDA_DOUBLE_MATH_FUNCTIONS -D__CUDACC__ -D__NVCC__ -D__CUDACC_EXTENDED_LAMBDA__ -D__CUDACC_RELAXED_CONSTEXPR__  -O3 -I"cutlass/include" "-I/usr/local/cuda/bin/../targets/x86_64-linux/include"    -U "__CUDA_NO_HALF_OPERATORS__" -U "__CUDA_NO_HALF_CONVERSIONS__" -U "__CUDA_NO_HALF2_OPERATORS__" -U "__CUDA_NO_BFLOAT16_CONVERSIONS__" -D__CUDACC_VER_MAJOR__=11 -D__CUDACC_VER_MINOR__=8 -D__CUDACC_VER_BUILD__=89 -D__CUDA_API_VER_MAJOR__=11 -D__CUDA_API_VER_MINOR__=8 -DCUDA_API_PER_THREAD_DEFAULT_STREAM=1 -D__NVCC_DIAG_PRAGMA_SUPPORT__=1 -include "cuda_runtime.h" -m64 "kernels/flash_fwd_hdim128_bf16_sm80.cu" -o "/tmp/tmpxft_0002a3d3_00000000-7_flash_fwd_hdim128_bf16_sm80.cpp1.ii" 
  #$ cicc --c++17 --gnu_version=90400 --display_error_number --orig_src_file_name "kernels/flash_fwd_hdim128_bf16_sm80.cu" --orig_src_path_name "/hugo/candle/candle-flash-attn/kernels/flash_fwd_hdim128_bf16_sm80.cu" --allow_managed --extended-lambda --relaxed_constexpr   -arch compute_75 -m64 --no-version-ident -ftz=1 -prec_div=0 -prec_sqrt=0 -fmad=1 -fast-math --gen_div_approx_ftz --include_file_name "tmpxft_0002a3d3_00000000-3_flash_fwd_hdim128_bf16_sm80.fatbin.c" -tused --gen_module_id_file --module_id_file_name "/tmp/tmpxft_0002a3d3_00000000-4_flash_fwd_hdim128_bf16_sm80.module_id" --gen_c_file_name "/tmp/tmpxft_0002a3d3_00000000-6_flash_fwd_hdim128_bf16_sm80.cudafe1.c" --stub_file_name "/tmp/tmpxft_0002a3d3_00000000-6_flash_fwd_hdim128_bf16_sm80.cudafe1.stub.c" --gen_device_file_name "/tmp/tmpxft_0002a3d3_00000000-6_flash_fwd_hdim128_bf16_sm80.cudafe1.gpu"  "/tmp/tmpxft_0002a3d3_00000000-7_flash_fwd_hdim128_bf16_sm80.cpp1.ii" -o "/tmp/tmpxft_0002a3d3_00000000-6_flash_fwd_hdim128_bf16_sm80.ptx"
  2 errors detected in the compilation of "kernels/flash_fwd_hdim224_bf16_sm80.cu".
  # --error 0x1 --
  thread '<unnamed>' panicked at /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bindgen_cuda-0.1.1/src/lib.rs:220:21:
  nvcc error while executing compiling: "nvcc" "--gpu-architecture=sm_75" "-c" "-o" "/hugo/candle/target/release/build/candle-flash-attn-ef9f57105f35b8e6/out/flash_fwd_hdim224_bf16_sm80.o" "--default-stream" "per-thread" "-std=c++17" "-O3" "-U__CUDA_NO_HALF_OPERATORS__" "-U__CUDA_NO_HALF_CONVERSIONS__" "-U__CUDA_NO_HALF2_OPERATORS__" "-U__CUDA_NO_BFLOAT16_CONVERSIONS__" "-Icutlass/include" "--expt-relaxed-constexpr" "--expt-extended-lambda" "--use_fast_math" "--verbose" "kernels/flash_fwd_hdim224_bf16_sm80.cu"

  # stdout


  # stderr

  #$ _NVVM_BRANCH_=nvvm
  #$ _SPACE_= 
  #$ _CUDART_=cudart
  #$ _HERE_=/usr/local/cuda/bin
  #$ _THERE_=/usr/local/cuda/bin
  #$ _TARGET_SIZE_=
  #$ _TARGET_DIR_=
  #$ _TARGET_DIR_=targets/x86_64-linux
  #$ TOP=/usr/local/cuda/bin/..
  #$ NVVMIR_LIBRARY_DIR=/usr/local/cuda/bin/../nvvm/libdevice
  #$ LD_LIBRARY_PATH=/usr/local/cuda/bin/../lib:/hugo/candle/target/release/deps:/hugo/candle/target/release:/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib:/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
  #$ PATH=/usr/local/cuda/bin/../nvvm/bin:/usr/local/cuda/bin:/root/.cargo/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  #$ INCLUDES="-I/usr/local/cuda/bin/../targets/x86_64-linux/include"  
  #$ LIBRARIES=  "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib/stubs" "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib"
  #$ CUDAFE_FLAGS=
  #$ PTXAS_FLAGS=
  #$ gcc -std=c++17 -D__CUDA_ARCH__=750 -D__CUDA_ARCH_LIST__=750 -E -x c++  -DCUDA_DOUBLE_MATH_FUNCTIONS -D__CUDACC__ -D__NVCC__ -D__CUDACC_EXTENDED_LAMBDA__ -D__CUDACC_RELAXED_CONSTEXPR__  -O3 -I"cutlass/include" "-I/usr/local/cuda/bin/../targets/x86_64-linux/include"    -U "__CUDA_NO_HALF_OPERATORS__" -U "__CUDA_NO_HALF_CONVERSIONS__" -U "__CUDA_NO_HALF2_OPERATORS__" -U "__CUDA_NO_BFLOAT16_CONVERSIONS__" -D__CUDACC_VER_MAJOR__=11 -D__CUDACC_VER_MINOR__=8 -D__CUDACC_VER_BUILD__=89 -D__CUDA_API_VER_MAJOR__=11 -D__CUDA_API_VER_MINOR__=8 -DCUDA_API_PER_THREAD_DEFAULT_STREAM=1 -D__NVCC_DIAG_PRAGMA_SUPPORT__=1 -include "cuda_runtime.h" -m64 "kernels/flash_fwd_hdim256_bf16_sm80.cu" -o "/tmp/tmpxft_0002a3db_00000000-7_flash_fwd_hdim256_bf16_sm80.cpp1.ii" 
  kernels/flash_fwd_kernel.h(67): error: static assertion failed
            detected during:
              instantiation of "void flash::write_softmax_to_gmem(const cute::Tensor<Engine0, Layout0> &, cute::Tensor<Engine1, Layout1> &, TiledCopy) [with Engine0=cute::array_aligned<cutlass::half_t, 128UL, 16UL>, Layout0=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2, cute::_1>, cute::_2, cute::_16>, cute::tuple<cute::tuple<cute::_1, cute::_2, cute::constant<int, 0>>, cute::_4, cute::_8>>, Engine1=cute::ViewEngine<cute::gmem_ptr<cutlass::half_t>>, Layout1=cute::Layout<cute::tuple<cute::tuple<cute::_1, cute::_8>, cute::_16, cute::_1>, cute::tuple<cute::tuple<cute::constant<int, 0>, cute::constant<int, 1>>, int, cute::constant<int, 0>>>, TiledCopy=cute::TiledCopy<cute::Copy_Atom<cute::DefaultCopy, cutlass::half_t>, cute::Layout<cute::tuple<cute::tuple<cute::_16, cute::_8>, cute::_8>, cute::tuple<cute::tuple<cute::_64, cute::_1>, cute::_8>>, cute::tuple<cute::_8, cute::_128>>]" 
  (436): here
              instantiation of "void flash::compute_attn_1rowblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &, int, int, int) [with Kernel_traits=Flash_fwd_kernel_traits<64, 128, 128, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<64, 128, 128, 4, cutlass::bfloat16_t>>, Is_dropout=false, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=false, Params=Flash_fwd_params]" 
  (630): here
              instantiation of "void flash::compute_attn<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &) [with Kernel_traits=Flash_fwd_kernel_traits<64, 128, 128, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<64, 128, 128, 4, cutlass::bfloat16_t>>, Is_dropout=false, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=false, Params=Flash_fwd_params]" 
  kernels/flash_fwd_launch_template.h(14): here
              instantiation of "void flash_fwd_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax>(Flash_fwd_params) [with Kernel_traits=Flash_fwd_kernel_traits<64, 128, 128, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<64, 128, 128, 4, cutlass::bfloat16_t>>, Is_dropout=false, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=false]" 
  kernels/flash_fwd_launch_template.h(31): here
              instantiation of "void run_flash_fwd<Kernel_traits,Is_dropout,Is_causal>(Flash_fwd_params &, cudaStream_t) [with Kernel_traits=Flash_fwd_kernel_traits<64, 128, 128, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<64, 128, 128, 4, cutlass::bfloat16_t>>, Is_dropout=false, Is_causal=true]" 
  kernels/flash_fwd_launch_template.h(71): here
              instantiation of "void run_mha_fwd_hdim64<T>(Flash_fwd_params &, cudaStream_t) [with T=cutlass::bfloat16_t]" 
  kernels/flash_fwd_hdim64_bf16_sm80.cu(9): here

  #$ cicc --c++17 --gnu_version=90400 --display_error_number --orig_src_file_name "kernels/flash_fwd_hdim256_bf16_sm80.cu" --orig_src_path_name "/hugo/candle/candle-flash-attn/kernels/flash_fwd_hdim256_bf16_sm80.cu" --allow_managed --extended-lambda --relaxed_constexpr   -arch compute_75 -m64 --no-version-ident -ftz=1 -prec_div=0 -prec_sqrt=0 -fmad=1 -fast-math --gen_div_approx_ftz --include_file_name "tmpxft_0002a3db_00000000-3_flash_fwd_hdim256_bf16_sm80.fatbin.c" -tused --gen_module_id_file --module_id_file_name "/tmp/tmpxft_0002a3db_00000000-4_flash_fwd_hdim256_bf16_sm80.module_id" --gen_c_file_name "/tmp/tmpxft_0002a3db_00000000-6_flash_fwd_hdim256_bf16_sm80.cudafe1.c" --stub_file_name "/tmp/tmpxft_0002a3db_00000000-6_flash_fwd_hdim256_bf16_sm80.cudafe1.stub.c" --gen_device_file_name "/tmp/tmpxft_0002a3db_00000000-6_flash_fwd_hdim256_bf16_sm80.cudafe1.gpu"  "/tmp/tmpxft_0002a3db_00000000-7_flash_fwd_hdim256_bf16_sm80.cpp1.ii" -o "/tmp/tmpxft_0002a3db_00000000-6_flash_fwd_hdim256_bf16_sm80.ptx"
  2 errors detected in the compilation of "kernels/flash_fwd_hdim64_bf16_sm80.cu".
  # --error 0x1 --
  thread '<unnamed>' panicked at /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bindgen_cuda-0.1.1/src/lib.rs:220:21:
  nvcc error while executing compiling: "nvcc" "--gpu-architecture=sm_75" "-c" "-o" "/hugo/candle/target/release/build/candle-flash-attn-ef9f57105f35b8e6/out/flash_fwd_hdim64_bf16_sm80.o" "--default-stream" "per-thread" "-std=c++17" "-O3" "-U__CUDA_NO_HALF_OPERATORS__" "-U__CUDA_NO_HALF_CONVERSIONS__" "-U__CUDA_NO_HALF2_OPERATORS__" "-U__CUDA_NO_BFLOAT16_CONVERSIONS__" "-Icutlass/include" "--expt-relaxed-constexpr" "--expt-extended-lambda" "--use_fast_math" "--verbose" "kernels/flash_fwd_hdim64_bf16_sm80.cu"

  # stdout


  # stderr

  #$ _NVVM_BRANCH_=nvvm
  #$ _SPACE_= 
  #$ _CUDART_=cudart
  #$ _HERE_=/usr/local/cuda/bin
  #$ _THERE_=/usr/local/cuda/bin
  #$ _TARGET_SIZE_=
  #$ _TARGET_DIR_=
  #$ _TARGET_DIR_=targets/x86_64-linux
  #$ TOP=/usr/local/cuda/bin/..
  #$ NVVMIR_LIBRARY_DIR=/usr/local/cuda/bin/../nvvm/libdevice
  #$ LD_LIBRARY_PATH=/usr/local/cuda/bin/../lib:/hugo/candle/target/release/deps:/hugo/candle/target/release:/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib:/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
  #$ PATH=/usr/local/cuda/bin/../nvvm/bin:/usr/local/cuda/bin:/root/.cargo/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  #$ INCLUDES="-I/usr/local/cuda/bin/../targets/x86_64-linux/include"  
  #$ LIBRARIES=  "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib/stubs" "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib"
  #$ CUDAFE_FLAGS=
  #$ PTXAS_FLAGS=
  #$ gcc -std=c++17 -D__CUDA_ARCH__=750 -D__CUDA_ARCH_LIST__=750 -E -x c++  -DCUDA_DOUBLE_MATH_FUNCTIONS -D__CUDACC__ -D__NVCC__ -D__CUDACC_EXTENDED_LAMBDA__ -D__CUDACC_RELAXED_CONSTEXPR__  -O3 -I"cutlass/include" "-I/usr/local/cuda/bin/../targets/x86_64-linux/include"    -U "__CUDA_NO_HALF_OPERATORS__" -U "__CUDA_NO_HALF_CONVERSIONS__" -U "__CUDA_NO_HALF2_OPERATORS__" -U "__CUDA_NO_BFLOAT16_CONVERSIONS__" -D__CUDACC_VER_MAJOR__=11 -D__CUDACC_VER_MINOR__=8 -D__CUDACC_VER_BUILD__=89 -D__CUDA_API_VER_MAJOR__=11 -D__CUDA_API_VER_MINOR__=8 -DCUDA_API_PER_THREAD_DEFAULT_STREAM=1 -D__NVCC_DIAG_PRAGMA_SUPPORT__=1 -include "cuda_runtime.h" -m64 "kernels/flash_fwd_hdim96_bf16_sm80.cu" -o "/tmp/tmpxft_0002a3e3_00000000-7_flash_fwd_hdim96_bf16_sm80.cpp1.ii" 
  kernels/flash_fwd_kernel.h(67): error: static assertion failed
            detected during:
              instantiation of "void flash::write_softmax_to_gmem(const cute::Tensor<Engine0, Layout0> &, cute::Tensor<Engine1, Layout1> &, TiledCopy) [with Engine0=cute::array_aligned<cutlass::half_t, 32UL, 16UL>, Layout0=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2, cute::_1>, cute::_1, cute::_8>, cute::tuple<cute::tuple<cute::_1, cute::_2, cute::constant<int, 0>>, cute::constant<int, 0>, cute::_4>>, Engine1=cute::ViewEngine<cute::gmem_ptr<cutlass::half_t>>, Layout1=cute::Layout<cute::tuple<cute::tuple<cute::_1, cute::_8>, cute::_4, cute::_1>, cute::tuple<cute::tuple<cute::constant<int, 0>, cute::constant<int, 1>>, int, cute::constant<int, 0>>>, TiledCopy=cute::TiledCopy<cute::Copy_Atom<cute::DefaultCopy, cutlass::half_t>, cute::Layout<cute::tuple<cute::tuple<cute::constant<int, 8>, cute::constant<int, 32>>, cute::_8>, cute::tuple<cute::tuple<cute::_256, cute::_1>, cute::constant<int, 32>>>, cute::tuple<cute::_32, cute::constant<int, 64>>>]" 
  (436): here
              instantiation of "void flash::compute_attn_1rowblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &, int, int, int) [with Kernel_traits=Flash_fwd_kernel_traits<256, 128, 64, 8, false, false, cutlass::half_t, Flash_kernel_traits<256, 128, 64, 8, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  (630): here
              instantiation of "void flash::compute_attn<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &) [with Kernel_traits=Flash_fwd_kernel_traits<256, 128, 64, 8, false, false, cutlass::half_t, Flash_kernel_traits<256, 128, 64, 8, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  kernels/flash_fwd_launch_template.h(14): here
              instantiation of "void flash_fwd_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax>(Flash_fwd_params) [with Kernel_traits=Flash_fwd_kernel_traits<256, 128, 64, 8, false, false, cutlass::half_t, Flash_kernel_traits<256, 128, 64, 8, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true]" 
  kernels/flash_fwd_launch_template.h(31): here
              instantiation of "void run_flash_fwd<Kernel_traits,Is_dropout,Is_causal>(Flash_fwd_params &, cudaStream_t) [with Kernel_traits=Flash_fwd_kernel_traits<256, 128, 64, 8, false, false, cutlass::half_t, Flash_kernel_traits<256, 128, 64, 8, cutlass::half_t>>, Is_dropout=true, Is_causal=true]" 
  kernels/flash_fwd_launch_template.h(239): here
              instantiation of "void run_mha_fwd_hdim256<T>(Flash_fwd_params &, cudaStream_t) [with T=cutlass::half_t]" 
  kernels/flash_fwd_hdim256_fp16_sm80.cu(9): here

  #$ cicc --c++17 --gnu_version=90400 --display_error_number --orig_src_file_name "kernels/flash_fwd_hdim96_bf16_sm80.cu" --orig_src_path_name "/hugo/candle/candle-flash-attn/kernels/flash_fwd_hdim96_bf16_sm80.cu" --allow_managed --extended-lambda --relaxed_constexpr   -arch compute_75 -m64 --no-version-ident -ftz=1 -prec_div=0 -prec_sqrt=0 -fmad=1 -fast-math --gen_div_approx_ftz --include_file_name "tmpxft_0002a3e3_00000000-3_flash_fwd_hdim96_bf16_sm80.fatbin.c" -tused --gen_module_id_file --module_id_file_name "/tmp/tmpxft_0002a3e3_00000000-4_flash_fwd_hdim96_bf16_sm80.module_id" --gen_c_file_name "/tmp/tmpxft_0002a3e3_00000000-6_flash_fwd_hdim96_bf16_sm80.cudafe1.c" --stub_file_name "/tmp/tmpxft_0002a3e3_00000000-6_flash_fwd_hdim96_bf16_sm80.cudafe1.stub.c" --gen_device_file_name "/tmp/tmpxft_0002a3e3_00000000-6_flash_fwd_hdim96_bf16_sm80.cudafe1.gpu"  "/tmp/tmpxft_0002a3e3_00000000-7_flash_fwd_hdim96_bf16_sm80.cpp1.ii" -o "/tmp/tmpxft_0002a3e3_00000000-6_flash_fwd_hdim96_bf16_sm80.ptx"
  kernels/flash_fwd_kernel.h(67): error: static assertion failed
            detected during:
              instantiation of "void flash::write_softmax_to_gmem(const cute::Tensor<Engine0, Layout0> &, cute::Tensor<Engine1, Layout1> &, TiledCopy) [with Engine0=cute::array_aligned<cutlass::half_t, 32UL, 16UL>, Layout0=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2, cute::_1>, cute::_1, cute::_8>, cute::tuple<cute::tuple<cute::_1, cute::_2, cute::constant<int, 0>>, cute::constant<int, 0>, cute::_4>>, Engine1=cute::ViewEngine<cute::gmem_ptr<cutlass::half_t>>, Layout1=cute::Layout<cute::tuple<cute::tuple<cute::_1, cute::_8>, cute::constant<int, 4>, cute::_1>, cute::tuple<cute::tuple<cute::constant<int, 0>, cute::constant<int, 1>>, int, cute::constant<int, 0>>>, TiledCopy=cute::TiledCopy<cute::Copy_Atom<cute::DefaultCopy, cutlass::half_t>, cute::Layout<cute::tuple<cute::tuple<cute::constant<int, 8>, cute::constant<int, 16>>, cute::_8>, cute::tuple<cute::tuple<cute::_128, cute::_1>, cute::_16>>, cute::tuple<cute::_16, cute::constant<int, 64>>>]" 
  (436): here
              instantiation of "void flash::compute_attn_1rowblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &, int, int, int) [with Kernel_traits=Flash_fwd_kernel_traits<192, 64, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<192, 64, 64, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  (630): here
              instantiation of "void flash::compute_attn<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &) [with Kernel_traits=Flash_fwd_kernel_traits<192, 64, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<192, 64, 64, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  kernels/flash_fwd_launch_template.h(14): here
              instantiation of "void flash_fwd_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax>(Flash_fwd_params) [with Kernel_traits=Flash_fwd_kernel_traits<192, 64, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<192, 64, 64, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true]" 
  kernels/flash_fwd_launch_template.h(31): here
              instantiation of "void run_flash_fwd<Kernel_traits,Is_dropout,Is_causal>(Flash_fwd_params &, cudaStream_t) [with Kernel_traits=Flash_fwd_kernel_traits<192, 64, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<192, 64, 64, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true]" 
  kernels/flash_fwd_launch_template.h(186): here
              instantiation of "void run_mha_fwd_hdim192<T>(Flash_fwd_params &, cudaStream_t) [with T=cutlass::half_t]" 
  kernels/flash_fwd_hdim192_fp16_sm80.cu(9): here

  kernels/flash_fwd_kernel.h(67): error: static assertion failed
            detected during:
              instantiation of "void flash::write_softmax_to_gmem(const cute::Tensor<Engine0, Layout0> &, cute::Tensor<Engine1, Layout1> &, TiledCopy) [with Engine0=cute::array_aligned<cutlass::half_t, 32UL, 16UL>, Layout0=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2, cute::_1>, cute::_1, cute::_8>, cute::tuple<cute::tuple<cute::_1, cute::_2, cute::constant<int, 0>>, cute::constant<int, 0>, cute::_4>>, Engine1=cute::ViewEngine<cute::gmem_ptr<cutlass::half_t>>, Layout1=cute::Layout<cute::tuple<cute::tuple<cute::_1, cute::_8>, cute::_4, cute::_1>, cute::tuple<cute::tuple<cute::constant<int, 0>, cute::constant<int, 1>>, int, cute::constant<int, 0>>>, TiledCopy=cute::TiledCopy<cute::Copy_Atom<cute::DefaultCopy, cutlass::half_t>, cute::Layout<cute::tuple<cute::tuple<cute::constant<int, 8>, cute::constant<int, 16>>, cute::_8>, cute::tuple<cute::tuple<cute::_128, cute::_1>, cute::_16>>, cute::tuple<cute::constant<int, 16>, cute::constant<int, 64>>>]" 
  (436): here
              instantiation of "void flash::compute_attn_1rowblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &, int, int, int) [with Kernel_traits=Flash_fwd_kernel_traits<256, 64, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<256, 64, 64, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  (630): here
              instantiation of "void flash::compute_attn<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &) [with Kernel_traits=Flash_fwd_kernel_traits<256, 64, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<256, 64, 64, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  kernels/flash_fwd_launch_template.h(14): here
              instantiation of "void flash_fwd_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax>(Flash_fwd_params) [with Kernel_traits=Flash_fwd_kernel_traits<256, 64, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<256, 64, 64, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true]" 
  kernels/flash_fwd_launch_template.h(31): here
              instantiation of "void run_flash_fwd<Kernel_traits,Is_dropout,Is_causal>(Flash_fwd_params &, cudaStream_t) [with Kernel_traits=Flash_fwd_kernel_traits<256, 64, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<256, 64, 64, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true]" 
  kernels/flash_fwd_launch_template.h(239): here
              instantiation of "void run_mha_fwd_hdim256<T>(Flash_fwd_params &, cudaStream_t) [with T=cutlass::half_t]" 
  kernels/flash_fwd_hdim256_fp16_sm80.cu(9): here

  kernels/flash_fwd_kernel.h(67): error: static assertion failed
            detected during:
              instantiation of "void flash::write_softmax_to_gmem(const cute::Tensor<Engine0, Layout0> &, cute::Tensor<Engine1, Layout1> &, TiledCopy) [with Engine0=cute::array_aligned<cutlass::half_t, 32UL, 16UL>, Layout0=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2, cute::_1>, cute::_1, cute::_8>, cute::tuple<cute::tuple<cute::_1, cute::_2, cute::constant<int, 0>>, cute::constant<int, 0>, cute::_4>>, Engine1=cute::ViewEngine<cute::gmem_ptr<cutlass::half_t>>, Layout1=cute::Layout<cute::tuple<cute::tuple<cute::_1, cute::_8>, cute::constant<int, 4>, cute::_1>, cute::tuple<cute::tuple<cute::constant<int, 0>, cute::constant<int, 1>>, int, cute::constant<int, 0>>>, TiledCopy=cute::TiledCopy<cute::Copy_Atom<cute::DefaultCopy, cutlass::half_t>, cute::Layout<cute::tuple<cute::tuple<cute::constant<int, 8>, cute::constant<int, 16>>, cute::_8>, cute::tuple<cute::tuple<cute::_128, cute::_1>, cute::_16>>, cute::tuple<cute::_16, cute::constant<int, 64>>>]" 
  (436): here
              instantiation of "void flash::compute_attn_1rowblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &, int, int, int) [with Kernel_traits=Flash_fwd_kernel_traits<160, 64, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<160, 64, 64, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  (630): here
              instantiation of "void flash::compute_attn<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &) [with Kernel_traits=Flash_fwd_kernel_traits<160, 64, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<160, 64, 64, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  kernels/flash_fwd_launch_template.h(14): here
              instantiation of "void flash_fwd_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax>(Flash_fwd_params) [with Kernel_traits=Flash_fwd_kernel_traits<160, 64, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<160, 64, 64, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true]" 
  kernels/flash_fwd_launch_template.h(31): here
              instantiation of "void run_flash_fwd<Kernel_traits,Is_dropout,Is_causal>(Flash_fwd_params &, cudaStream_t) [with Kernel_traits=Flash_fwd_kernel_traits<160, 64, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<160, 64, 64, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true]" 
  kernels/flash_fwd_launch_template.h(158): here
              instantiation of "void run_mha_fwd_hdim160<T>(Flash_fwd_params &, cudaStream_t) [with T=cutlass::bfloat16_t]" 
  kernels/flash_fwd_hdim160_bf16_sm80.cu(9): here

  2 errors detected in the compilation of "kernels/flash_fwd_hdim256_fp16_sm80.cu".
  # --error 0x1 --
  thread '<unnamed>' panicked at /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bindgen_cuda-0.1.1/src/lib.rs:220:21:
  nvcc error while executing compiling: "nvcc" "--gpu-architecture=sm_75" "-c" "-o" "/hugo/candle/target/release/build/candle-flash-attn-ef9f57105f35b8e6/out/flash_fwd_hdim256_fp16_sm80.o" "--default-stream" "per-thread" "-std=c++17" "-O3" "-U__CUDA_NO_HALF_OPERATORS__" "-U__CUDA_NO_HALF_CONVERSIONS__" "-U__CUDA_NO_HALF2_OPERATORS__" "-U__CUDA_NO_BFLOAT16_CONVERSIONS__" "-Icutlass/include" "--expt-relaxed-constexpr" "--expt-extended-lambda" "--use_fast_math" "--verbose" "kernels/flash_fwd_hdim256_fp16_sm80.cu"

  # stdout


  # stderr

  #$ _NVVM_BRANCH_=nvvm
  #$ _SPACE_= 
  #$ _CUDART_=cudart
  #$ _HERE_=/usr/local/cuda/bin
  #$ _THERE_=/usr/local/cuda/bin
  #$ _TARGET_SIZE_=
  #$ _TARGET_DIR_=
  #$ _TARGET_DIR_=targets/x86_64-linux
  #$ TOP=/usr/local/cuda/bin/..
  #$ NVVMIR_LIBRARY_DIR=/usr/local/cuda/bin/../nvvm/libdevice
  #$ LD_LIBRARY_PATH=/usr/local/cuda/bin/../lib:/hugo/candle/target/release/deps:/hugo/candle/target/release:/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib:/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
  #$ PATH=/usr/local/cuda/bin/../nvvm/bin:/usr/local/cuda/bin:/root/.cargo/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  #$ INCLUDES="-I/usr/local/cuda/bin/../targets/x86_64-linux/include"  
  #$ LIBRARIES=  "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib/stubs" "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib"
  #$ CUDAFE_FLAGS=
  #$ PTXAS_FLAGS=
  #$ gcc -std=c++17 -D__CUDA_ARCH__=750 -D__CUDA_ARCH_LIST__=750 -E -x c++  -DCUDA_DOUBLE_MATH_FUNCTIONS -D__CUDACC__ -D__NVCC__ -D__CUDACC_EXTENDED_LAMBDA__ -D__CUDACC_RELAXED_CONSTEXPR__  -O3 -I"cutlass/include" "-I/usr/local/cuda/bin/../targets/x86_64-linux/include"    -U "__CUDA_NO_HALF_OPERATORS__" -U "__CUDA_NO_HALF_CONVERSIONS__" -U "__CUDA_NO_HALF2_OPERATORS__" -U "__CUDA_NO_BFLOAT16_CONVERSIONS__" -D__CUDACC_VER_MAJOR__=11 -D__CUDACC_VER_MINOR__=8 -D__CUDACC_VER_BUILD__=89 -D__CUDA_API_VER_MAJOR__=11 -D__CUDA_API_VER_MINOR__=8 -DCUDA_API_PER_THREAD_DEFAULT_STREAM=1 -D__NVCC_DIAG_PRAGMA_SUPPORT__=1 -include "cuda_runtime.h" -m64 "kernels/flash_fwd_hdim32_fp16_sm80.cu" -o "/tmp/tmpxft_0002a3eb_00000000-7_flash_fwd_hdim32_fp16_sm80.cpp1.ii" 
  #$ cicc --c++17 --gnu_version=90400 --display_error_number --orig_src_file_name "kernels/flash_fwd_hdim32_fp16_sm80.cu" --orig_src_path_name "/hugo/candle/candle-flash-attn/kernels/flash_fwd_hdim32_fp16_sm80.cu" --allow_managed --extended-lambda --relaxed_constexpr   -arch compute_75 -m64 --no-version-ident -ftz=1 -prec_div=0 -prec_sqrt=0 -fmad=1 -fast-math --gen_div_approx_ftz --include_file_name "tmpxft_0002a3eb_00000000-3_flash_fwd_hdim32_fp16_sm80.fatbin.c" -tused --gen_module_id_file --module_id_file_name "/tmp/tmpxft_0002a3eb_00000000-4_flash_fwd_hdim32_fp16_sm80.module_id" --gen_c_file_name "/tmp/tmpxft_0002a3eb_00000000-6_flash_fwd_hdim32_fp16_sm80.cudafe1.c" --stub_file_name "/tmp/tmpxft_0002a3eb_00000000-6_flash_fwd_hdim32_fp16_sm80.cudafe1.stub.c" --gen_device_file_name "/tmp/tmpxft_0002a3eb_00000000-6_flash_fwd_hdim32_fp16_sm80.cudafe1.gpu"  "/tmp/tmpxft_0002a3eb_00000000-7_flash_fwd_hdim32_fp16_sm80.cpp1.ii" -o "/tmp/tmpxft_0002a3eb_00000000-6_flash_fwd_hdim32_fp16_sm80.ptx"
  kernels/flash_fwd_kernel.h(67): error: static assertion failed
            detected during:
              instantiation of "void flash::write_softmax_to_gmem(const cute::Tensor<Engine0, Layout0> &, cute::Tensor<Engine1, Layout1> &, TiledCopy) [with Engine0=cute::array_aligned<cutlass::half_t, 32UL, 16UL>, Layout0=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2, cute::_1>, cute::_1, cute::_8>, cute::tuple<cute::tuple<cute::_1, cute::_2, cute::constant<int, 0>>, cute::constant<int, 0>, cute::_4>>, Engine1=cute::ViewEngine<cute::gmem_ptr<cutlass::half_t>>, Layout1=cute::Layout<cute::tuple<cute::tuple<cute::_1, cute::_8>, cute::constant<int, 4>, cute::_1>, cute::tuple<cute::tuple<cute::constant<int, 0>, cute::constant<int, 1>>, int, cute::constant<int, 0>>>, TiledCopy=cute::TiledCopy<cute::Copy_Atom<cute::DefaultCopy, cutlass::half_t>, cute::Layout<cute::tuple<cute::tuple<cute::constant<int, 8>, cute::constant<int, 32>>, cute::_8>, cute::tuple<cute::tuple<cute::_256, cute::_1>, cute::constant<int, 32>>>, cute::tuple<cute::_32, cute::constant<int, 64>>>]" 
  (436): here
              instantiation of "void flash::compute_attn_1rowblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &, int, int, int) [with Kernel_traits=Flash_fwd_kernel_traits<192, 128, 64, 8, false, false, cutlass::half_t, Flash_kernel_traits<192, 128, 64, 8, cutlass::half_t>>, Is_dropout=false, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=false, Params=Flash_fwd_params]" 
  (630): here
              instantiation of "void flash::compute_attn<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &) [with Kernel_traits=Flash_fwd_kernel_traits<192, 128, 64, 8, false, false, cutlass::half_t, Flash_kernel_traits<192, 128, 64, 8, cutlass::half_t>>, Is_dropout=false, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=false, Params=Flash_fwd_params]" 
  kernels/flash_fwd_launch_template.h(14): here
              instantiation of "void flash_fwd_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax>(Flash_fwd_params) [with Kernel_traits=Flash_fwd_kernel_traits<192, 128, 64, 8, false, false, cutlass::half_t, Flash_kernel_traits<192, 128, 64, 8, cutlass::half_t>>, Is_dropout=false, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=false]" 
  kernels/flash_fwd_launch_template.h(31): here
              instantiation of "void run_flash_fwd<Kernel_traits,Is_dropout,Is_causal>(Flash_fwd_params &, cudaStream_t) [with Kernel_traits=Flash_fwd_kernel_traits<192, 128, 64, 8, false, false, cutlass::half_t, Flash_kernel_traits<192, 128, 64, 8, cutlass::half_t>>, Is_dropout=false, Is_causal=true]" 
  kernels/flash_fwd_launch_template.h(186): here
              instantiation of "void run_mha_fwd_hdim192<T>(Flash_fwd_params &, cudaStream_t) [with T=cutlass::half_t]" 
  kernels/flash_fwd_hdim192_fp16_sm80.cu(9): here

  kernels/flash_fwd_kernel.h(67): error: static assertion failed
            detected during:
              instantiation of "void flash::write_softmax_to_gmem(const cute::Tensor<Engine0, Layout0> &, cute::Tensor<Engine1, Layout1> &, TiledCopy) [with Engine0=cute::array_aligned<cutlass::half_t, 32UL, 16UL>, Layout0=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2, cute::_1>, cute::_2, cute::constant<int, 4>>, cute::tuple<cute::tuple<cute::_1, cute::_2, cute::constant<int, 0>>, cute::_4, cute::_8>>, Engine1=cute::ViewEngine<cute::gmem_ptr<cutlass::half_t>>, Layout1=cute::Layout<cute::tuple<cute::tuple<cute::_1, cute::_8>, cute::constant<int, 4>, cute::_1>, cute::tuple<cute::tuple<cute::constant<int, 0>, cute::constant<int, 1>>, int, cute::constant<int, 0>>>, TiledCopy=cute::TiledCopy<cute::Copy_Atom<cute::DefaultCopy, cutlass::half_t>, cute::Layout<cute::tuple<cute::tuple<cute::constant<int, 4>, cute::constant<int, 32>>, cute::_8>, cute::tuple<cute::tuple<cute::_256, cute::_1>, cute::constant<int, 32>>>, cute::tuple<cute::constant<int, 32>, cute::constant<int, 32>>>]" 
  (436): here
              instantiation of "void flash::compute_attn_1rowblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &, int, int, int) [with Kernel_traits=Flash_fwd_kernel_traits<128, 128, 32, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<128, 128, 32, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  (630): here
              instantiation of "void flash::compute_attn<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &) [with Kernel_traits=Flash_fwd_kernel_traits<128, 128, 32, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<128, 128, 32, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  kernels/flash_fwd_launch_template.h(14): here
              instantiation of "void flash_fwd_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax>(Flash_fwd_params) [with Kernel_traits=Flash_fwd_kernel_traits<128, 128, 32, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<128, 128, 32, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true]" 
  kernels/flash_fwd_launch_template.h(31): here
              instantiation of "void run_flash_fwd<Kernel_traits,Is_dropout,Is_causal>(Flash_fwd_params &, cudaStream_t) [with Kernel_traits=Flash_fwd_kernel_traits<128, 128, 32, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<128, 128, 32, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true]" 
  kernels/flash_fwd_launch_template.h(121): here
              instantiation of "void run_mha_fwd_hdim128<T>(Flash_fwd_params &, cudaStream_t) [with T=cutlass::bfloat16_t]" 
  kernels/flash_fwd_hdim128_bf16_sm80.cu(9): here

  kernels/flash_fwd_kernel.h(67): error: static assertion failed
            detected during:
              instantiation of "void flash::write_softmax_to_gmem(const cute::Tensor<Engine0, Layout0> &, cute::Tensor<Engine1, Layout1> &, TiledCopy) [with Engine0=cute::array_aligned<cutlass::half_t, 32UL, 16UL>, Layout0=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2, cute::_1>, cute::_2, cute::constant<int, 4>>, cute::tuple<cute::tuple<cute::_1, cute::_2, cute::constant<int, 0>>, cute::_4, cute::_8>>, Engine1=cute::ViewEngine<cute::gmem_ptr<cutlass::half_t>>, Layout1=cute::Layout<cute::tuple<cute::tuple<cute::_1, cute::_8>, cute::constant<int, 4>, cute::_1>, cute::tuple<cute::tuple<cute::constant<int, 0>, cute::constant<int, 1>>, int, cute::constant<int, 0>>>, TiledCopy=cute::TiledCopy<cute::Copy_Atom<cute::DefaultCopy, cutlass::half_t>, cute::Layout<cute::tuple<cute::tuple<cute::constant<int, 4>, cute::constant<int, 32>>, cute::_8>, cute::tuple<cute::tuple<cute::_256, cute::_1>, cute::constant<int, 32>>>, cute::tuple<cute::constant<int, 32>, cute::constant<int, 32>>>]" 
  (436): here
              instantiation of "void flash::compute_attn_1rowblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &, int, int, int) [with Kernel_traits=Flash_fwd_kernel_traits<160, 128, 32, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<160, 128, 32, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  (630): here
              instantiation of "void flash::compute_attn<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &) [with Kernel_traits=Flash_fwd_kernel_traits<160, 128, 32, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<160, 128, 32, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  kernels/flash_fwd_launch_template.h(14): here
              instantiation of "void flash_fwd_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax>(Flash_fwd_params) [with Kernel_traits=Flash_fwd_kernel_traits<160, 128, 32, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<160, 128, 32, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true]" 
  kernels/flash_fwd_launch_template.h(31): here
              instantiation of "void run_flash_fwd<Kernel_traits,Is_dropout,Is_causal>(Flash_fwd_params &, cudaStream_t) [with Kernel_traits=Flash_fwd_kernel_traits<160, 128, 32, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<160, 128, 32, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true]" 
  kernels/flash_fwd_launch_template.h(158): here
              instantiation of "void run_mha_fwd_hdim160<T>(Flash_fwd_params &, cudaStream_t) [with T=cutlass::bfloat16_t]" 
  kernels/flash_fwd_hdim160_bf16_sm80.cu(9): here

  2 errors detected in the compilation of "kernels/flash_fwd_hdim192_fp16_sm80.cu".
  # --error 0x1 --
  thread '<unnamed>' panicked at /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bindgen_cuda-0.1.1/src/lib.rs:220:21:
  nvcc error while executing compiling: "nvcc" "--gpu-architecture=sm_75" "-c" "-o" "/hugo/candle/target/release/build/candle-flash-attn-ef9f57105f35b8e6/out/flash_fwd_hdim192_fp16_sm80.o" "--default-stream" "per-thread" "-std=c++17" "-O3" "-U__CUDA_NO_HALF_OPERATORS__" "-U__CUDA_NO_HALF_CONVERSIONS__" "-U__CUDA_NO_HALF2_OPERATORS__" "-U__CUDA_NO_BFLOAT16_CONVERSIONS__" "-Icutlass/include" "--expt-relaxed-constexpr" "--expt-extended-lambda" "--use_fast_math" "--verbose" "kernels/flash_fwd_hdim192_fp16_sm80.cu"

  # stdout


  # stderr

  #$ _NVVM_BRANCH_=nvvm
  #$ _SPACE_= 
  #$ _CUDART_=cudart
  #$ _HERE_=/usr/local/cuda/bin
  #$ _THERE_=/usr/local/cuda/bin
  #$ _TARGET_SIZE_=
  #$ _TARGET_DIR_=
  #$ _TARGET_DIR_=targets/x86_64-linux
  #$ TOP=/usr/local/cuda/bin/..
  #$ NVVMIR_LIBRARY_DIR=/usr/local/cuda/bin/../nvvm/libdevice
  #$ LD_LIBRARY_PATH=/usr/local/cuda/bin/../lib:/hugo/candle/target/release/deps:/hugo/candle/target/release:/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib:/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
  #$ PATH=/usr/local/cuda/bin/../nvvm/bin:/usr/local/cuda/bin:/root/.cargo/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  #$ INCLUDES="-I/usr/local/cuda/bin/../targets/x86_64-linux/include"  
  #$ LIBRARIES=  "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib/stubs" "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib"
  #$ CUDAFE_FLAGS=
  #$ PTXAS_FLAGS=
  #$ gcc -std=c++17 -D__CUDA_ARCH__=750 -D__CUDA_ARCH_LIST__=750 -E -x c++  -DCUDA_DOUBLE_MATH_FUNCTIONS -D__CUDACC__ -D__NVCC__ -D__CUDACC_EXTENDED_LAMBDA__ -D__CUDACC_RELAXED_CONSTEXPR__  -O3 -I"cutlass/include" "-I/usr/local/cuda/bin/../targets/x86_64-linux/include"    -U "__CUDA_NO_HALF_OPERATORS__" -U "__CUDA_NO_HALF_CONVERSIONS__" -U "__CUDA_NO_HALF2_OPERATORS__" -U "__CUDA_NO_BFLOAT16_CONVERSIONS__" -D__CUDACC_VER_MAJOR__=11 -D__CUDACC_VER_MINOR__=8 -D__CUDACC_VER_BUILD__=89 -D__CUDA_API_VER_MAJOR__=11 -D__CUDA_API_VER_MINOR__=8 -DCUDA_API_PER_THREAD_DEFAULT_STREAM=1 -D__NVCC_DIAG_PRAGMA_SUPPORT__=1 -include "cuda_runtime.h" -m64 "kernels/flash_fwd_hdim192_bf16_sm80.cu" -o "/tmp/tmpxft_0002a3f3_00000000-7_flash_fwd_hdim192_bf16_sm80.cpp1.ii" 
  #$ cicc --c++17 --gnu_version=90400 --display_error_number --orig_src_file_name "kernels/flash_fwd_hdim192_bf16_sm80.cu" --orig_src_path_name "/hugo/candle/candle-flash-attn/kernels/flash_fwd_hdim192_bf16_sm80.cu" --allow_managed --extended-lambda --relaxed_constexpr   -arch compute_75 -m64 --no-version-ident -ftz=1 -prec_div=0 -prec_sqrt=0 -fmad=1 -fast-math --gen_div_approx_ftz --include_file_name "tmpxft_0002a3f3_00000000-3_flash_fwd_hdim192_bf16_sm80.fatbin.c" -tused --gen_module_id_file --module_id_file_name "/tmp/tmpxft_0002a3f3_00000000-4_flash_fwd_hdim192_bf16_sm80.module_id" --gen_c_file_name "/tmp/tmpxft_0002a3f3_00000000-6_flash_fwd_hdim192_bf16_sm80.cudafe1.c" --stub_file_name "/tmp/tmpxft_0002a3f3_00000000-6_flash_fwd_hdim192_bf16_sm80.cudafe1.stub.c" --gen_device_file_name "/tmp/tmpxft_0002a3f3_00000000-6_flash_fwd_hdim192_bf16_sm80.cudafe1.gpu"  "/tmp/tmpxft_0002a3f3_00000000-7_flash_fwd_hdim192_bf16_sm80.cpp1.ii" -o "/tmp/tmpxft_0002a3f3_00000000-6_flash_fwd_hdim192_bf16_sm80.ptx"
  kernels/flash_fwd_kernel.h(67): error: static assertion failed
            detected during:
              instantiation of "void flash::write_softmax_to_gmem(const cute::Tensor<Engine0, Layout0> &, cute::Tensor<Engine1, Layout1> &, TiledCopy) [with Engine0=cute::array_aligned<cutlass::half_t, 32UL, 16UL>, Layout0=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2, cute::_1>, cute::_1, cute::_8>, cute::tuple<cute::tuple<cute::_1, cute::_2, cute::constant<int, 0>>, cute::constant<int, 0>, cute::_4>>, Engine1=cute::ViewEngine<cute::gmem_ptr<cutlass::half_t>>, Layout1=cute::Layout<cute::tuple<cute::tuple<cute::_1, cute::_8>, cute::constant<int, 4>, cute::_1>, cute::tuple<cute::tuple<cute::constant<int, 0>, cute::constant<int, 1>>, int, cute::constant<int, 0>>>, TiledCopy=cute::TiledCopy<cute::Copy_Atom<cute::DefaultCopy, cutlass::half_t>, cute::Layout<cute::tuple<cute::tuple<cute::constant<int, 8>, cute::constant<int, 32>>, cute::_8>, cute::tuple<cute::tuple<cute::_256, cute::_1>, cute::constant<int, 32>>>, cute::tuple<cute::constant<int, 32>, cute::constant<int, 64>>>]" 
  (436): here
              instantiation of "void flash::compute_attn_1rowblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &, int, int, int) [with Kernel_traits=Flash_fwd_kernel_traits<160, 128, 64, 8, false, false, cutlass::bfloat16_t, Flash_kernel_traits<160, 128, 64, 8, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=false, Is_local=true, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  (630): here
              instantiation of "void flash::compute_attn<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &) [with Kernel_traits=Flash_fwd_kernel_traits<160, 128, 64, 8, false, false, cutlass::bfloat16_t, Flash_kernel_traits<160, 128, 64, 8, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=false, Is_local=true, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  kernels/flash_fwd_launch_template.h(14): here
              instantiation of "void flash_fwd_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax>(Flash_fwd_params) [with Kernel_traits=Flash_fwd_kernel_traits<160, 128, 64, 8, false, false, cutlass::bfloat16_t, Flash_kernel_traits<160, 128, 64, 8, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=false, Is_local=true, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true]" 
  kernels/flash_fwd_launch_template.h(31): here
              instantiation of "void run_flash_fwd<Kernel_traits,Is_dropout,Is_causal>(Flash_fwd_params &, cudaStream_t) [with Kernel_traits=Flash_fwd_kernel_traits<160, 128, 64, 8, false, false, cutlass::bfloat16_t, Flash_kernel_traits<160, 128, 64, 8, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=false]" 
  kernels/flash_fwd_launch_template.h(158): here
              instantiation of "void run_mha_fwd_hdim160<T>(Flash_fwd_params &, cudaStream_t) [with T=cutlass::bfloat16_t]" 
  kernels/flash_fwd_hdim160_bf16_sm80.cu(9): here

  kernels/flash_fwd_kernel.h(67): error: static assertion failed
            detected during:
              instantiation of "void flash::write_softmax_to_gmem(const cute::Tensor<Engine0, Layout0> &, cute::Tensor<Engine1, Layout1> &, TiledCopy) [with Engine0=cute::array_aligned<cutlass::half_t, 32UL, 16UL>, Layout0=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2, cute::_1>, cute::_1, cute::_8>, cute::tuple<cute::tuple<cute::_1, cute::_2, cute::constant<int, 0>>, cute::constant<int, 0>, cute::_4>>, Engine1=cute::ViewEngine<cute::gmem_ptr<cutlass::half_t>>, Layout1=cute::Layout<cute::tuple<cute::tuple<cute::_1, cute::_8>, cute::constant<int, 4>, cute::_1>, cute::tuple<cute::tuple<cute::constant<int, 0>, cute::constant<int, 1>>, int, cute::constant<int, 0>>>, TiledCopy=cute::TiledCopy<cute::Copy_Atom<cute::DefaultCopy, cutlass::half_t>, cute::Layout<cute::tuple<cute::tuple<cute::constant<int, 8>, cute::constant<int, 16>>, cute::_8>, cute::tuple<cute::tuple<cute::_128, cute::_1>, cute::_16>>, cute::tuple<cute::_16, cute::constant<int, 64>>>]" 
  (436): here
              instantiation of "void flash::compute_attn_1rowblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &, int, int, int) [with Kernel_traits=Flash_fwd_kernel_traits<128, 64, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<128, 64, 64, 4, cutlass::bfloat16_t>>, Is_dropout=false, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=false, Params=Flash_fwd_params]" 
  (630): here
              instantiation of "void flash::compute_attn<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &) [with Kernel_traits=Flash_fwd_kernel_traits<128, 64, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<128, 64, 64, 4, cutlass::bfloat16_t>>, Is_dropout=false, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=false, Params=Flash_fwd_params]" 
  kernels/flash_fwd_launch_template.h(14): here
              instantiation of "void flash_fwd_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax>(Flash_fwd_params) [with Kernel_traits=Flash_fwd_kernel_traits<128, 64, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<128, 64, 64, 4, cutlass::bfloat16_t>>, Is_dropout=false, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=false]" 
  kernels/flash_fwd_launch_template.h(31): here
              instantiation of "void run_flash_fwd<Kernel_traits,Is_dropout,Is_causal>(Flash_fwd_params &, cudaStream_t) [with Kernel_traits=Flash_fwd_kernel_traits<128, 64, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<128, 64, 64, 4, cutlass::bfloat16_t>>, Is_dropout=false, Is_causal=true]" 
  kernels/flash_fwd_launch_template.h(121): here
              instantiation of "void run_mha_fwd_hdim128<T>(Flash_fwd_params &, cudaStream_t) [with T=cutlass::bfloat16_t]" 
  kernels/flash_fwd_hdim128_bf16_sm80.cu(9): here

  kernels/flash_fwd_kernel.h(67): error: static assertion failed
            detected during:
              instantiation of "void flash::write_softmax_to_gmem(const cute::Tensor<Engine0, Layout0> &, cute::Tensor<Engine1, Layout1> &, TiledCopy) [with Engine0=cute::array_aligned<cutlass::half_t, 32UL, 16UL>, Layout0=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2, cute::_1>, cute::_1, cute::_8>, cute::tuple<cute::tuple<cute::_1, cute::_2, cute::constant<int, 0>>, cute::constant<int, 0>, cute::_4>>, Engine1=cute::ViewEngine<cute::gmem_ptr<cutlass::half_t>>, Layout1=cute::Layout<cute::tuple<cute::tuple<cute::_1, cute::_8>, cute::_4, cute::_1>, cute::tuple<cute::tuple<cute::constant<int, 0>, cute::constant<int, 1>>, int, cute::constant<int, 0>>>, TiledCopy=cute::TiledCopy<cute::Copy_Atom<cute::DefaultCopy, cutlass::half_t>, cute::Layout<cute::tuple<cute::tuple<cute::constant<int, 8>, cute::constant<int, 32>>, cute::_8>, cute::tuple<cute::tuple<cute::_256, cute::_1>, cute::constant<int, 32>>>, cute::tuple<cute::_32, cute::constant<int, 64>>>]" 
  (436): here
              instantiation of "void flash::compute_attn_1rowblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &, int, int, int) [with Kernel_traits=Flash_fwd_kernel_traits<256, 128, 64, 8, false, false, cutlass::bfloat16_t, Flash_kernel_traits<256, 128, 64, 8, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  (630): here
              instantiation of "void flash::compute_attn<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &) [with Kernel_traits=Flash_fwd_kernel_traits<256, 128, 64, 8, false, false, cutlass::bfloat16_t, Flash_kernel_traits<256, 128, 64, 8, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  kernels/flash_fwd_launch_template.h(14): here
              instantiation of "void flash_fwd_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax>(Flash_fwd_params) [with Kernel_traits=Flash_fwd_kernel_traits<256, 128, 64, 8, false, false, cutlass::bfloat16_t, Flash_kernel_traits<256, 128, 64, 8, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true]" 
  kernels/flash_fwd_launch_template.h(31): here
              instantiation of "void run_flash_fwd<Kernel_traits,Is_dropout,Is_causal>(Flash_fwd_params &, cudaStream_t) [with Kernel_traits=Flash_fwd_kernel_traits<256, 128, 64, 8, false, false, cutlass::bfloat16_t, Flash_kernel_traits<256, 128, 64, 8, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true]" 
  kernels/flash_fwd_launch_template.h(239): here
              instantiation of "void run_mha_fwd_hdim256<T>(Flash_fwd_params &, cudaStream_t) [with T=cutlass::bfloat16_t]" 
  kernels/flash_fwd_hdim256_bf16_sm80.cu(9): here

  3 errors detected in the compilation of "kernels/flash_fwd_hdim160_bf16_sm80.cu".
  kernels/flash_fwd_kernel.h(67): error: static assertion failed
            detected during:
              instantiation of "void flash::write_softmax_to_gmem(const cute::Tensor<Engine0, Layout0> &, cute::Tensor<Engine1, Layout1> &, TiledCopy) [with Engine0=cute::array_aligned<cutlass::half_t, 64UL, 16UL>, Layout0=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2, cute::_1>, cute::_2, cute::_8>, cute::tuple<cute::tuple<cute::_1, cute::_2, cute::constant<int, 0>>, cute::_4, cute::_8>>, Engine1=cute::ViewEngine<cute::gmem_ptr<cutlass::half_t>>, Layout1=cute::Layout<cute::tuple<cute::tuple<cute::_1, cute::_8>, cute::_8, cute::_1>, cute::tuple<cute::tuple<cute::constant<int, 0>, cute::constant<int, 1>>, int, cute::constant<int, 0>>>, TiledCopy=cute::TiledCopy<cute::Copy_Atom<cute::DefaultCopy, cutlass::half_t>, cute::Layout<cute::tuple<cute::tuple<cute::constant<int, 8>, cute::constant<int, 16>>, cute::_8>, cute::tuple<cute::tuple<cute::_128, cute::_1>, cute::_16>>, cute::tuple<cute::_16, cute::constant<int, 64>>>]" 
  (436): here
              instantiation of "void flash::compute_attn_1rowblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &, int, int, int) [with Kernel_traits=Flash_fwd_kernel_traits<128, 128, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<128, 128, 64, 4, cutlass::bfloat16_t>>, Is_dropout=false, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=false, Params=Flash_fwd_params]" 
  (630): here
              instantiation of "void flash::compute_attn<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &) [with Kernel_traits=Flash_fwd_kernel_traits<128, 128, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<128, 128, 64, 4, cutlass::bfloat16_t>>, Is_dropout=false, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=false, Params=Flash_fwd_params]" 
  kernels/flash_fwd_launch_template.h(14): here
              instantiation of "void flash_fwd_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax>(Flash_fwd_params) [with Kernel_traits=Flash_fwd_kernel_traits<128, 128, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<128, 128, 64, 4, cutlass::bfloat16_t>>, Is_dropout=false, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=false]" 
  kernels/flash_fwd_launch_template.h(31): here
              instantiation of "void run_flash_fwd<Kernel_traits,Is_dropout,Is_causal>(Flash_fwd_params &, cudaStream_t) [with Kernel_traits=Flash_fwd_kernel_traits<128, 128, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<128, 128, 64, 4, cutlass::bfloat16_t>>, Is_dropout=false, Is_causal=true]" 
  kernels/flash_fwd_launch_template.h(121): here
              instantiation of "void run_mha_fwd_hdim128<T>(Flash_fwd_params &, cudaStream_t) [with T=cutlass::bfloat16_t]" 
  kernels/flash_fwd_hdim128_bf16_sm80.cu(9): here

  # --error 0x1 --
  thread '<unnamed>' panicked at /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bindgen_cuda-0.1.1/src/lib.rs:220:21:
  nvcc error while executing compiling: "nvcc" "--gpu-architecture=sm_75" "-c" "-o" "/hugo/candle/target/release/build/candle-flash-attn-ef9f57105f35b8e6/out/flash_fwd_hdim160_bf16_sm80.o" "--default-stream" "per-thread" "-std=c++17" "-O3" "-U__CUDA_NO_HALF_OPERATORS__" "-U__CUDA_NO_HALF_CONVERSIONS__" "-U__CUDA_NO_HALF2_OPERATORS__" "-U__CUDA_NO_BFLOAT16_CONVERSIONS__" "-Icutlass/include" "--expt-relaxed-constexpr" "--expt-extended-lambda" "--use_fast_math" "--verbose" "kernels/flash_fwd_hdim160_bf16_sm80.cu"

  # stdout


  # stderr

  #$ _NVVM_BRANCH_=nvvm
  #$ _SPACE_= 
  #$ _CUDART_=cudart
  #$ _HERE_=/usr/local/cuda/bin
  #$ _THERE_=/usr/local/cuda/bin
  #$ _TARGET_SIZE_=
  #$ _TARGET_DIR_=
  #$ _TARGET_DIR_=targets/x86_64-linux
  #$ TOP=/usr/local/cuda/bin/..
  #$ NVVMIR_LIBRARY_DIR=/usr/local/cuda/bin/../nvvm/libdevice
  #$ LD_LIBRARY_PATH=/usr/local/cuda/bin/../lib:/hugo/candle/target/release/deps:/hugo/candle/target/release:/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib:/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
  #$ PATH=/usr/local/cuda/bin/../nvvm/bin:/usr/local/cuda/bin:/root/.cargo/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  #$ INCLUDES="-I/usr/local/cuda/bin/../targets/x86_64-linux/include"  
  #$ LIBRARIES=  "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib/stubs" "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib"
  #$ CUDAFE_FLAGS=
  #$ PTXAS_FLAGS=
  #$ gcc -std=c++17 -D__CUDA_ARCH__=750 -D__CUDA_ARCH_LIST__=750 -E -x c++  -DCUDA_DOUBLE_MATH_FUNCTIONS -D__CUDACC__ -D__NVCC__ -D__CUDACC_EXTENDED_LAMBDA__ -D__CUDACC_RELAXED_CONSTEXPR__  -O3 -I"cutlass/include" "-I/usr/local/cuda/bin/../targets/x86_64-linux/include"    -U "__CUDA_NO_HALF_OPERATORS__" -U "__CUDA_NO_HALF_CONVERSIONS__" -U "__CUDA_NO_HALF2_OPERATORS__" -U "__CUDA_NO_BFLOAT16_CONVERSIONS__" -D__CUDACC_VER_MAJOR__=11 -D__CUDACC_VER_MINOR__=8 -D__CUDACC_VER_BUILD__=89 -D__CUDA_API_VER_MAJOR__=11 -D__CUDA_API_VER_MINOR__=8 -DCUDA_API_PER_THREAD_DEFAULT_STREAM=1 -D__NVCC_DIAG_PRAGMA_SUPPORT__=1 -include "cuda_runtime.h" -m64 "kernels/flash_fwd_hdim64_fp16_sm80.cu" -o "/tmp/tmpxft_0002a3fb_00000000-7_flash_fwd_hdim64_fp16_sm80.cpp1.ii" 
  #$ cicc --c++17 --gnu_version=90400 --display_error_number --orig_src_file_name "kernels/flash_fwd_hdim64_fp16_sm80.cu" --orig_src_path_name "/hugo/candle/candle-flash-attn/kernels/flash_fwd_hdim64_fp16_sm80.cu" --allow_managed --extended-lambda --relaxed_constexpr   -arch compute_75 -m64 --no-version-ident -ftz=1 -prec_div=0 -prec_sqrt=0 -fmad=1 -fast-math --gen_div_approx_ftz --include_file_name "tmpxft_0002a3fb_00000000-3_flash_fwd_hdim64_fp16_sm80.fatbin.c" -tused --gen_module_id_file --module_id_file_name "/tmp/tmpxft_0002a3fb_00000000-4_flash_fwd_hdim64_fp16_sm80.module_id" --gen_c_file_name "/tmp/tmpxft_0002a3fb_00000000-6_flash_fwd_hdim64_fp16_sm80.cudafe1.c" --stub_file_name "/tmp/tmpxft_0002a3fb_00000000-6_flash_fwd_hdim64_fp16_sm80.cudafe1.stub.c" --gen_device_file_name "/tmp/tmpxft_0002a3fb_00000000-6_flash_fwd_hdim64_fp16_sm80.cudafe1.gpu"  "/tmp/tmpxft_0002a3fb_00000000-7_flash_fwd_hdim64_fp16_sm80.cpp1.ii" -o "/tmp/tmpxft_0002a3fb_00000000-6_flash_fwd_hdim64_fp16_sm80.ptx"
  kernels/flash_fwd_kernel.h(67): error: static assertion failed
            detected during:
              instantiation of "void flash::write_softmax_to_gmem(const cute::Tensor<Engine0, Layout0> &, cute::Tensor<Engine1, Layout1> &, TiledCopy) [with Engine0=cute::array_aligned<cutlass::half_t, 32UL, 16UL>, Layout0=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2, cute::_1>, cute::_1, cute::_8>, cute::tuple<cute::tuple<cute::_1, cute::_2, cute::constant<int, 0>>, cute::constant<int, 0>, cute::_4>>, Engine1=cute::ViewEngine<cute::gmem_ptr<cutlass::half_t>>, Layout1=cute::Layout<cute::tuple<cute::tuple<cute::_1, cute::_8>, cute::constant<int, 4>, cute::_1>, cute::tuple<cute::tuple<cute::constant<int, 0>, cute::constant<int, 1>>, int, cute::constant<int, 0>>>, TiledCopy=cute::TiledCopy<cute::Copy_Atom<cute::DefaultCopy, cutlass::half_t>, cute::Layout<cute::tuple<cute::tuple<cute::constant<int, 8>, cute::constant<int, 16>>, cute::_8>, cute::tuple<cute::tuple<cute::_128, cute::_1>, cute::_16>>, cute::tuple<cute::_16, cute::constant<int, 64>>>]" 
  (436): here
              instantiation of "void flash::compute_attn_1rowblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &, int, int, int) [with Kernel_traits=Flash_fwd_kernel_traits<96, 64, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<96, 64, 64, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  (630): here
              instantiation of "void flash::compute_attn<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &) [with Kernel_traits=Flash_fwd_kernel_traits<96, 64, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<96, 64, 64, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  kernels/flash_fwd_launch_template.h(14): here
              instantiation of "void flash_fwd_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax>(Flash_fwd_params) [with Kernel_traits=Flash_fwd_kernel_traits<96, 64, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<96, 64, 64, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true]" 
  kernels/flash_fwd_launch_template.h(31): here
              instantiation of "void run_flash_fwd<Kernel_traits,Is_dropout,Is_causal>(Flash_fwd_params &, cudaStream_t) [with Kernel_traits=Flash_fwd_kernel_traits<96, 64, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<96, 64, 64, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true]" 
  kernels/flash_fwd_launch_template.h(95): here
              instantiation of "void run_mha_fwd_hdim96<T>(Flash_fwd_params &, cudaStream_t) [with T=cutlass::bfloat16_t]" 
  kernels/flash_fwd_hdim96_bf16_sm80.cu(9): here

  3 errors detected in the compilation of "kernels/flash_fwd_hdim128_bf16_sm80.cu".
  # --error 0x1 --
  thread '<unnamed>' panicked at /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bindgen_cuda-0.1.1/src/lib.rs:220:21:
  nvcc error while executing compiling: "nvcc" "--gpu-architecture=sm_75" "-c" "-o" "/hugo/candle/target/release/build/candle-flash-attn-ef9f57105f35b8e6/out/flash_fwd_hdim128_bf16_sm80.o" "--default-stream" "per-thread" "-std=c++17" "-O3" "-U__CUDA_NO_HALF_OPERATORS__" "-U__CUDA_NO_HALF_CONVERSIONS__" "-U__CUDA_NO_HALF2_OPERATORS__" "-U__CUDA_NO_BFLOAT16_CONVERSIONS__" "-Icutlass/include" "--expt-relaxed-constexpr" "--expt-extended-lambda" "--use_fast_math" "--verbose" "kernels/flash_fwd_hdim128_bf16_sm80.cu"

  # stdout


  # stderr

  kernels/flash_fwd_kernel.h(67): error: static assertion failed
            detected during:
              instantiation of "void flash::write_softmax_to_gmem(const cute::Tensor<Engine0, Layout0> &, cute::Tensor<Engine1, Layout1> &, TiledCopy) [with Engine0=cute::array_aligned<cutlass::half_t, 128UL, 16UL>, Layout0=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2, cute::_1>, cute::_2, cute::_16>, cute::tuple<cute::tuple<cute::_1, cute::_2, cute::constant<int, 0>>, cute::_4, cute::_8>>, Engine1=cute::ViewEngine<cute::gmem_ptr<cutlass::half_t>>, Layout1=cute::Layout<cute::tuple<cute::tuple<cute::_1, cute::_8>, cute::_16, cute::_1>, cute::tuple<cute::tuple<cute::constant<int, 0>, cute::constant<int, 1>>, int, cute::constant<int, 0>>>, TiledCopy=cute::TiledCopy<cute::Copy_Atom<cute::DefaultCopy, cutlass::half_t>, cute::Layout<cute::tuple<cute::tuple<cute::_16, cute::_8>, cute::_8>, cute::tuple<cute::tuple<cute::_64, cute::_1>, cute::_8>>, cute::tuple<cute::_8, cute::_128>>]" 
  (436): here
              instantiation of "void flash::compute_attn_1rowblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &, int, int, int) [with Kernel_traits=Flash_fwd_kernel_traits<32, 128, 128, 4, false, false, cutlass::half_t, Flash_kernel_traits<32, 128, 128, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  (630): here
              instantiation of "void flash::compute_attn<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &) [with Kernel_traits=Flash_fwd_kernel_traits<32, 128, 128, 4, false, false, cutlass::half_t, Flash_kernel_traits<32, 128, 128, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  kernels/flash_fwd_launch_template.h(14): here
              instantiation of "void flash_fwd_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax>(Flash_fwd_params) [with Kernel_traits=Flash_fwd_kernel_traits<32, 128, 128, 4, false, false, cutlass::half_t, Flash_kernel_traits<32, 128, 128, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true]" 
  kernels/flash_fwd_launch_template.h(31): here
              instantiation of "void run_flash_fwd<Kernel_traits,Is_dropout,Is_causal>(Flash_fwd_params &, cudaStream_t) [with Kernel_traits=Flash_fwd_kernel_traits<32, 128, 128, 4, false, false, cutlass::half_t, Flash_kernel_traits<32, 128, 128, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true]" 
  kernels/flash_fwd_launch_template.h(61): here
              instantiation of "void run_mha_fwd_hdim32<T>(Flash_fwd_params &, cudaStream_t) [with T=cutlass::half_t]" 
  kernels/flash_fwd_hdim32_fp16_sm80.cu(9): here

  kernels/flash_fwd_kernel.h(67): error: static assertion failed
            detected during:
              instantiation of "void flash::write_softmax_to_gmem(const cute::Tensor<Engine0, Layout0> &, cute::Tensor<Engine1, Layout1> &, TiledCopy) [with Engine0=cute::array_aligned<cutlass::half_t, 32UL, 16UL>, Layout0=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2, cute::_1>, cute::_1, cute::_8>, cute::tuple<cute::tuple<cute::_1, cute::_2, cute::constant<int, 0>>, cute::constant<int, 0>, cute::_4>>, Engine1=cute::ViewEngine<cute::gmem_ptr<cutlass::half_t>>, Layout1=cute::Layout<cute::tuple<cute::tuple<cute::_1, cute::_8>, cute::_4, cute::_1>, cute::tuple<cute::tuple<cute::constant<int, 0>, cute::constant<int, 1>>, int, cute::constant<int, 0>>>, TiledCopy=cute::TiledCopy<cute::Copy_Atom<cute::DefaultCopy, cutlass::half_t>, cute::Layout<cute::tuple<cute::tuple<cute::constant<int, 8>, cute::constant<int, 16>>, cute::_8>, cute::tuple<cute::tuple<cute::_128, cute::_1>, cute::_16>>, cute::tuple<cute::constant<int, 16>, cute::constant<int, 64>>>]" 
  (436): here
              instantiation of "void flash::compute_attn_1rowblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &, int, int, int) [with Kernel_traits=Flash_fwd_kernel_traits<256, 64, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<256, 64, 64, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  (630): here
              instantiation of "void flash::compute_attn<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &) [with Kernel_traits=Flash_fwd_kernel_traits<256, 64, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<256, 64, 64, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  kernels/flash_fwd_launch_template.h(14): here
              instantiation of "void flash_fwd_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax>(Flash_fwd_params) [with Kernel_traits=Flash_fwd_kernel_traits<256, 64, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<256, 64, 64, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true]" 
  kernels/flash_fwd_launch_template.h(31): here
              instantiation of "void run_flash_fwd<Kernel_traits,Is_dropout,Is_causal>(Flash_fwd_params &, cudaStream_t) [with Kernel_traits=Flash_fwd_kernel_traits<256, 64, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<256, 64, 64, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true]" 
  kernels/flash_fwd_launch_template.h(239): here
              instantiation of "void run_mha_fwd_hdim256<T>(Flash_fwd_params &, cudaStream_t) [with T=cutlass::bfloat16_t]" 
  kernels/flash_fwd_hdim256_bf16_sm80.cu(9): here

  1 error detected in the compilation of "kernels/flash_fwd_hdim32_fp16_sm80.cu".
  # --error 0x1 --
  thread '<unnamed>' panicked at /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bindgen_cuda-0.1.1/src/lib.rs:220:21:
  nvcc error while executing compiling: "nvcc" "--gpu-architecture=sm_75" "-c" "-o" "/hugo/candle/target/release/build/candle-flash-attn-ef9f57105f35b8e6/out/flash_fwd_hdim32_fp16_sm80.o" "--default-stream" "per-thread" "-std=c++17" "-O3" "-U__CUDA_NO_HALF_OPERATORS__" "-U__CUDA_NO_HALF_CONVERSIONS__" "-U__CUDA_NO_HALF2_OPERATORS__" "-U__CUDA_NO_BFLOAT16_CONVERSIONS__" "-Icutlass/include" "--expt-relaxed-constexpr" "--expt-extended-lambda" "--use_fast_math" "--verbose" "kernels/flash_fwd_hdim32_fp16_sm80.cu"

  # stdout


  # stderr

  kernels/flash_fwd_kernel.h(67): error: static assertion failed
            detected during:
              instantiation of "void flash::write_softmax_to_gmem(const cute::Tensor<Engine0, Layout0> &, cute::Tensor<Engine1, Layout1> &, TiledCopy) [with Engine0=cute::array_aligned<cutlass::half_t, 64UL, 16UL>, Layout0=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2, cute::_1>, cute::_2, cute::_8>, cute::tuple<cute::tuple<cute::_1, cute::_2, cute::constant<int, 0>>, cute::_4, cute::_8>>, Engine1=cute::ViewEngine<cute::gmem_ptr<cutlass::half_t>>, Layout1=cute::Layout<cute::tuple<cute::tuple<cute::_1, cute::_8>, cute::_8, cute::_1>, cute::tuple<cute::tuple<cute::constant<int, 0>, cute::constant<int, 1>>, int, cute::constant<int, 0>>>, TiledCopy=cute::TiledCopy<cute::Copy_Atom<cute::DefaultCopy, cutlass::half_t>, cute::Layout<cute::tuple<cute::tuple<cute::constant<int, 8>, cute::constant<int, 16>>, cute::_8>, cute::tuple<cute::tuple<cute::_128, cute::_1>, cute::_16>>, cute::tuple<cute::_16, cute::constant<int, 64>>>]" 
  (436): here
              instantiation of "void flash::compute_attn_1rowblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &, int, int, int) [with Kernel_traits=Flash_fwd_kernel_traits<96, 128, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<96, 128, 64, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  (630): here
              instantiation of "void flash::compute_attn<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &) [with Kernel_traits=Flash_fwd_kernel_traits<96, 128, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<96, 128, 64, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  kernels/flash_fwd_launch_template.h(14): here
              instantiation of "void flash_fwd_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax>(Flash_fwd_params) [with Kernel_traits=Flash_fwd_kernel_traits<96, 128, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<96, 128, 64, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true]" 
  kernels/flash_fwd_launch_template.h(31): here
              instantiation of "void run_flash_fwd<Kernel_traits,Is_dropout,Is_causal>(Flash_fwd_params &, cudaStream_t) [with Kernel_traits=Flash_fwd_kernel_traits<96, 128, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<96, 128, 64, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true]" 
  kernels/flash_fwd_launch_template.h(95): here
              instantiation of "void run_mha_fwd_hdim96<T>(Flash_fwd_params &, cudaStream_t) [with T=cutlass::bfloat16_t]" 
  kernels/flash_fwd_hdim96_bf16_sm80.cu(9): here

  kernels/flash_fwd_kernel.h(67): error: static assertion failed
            detected during:
              instantiation of "void flash::write_softmax_to_gmem(const cute::Tensor<Engine0, Layout0> &, cute::Tensor<Engine1, Layout1> &, TiledCopy) [with Engine0=cute::array_aligned<cutlass::half_t, 32UL, 16UL>, Layout0=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2, cute::_1>, cute::_1, cute::_8>, cute::tuple<cute::tuple<cute::_1, cute::_2, cute::constant<int, 0>>, cute::constant<int, 0>, cute::_4>>, Engine1=cute::ViewEngine<cute::gmem_ptr<cutlass::half_t>>, Layout1=cute::Layout<cute::tuple<cute::tuple<cute::_1, cute::_8>, cute::constant<int, 4>, cute::_1>, cute::tuple<cute::tuple<cute::constant<int, 0>, cute::constant<int, 1>>, int, cute::constant<int, 0>>>, TiledCopy=cute::TiledCopy<cute::Copy_Atom<cute::DefaultCopy, cutlass::half_t>, cute::Layout<cute::tuple<cute::tuple<cute::constant<int, 8>, cute::constant<int, 16>>, cute::_8>, cute::tuple<cute::tuple<cute::_128, cute::_1>, cute::_16>>, cute::tuple<cute::_16, cute::constant<int, 64>>>]" 
  (436): here
              instantiation of "void flash::compute_attn_1rowblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &, int, int, int) [with Kernel_traits=Flash_fwd_kernel_traits<192, 64, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<192, 64, 64, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  (630): here
              instantiation of "void flash::compute_attn<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &) [with Kernel_traits=Flash_fwd_kernel_traits<192, 64, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<192, 64, 64, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  kernels/flash_fwd_launch_template.h(14): here
              instantiation of "void flash_fwd_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax>(Flash_fwd_params) [with Kernel_traits=Flash_fwd_kernel_traits<192, 64, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<192, 64, 64, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true]" 
  kernels/flash_fwd_launch_template.h(31): here
              instantiation of "void run_flash_fwd<Kernel_traits,Is_dropout,Is_causal>(Flash_fwd_params &, cudaStream_t) [with Kernel_traits=Flash_fwd_kernel_traits<192, 64, 64, 4, false, false, cutlass::bfloat16_t, Flash_kernel_traits<192, 64, 64, 4, cutlass::bfloat16_t>>, Is_dropout=true, Is_causal=true]" 
  kernels/flash_fwd_launch_template.h(186): here
              instantiation of "void run_mha_fwd_hdim192<T>(Flash_fwd_params &, cudaStream_t) [with T=cutlass::bfloat16_t]" 
  kernels/flash_fwd_hdim192_bf16_sm80.cu(9): here

  2 errors detected in the compilation of "kernels/flash_fwd_hdim96_bf16_sm80.cu".
  # --error 0x1 --
  thread '<unnamed>' panicked at /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bindgen_cuda-0.1.1/src/lib.rs:220:21:
  nvcc error while executing compiling: "nvcc" "--gpu-architecture=sm_75" "-c" "-o" "/hugo/candle/target/release/build/candle-flash-attn-ef9f57105f35b8e6/out/flash_fwd_hdim96_bf16_sm80.o" "--default-stream" "per-thread" "-std=c++17" "-O3" "-U__CUDA_NO_HALF_OPERATORS__" "-U__CUDA_NO_HALF_CONVERSIONS__" "-U__CUDA_NO_HALF2_OPERATORS__" "-U__CUDA_NO_BFLOAT16_CONVERSIONS__" "-Icutlass/include" "--expt-relaxed-constexpr" "--expt-extended-lambda" "--use_fast_math" "--verbose" "kernels/flash_fwd_hdim96_bf16_sm80.cu"

  # stdout


  # stderr

  2 errors detected in the compilation of "kernels/flash_fwd_hdim256_bf16_sm80.cu".
  # --error 0x1 --
  thread '<unnamed>' panicked at /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bindgen_cuda-0.1.1/src/lib.rs:220:21:
  nvcc error while executing compiling: "nvcc" "--gpu-architecture=sm_75" "-c" "-o" "/hugo/candle/target/release/build/candle-flash-attn-ef9f57105f35b8e6/out/flash_fwd_hdim256_bf16_sm80.o" "--default-stream" "per-thread" "-std=c++17" "-O3" "-U__CUDA_NO_HALF_OPERATORS__" "-U__CUDA_NO_HALF_CONVERSIONS__" "-U__CUDA_NO_HALF2_OPERATORS__" "-U__CUDA_NO_BFLOAT16_CONVERSIONS__" "-Icutlass/include" "--expt-relaxed-constexpr" "--expt-extended-lambda" "--use_fast_math" "--verbose" "kernels/flash_fwd_hdim256_bf16_sm80.cu"

  # stdout


  # stderr

  kernels/flash_fwd_kernel.h(67): error: static assertion failed
            detected during:
              instantiation of "void flash::write_softmax_to_gmem(const cute::Tensor<Engine0, Layout0> &, cute::Tensor<Engine1, Layout1> &, TiledCopy) [with Engine0=cute::array_aligned<cutlass::half_t, 32UL, 16UL>, Layout0=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2, cute::_1>, cute::_1, cute::_8>, cute::tuple<cute::tuple<cute::_1, cute::_2, cute::constant<int, 0>>, cute::constant<int, 0>, cute::_4>>, Engine1=cute::ViewEngine<cute::gmem_ptr<cutlass::half_t>>, Layout1=cute::Layout<cute::tuple<cute::tuple<cute::_1, cute::_8>, cute::constant<int, 4>, cute::_1>, cute::tuple<cute::tuple<cute::constant<int, 0>, cute::constant<int, 1>>, int, cute::constant<int, 0>>>, TiledCopy=cute::TiledCopy<cute::Copy_Atom<cute::DefaultCopy, cutlass::half_t>, cute::Layout<cute::tuple<cute::tuple<cute::constant<int, 8>, cute::constant<int, 32>>, cute::_8>, cute::tuple<cute::tuple<cute::_256, cute::_1>, cute::constant<int, 32>>>, cute::tuple<cute::_32, cute::constant<int, 64>>>]" 
  (436): here
              instantiation of "void flash::compute_attn_1rowblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &, int, int, int) [with Kernel_traits=Flash_fwd_kernel_traits<192, 128, 64, 8, false, false, cutlass::bfloat16_t, Flash_kernel_traits<192, 128, 64, 8, cutlass::bfloat16_t>>, Is_dropout=false, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=false, Params=Flash_fwd_params]" 
  (630): here
              instantiation of "void flash::compute_attn<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &) [with Kernel_traits=Flash_fwd_kernel_traits<192, 128, 64, 8, false, false, cutlass::bfloat16_t, Flash_kernel_traits<192, 128, 64, 8, cutlass::bfloat16_t>>, Is_dropout=false, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=false, Params=Flash_fwd_params]" 
  kernels/flash_fwd_launch_template.h(14): here
              instantiation of "void flash_fwd_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax>(Flash_fwd_params) [with Kernel_traits=Flash_fwd_kernel_traits<192, 128, 64, 8, false, false, cutlass::bfloat16_t, Flash_kernel_traits<192, 128, 64, 8, cutlass::bfloat16_t>>, Is_dropout=false, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=false]" 
  kernels/flash_fwd_launch_template.h(31): here
              instantiation of "void run_flash_fwd<Kernel_traits,Is_dropout,Is_causal>(Flash_fwd_params &, cudaStream_t) [with Kernel_traits=Flash_fwd_kernel_traits<192, 128, 64, 8, false, false, cutlass::bfloat16_t, Flash_kernel_traits<192, 128, 64, 8, cutlass::bfloat16_t>>, Is_dropout=false, Is_causal=true]" 
  kernels/flash_fwd_launch_template.h(186): here
              instantiation of "void run_mha_fwd_hdim192<T>(Flash_fwd_params &, cudaStream_t) [with T=cutlass::bfloat16_t]" 
  kernels/flash_fwd_hdim192_bf16_sm80.cu(9): here

  kernels/flash_fwd_kernel.h(67): error: static assertion failed
            detected during:
              instantiation of "void flash::write_softmax_to_gmem(const cute::Tensor<Engine0, Layout0> &, cute::Tensor<Engine1, Layout1> &, TiledCopy) [with Engine0=cute::array_aligned<cutlass::half_t, 64UL, 16UL>, Layout0=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2, cute::_1>, cute::_2, cute::_8>, cute::tuple<cute::tuple<cute::_1, cute::_2, cute::constant<int, 0>>, cute::_4, cute::_8>>, Engine1=cute::ViewEngine<cute::gmem_ptr<cutlass::half_t>>, Layout1=cute::Layout<cute::tuple<cute::tuple<cute::_1, cute::_8>, cute::_8, cute::_1>, cute::tuple<cute::tuple<cute::constant<int, 0>, cute::constant<int, 1>>, int, cute::constant<int, 0>>>, TiledCopy=cute::TiledCopy<cute::Copy_Atom<cute::DefaultCopy, cutlass::half_t>, cute::Layout<cute::tuple<cute::tuple<cute::constant<int, 8>, cute::constant<int, 16>>, cute::_8>, cute::tuple<cute::tuple<cute::_128, cute::_1>, cute::constant<int, 16>>>, cute::tuple<cute::_16, cute::constant<int, 64>>>]" 
  (436): here
              instantiation of "void flash::compute_attn_1rowblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &, int, int, int) [with Kernel_traits=Flash_fwd_kernel_traits<64, 128, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<64, 128, 64, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  (630): here
              instantiation of "void flash::compute_attn<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &) [with Kernel_traits=Flash_fwd_kernel_traits<64, 128, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<64, 128, 64, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true, Params=Flash_fwd_params]" 
  kernels/flash_fwd_launch_template.h(14): here
              instantiation of "void flash_fwd_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax>(Flash_fwd_params) [with Kernel_traits=Flash_fwd_kernel_traits<64, 128, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<64, 128, 64, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=true]" 
  kernels/flash_fwd_launch_template.h(31): here
              instantiation of "void run_flash_fwd<Kernel_traits,Is_dropout,Is_causal>(Flash_fwd_params &, cudaStream_t) [with Kernel_traits=Flash_fwd_kernel_traits<64, 128, 64, 4, false, false, cutlass::half_t, Flash_kernel_traits<64, 128, 64, 4, cutlass::half_t>>, Is_dropout=true, Is_causal=true]" 
  kernels/flash_fwd_launch_template.h(71): here
              instantiation of "void run_mha_fwd_hdim64<T>(Flash_fwd_params &, cudaStream_t) [with T=cutlass::half_t]" 
  kernels/flash_fwd_hdim64_fp16_sm80.cu(9): here

  2 errors detected in the compilation of "kernels/flash_fwd_hdim192_bf16_sm80.cu".
  # --error 0x1 --
  thread '<unnamed>' panicked at /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bindgen_cuda-0.1.1/src/lib.rs:220:21:
  nvcc error while executing compiling: "nvcc" "--gpu-architecture=sm_75" "-c" "-o" "/hugo/candle/target/release/build/candle-flash-attn-ef9f57105f35b8e6/out/flash_fwd_hdim192_bf16_sm80.o" "--default-stream" "per-thread" "-std=c++17" "-O3" "-U__CUDA_NO_HALF_OPERATORS__" "-U__CUDA_NO_HALF_CONVERSIONS__" "-U__CUDA_NO_HALF2_OPERATORS__" "-U__CUDA_NO_BFLOAT16_CONVERSIONS__" "-Icutlass/include" "--expt-relaxed-constexpr" "--expt-extended-lambda" "--use_fast_math" "--verbose" "kernels/flash_fwd_hdim192_bf16_sm80.cu"

  # stdout


  # stderr

  kernels/flash_fwd_kernel.h(67): error: static assertion failed
            detected during:
              instantiation of "void flash::write_softmax_to_gmem(const cute::Tensor<Engine0, Layout0> &, cute::Tensor<Engine1, Layout1> &, TiledCopy) [with Engine0=cute::array_aligned<cutlass::half_t, 128UL, 16UL>, Layout0=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2, cute::_1>, cute::_2, cute::_16>, cute::tuple<cute::tuple<cute::_1, cute::_2, cute::constant<int, 0>>, cute::_4, cute::_8>>, Engine1=cute::ViewEngine<cute::gmem_ptr<cutlass::half_t>>, Layout1=cute::Layout<cute::tuple<cute::tuple<cute::_1, cute::_8>, cute::_16, cute::_1>, cute::tuple<cute::tuple<cute::constant<int, 0>, cute::constant<int, 1>>, int, cute::constant<int, 0>>>, TiledCopy=cute::TiledCopy<cute::Copy_Atom<cute::DefaultCopy, cutlass::half_t>, cute::Layout<cute::tuple<cute::tuple<cute::_16, cute::_8>, cute::_8>, cute::tuple<cute::tuple<cute::_64, cute::_1>, cute::_8>>, cute::tuple<cute::_8, cute::_128>>]" 
  (436): here
              instantiation of "void flash::compute_attn_1rowblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &, int, int, int) [with Kernel_traits=Flash_fwd_kernel_traits<64, 128, 128, 4, false, false, cutlass::half_t, Flash_kernel_traits<64, 128, 128, 4, cutlass::half_t>>, Is_dropout=false, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=false, Params=Flash_fwd_params]" 
  (630): here
              instantiation of "void flash::compute_attn<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax,Params>(const Params &) [with Kernel_traits=Flash_fwd_kernel_traits<64, 128, 128, 4, false, false, cutlass::half_t, Flash_kernel_traits<64, 128, 128, 4, cutlass::half_t>>, Is_dropout=false, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=false, Params=Flash_fwd_params]" 
  kernels/flash_fwd_launch_template.h(14): here
              instantiation of "void flash_fwd_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Return_softmax>(Flash_fwd_params) [with Kernel_traits=Flash_fwd_kernel_traits<64, 128, 128, 4, false, false, cutlass::half_t, Flash_kernel_traits<64, 128, 128, 4, cutlass::half_t>>, Is_dropout=false, Is_causal=true, Is_local=false, Has_alibi=true, Is_even_MN=false, Is_even_K=true, Return_softmax=false]" 
  kernels/flash_fwd_launch_template.h(31): here
              instantiation of "void run_flash_fwd<Kernel_traits,Is_dropout,Is_causal>(Flash_fwd_params &, cudaStream_t) [with Kernel_traits=Flash_fwd_kernel_traits<64, 128, 128, 4, false, false, cutlass::half_t, Flash_kernel_traits<64, 128, 128, 4, cutlass::half_t>>, Is_dropout=false, Is_causal=true]" 
  kernels/flash_fwd_launch_template.h(71): here
              instantiation of "void run_mha_fwd_hdim64<T>(Flash_fwd_params &, cudaStream_t) [with T=cutlass::half_t]" 
  kernels/flash_fwd_hdim64_fp16_sm80.cu(9): here

  2 errors detected in the compilation of "kernels/flash_fwd_hdim64_fp16_sm80.cu".
  # --error 0x1 --
  thread '<unnamed>' panicked at /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bindgen_cuda-0.1.1/src/lib.rs:220:21:
  nvcc error while executing compiling: "nvcc" "--gpu-architecture=sm_75" "-c" "-o" "/hugo/candle/target/release/build/candle-flash-attn-ef9f57105f35b8e6/out/flash_fwd_hdim64_fp16_sm80.o" "--default-stream" "per-thread" "-std=c++17" "-O3" "-U__CUDA_NO_HALF_OPERATORS__" "-U__CUDA_NO_HALF_CONVERSIONS__" "-U__CUDA_NO_HALF2_OPERATORS__" "-U__CUDA_NO_BFLOAT16_CONVERSIONS__" "-Icutlass/include" "--expt-relaxed-constexpr" "--expt-extended-lambda" "--use_fast_math" "--verbose" "kernels/flash_fwd_hdim64_fp16_sm80.cu"

  # stdout


  # stderr

hugoabonizio avatar Feb 08 '24 20:02 hugoabonizio

Sadly we only support flash-attention v2 in candle. This likely explains the compilation error.

LaurentMazare avatar Feb 08 '24 20:02 LaurentMazare

No problem! So, is the explanation for the performance difference that PyTorch is using flash attention 1.x under the hood? Even though I'm not instantiating the model with attn_implementation='...', PyTorch might still be using SDPA, for example?

hugoabonizio avatar Feb 08 '24 21:02 hugoabonizio

I'm not sure, hence the suggestion to try it :) I feel it's likely that the pytorch version uses some form of optimised attention here.

LaurentMazare avatar Feb 08 '24 21:02 LaurentMazare

I did some digging and discovered that transformers uses PyTorch SDPA by default. However, disabling it resulted in the same tokens/s (perhaps SDPA isn't well optimized for T4s?):

model = AutoModelForCausalLM.from_pretrained(
    'meta-llama/Llama-2-7b-hf',
    torch_dtype=torch.float16,
    attn_implementation='eager'
).to('cuda')
  • Python: 14.334846807770598 tokens/s

There's no such thing as torch.inference_mode() or torch.no_grad() in Candle that I was supposed to be using, right?

hugoabonizio avatar Feb 08 '24 21:02 hugoabonizio

I repeated the tests using an Nvidia L4 24GB:

Python with Flash Attention: 15.613678875975044 tokens/s
model = AutoModelForCausalLM.from_pretrained(
    'meta-llama/Llama-2-7b-hf',
    torch_dtype=torch.float16,
    attn_implementation='flash_attention_2'
).to('cuda')
Python without Flash Attention: 15.47059674081593 tokens/s
model = AutoModelForCausalLM.from_pretrained(
    'meta-llama/Llama-2-7b-hf',
    torch_dtype=torch.float16,
    attn_implementation='eager'
).to('cuda')
Candle with Flash Attention: 12.666421963300772 token/s
cargo run --example llama --release --features "flash-attn cuda" -- --prompt "${prompt}" --temperature 2.0 --sample-len 200 --dtype f16 --use-flash-attn
Candle without Flash Attention: 11.487691852657587 token/s
cargo run --example llama --release --features "cuda" -- --prompt "${prompt}" --temperature 2.0 --sample-len 200 --dtype f16

The results make more sense now, although there's still a ~20% difference. Next, I'll try batched inference to see how the gap changes.
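
For reference, the tokens/s figure is just the number of generated tokens divided by the wall-clock time of the decoding loop (whether prompt processing is counted shifts the number a little). Below is a minimal, self-contained sketch of that measurement, where `forward_and_sample` is a hypothetical stand-in for one real decoding step:

```rust
use std::time::Instant;

/// Runs `sample_len` decoding steps and returns the measured tokens per second.
/// `forward_and_sample` stands in for one forward pass + sampling of the next token.
fn measure_tokens_per_second(sample_len: usize, mut forward_and_sample: impl FnMut() -> u32) -> f64 {
    let start = Instant::now();
    for _ in 0..sample_len {
        let _next_token = forward_and_sample();
    }
    sample_len as f64 / start.elapsed().as_secs_f64()
}

fn main() {
    // Dummy closure so the sketch runs on its own; a real run would call the model here.
    let mut token = 0u32;
    let tps = measure_tokens_per_second(200, || {
        token += 1;
        token
    });
    println!("{tps:.1} tokens/s");
}
```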

hugoabonizio avatar Feb 09 '24 11:02 hugoabonizio

There's no such thing as torch.inference_mode() or torch.no_grad() in Candle that I was supposed to be using, right?

Nope, the default in almost all our examples is to not track gradients so nothing to be done here.
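
To make that concrete, here's a minimal sketch (assuming only candle-core as a dependency): plain Tensor ops never build a gradient graph; a graph is only materialized when the computation involves Vars and backward() is called, which the inference examples never do.

```rust
use candle_core::{Device, Result, Tensor, Var};

fn main() -> Result<()> {
    let device = Device::cuda_if_available(0)?;

    // Plain tensors, as in the inference examples: a pure forward computation,
    // no gradient graph is tracked, so no no_grad/inference_mode equivalent is needed.
    let x = Tensor::randn(0f32, 1.0, (2, 4), &device)?;
    let w = Tensor::randn(0f32, 1.0, (4, 3), &device)?;
    let y = x.matmul(&w)?;
    println!("{y}");

    // Gradients only come into play when starting from Vars and calling backward(),
    // i.e. during training.
    let w_trainable = Var::randn(0f32, 1.0, (4, 3), &device)?;
    let loss = x.matmul(w_trainable.as_tensor())?.sum_all()?;
    let grads = loss.backward()?;
    let _dw = grads.get(&w_trainable); // gradient w.r.t. the Var, if present
    Ok(())
}
```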

LaurentMazare avatar Feb 10 '24 19:02 LaurentMazare

@LaurentMazare I had the impression that flash attention wasn't supported in T4s because it wasn't compiling with it. However, it appears that the Python implementation of flash attention 1.x does support the Turing architecture.

I tried the command:

$ cargo run --example llama --release --features "flash-attn cuda" -- --prompt "${prompt}" --temperature 2.0 --sample-len 200 --dtype f16 --use-flash-attn

Error

I'm having the same problem. How do I run candle with flash-attention v2?

Hojun-Son avatar Feb 26 '24 05:02 Hojun-Son

@Hojun-Son Candle is already using flash-attention v2. In my case I modified the code to use regular attention instead of FA when I was running on T4s.
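
For context, the change amounts to a switch like the one sketched below (the `use_flash_attn` flag, the tensor shapes, and the omission of the causal mask in the fallback path are simplifications for illustration; it assumes candle-core, candle-nn and candle-flash-attn as dependencies):

```rust
use candle_core::{Result, Tensor, D};

/// q, k, v are assumed to be (batch, num_heads, seq_len, head_dim).
fn attention(q: &Tensor, k: &Tensor, v: &Tensor, use_flash_attn: bool) -> Result<Tensor> {
    let head_dim = q.dim(D::Minus1)?;
    if use_flash_attn {
        // The flash-attn v2 kernel expects (batch, seq_len, num_heads, head_dim).
        let q = q.transpose(1, 2)?;
        let k = k.transpose(1, 2)?;
        let v = v.transpose(1, 2)?;
        let softmax_scale = 1f32 / (head_dim as f32).sqrt();
        candle_flash_attn::flash_attn(&q, &k, &v, softmax_scale, /* causal */ true)?
            .transpose(1, 2)
    } else {
        // Regular attention: softmax(QK^T / sqrt(d)) V, causal masking omitted here.
        let att = (q.matmul(&k.t()?)? / (head_dim as f64).sqrt())?;
        let att = candle_nn::ops::softmax_last_dim(&att)?;
        att.matmul(v)
    }
}
```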

hugoabonizio avatar Feb 28 '24 11:02 hugoabonizio

I got a candle result of about 18.5 tokens/s on a T4 with vllm's paged attention code, which seems to be based on flash-attention v1.

yinqiwen avatar Apr 01 '24 10:04 yinqiwen

FYI: about tps https://github.com/premAI-io/benchmarks/blob/main/README.md

| Engine                 | float32       | float16       | int8          | int4           |
|------------------------|---------------|---------------|---------------|----------------|
| burn                   | 10.04 ± 0.64  | -             | -             | -              |
| candle                 | -             | 36.78 ± 2.17  | -             | -              |
| llama.cpp              | -             | -             | 79.15 ± 1.20  | 100.90 ± 1.46  |
| ctranslate             | 35.23 ± 4.01  | 55.72 ± 16.66 | 35.73 ± 10.87 | -              |
| tinygrad               | -             | 20.32 ± 0.06  | -             | -              |
| onnx                   | -             | 54.16 ± 3.15  | -             | -              |
| transformers (pytorch) | 43.79 ± 0.61  | 46.39 ± 0.28  | 6.98 ± 0.05   | 21.72 ± 0.11   |
| vllm                   | 90.78 ± 1.60  | 90.54 ± 2.22  | -             | 114.69 ± 11.20 |
| exllamav2              | -             | -             | 121.63 ± 0.74 | 130.16 ± 0.35  |
| ctransformers          | -             | -             | 76.75 ± 10.36 | 84.26 ± 5.79   |
| AutoGPTQ               | 42.01 ± 1.03  | 30.24 ± 0.41  | -             | -              |
| AutoAWQ                | -             | -             | -             | 109.20 ± 3.28  |
| DeepSpeed              | -             | 81.44 ± 8.13  | -             |                |
| PyTorch Lightning      | 24.85 ± 0.07  | 44.56 ± 2.89  | 10.50 ± 0.12  | 24.83 ± 0.05   |
| Optimum Nvidia         | 110.36 ± 0.52 | 109.09 ± 4.26 | -             | -              |
| Nvidia TensorRT-LLM    | 55.19 ± 1.03  | 85.03 ± 0.62  | 167.66 ± 2.05 | 235.18 ± 3.20  |

katopz avatar Apr 01 '24 10:04 katopz

FYI: about tps https://github.com/premAI-io/benchmarks/blob/main/README.md

Interesting datapoint. It's worth mentioning that we've pushed quite a bit on optimizing the performance of the cuda backend over the last two or three weeks, so I would expect the candle numbers to have improved a bit since then.

The latest change, #1977, included re-using some llama.cpp kernels for quantized models, and with them performance is on par with llama.cpp: for a 7b model with 4-bit quantization, candle was at ~33 token/s before the change and llama.cpp at ~55 token/s, while after the change candle is at ~60 token/s. This one only applies to quantized models on cuda, but a bunch of other changes would also apply to unquantized models.

LaurentMazare avatar Apr 01 '24 11:04 LaurentMazare