
Compilation errors: incomplete type is not allowed

Open TopTea1 opened this issue 3 years ago • 4 comments

Hello, I tried to run the ViT example from the repo inside the provided Docker image, but I hit compilation errors during the execution of compile_vit(). Here is part of the error output:

/usr/local/lib/python3.8/dist-packages/aitemplate/3rdparty/cutlass/include/cutlass/epilogue/threadblock/epilogue.h(476): error: expression must be a pointer to a complete object type
          detected during:
            instantiation of "void cutlass::epilogue::threadblock::Epilogue<Shape_, WarpMmaOperator_, PartitionsK, OutputTileIterator_, AccumulatorFragmentIterator_, WarpTileIterator_, SharedLoadIterator_, OutputOp_, Padding_, FragmentsPerPartition, IterationsUnroll>::apply_output_operator_(OutputTileIterator_::Fragment &, const cutlass::epilogue::threadblock::Epilogue<Shape_, WarpMmaOperator_, PartitionsK, OutputTileIterator_, AccumulatorFragmentIterator_, WarpTileIterator_, SharedLoadIterator_, OutputOp_, Padding_, FragmentsPerPartition, IterationsUnroll>::OutputOp &, const SharedLoadIterator_::Fragment &, const OutputTileIterator_::Fragment &) [with Shape_=cutlass::gemm::GemmShape<64, 256, 32>, WarpMmaOperator_=cutlass::gemm::warp::MmaTensorOp<cutlass::gemm::GemmShape<64, 64, 32>, cutlass::half_t, cutlass::layout::RowMajorTensorOpMultiplicandCrosswise<16, 32>, cutlass::half_t, cutlass::layout::ColumnMajorTensorOpMultiplicandCrosswise<16, 32>, cutlass::half_t, cutlass::layout::RowMajor, cutlass::gemm::warp::MmaTensorOpPolicy<cutlass::arch::Mma<cutlass::gemm::GemmShape<8, 8, 4>, 32, cutlass::half_t, cutlass::layout::RowMajor, cutlass::half_t, cutlass::layout::ColumnMajor, cutlass::half_t, cutlass::layout::RowMajor, cutlass::arch::OpMultiplyAdd>, cutlass::MatrixShape<1, 1>>, 1, false, __nv_bool>, PartitionsK=1, OutputTileIterator_=cutlass::epilogue::threadblock::PredicatedTileIterator<cutlass::epilogue::threadblock::OutputTileOptimalThreadMap<cutlass::epilogue::threadblock::OutputTileShape<256, 8, 1, 1, 1>, cutlass::epilogue::threadblock::OutputTileShape<1, 8, 1, 1, 8>, 128, 2, 16>, cutlass::half_t, false, cutlass::layout::NoPermute, false>, AccumulatorFragmentIterator_=cutlass::epilogue::warp::FragmentIteratorComplexTensorOp<cutlass::gemm::GemmShape<64, 64, 32>, <error-type>, <error-type>, <error-type>, cutlass::layout::RowMajor>, WarpTileIterator_=cutlass::epilogue::warp::TileIteratorTensorOp<cutlass::gemm::GemmShape<64, 64, 32>, <error-type>, cutlass::half_t, 
cutlass::layout::RowMajor>, SharedLoadIterator_=cutlass::epilogue::threadblock::SharedLoadIterator<cutlass::epilogue::threadblock::OutputTileOptimalThreadMap<cutlass::epilogue::threadblock::OutputTileShape<256, 8, 1, 1, 1>, cutlass::epilogue::threadblock::OutputTileShape<1, 8, 1, 1, 8>, 128, 2, 16>::CompactedThreadMap, cutlass::half_t, 4>, OutputOp_=cutlass::epilogue::thread::LinearCombination<cutlass::half_t, 2, cutlass::half_t, cutlass::half_t, cutlass::epilogue::thread::ScaleType::Default, cutlass::FloatRoundStyle::round_to_nearest>, Padding_=cutlass::MatrixShape<0, 16>, FragmentsPerPartition=1, IterationsUnroll=1]" 
(439): here
            instantiation of "void cutlass::epilogue::threadblock::Epilogue<Shape_, WarpMmaOperator_, PartitionsK, OutputTileIterator_, AccumulatorFragmentIterator_, WarpTileIterator_, SharedLoadIterator_, OutputOp_, Padding_, FragmentsPerPartition, IterationsUnroll>::compute_source_needed_(const cutlass::epilogue::threadblock::Epilogue<Shape_, WarpMmaOperator_, PartitionsK, OutputTileIterator_, AccumulatorFragmentIterator_, WarpTileIterator_, SharedLoadIterator_, OutputOp_, Padding_, FragmentsPerPartition, IterationsUnroll>::OutputOp &, cutlass::epilogue::threadblock::Epilogue<Shape_, WarpMmaOperator_, PartitionsK, OutputTileIterator_, AccumulatorFragmentIterator_, WarpTileIterator_, SharedLoadIterator_, OutputOp_, Padding_, FragmentsPerPartition, IterationsUnroll>::OutputTileIterator, const cutlass::epilogue::threadblock::Epilogue<Shape_, WarpMmaOperator_, PartitionsK, OutputTileIterator_, AccumulatorFragmentIterator_, WarpTileIterator_, SharedLoadIterator_, OutputOp_, Padding_, FragmentsPerPartition, IterationsUnroll>::AccumulatorTile &, cutlass::epilogue::threadblock::Epilogue<Shape_, WarpMmaOperator_, PartitionsK, OutputTileIterator_, AccumulatorFragmentIterator_, WarpTileIterator_, SharedLoadIterator_, OutputOp_, Padding_, FragmentsPerPartition, IterationsUnroll>::OutputTileIterator) [with Shape_=cutlass::gemm::GemmShape<64, 256, 32>, WarpMmaOperator_=cutlass::gemm::warp::MmaTensorOp<cutlass::gemm::GemmShape<64, 64, 32>, cutlass::half_t, cutlass::layout::RowMajorTensorOpMultiplicandCrosswise<16, 32>, cutlass::half_t, cutlass::layout::ColumnMajorTensorOpMultiplicandCrosswise<16, 32>, cutlass::half_t, cutlass::layout::RowMajor, cutlass::gemm::warp::MmaTensorOpPolicy<cutlass::arch::Mma<cutlass::gemm::GemmShape<8, 8, 4>, 32, cutlass::half_t, cutlass::layout::RowMajor, cutlass::half_t, cutlass::layout::ColumnMajor, cutlass::half_t, cutlass::layout::RowMajor, cutlass::arch::OpMultiplyAdd>, cutlass::MatrixShape<1, 1>>, 1, false, __nv_bool>, PartitionsK=1, 
OutputTileIterator_=cutlass::epilogue::threadblock::PredicatedTileIterator<cutlass::epilogue::threadblock::OutputTileOptimalThreadMap<cutlass::epilogue::threadblock::OutputTileShape<256, 8, 1, 1, 1>, cutlass::epilogue::threadblock::OutputTileShape<1, 8, 1, 1, 8>, 128, 2, 16>, cutlass::half_t, false, cutlass::layout::NoPermute, false>, AccumulatorFragmentIterator_=cutlass::epilogue::warp::FragmentIteratorComplexTensorOp<cutlass::gemm::GemmShape<64, 64, 32>, <error-type>, <error-type>, <error-type>, cutlass::layout::RowMajor>, WarpTileIterator_=cutlass::epilogue::warp::TileIteratorTensorOp<cutlass::gemm::GemmShape<64, 64, 32>, <error-type>, cutlass::half_t, cutlass::layout::RowMajor>, SharedLoadIterator_=cutlass::epilogue::threadblock::SharedLoadIterator<cutlass::epilogue::threadblock::OutputTileOptimalThreadMap<cutlass::epilogue::threadblock::OutputTileShape<256, 8, 1, 1, 1>, cutlass::epilogue::threadblock::OutputTileShape<1, 8, 1, 1, 8>, 128, 2, 16>::CompactedThreadMap, cutlass::half_t, 4>, OutputOp_=cutlass::epilogue::thread::LinearCombination<cutlass::half_t, 2, cutlass::half_t, cutlass::half_t, cutlass::epilogue::thread::ScaleType::Default, cutlass::FloatRoundStyle::round_to_nearest>, Padding_=cutlass::MatrixShape<0, 16>, FragmentsPerPartition=1, IterationsUnroll=1]" 
(199): here
            instantiation of "void cutlass::epilogue::threadblock::Epilogue<Shape_, WarpMmaOperator_, PartitionsK, OutputTileIterator_, AccumulatorFragmentIterator_, WarpTileIterator_, SharedLoadIterator_, OutputOp_, Padding_, FragmentsPerPartition, IterationsUnroll>::operator()(const cutlass::epilogue::threadblock::Epilogue<Shape_, WarpMmaOperator_, PartitionsK, OutputTileIterator_, AccumulatorFragmentIterator_, WarpTileIterator_, SharedLoadIterator_, OutputOp_, Padding_, FragmentsPerPartition, IterationsUnroll>::OutputOp &, cutlass::epilogue::threadblock::Epilogue<Shape_, WarpMmaOperator_, PartitionsK, OutputTileIterator_, AccumulatorFragmentIterator_, WarpTileIterator_, SharedLoadIterator_, OutputOp_, Padding_, FragmentsPerPartition, IterationsUnroll>::OutputTileIterator, const cutlass::epilogue::threadblock::Epilogue<Shape_, WarpMmaOperator_, PartitionsK, OutputTileIterator_, AccumulatorFragmentIterator_, WarpTileIterator_, SharedLoadIterator_, OutputOp_, Padding_, FragmentsPerPartition, IterationsUnroll>::AccumulatorTile &, cutlass::epilogue::threadblock::Epilogue<Shape_, WarpMmaOperator_, PartitionsK, OutputTileIterator_, AccumulatorFragmentIterator_, WarpTileIterator_, SharedLoadIterator_, OutputOp_, Padding_, FragmentsPerPartition, IterationsUnroll>::OutputTileIterator) [with Shape_=cutlass::gemm::GemmShape<64, 256, 32>, WarpMmaOperator_=cutlass::gemm::warp::MmaTensorOp<cutlass::gemm::GemmShape<64, 64, 32>, cutlass::half_t, cutlass::layout::RowMajorTensorOpMultiplicandCrosswise<16, 32>, cutlass::half_t, cutlass::layout::ColumnMajorTensorOpMultiplicandCrosswise<16, 32>, cutlass::half_t, cutlass::layout::RowMajor, cutlass::gemm::warp::MmaTensorOpPolicy<cutlass::arch::Mma<cutlass::gemm::GemmShape<8, 8, 4>, 32, cutlass::half_t, cutlass::layout::RowMajor, cutlass::half_t, cutlass::layout::ColumnMajor, cutlass::half_t, cutlass::layout::RowMajor, cutlass::arch::OpMultiplyAdd>, cutlass::MatrixShape<1, 1>>, 1, false, __nv_bool>, PartitionsK=1, 
OutputTileIterator_=cutlass::epilogue::threadblock::PredicatedTileIterator<cutlass::epilogue::threadblock::OutputTileOptimalThreadMap<cutlass::epilogue::threadblock::OutputTileShape<256, 8, 1, 1, 1>, cutlass::epilogue::threadblock::OutputTileShape<1, 8, 1, 1, 8>, 128, 2, 16>, cutlass::half_t, false, cutlass::layout::NoPermute, false>, AccumulatorFragmentIterator_=cutlass::epilogue::warp::FragmentIteratorComplexTensorOp<cutlass::gemm::GemmShape<64, 64, 32>, <error-type>, <error-type>, <error-type>, cutlass::layout::RowMajor>, WarpTileIterator_=cutlass::epilogue::warp::TileIteratorTensorOp<cutlass::gemm::GemmShape<64, 64, 32>, <error-type>, cutlass::half_t, cutlass::layout::RowMajor>, SharedLoadIterator_=cutlass::epilogue::threadblock::SharedLoadIterator<cutlass::epilogue::threadblock::OutputTileOptimalThreadMap<cutlass::epilogue::threadblock::OutputTileShape<256, 8, 1, 1, 1>, cutlass::epilogue::threadblock::OutputTileShape<1, 8, 1, 1, 8>, 128, 2, 16>::CompactedThreadMap, cutlass::half_t, 4>, OutputOp_=cutlass::epilogue::thread::LinearCombination<cutlass::half_t, 2, cutlass::half_t, cutlass::half_t, cutlass::epilogue::thread::ScaleType::Default, cutlass::FloatRoundStyle::round_to_nearest>, Padding_=cutlass::MatrixShape<0, 16>, FragmentsPerPartition=1, IterationsUnroll=1]" 
/usr/local/lib/python3.8/dist-packages/aitemplate/3rdparty/cutlass/include/cutlass/conv/kernel/implicit_gemm_convolution.h(426): here
            instantiation of "void cutlass::conv::kernel::ImplicitGemmConvolution<Mma_, Epilogue_, ThreadblockSwizzle_, ConvOperator, ConvProblemSize_, GroupMode_>::operator()(const cutlass::conv::kernel::ImplicitGemmConvolution<Mma_, Epilogue_, ThreadblockSwizzle_, ConvOperator, ConvProblemSize_, GroupMode_>::Params &, cutlass::conv::kernel::ImplicitGemmConvolution<Mma_, Epilogue_, ThreadblockSwizzle_, ConvOperator, ConvProblemSize_, GroupMode_>::SharedStorage &) [with Mma_=cutlass::conv::threadblock::ImplicitGemmMultistage<cutlass::gemm::GemmShape<64, 256, 32>, cutlass::conv::threadblock::Conv2dFpropActivationTileAccessIteratorFixedChannels<cutlass::MatrixShape<64, 32>, cutlass::half_t, cutlass::layout::TensorNHWC, cutlass::transform::PitchLinearWarpRakedThreadMap<cutlass::PitchLinearShape<32, 64>, 128, cutlass::PitchLinearShape<4, 8>, 8>, cutlass::AlignedArray<cutlass::half_t, 4, 8>>, cutlass::transform::threadblock::RegularTileAccessIterator<cutlass::MatrixShape<64, 32>, cutlass::half_t, cutlass::layout::RowMajorTensorOpMultiplicandCrosswise<16, 32>, 0, cutlass::transform::PitchLinearWarpRakedThreadMap<cutlass::PitchLinearShape<32, 64>, 128, cutlass::PitchLinearShape<4, 8>, 8>, 16>, cutlass::arch::CacheOperation::Always, cutlass::conv::threadblock::Conv2dFpropFilterTileAccessIteratorFixedChannels<cutlass::MatrixShape<32, 256>, cutlass::half_t, cutlass::layout::TensorNHWC, cutlass::transform::PitchLinearWarpRakedThreadMap<cutlass::PitchLinearShape<32, 256>, 128, cutlass::PitchLinearShape<4, 8>, 8>, cutlass::AlignedArray<cutlass::half_t, 4, 8>>, cutlass::transform::threadblock::RegularTileAccessIterator<cutlass::MatrixShape<32, 256>, cutlass::half_t, cutlass::layout::ColumnMajorTensorOpMultiplicandCrosswise<16, 32>, 1, cutlass::transform::PitchLinearWarpRakedThreadMap<cutlass::PitchLinearShape<32, 256>, 128, cutlass::PitchLinearShape<4, 8>, 8>, 16>, cutlass::arch::CacheOperation::Always, 
cutlass::gemm::threadblock::MmaPolicy<cutlass::gemm::warp::MmaTensorOp<cutlass::gemm::GemmShape<64, 64, 32>, cutlass::half_t, cutlass::layout::RowMajorTensorOpMultiplicandCrosswise<16, 32>, cutlass::half_t, cutlass::layout::ColumnMajorTensorOpMultiplicandCrosswise<16, 32>, cutlass::half_t, cutlass::layout::RowMajor, cutlass::gemm::warp::MmaTensorOpPolicy<cutlass::arch::Mma<cutlass::gemm::GemmShape<8, 8, 4>, 32, cutlass::half_t, cutlass::layout::RowMajor, cutlass::half_t, cutlass::layout::ColumnMajor, cutlass::half_t, cutlass::layout::RowMajor, cutlass::arch::OpMultiplyAdd>, cutlass::MatrixShape<1, 1>>, 1, false, __nv_bool>, cutlass::MatrixShape<0, 0>, cutlass::MatrixShape<0, 0>, 1>, 3, __nv_bool>, Epilogue_=cutlass::epilogue::threadblock::Epilogue<cutlass::gemm::GemmShape<64, 256, 32>, cutlass::gemm::warp::MmaTensorOp<cutlass::gemm::GemmShape<64, 64, 32>, cutlass::half_t, cutlass::layout::RowMajorTensorOpMultiplicandCrosswise<16, 32>, cutlass::half_t, cutlass::layout::ColumnMajorTensorOpMultiplicandCrosswise<16, 32>, cutlass::half_t, cutlass::layout::RowMajor, cutlass::gemm::warp::MmaTensorOpPolicy<cutlass::arch::Mma<cutlass::gemm::GemmShape<8, 8, 4>, 32, cutlass::half_t, cutlass::layout::RowMajor, cutlass::half_t, cutlass::layout::ColumnMajor, cutlass::half_t, cutlass::layout::RowMajor, cutlass::arch::OpMultiplyAdd>, cutlass::MatrixShape<1, 1>>, 1, false, __nv_bool>, 1, cutlass::epilogue::threadblock::PredicatedTileIterator<cutlass::epilogue::threadblock::OutputTileOptimalThreadMap<cutlass::epilogue::threadblock::OutputTileShape<256, 8, 1, 1, 1>, cutlass::epilogue::threadblock::OutputTileShape<1, 8, 1, 1, 8>, 128, 2, 16>, cutlass::half_t, false, cutlass::layout::NoPermute, false>, cutlass::epilogue::warp::FragmentIteratorComplexTensorOp<cutlass::gemm::GemmShape<64, 64, 32>, <error-type>, <error-type>, <error-type>, cutlass::layout::RowMajor>, cutlass::epilogue::warp::TileIteratorTensorOp<cutlass::gemm::GemmShape<64, 64, 32>, <error-type>, cutlass::half_t, 
cutlass::layout::RowMajor>, cutlass::epilogue::threadblock::SharedLoadIterator<cutlass::epilogue::threadblock::OutputTileOptimalThreadMap<cutlass::epilogue::threadblock::OutputTileShape<256, 8, 1, 1, 1>, cutlass::epilogue::threadblock::OutputTileShape<1, 8, 1, 1, 8>, 128, 2, 16>::CompactedThreadMap, cutlass::half_t, 4>, cutlass::epilogue::thread::LinearCombination<cutlass::half_t, 2, cutlass::half_t, cutlass::half_t, cutlass::epilogue::thread::ScaleType::Default, cutlass::FloatRoundStyle::round_to_nearest>, cutlass::MatrixShape<0, 16>, 1, 1>, ThreadblockSwizzle_=cutlass::gemm::threadblock::GemmIdentityThreadblockSwizzle<4>, ConvOperator=cutlass::conv::Operator::kFprop, ConvProblemSize_=cutlass::conv::Conv2dProblemSize, GroupMode_=cutlass::conv::GroupMode::kNone]" 
/usr/local/lib/python3.8/dist-packages/aitemplate/3rdparty/cutlass/include/cutlass/device_kernel.h(57): here
            instantiation of "void cutlass::Kernel<Operator>(Operator::Params) [with Operator=cutlass_tensorop_h884fprop_fixed_channels_64x256_32x3_nhwc_align4_base]" 
/usr/local/lib/python3.8/dist-packages/aitemplate/3rdparty/cutlass/include/cutlass/conv/device/implicit_gemm_convolution.h(297): here
            instantiation of "cutlass::Status cutlass::conv::device::ImplicitGemmConvolution<ImplicitGemmKernel_>::run(cudaStream_t) [with ImplicitGemmKernel_=cutlass_tensorop_h884fprop_fixed_channels_64x256_32x3_nhwc_align4_base]" 
/usr/local/lib/python3.8/dist-packages/aitemplate/3rdparty/cutlass/include/cutlass/conv/device/implicit_gemm_convolution.h(306): here
            instantiation of "cutlass::Status cutlass::conv::device::ImplicitGemmConvolution<ImplicitGemmKernel_>::operator()(cudaStream_t) [with ImplicitGemmKernel_=cutlass_tensorop_h884fprop_fixed_channels_64x256_32x3_nhwc_align4_base]" 
conv2d_bias_few_channels/cutlass_h884fprop_fixed_channels_64x256_32x3_nhwc_align_4_2.cu(136): here

99 errors detected in the compilation of "conv2d_bias_few_channels/cutlass_h884fprop_fixed_channels_64x256_32x3_nhwc_align_4_2.cu".
make: *** [Makefile:7: conv2d_bias_few_channels/cutlass_h884fprop_fixed_channels_64x256_32x3_nhwc_align_4_2] Error 1
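Most of the 99 reported errors are the same root failure replayed through cutlass's template-instantiation backtraces, which makes logs like the one above hard to read. A small generic filter that keeps only the top-level `error:` lines (a triage sketch, not part of AITemplate) makes the actual failure stand out:

```python
import re


def summarize_nvcc_errors(log):
    """Return only the top-level nvcc error lines, dropping the
    'instantiation of ...' / '(NNN): here' backtrace noise that
    deeply templated cutlass code produces."""
    errors = []
    for line in log.splitlines():
        line = line.strip()
        # nvcc error lines look like: path(line): error: message
        if re.search(r"\(\d+\): error: ", line):
            errors.append(line)
    return errors


sample = """\
epilogue.h(476): error: expression must be a pointer to a complete object type
          detected during:
            instantiation of "void cutlass::epilogue::threadblock::Epilogue<...>..."
(439): here
"""
print(summarize_nvcc_errors(sample))
# -> ['epilogue.h(476): error: expression must be a pointer to a complete object type']
```

Running this over the full errorlog quickly shows whether all 99 errors share one root cause or several.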

2022-11-15 13:56:14,606 INFO <aitemplate.compiler.transform.profile> compiled profilers elapsed time: 0:00:04.537052
2022-11-15 13:56:14,607 INFO <aitemplate.compiler.ops.conv.conv2d> Profile: conv2d_bias_few_channels_1: NI == 1 && HI == 224 && WI == 224 && CI == 4
2022-11-15 13:56:14,608 INFO <aitemplate.backend.profiler_runner> Using 1 GPU for profiling conv2d_bias_few_channels_1


RuntimeError: Profiler ./tmp/profiler/conv2d_bias_few_channels/cutlass_h884fprop_fixed_channels_256x128_32x3_nhwc_align_4_8 is not executable

Environment: NVCC 11.6, GCC 9.4.0, Make 4.2.1, AITemplate master branch.

Thanks for your help

TopTea1 · Nov 15 '22 14:11

The complete logs: errorlog.txt

TopTea1 · Nov 15 '22 14:11

The Docker image has not been updated to match the current instructions: the CUTLASS version needs to be v0.1.1 (the image tracks the main branch instead), and the wheel is not installed with pip's --force-reinstall flag.
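The fix suggested in the comment above can be sketched roughly as follows. Everything here is an assumption based on the standard AITemplate source layout and its documented wheel build; in particular, whether v0.1.1 refers to an AITemplate release tag or a CUTLASS tag should be checked against the release notes before running this:

```shell
# Hedged sketch, not verified against this AITemplate revision.
cd AITemplate                              # existing source checkout
git fetch --tags
git checkout v0.1.1                        # tag cited above (assumed to be an AITemplate release tag)
git submodule update --init --recursive    # re-pins 3rdparty/cutlass to the matching commit
cd python
python setup.py bdist_wheel                # wheel build step (assumption based on the repo docs)
pip install dist/aitemplate-*.whl --force-reinstall  # pip's actual flag is --force-reinstall
```

The --force-reinstall flag matters because pip will otherwise skip the install when a wheel with the same version number is already present, leaving the stale build in place.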

vans163 · Nov 15 '22 16:11

I have switched to the correct CUTLASS version (checked out the correct commit) and I still get the same errors.

TopTea1 · Nov 15 '22 17:11

Thanks for reporting the issue. We will investigate the potential breakage this week.

antinucleon · Nov 16 '22 18:11