Summary
Traceback (most recent call last):
File "/mnt/sda/yjy/summer/DL-autograd-torch-main/yjy_test.py", line 24, in
output = oneflow.roi_align(input0,input1,input2,input3,input4,input5,input6)
oneflow._oneflow_internal.exception.OpKernelNotFoundException:
File "/home/ci-user/runners/release/_work/oneflow/oneflow/oneflow/core/framework/op_interpreter/op_interpreter_util.cpp", line 139, in Dispatchoneflow::one::Tensor
Dispatch<TensorTuple>(op_expr, inputs, ctx)
File "/home/ci-user/runners/release/_work/oneflow/oneflow/oneflow/core/framework/op_interpreter/op_interpreter_util.cpp", line 131, in Dispatchoneflow::one::TensorTuple
Dispatch(op_expr, inputs, outputs.get(), ctx)
File "/home/ci-user/runners/release/work/oneflow/oneflow/oneflow/core/framework/op_interpreter/op_interpreter.cpp", line 96, in Apply
internal->Apply(op_expr, inputs, outputs, ctx)
File "/home/ci-user/runners/release/_work/oneflow/oneflow/oneflow/core/framework/op_interpreter/eager_mirrored_op_interpreter.cpp", line 165, in NaiveInterpret
PhysicalRun([&](InstructionsBuilder* builder) -> Maybe ... input_eager_blob_objects, output_eager_blob_objects, ctx, stream); })
File "/home/ci-user/runners/release/_work/oneflow/oneflow/oneflow/core/framework/instructions_builder.cpp", line 599, in PhysicalRun
Build(&instructions_builder)
File "/home/ci-user/runners/release/_work/oneflow/oneflow/oneflow/core/framework/instructions_builder.cpp", line 358, in LocalCallOpKernel
vm::LocalCallOpKernelPhyInstrOperand::New( opkernel ... consistent_tensor_infer_result, ctx, *one::CurrentDevVmDepObjectConsumeMode())
File "/home/ci-user/runners/release/work/oneflow/oneflow/oneflow/core/eager/local_call_opkernel_phy_instr_operand.h", line 54, in New
ptr->Init()
File "/home/ci-user/runners/release/work/oneflow/oneflow/oneflow/core/eager/local_call_opkernel_phy_instr_operand.cpp", line 26, in Init
mut_opkernel()->ChooseOpKernel(&user_opkernel, &need_temp_storage, attrs(), inputs().get(), outputs().get(), consistent_tensor_infer_result().get())
File "/home/ci-user/runners/release/work/oneflow/oneflow/oneflow/user/kernels/stateful_local_opkernel.cpp", line 453, in ChooseOpKernel
user_op::UserOpRegistryMgr::Get().GetOpKernelRegistryResult(op_type_name, *reg_ctx)
Cannot find the kernel matching Current OperatorConf.
The Info of OperatorConf are
op_name: roi_align1
op_type_name: roi_align
DeviceType_Name: kCPU
DataType_Name of x_0: kFloat
DataType_Name of rois_0: kFloat
DataType_Name of y_0: kFloat
op_kernels_not_found_debug_str: "(device_type == gpu)"
Code to reproduce bug
import oneflow

# Feature map (N, C, H, W) and ROIs (num_rois, 5); both default to the CPU device
input0 = oneflow.rand(2, 3, 64, 64, dtype=oneflow.float32)
input1 = oneflow.rand(200, 5, dtype=oneflow.float32)
input2 = 2.0   # spatial_scale
input3 = 14    # pooled_h
input4 = 14    # pooled_w
input5 = 2     # sampling_ratio
input6 = True  # aligned
output = oneflow.roi_align(input0, input1, input2, input3, input4, input5, input6)
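For context (not part of the original report): when no device is given, oneflow.rand allocates on the CPU, which is why the dispatcher above looks for a kCPU kernel while roi_align only registers a CUDA one. A quick check, assuming the same environment:

```python
import oneflow

input0 = oneflow.rand(2, 3, 64, 64, dtype=oneflow.float32)
# No device argument was passed, so the tensor lives on the CPU; roi_align's
# kernel is only registered for GPU ("device_type == gpu" in the error above).
print(input0.device)
```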
System Information
- What is your OneFlow installation (pip, source, dockerhub): pip
- OS: Ubuntu 20.04.2 LTS
- OneFlow version (run python3 -m oneflow --doctor): 0.7.0+cu112
- Python version: 3.8.8
- CUDA driver version: 11.4
Currently, OneFlow only supports the ROIAlign operator on CUDA; you can set the tensor's device attribute to "cuda", as in the sketch below.
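A minimal sketch of that workaround, assuming a CUDA-enabled OneFlow build and a visible GPU (the argument order simply mirrors the positional call in the repro above):

```python
import oneflow

# Create both tensors on the GPU so the CUDA roi_align kernel can be selected.
input0 = oneflow.rand(2, 3, 64, 64, dtype=oneflow.float32, device="cuda")
input1 = oneflow.rand(200, 5, dtype=oneflow.float32, device="cuda")

output = oneflow.roi_align(input0, input1, 2.0, 14, 14, 2, True)
print(output.shape)  # expected (200, 3, 14, 14): one pooled 14x14 map per ROI and channel
```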