
Failure on AMD Fury

Open skn123 opened this issue 1 year ago • 13 comments

Built the library ... PyTorch 1.13.1

/mnt/d/srcs/pytorch_dlprim$ python3 mnist.py --device ocl:0
Traceback (most recent call last):
  File "/mnt/d/srcs/pytorch_dlprim/mnist.py", line 164, in <module>
    main()
  File "/mnt/d/srcs/pytorch_dlprim/mnist.py", line 121, in main
    torch.ops.load_library("/mnt/d/build_ninja/dlprim/libpt_ocl.so")
  File "/usr/local/lib/python3.10/dist-packages/torch/_ops.py", line 573, in load_library
    ctypes.CDLL(path)
  File "/usr/lib/python3.10/ctypes/__init__.py", line 374, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /mnt/d/build_ninja/dlprim/libpt_ocl.so: undefined symbol: _ZNK3c105Error4whatEv

...............

ldd /mnt/d/build_ninja/dlprim/libpt_ocl.so
        linux-vdso.so.1 (0x00007ffe880fb000)
        libc10.so => /usr/local/lib/libc10.so (0x00007f5a3c439000)
        libOpenCL.so.1 => /lib/x86_64-linux-gnu/libOpenCL.so.1 (0x00007f5a3c406000)
        libdlprim_core.so => /mnt/d/build_ninja/dlprim/dlprimitives/libdlprim_core.so (0x00007f5a3c348000)
        libtorch.so => /usr/local/lib/libtorch.so (0x00007f5a3c343000)
        libtorch_cpu.so => /usr/local/lib/libtorch_cpu.so (0x00007f5a330f4000)
        libsqlite3.so.0 => /lib/x86_64-linux-gnu/libsqlite3.so.0 (0x00007f5a32fa5000)
        libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f5a32d79000)
        libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f5a32c92000)
        libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f5a32c72000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f5a32a49000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f5a3c5db000)
        libnuma.so.1 => /lib/x86_64-linux-gnu/libnuma.so.1 (0x00007f5a32a3c000)
        libopenblas.so.0 => /lib/x86_64-linux-gnu/libopenblas.so.0 (0x00007f5a305e8000)
        libmpi_cxx.so.40 => /lib/x86_64-linux-gnu/libmpi_cxx.so.40 (0x00007f5a305ce000)
        libmpi.so.40 => /lib/x86_64-linux-gnu/libmpi.so.40 (0x00007f5a30497000)
        libgfortran.so.5 => /lib/x86_64-linux-gnu/libgfortran.so.5 (0x00007f5a301bc000)
        libopen-pal.so.40 => /lib/x86_64-linux-gnu/libopen-pal.so.40 (0x00007f5a30109000)
        libopen-rte.so.40 => /lib/x86_64-linux-gnu/libopen-rte.so.40 (0x00007f5a3004c000)
        libhwloc.so.15 => /lib/x86_64-linux-gnu/libhwloc.so.15 (0x00007f5a2fff0000)
        libquadmath.so.0 => /lib/x86_64-linux-gnu/libquadmath.so.0 (0x00007f5a2ffa8000)
        libevent_core-2.1.so.7 => /lib/x86_64-linux-gnu/libevent_core-2.1.so.7 (0x00007f5a2ff73000)
        libevent_pthreads-2.1.so.7 => /lib/x86_64-linux-gnu/libevent_pthreads-2.1.so.7 (0x00007f5a2ff6e000)
        libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f5a2ff52000)
        libudev.so.1 => /lib/x86_64-linux-gnu/libudev.so.1 (0x00007f5a2ff28000)

skn123 avatar May 15 '24 09:05 skn123

If I install the default modules from pip, then I get this error:

/mnt/d/srcs/pytorch_dlprim$ python3 mnist.py --device ocl:0
Traceback (most recent call last):
  File "/mnt/d/srcs/pytorch_dlprim/mnist.py", line 164, in <module>
    main()
  File "/mnt/d/srcs/pytorch_dlprim/mnist.py", line 121, in main
    torch.ops.load_library("/mnt/d/build_ninja/dlprim/libpt_ocl.so")
  File "/usr/local/lib/python3.10/dist-packages/torch/_ops.py", line 573, in load_library
    ctypes.CDLL(path)
  File "/usr/lib/python3.10/ctypes/__init__.py", line 374, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /mnt/d/build_ninja/dlprim/libpt_ocl.so: undefined symbol: _ZNK5torch8autograd4Node4nameB5cxx11Ev

Update: Uninstalling all previous versions and installing the whl from PyTorch fixed it. But that still doesn't answer why it fails with a custom build.
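For context, the undefined symbol is an Itanium-mangled C++ name; `c++filt` decodes `_ZNK3c105Error4whatEv` to `c10::Error::what() const`, i.e. the extension was linked against a libc10 whose ABI does not match the one loaded at runtime, which is exactly why mixing a custom-built torch with a wheel-built libpt_ocl.so fails. A toy Python sketch of pulling out the name components (handles only this trivial `_ZN...E` subset; real demangling should use `c++filt`):

```python
def split_itanium_nested(sym: str) -> list[str]:
    """Extract length-prefixed name components from a simple
    Itanium-mangled nested name like _ZNK3c105Error4whatEv.
    Toy parser for this subset only; use c++filt for real symbols."""
    assert sym.startswith("_ZN")
    i = 3
    if sym[i] == "K":  # 'K' marks a const-qualified member function
        i += 1
    parts = []
    while sym[i].isdigit():
        j = i
        while sym[j].isdigit():
            j += 1
        n = int(sym[i:j])          # length prefix
        parts.append(sym[j:j + n]) # the component itself
        i = j + n
    return parts

# The symbol from the traceback: c10::Error::what() const
print(split_itanium_nested("_ZNK3c105Error4whatEv"))  # ['c10', 'Error', 'what']
```

The second failure's symbol decodes the same way to `torch::autograd::Node::name` (the `B5cxx11` tag additionally marks it as a cxx11-ABI std::string, which is why the C++11 ABI setting of the two builds must match).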

skn123 avatar May 15 '24 13:05 skn123

From your other comment I understand you managed to run it?

artyom-beilis avatar May 15 '24 14:05 artyom-beilis

Yes, and in another comment I showed that pushing it to an Intel GPU works fine (except that it is slow). I am still wondering what the "correct" CMake parameters would be to create a custom build of torch.
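One common cause of the undefined-symbol failures above is configuring the extension against a different libtorch than the one the Python interpreter actually imports. A hedged sketch of a configure step that avoids that (the exact flags the project expects may differ; check its README):

```shell
# Point CMake at the Torch installation the interpreter imports, so the
# extension links against the same libtorch/libc10.
# torch.utils.cmake_prefix_path is a real PyTorch attribute; the rest of
# the invocation is illustrative.
TORCH_CMAKE=$(python3 -c 'import torch.utils; print(torch.utils.cmake_prefix_path)')
cmake -GNinja -DCMAKE_PREFIX_PATH="$TORCH_CMAKE" /path/to/pytorch_dlprim

# The C++11 ABI flag of the custom build must also match what torch
# itself was compiled with:
python3 -c 'import torch; print(torch.compiled_with_cxx11_abi())'
```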

skn123 avatar May 15 '24 17:05 skn123

I get a similar error on this test:

python mnist.py --device ocl:1
Using device: privateuseone:1
Accessing device #1:AMD Radeon R9 Fury Series (radeonsi, fiji, LLVM 17.0.6, DRM 3.57, 6.8.9-calculate) on rusticl
LLVM ERROR: Cannot select: 0x7fb130370a30: f32 = and 0x7fb130370b10, Constant:i32<2147483647>
  0x7fb130370b10: f32 = bitcast 0x7fb130370e90
    0x7fb130370e90: i32,ch = CopyFromReg 0x5617ad66f4e0, Register:i32 %14
      0x7fb13036a9d0: i32 = Register %14
  0x7fb13036aab0: i32 = Constant<2147483647>
In function: main
Aborted

sukamenev avatar Aug 15 '24 18:08 sukamenev

The other test completes correctly.

python test.py
Accessing device #1:AMD Radeon R9 Fury Series (radeonsi, fiji, LLVM 17.0.6, DRM 3.57, 6.8.9-calculate) on rusticl
REF [[[  0   0]
  [164   0]
  [  0 255]
  [  0 255]]

 [[  0 166]
  [  0  25]
  [  0   9]
  [  0 124]]

 [[  0   0]
  [  0 255]
  [255 197]
  [  0   0]]]
DEV [[[  0   0]
  [164   0]
  [  0 255]
  [  0 255]]

 [[  0 166]
  [  0  25]
  [  0   9]
  [  0 124]]

 [[  0   0]
  [  0 255]
  [255 197]
  [  0   0]]]
0.0

sukamenev avatar Aug 15 '24 18:08 sukamenev

I get a similar error on this test:

python mnist.py --device ocl:1 Using device: privateuseone:1

  1. Please give the output of clinfo
  2. run the command OPENCL_DEBUG_MODE=1 python mnist.py --device ocl:1 (i.e. set environment variable OPENCL_DEBUG_MODE=1)
  3. Please run python tests\test_ops.py --device ocl:1

artyom-beilis avatar Aug 16 '24 03:08 artyom-beilis

on rusticl

From what I know, rusticl is horribly buggy. It does not work on my RX 560 at all.

Please try installing the AMD drivers (I hope ROCm supports your card) or, alternatively, the Mesa driver (I mean the OpenCL driver/platform).

artyom-beilis avatar Aug 16 '24 03:08 artyom-beilis

  1. Please give the output of clinfo

Number of platforms                  2
Platform Name                        Intel(R) CPU Runtime for OpenCL(TM) Applications
Platform Vendor                      Intel(R) Corporation
Platform Version                     OpenCL 2.1 LINUX
Platform Profile                     FULL_PROFILE
Platform Extensions                  cl_khr_icd cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_byte_addressable_store cl_khr_depth_images cl_khr_3d_image_writes cl_intel_exec_by_local_thread cl_khr_spir cl_khr_fp64 cl_khr_image2d_from_buffer cl_intel_vec_len_hint
Platform Extensions function suffix  INTEL
Platform Host timer resolution       1ns

Platform Name                        rusticl
Platform Vendor                      Mesa/X.org
Platform Version                     OpenCL 3.0
Platform Profile                     FULL_PROFILE
Platform Extensions                  cl_khr_byte_addressable_store cl_khr_create_command_queue cl_khr_expect_assume cl_khr_extended_versioning cl_khr_icd cl_khr_il_program cl_khr_spirv_no_integer_wrap_decoration
Platform Extensions with Version     cl_khr_byte_addressable_store 0x400000 (1.0.0); cl_khr_create_command_queue 0x400000 (1.0.0); cl_khr_expect_assume 0x400000 (1.0.0); cl_khr_extended_versioning 0x400000 (1.0.0); cl_khr_icd 0x400000 (1.0.0); cl_khr_il_program 0x400000 (1.0.0); cl_khr_spirv_no_integer_wrap_decoration 0x400000 (1.0.0)
Platform Numeric Version             0xc00000 (3.0.0)
Platform Extensions function suffix  MESA
Platform Host timer resolution       1ns

Platform Name                        Intel(R) CPU Runtime for OpenCL(TM) Applications
Number of devices                    1
Device Name                          AMD EPYC 7542 32-Core Processor
Device Vendor                        Intel(R) Corporation
Device Vendor ID                     0x8086
Device Version                       OpenCL 2.1 (Build 0)
Driver Version                       18.1.0.0920
Device OpenCL C Version              OpenCL C 2.0
Device Type                          CPU
Device Profile                       FULL_PROFILE
Device Available                     Yes
Compiler Available                   Yes
Linker Available                     Yes
Max compute units                    64
Max clock frequency                  0MHz
Device Partition (core)              Max number of sub-devices 64; Supported partition types by counts, equally, by names (Intel); Supported affinity domains (n/a)
Max work item dimensions             3
Max work item sizes                  8192x8192x8192
Max work group size                  8192
Preferred work group size multiple (kernel)  128
Max sub-groups per work group        1
Preferred / native vector sizes      char 1/32, short 1/16, int 1/8, long 1/4, half 0/0 (n/a), float 1/8, double 1/4 (cl_khr_fp64)
Half-precision Floating-point support      (n/a)
Single-precision Floating-point support (core)   Denormals Yes; Infinity and NANs Yes; Round to nearest Yes; Round to zero No; Round to infinity No; IEEE754-2008 fused multiply-add No; Support is emulated in software No; Correctly-rounded divide and sqrt operations No
Double-precision Floating-point support (cl_khr_fp64)   Denormals Yes; Infinity and NANs Yes; Round to nearest Yes; Round to zero Yes; Round to infinity Yes; IEEE754-2008 fused multiply-add Yes; Support is emulated in software No
Address bits                         64, Little-Endian
Global memory size                   270105997312 (251.6GiB)
Error Correction support             No
Max memory allocation                67526499328 (62.89GiB)
Unified memory for Host and Device   Yes
Shared Virtual Memory (SVM) capabilities (core)   Coarse-grained buffer sharing Yes; Fine-grained buffer sharing Yes; Fine-grained system sharing Yes; Atomics Yes
Minimum alignment for any data type  128 bytes
Alignment of base address            1024 bits (128 bytes)
Preferred alignment for atomics      SVM 64 bytes; Global 64 bytes; Local 0 bytes
Max size for global variable         65536 (64KiB)
Preferred total size of global vars  65536 (64KiB)
Global Memory cache type             Read/Write
Global Memory cache size             524288 (512KiB)
Global Memory cache line size        64 bytes
Image support                        Yes (max samplers per kernel 480; max 1D images from buffer 4220406208 pixels; max 1D/2D image array size 2048 images; base address alignment for 2D image buffers 64 bytes; pitch alignment for 2D image buffers 64 pixels; max 2D image size 16384x16384; max 3D image size 2048x2048x2048; read image args 480; write image args 480; read/write image args 480)
Max number of pipe args              16
Max active pipe reservations         4095
Max pipe packet size                 1024
Local memory type                    Global
Local memory size                    32768 (32KiB)
Max number of constant args          480
Max constant buffer size             131072 (128KiB)
Max size of kernel argument          3840 (3.75KiB)
Queue properties (on host)           Out-of-order execution Yes; Profiling Yes; Local thread execution (Intel) Yes
Queue properties (on device)         Out-of-order execution Yes; Profiling Yes; Preferred size 4294967295 (4GiB); Max size 4294967295 (4GiB)
Max queues on device                 4294967295
Max events on device                 4294967295
Prefer user sync for interop         No
Profiling timer resolution           1ns
Execution capabilities               Run OpenCL kernels Yes; Run native kernels Yes; Sub-group independent forward progress No
IL version                           SPIR-V_1.0
SPIR versions                        1.2
printf() buffer size                 1048576 (1024KiB)
Built-in kernels                     (n/a)
Device Extensions                    cl_khr_icd cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_byte_addressable_store cl_khr_depth_images cl_khr_3d_image_writes cl_intel_exec_by_local_thread cl_khr_spir cl_khr_fp64 cl_khr_image2d_from_buffer cl_intel_vec_len_hint

Platform Name                        rusticl
Number of devices                    1
Device Name                          AMD Radeon R9 Fury Series (radeonsi, fiji, LLVM 17.0.6, DRM 3.57, 6.8.9-calculate)
Device Vendor                        AMD
Device Vendor ID                     0x1002
Device Version                       OpenCL 3.0
Device UUID                          00000000-8100-0000-0000-000000000000
Driver UUID                          414d442d-4d45-5341-2d44-525600000000
Valid Device LUID                    No
Device LUID                          0000-000000000000
Device Node Mask                     0
Device Numeric Version               0xc00000 (3.0.0)
Driver Version                       24.0.4
Device OpenCL C Version              OpenCL C 1.2
Device OpenCL C all versions         OpenCL C 0xc00000 (3.0.0); OpenCL C 0x402000 (1.2.0); OpenCL C 0x401000 (1.1.0); OpenCL C 0x400000 (1.0.0)
Device OpenCL C features             __opencl_c_integer_dot_product_input_4x8bit_packed 0x800000 (2.0.0); __opencl_c_integer_dot_product_input_4x8bit 0x800000 (2.0.0); __opencl_c_int64 0x400000 (1.0.0); __opencl_c_images 0x400000 (1.0.0); __opencl_c_3d_image_writes 0x400000 (1.0.0); __opencl_c_subgroups 0x400000 (1.0.0)
Latest conformance test passed       v0000-01-01-00
Device Type                          GPU
Device Profile                       EMBEDDED_PROFILE
Device Available                     Yes
Compiler Available                   Yes
Linker Available                     Yes
Max compute units                    56
Max clock frequency                  1000MHz
Device Partition (core)              Max number of sub-devices 0; Supported partition types None; Supported affinity domains (n/a)
Max work item dimensions             3
Max work item sizes                  1024x1024x1024
Max work group size                  1024
Preferred work group size multiple (device)  64
Preferred work group size multiple (kernel)  64
Max sub-groups per work group        16
Preferred / native vector sizes      char 1/1, short 1/1, int 1/1, long 1/1, half 0/0 (n/a), float 1/1, double 0/0 (n/a)
Half-precision Floating-point support      (n/a)
Single-precision Floating-point support (core)   Denormals No; Infinity and NANs Yes; Round to nearest Yes; Round to zero No; Round to infinity No; IEEE754-2008 fused multiply-add No; Support is emulated in software No; Correctly-rounded divide and sqrt operations No
Double-precision Floating-point support    (n/a)
Address bits                         64, Little-Endian
Global memory size                   4294967296 (4GiB)
Error Correction support             No
Max memory allocation                1073741824 (1024MiB)
Unified memory for Host and Device   No
Shared Virtual Memory (SVM) capabilities (core)   Coarse-grained buffer sharing No; Fine-grained buffer sharing No; Fine-grained system sharing No; Atomics No
Minimum alignment for any data type  128 bytes
Alignment of base address            4096 bits (512 bytes)
Preferred alignment for atomics      SVM 0 bytes; Global 0 bytes; Local 0 bytes
Atomic memory capabilities           relaxed, work-group scope
Atomic fence capabilities            relaxed, acquire/release, work-group scope
Max size for global variable         0
Preferred total size of global vars  0
Global Memory cache type             None
Image support                        Yes (max samplers per kernel 32; max 1D images from buffer 268435455 pixels; max 1D/2D image array size 2048 images; max 2D image size 16384x16384; max 3D image size 2048x2048x2048; read image args 32; write image args 16; read/write image args 0)
Pipe support                         No (max pipe args 0; max active pipe reservations 0; max pipe packet size 0)
Local memory type                    Global
Local memory size                    65536 (64KiB)
Max number of constant args          16
Max constant buffer size             67108864 (64MiB)
Generic address space support        No
Max size of kernel argument          4096 (4KiB)
Queue properties (on host)           Out-of-order execution No; Profiling Yes
Device enqueue capabilities          (n/a)
Queue properties (on device)         Out-of-order execution No; Profiling No; Preferred size 0; Max size 0
Max queues on device                 0
Max events on device                 0
Prefer user sync for interop         Yes
Profiling timer resolution           40ns
Execution capabilities               Run OpenCL kernels Yes; Run native kernels No
Non-uniform work-groups              No
Work-group collective functions      No
Sub-group independent forward progress  No
IL version                           SPIR-V_1.0 SPIR-V_1.1 SPIR-V_1.2 SPIR-V_1.3 SPIR-V_1.4
ILs with version                     SPIR-V 0x400000 (1.0.0); SPIR-V 0x401000 (1.1.0); SPIR-V 0x402000 (1.2.0); SPIR-V 0x403000 (1.3.0); SPIR-V 0x404000 (1.4.0)
printf() buffer size                 1048576 (1024KiB)
Built-in kernels                     (n/a)
Built-in kernels with version        (n/a)
Device Extensions                    cl_khr_byte_addressable_store cl_khr_create_command_queue cl_khr_expect_assume cl_khr_extended_versioning cl_khr_icd cl_khr_il_program cl_khr_spirv_no_integer_wrap_decoration cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_integer_dot_product cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_gl_sharing cles_khr_int64 cl_khr_3d_image_writes cl_khr_pci_bus_info cl_khr_device_uuid cl_khr_subgroup_shuffle cl_khr_subgroup_shuffle_relative
Device Extensions with Version       cl_khr_byte_addressable_store 0x400000 (1.0.0); cl_khr_create_command_queue 0x400000 (1.0.0); cl_khr_expect_assume 0x400000 (1.0.0); cl_khr_extended_versioning 0x400000 (1.0.0); cl_khr_icd 0x400000 (1.0.0); cl_khr_il_program 0x400000 (1.0.0); cl_khr_spirv_no_integer_wrap_decoration 0x400000 (1.0.0); cl_khr_global_int32_base_atomics 0x400000 (1.0.0); cl_khr_global_int32_extended_atomics 0x400000 (1.0.0); cl_khr_integer_dot_product 0x800000 (2.0.0); cl_khr_local_int32_base_atomics 0x400000 (1.0.0); cl_khr_local_int32_extended_atomics 0x400000 (1.0.0); cl_khr_gl_sharing 0x400000 (1.0.0); cles_khr_int64 0x400000 (1.0.0); cl_khr_3d_image_writes 0x400000 (1.0.0); cl_khr_pci_bus_info 0x400000 (1.0.0); cl_khr_device_uuid 0x400000 (1.0.0); cl_khr_subgroup_shuffle 0x400000 (1.0.0); cl_khr_subgroup_shuffle_relative 0x400000 (1.0.0)

NULL platform behavior
  clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...)             No platform
  clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...)              No platform
  clCreateContext(NULL, ...) [default]                       No platform
  clCreateContext(NULL, ...) [other]                         Success [INTEL]
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_DEFAULT)      Success (1): Intel(R) CPU Runtime for OpenCL(TM) Applications / AMD EPYC 7542 32-Core Processor
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU)          Success (1): Intel(R) CPU Runtime for OpenCL(TM) Applications / AMD EPYC 7542 32-Core Processor
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU)          No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM)       No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL)          Success (1): Intel(R) CPU Runtime for OpenCL(TM) Applications / AMD EPYC 7542 32-Core Processor

ICD loader properties
  ICD loader Name     Khronos OpenCL ICD Loader
  ICD loader Vendor   Khronos Group
  ICD loader Version  3.0.6
  ICD loader Profile  OpenCL 3.0

  1. run the command OPENCL_DEBUG_MODE=1 python mnist.py --device ocl:1 (i.e. set environment variable OPENCL_DEBUG_MODE=1)

Same output:

Using device: privateuseone:1
Accessing device #1:AMD Radeon R9 Fury Series (radeonsi, fiji, LLVM 17.0.6, DRM 3.57, 6.8.9-calculate) on rusticl
LLVM ERROR: Cannot select: 0x7f5278370a50: f32 = and 0x7f5278370b30, Constant:i32<2147483647>
  0x7f5278370b30: f32 = bitcast 0x7f5278370eb0
    0x7f5278370eb0: i32,ch = CopyFromReg 0x560ef0c1de00, Register:i32 %14
      0x7f527836a9f0: i32 = Register %14
  0x7f527836aad0: i32 = Constant<2147483647>
In function: main
Aborted

  1. Please run python tests\test_ops.py --device ocl:1

OPENCL_DEBUG_MODE=1 python tests/test_op.py --device ocl:1

Mean 1d
Accessing device #1:AMD Radeon R9 Fury Series (radeonsi, fiji, LLVM 17.0.6, DRM 3.57, 6.8.9-calculate) on rusticl
torch.Size([1, 3, 4]) torch.Size([1, 3, 4]) y 0.000000 x0 0.000000
Mean 2d
torch.Size([2, 1, 1]) torch.Size([2, 1, 1]) y 0.000000 x0 0.000000
Mean 1d squeeze
torch.Size([3, 4]) torch.Size([3, 4]) y 0.000000 x0 0.000000
Mean 2d squeeze
torch.Size([3]) torch.Size([3]) y 0.000000 x0 0.000000
Mean all squeeze
torch.Size([]) torch.Size([]) y 0.000000 x0 0.000000
Sum 1d
torch.Size([1, 3, 4]) torch.Size([1, 3, 4]) y 0.000000 x0 0.000000
Sum 2d
torch.Size([2, 1, 1]) torch.Size([2, 1, 1]) y 0.000001 x0 0.000000
Sum 1d squeeze
torch.Size([3, 4]) torch.Size([3, 4]) y 0.000000 x0 0.000000
Sum 2d squeeze
torch.Size([3]) torch.Size([3]) y 0.000000 x0 0.000000
LogSoftmax
LLVM ERROR: Cannot select: 0x7f1f8c1fed90: f32 = and 0x7f1f8c1fee70, Constant:i32<2147483647>
  0x7f1f8c1fee70: f32 = bitcast 0x7f1f8c1ff1f0
    0x7f1f8c1ff1f0: i32,ch = CopyFromReg 0x560aa816c510, Register:i32 %14
      0x7f1f8c1f8d30: i32 = Register %14
  0x7f1f8c1f8e10: i32 = Constant<2147483647>
In function: main
Aborted
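The reductions all pass and only LogSoftmax triggers the crash at kernel-compile time. For reference, a numerically stable log-softmax of the kind such kernels typically compute looks like this in plain Python (a sketch for context, not the dlprimitives implementation):

```python
import math

def log_softmax(xs):
    """Numerically stable log-softmax: subtract the max so exp() never
    overflows, then subtract the log-sum-exp of the shifted values."""
    m = max(xs)
    lse = math.log(sum(math.exp(x - m) for x in xs))
    return [x - m - lse for x in xs]

vals = log_softmax([1.0, 2.0, 3.0])
# Outputs are log-probabilities: exponentiating them sums to 1.
assert abs(sum(math.exp(v) for v in vals) - 1.0) < 1e-9
```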

sukamenev avatar Aug 16 '24 08:08 sukamenev

From what I know rusticl is horribly buggy. It does not work on my rx 560 at all.

Regarding AMD, that is not true. Using pytorch_dlprim and rusticl I have already trained many of my test networks.

My card (AMD Fury) is not supported by ROCm. I will try to figure out a way to test with a different OpenCL implementation. Last time (issue 72) I tested with several OpenCL versions, and there were errors. But as I understand it, you have already made some fixes in pytorch_dlprim, so is it worth retesting?

sukamenev avatar Aug 16 '24 08:08 sukamenev

you have already made some fixes in pytorch_dlprim, so is it worth retesting?

For sure. And move to PyTorch 2.4, since 1.13 support will likely end soon. Note also that the import is simplified now: all you need is import pytorch_ocl (see the README).

But it's not critical: the OCL code is the same for 1.13 and 2.4.

Also, please change OPENCL_DEBUG_MODE=1 to =2. I hoped it was an exception, but I was mistaken; I want to see exactly which operator it fails at. But anyway, it looks like a driver failure. Back then, Mesa worked quite fine for me.
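If it's easier to drive this from a script, the variable just has to be present in the environment of the process that loads the backend; a small Python sketch (the actual mnist.py run is commented out, since it needs the OpenCL stack):

```python
import os
import subprocess
import sys

# OPENCL_DEBUG_MODE is read by the backend at startup, so it must be in
# the environment of the Python process that loads the extension.
env = dict(os.environ, OPENCL_DEBUG_MODE="2")

# Sanity check: a child interpreter sees the flag.
out = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['OPENCL_DEBUG_MODE'])"],
    env=env, capture_output=True, text=True,
)
assert out.stdout.strip() == "2"

# The real run would be:
# subprocess.run([sys.executable, "mnist.py", "--device", "ocl:1"], env=env)
```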

Regarding ROCm support: does their OpenCL driver recognise the card or not? It is worth asking their support, since they should provide some solution for a working OpenCL driver: either the old PAL one (which actually worked very well) or somehow enabling it via ROCm. This isn't a request to support their entire infrastructure (HIP etc.); it is merely an OpenCL driver as described in the official specs.

artyom-beilis avatar Aug 16 '24 08:08 artyom-beilis

OPENCL_DEBUG_MODE=2 python tests/test_op.py --device ocl:1

LogSoftmax
in: at::Tensor ptdlprim::empty_strided(at::IntArrayRef, at::IntArrayRef, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional)
in: at::Tensor ptdlprim::allocate_empty(at::IntArrayRef, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional, c10::optional<c10::MemoryFormat>)
in: at::Tensor ptdlprim::_copy_from(const at::Tensor&, const at::Tensor&, bool)
in: at::Tensor ptdlprim::make_contiguous_as_target_type(const at::Tensor&, const at::Tensor&)
in: at::Tensor ptdlprim::allocate_empty(at::IntArrayRef, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional, c10::optional<c10::MemoryFormat>)
in: at::Tensor& ptdlprim::impl_softmax_out(const at::Tensor&, int64_t, bool, at::Tensor&)
LLVM ERROR: Cannot select: 0x7f33781fee30: f32 = and 0x7f33781fef10, Constant:i32<2147483647>
  0x7f33781fef10: f32 = bitcast 0x7f33781ff290
    0x7f33781ff290: i32,ch = CopyFromReg 0x5561233e15c0, Register:i32 %14
      0x7f33781f8dd0: i32 = Register %14
  0x7f33781f8eb0: i32 = Constant<2147483647>
In function: main
Aborted

Full output:

https://pastebin.com/hazfFBvz

sukamenev avatar Aug 16 '24 10:08 sukamenev

OPENCL_DEBUG_MODE=2 python mnist.py --device ocl:1

in: at::Tensor ptdlprim::_copy_from(const at::Tensor&, const at::Tensor&, bool)
in: at::Tensor ptdlprim::make_contiguous_as_target_type(const at::Tensor&, const at::Tensor&)
in: at::Tensor ptdlprim::convolution_overrideable(const at::Tensor&, const at::Tensor&, const c10::optional<at::Tensor>&, c10::IntArrayRef, c10::IntArrayRef, c10::IntArrayRef, bool, c10::IntArrayRef, int64_t)
in: at::Tensor& ptdlprim::add_out(const at::Tensor&, const at::Tensor&, const c10::Scalar&, at::Tensor&)
in: at::Tensor ptdlprim::allocate_empty(at::IntArrayRef, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional, c10::optional<c10::MemoryFormat>)
in: std::tuple<at::Tensor, at::Tensor, at::Tensor> ptdlprim::native_batch_norm(const at::Tensor&, const c10::optional<at::Tensor>&, const c10::optional<at::Tensor>&, const c10::optional<at::Tensor>&, const c10::optional<at::Tensor>&, bool, double, double)
in: at::Tensor& ptdlprim::relu(at::Tensor&)
in: at::Tensor ptdlprim::max_pool2d_autograd(const at::Tensor&, c10::IntArrayRef, c10::IntArrayRef, c10::IntArrayRef, c10::IntArrayRef, bool)
in: static at::Tensor ptdlprim::max_pool2d_cls::forward(torch::autograd::AutogradContext*, const at::Tensor&, c10::IntArrayRef, c10::IntArrayRef, c10::IntArrayRef, c10::IntArrayRef, bool)
in: at::Tensor ptdlprim::convolution_overrideable(const at::Tensor&, const at::Tensor&, const c10::optional<at::Tensor>&, c10::IntArrayRef, c10::IntArrayRef, c10::IntArrayRef, bool, c10::IntArrayRef, int64_t)
in: at::Tensor ptdlprim::max_pool2d_autograd(const at::Tensor&, c10::IntArrayRef, c10::IntArrayRef, c10::IntArrayRef, c10::IntArrayRef, bool)
in: static at::Tensor ptdlprim::max_pool2d_cls::forward(torch::autograd::AutogradContext*, const at::Tensor&, c10::IntArrayRef, c10::IntArrayRef, c10::IntArrayRef, c10::IntArrayRef, bool)
in: at::Tensor ptdlprim::act_autograd(const at::Tensor&) [with dlprim::StandardActivations Act = dlprim::StandardActivations::relu]
in: static at::Tensor ptdlprim::act_cls<Act>::forward(torch::autograd::AutogradContext*, at::Tensor) [with dlprim::StandardActivations Act = dlprim::StandardActivations::relu]
in: at::Tensor ptdlprim::_reshape_alias(const at::Tensor&, c10::IntArrayRef, c10::IntArrayRef)
in: at::Tensor ptdlprim::linear(const at::Tensor&, const at::Tensor&, const c10::optional<at::Tensor>&)
in: static at::Tensor ptdlprim::linear_cls::forward(torch::autograd::AutogradContext*, const at::Tensor&, const at::Tensor&, const c10::optional<at::Tensor>&)
in: at::Tensor ptdlprim::act_autograd(const at::Tensor&) [with dlprim::StandardActivations Act = dlprim::StandardActivations::relu]
in: static at::Tensor ptdlprim::act_cls<Act>::forward(torch::autograd::AutogradContext*, at::Tensor) [with dlprim::StandardActivations Act = dlprim::StandardActivations::relu]
in: at::Tensor ptdlprim::linear(const at::Tensor&, const at::Tensor&, const c10::optional<at::Tensor>&)
in: static at::Tensor ptdlprim::linear_cls::forward(torch::autograd::AutogradContext*, const at::Tensor&, const at::Tensor&, const c10::optional<at::Tensor>&)
in: at::Tensor ptdlprim::allocate_empty(at::IntArrayRef, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional, c10::optional<c10::MemoryFormat>)
in: at::Tensor& ptdlprim::impl_softmax_out(const at::Tensor&, int64_t, bool, at::Tensor&)
LLVM ERROR: Cannot select: 0x7f458c370930: f32 = and 0x7f458c370a10, Constant:i32<2147483647>
  0x7f458c370a10: f32 = bitcast 0x7f458c370d90
    0x7f458c370d90: i32,ch = CopyFromReg 0x55d357aef0a0, Register:i32 %14
      0x7f458c36a8d0: i32 = Register %14
  0x7f458c36a9b0: i32 = Constant<2147483647>
In function: main
Aborted

Full output:

https://pastebin.com/0TGnp3wj

sukamenev avatar Aug 16 '24 10:08 sukamenev

Win10 x64, torch 2.4, AMD Vega 64 and AMD Fury (R9 Nano).
Tested successfully.

pin24 avatar Sep 20 '24 17:09 pin24