compute-runtime
Intel® Graphics Compute Runtime for oneAPI Level Zero and OpenCL™ Driver
Abort was called at 347 line in file: /opt/src/opencl/shared/source/os_interface/linux/drm_neo.cpp
The following issue occurs while trying to run OpenVINO-based inference on the GPU device on an Alder Lake platform. ``` Abort was called at 347 line in file: /opt/src/opencl/shared/source/os_interface/linux/drm_neo.cpp ``` Please...
Using [clpeak](https://github.com/krrishnarraj/clpeak) with runtime 22.43.24595 on the *integrated* GPU of an i7-12700H CPU under Linux, I find the kernel launch latency to be 42.46 us. This is around 10 times...
It seems the cl_khr_spirv_linkonce_odr OpenCL extension is supported but is not present in the list of extensions reported via `clGetDeviceInfo`. Am I missing something?
Working on Julia support for oneAPI, I've isolated a test failure that only occurs on my A770 to the following Julia code: ```julia using oneAPI # complete reduction values by...
This is a cut-down case from https://github.com/CHIP-SPV/chip-spv/issues/146. When zeCommandQueueCreate is called from multiple threads, it spontaneously segfaults. The stack trace is: Thread 101 "a.out" received signal SIGSEGV, Segmentation fault. [Switching...
Hello there! I get tearing and artifacts when sharing textures between D3D11 and OpenCL. Here are the main procedures to decode and share a video frame in FFmpeg: ---...
While debugging a Julia/oneAPI.jl-related issue, I was using a debug build of the compute-runtime. Doing so, however, triggers a debug break with the following seemingly innocuous Level Zero operations: ```julia...
On a system with an A770M discrete GPU and an Alder Lake CPU using runtime 22.43.24595.35, there are *two* OpenCL platforms, both named "Intel(R) OpenCL HD Graphics", with one device a...
Do you know how to show Intel GPU details from the command line, similar to NVIDIA's `nvidia-smi` for CUDA?
Proposal: - Be downstream of ROCm and rocm-llvm, then submit the other compiler changes back upstream into the main LLVM project. - Be able to offload to Intel and...