intel-extension-for-pytorch

Native API returns: 29 (UR_RESULT_ERROR_INVALID_KERNEL_NAME) when sampling from a Bernoulli distribution

Open · Nehereus opened this issue 1 month ago · 1 comment

Describe the bug

Native API returns 29 (UR_RESULT_ERROR_INVALID_KERNEL_NAME) when trying to sample from a Bernoulli distribution on an XPU device. Although the current torch version is 2.9.1 without IPEX installed, the issue can also be reproduced with torch 2.8 and IPEX installed. Sample code to reproduce the issue:

import unittest

import torch
import torch.distributions as dist


class BernoulliXPUTest(unittest.TestCase):
    def test_bernoulli_sample_properties(self):
        """
        Tests properties of samples from a Bernoulli distribution on XPU.
        """
        # --- Setup: Check for XPU availability ---
        if not torch.xpu.is_available():
            self.skipTest("XPU device not available")

        device = 'xpu'

        # Define the probabilities for the Bernoulli distribution
        # Use a tensor with multiple probabilities to test batch sampling
        # Move the tensor to the XPU device
        probs = torch.tensor([0.1, 0.5, 0.9], device=device)

        # Create the Bernoulli distribution
        try:
            bernoulli_dist = dist.Bernoulli(probs=probs)
        except Exception as e:
            self.fail(f"Failed to create Bernoulli distribution: {e}")

        # --- Test 1: Sample values are valid (0 or 1) ---

        # Draw a single sample (this is the call that triggers the native API error)
        sample_single = bernoulli_dist.sample()

        # Check that all sampled values are either 0 or 1
        is_zero_or_one = (sample_single == 0) | (sample_single == 1)
        self.assertTrue(torch.all(is_zero_or_one),
                        "Sampled values are not all 0 or 1.")

        # Check output shape
        self.assertEqual(sample_single.shape, probs.shape,
                         "Single sample shape does not match probs shape.")

        # Check that the sample is on the correct device
        self.assertEqual(sample_single.device.type, device,
                         "Sample is not on the XPU device.")


if __name__ == "__main__":
    unittest.main()

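For quick triage, here is a minimal sketch of the same code path outside the unittest harness. It assumes an XPU device is visible to PyTorch; whether it raises the same "Native API returns: 29 (UR_RESULT_ERROR_INVALID_KERNEL_NAME)" error likely depends on the driver/runtime combination listed under Versions below.

import torch

# Minimal reproducer sketch, assuming an XPU device is available.
# On an affected setup, sample() is expected to raise a RuntimeError that
# mentions "Native API returns: 29 (UR_RESULT_ERROR_INVALID_KERNEL_NAME)".
if torch.xpu.is_available():
    probs = torch.tensor([0.1, 0.5, 0.9], device="xpu")
    sample = torch.distributions.Bernoulli(probs=probs).sample()
    print(sample)
else:
    print("XPU device not available")
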
Versions

Collecting environment information...

PyTorch version: 2.9.1+xpu
PyTorch CXX11 ABI: Yes
IPEX version: N/A
IPEX commit: N/A
Build type: N/A

OS: Ubuntu 24.04.3 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: N/A
IGC version: N/A
CMake version: N/A
Libc version: glibc-2.39

Python version: 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.14.8-3-bpo12-pve-x86_64-with-glibc2.39
Is XPU available: N/A
DPCPP runtime: 2025.1
MKL version: 2025.1

GPU models and configuration onboard: N/A

GPU models and configuration detected: N/A

Driver version:

  • intel_opencl: 25.35.35096.9-1~24.04~ppa3
  • level_zero: N/A

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 3900X 12-Core Processor
CPU family: 23
Model: 113
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 79%
CPU max MHz: 4673.0000
CPU min MHz: 550.0000
BogoMIPS: 7586.51
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 6 MiB (12 instances)
L3 cache: 64 MiB (4 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Ghostwrite: Not affected
Vulnerability Indirect target selection: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Vulnerability Vmscape: Mitigation; IBPB before exit to userspace

Versions of relevant libraries:
[pip] dpcpp-cpp-rt==2025.2.1
[pip] impi-rt==2021.16.1
[pip] intel-cmplr-lib-rt==2025.2.1
[pip] intel-cmplr-lib-ur==2025.2.1
[pip] intel-cmplr-lic-rt==2025.2.1
[pip] intel-opencl-rt==2025.2.1
[pip] intel-openmp==2025.2.1
[pip] intel-pti==0.13.1
[pip] intel-sycl-rt==2025.2.1
[pip] mkl==2025.2.0
[pip] numpy==2.3.3
[pip] oneccl==2021.16.1
[pip] oneccl-devel==2021.16.1
[pip] onemkl-sycl-blas==2025.2.0
[pip] onemkl-sycl-dft==2025.2.0
[pip] onemkl-sycl-lapack==2025.2.0
[pip] onemkl-sycl-rng==2025.2.0
[pip] onemkl-sycl-sparse==2025.2.0
[pip] pytorch-triton-xpu==3.5.0
[pip] torch==2.9.1+xpu
[pip] torchaudio==2.9.1+xpu
[pip] torchvision==0.24.1+xpu
[pip] transformers==4.57.1

Nehereus · Nov 18 '25 06:11

@Nehereus Since you are able to reproduce this issue with PyTorch 2.9.1 alone (without IPEX installed), it is not related to IPEX. Could you please open an issue in https://github.com/intel/torch-xpu-ops/ for awareness?

tye1 · Nov 20 '25 07:11