Graham Markall

Results: 56 issues by Graham Markall

It was only used by the CUDA target, and contained CUDA-specific code in the core of Numba, which was a bit of a violation of the target abstraction. The CUDA...

3 - Ready for Review
CUDA
Effort - short
skip_release_notes

Reproducer:

```python
from numba import njit
import numpy as np

@njit
def f(x):
    return x + 1

arr = np.array('2010', dtype='datetime64[10Y]')
f(arr)
```

gives

```
$ python repro.py
Traceback (most...
```

bug
bug - typing

The broad aim of this PR is to handle target options in CUDA more closely to how they are handled by the CPU target, with the longer-term aim of fixing...

3 - Ready for Review
CUDA
Effort - medium
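For context, a minimal sketch of what "target options" look like from the user's side, assuming the familiar decorator-keyword style; the option routing this PR changes is internal, so this only illustrates the surface API:

```python
from numba import njit, cuda

# CPU target: options such as fastmath are decorator keyword arguments.
@njit(fastmath=True)
def cpu_add(x, y):
    return x + y

# CUDA target: cuda.jit accepts the same style of option; the PR's aim
# is to process these options with machinery closer to the CPU target's.
@cuda.jit(fastmath=True)
def gpu_add(out, x, y):
    i = cuda.grid(1)
    if i < out.size:
        out[i] = x[i] + y[i]
```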

Uses LLVM 15 by default in CI and 16 experimentally. Testing on CI for now to check for and resolve any issues.

2 - In Progress
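To confirm which LLVM a given llvmlite build is using (for example, to check whether a CI run picked up LLVM 15 or 16), the binding layer exposes the version; a minimal check, assuming a built llvmlite is installed:

```python
from llvmlite import binding

# Tuple of (major, minor, patch) for the LLVM version llvmlite was
# built against, e.g. (15, 0, 7) for an LLVM 15 build.
print(binding.llvm_version_info)
```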

Creating this issue to keep track of additional requests on PR #1046 and to continue the discussion. Points raised, with some light editing:

> * I think it might...

feature_request

I can't push to #986, so this PR is for testing #986 on CI. Initially this consists of the `ohu/crt` branch with `main` from https://github.com/numba/llvmlite merged in. CC @oliverhu

**EDIT:** This now removes the numba dependency, which comes transitively from cuDF.

**Original description:** numba-cuda is the NVIDIA-maintained CUDA target for Numba, which depends on the numba package. This PR...

Adding overloads to the current dispatcher when specializing avoids recreating the overload for subsequent calls to the unspecialized dispatcher that would have used it. This also has the side...

4 - Waiting on author
CUDA
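For reference, a minimal sketch of the specialization workflow the excerpt above describes, using the `specialize()` method on a CUDA dispatcher (requires a CUDA-capable GPU; the kernel here is purely illustrative):

```python
import numpy as np
from numba import cuda

@cuda.jit
def add_one(x):
    i = cuda.grid(1)
    if i < x.size:
        x[i] += 1

arr = cuda.to_device(np.arange(16, dtype=np.float64))

# Specializing compiles an overload for arr's types and returns a
# dispatcher fixed to that signature.
specialized = add_one.specialize(arr)
specialized[1, 32](arr)

# With the change described above, a later call through the original,
# unspecialized dispatcher can reuse that overload instead of recompiling.
add_one[1, 32](arr)
```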

To predict the effect of numba/llvmlite#1082. cc @rj-jesus

2 - In Progress
skip_release_notes

gpuCI is being retired in favour of self-hosted runners with GitHub Actions. This means the CUDA CI needs to be migrated over to use them. A rough checklist of steps:...

Task
CUDA