CUDA topic

CUDA® is a parallel computing platform and programming model developed by NVIDIA for general computing on graphics processing units (GPUs). With CUDA, developers can dramatically speed up computing applications by harnessing the power of GPUs.
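
To make the programming model concrete, here is a minimal, self-contained sketch of a CUDA program (not taken from any repository below): a hypothetical vector-add kernel in which each GPU thread computes one element of the result.

    // vector_add.cu — minimal, hypothetical CUDA example.
    // Build with: nvcc vector_add.cu -o vector_add
    #include <cstdio>
    #include <cuda_runtime.h>

    // Each GPU thread adds one element of a and b.
    __global__ void vectorAdd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        // Host buffers.
        float* ha = (float*)malloc(bytes);
        float* hb = (float*)malloc(bytes);
        float* hc = (float*)malloc(bytes);
        for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

        // Device buffers and host-to-device copies.
        float *da, *db, *dc;
        cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover all n elements.
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        vectorAdd<<<blocks, threads>>>(da, db, dc, n);

        // Copy the result back and check one value (expect 3.0).
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %f\n", hc[0]);

        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }

The pattern shown here (allocate on the device, copy inputs over, launch a grid of thread blocks, copy results back) is the basic shape that most of the CUDA projects listed below build on.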

CUDA repositories

cmake-cuda-example (66 stars, 11 forks)

Example of how to use CUDA with CMake >= 3.8

docker_python-opencv-ffmpeg (66 stars, 30 forks)

Dockerfile containing FFmpeg, OpenCV4 and Python2/3, based on Ubuntu LTS

buildTensorflow (32 stars, 4 forks)

A lightweight deep learning framework made with ❤️

dockerfiles (502 stars, 128 forks)

Compilation of Dockerfiles with automated builds enabled on the Docker Registry

MPM (97 stars, 15 forks)

GPU simulation and rendering using the Material Point Method.

cuda2GLcore (58 stars, 14 forks)

Implementation of CUDA-to-OpenGL rendering

cuda_voxelizer (555 stars, 94 forks)

CUDA Voxelizer to convert polygon meshes into annotated voxel grids

ai-lab (434 stars, 65 forks)

All-in-one AI container for rapid prototyping

tensorflow-optimized-wheels (120 stars, 10 forks)

TensorFlow wheels built against the latest CUDA/cuDNN with performance flags enabled: SSE, AVX, FMA, and XLA

CudaRelativeAttention (43 stars, 4 forks)

Custom CUDA kernel for 2D/3D relative attention with a PyTorch wrapper (the general kernel-plus-wrapper pattern is sketched below).
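
As a rough, hypothetical sketch of that pattern only (not CudaRelativeAttention's actual code), a custom CUDA kernel is commonly compiled as a PyTorch C++ extension and exposed to Python via pybind11. The kernel below is a trivial elementwise ReLU stand-in rather than relative attention, and the file and function names are invented for illustration.

    // relu_ext.cu — hypothetical sketch of the "CUDA kernel + PyTorch wrapper" pattern.
    // A trivial ReLU is used only to keep the wrapping pattern short.
    #include <torch/extension.h>

    __global__ void relu_kernel(const float* in, float* out, long long n) {
        long long i = blockIdx.x * (long long)blockDim.x + threadIdx.x;
        if (i < n) out[i] = in[i] > 0.0f ? in[i] : 0.0f;
    }

    torch::Tensor relu_cuda(torch::Tensor input) {
        TORCH_CHECK(input.is_cuda(), "input must be a CUDA tensor");
        TORCH_CHECK(input.scalar_type() == torch::kFloat32, "float32 only in this sketch");
        auto x = input.contiguous();
        auto out = torch::empty_like(x);
        auto n = x.numel();
        int threads = 256;
        int blocks = static_cast<int>((n + threads - 1) / threads);
        relu_kernel<<<blocks, threads>>>(x.data_ptr<float>(), out.data_ptr<float>(), n);
        return out;
    }

    // Exposed to Python; such a file is typically built with torch.utils.cpp_extension.load().
    PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
        m.def("relu", &relu_cuda, "Elementwise ReLU computed by a custom CUDA kernel");
    }

The design keeps the heavy lifting in the .cu kernel, while the thin C++ wrapper handles tensor checks, output allocation, and the Python binding.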