CUDA topic
CUDA® is a parallel computing platform and programming model developed by NVIDIA for general computing on graphics processing units (GPUs). With CUDA, developers can dramatically speed up computing applications by harnessing the power of GPUs.
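As a minimal illustration of the programming model, the sketch below launches a vector-addition kernel on the GPU. It is not taken from any repository listed here; the kernel name, problem size, and launch configuration are arbitrary choices for the example.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread adds one pair of elements; threads are indexed by block and thread IDs.
__global__ void vectorAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Allocate and initialize host data.
    float *h_a = (float*)malloc(bytes), *h_b = (float*)malloc(bytes), *h_c = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Allocate device buffers and copy the inputs to the GPU.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes); cudaMalloc(&d_b, bytes); cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256, blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);

    // Copy the result back and check one element.
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f (expected 3.0)\n", h_c[0]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```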
cmake-cuda-example
Example of how to use CUDA with CMake >= 3.8
docker_python-opencv-ffmpeg
Dockerfile for an image with FFmpeg, OpenCV 4, and Python 2/3, based on Ubuntu LTS
buildTensorflow
A lightweight deep learning framework made with ❤️
dockerfiles
Compilation of Dockerfiles with automated builds enabled on the Docker Registry
MPM
GPU simulation and rendering using the Material Point Method.
cuda2GLcore
Implementation of CUDA-to-OpenGL rendering
cuda_voxelizer
CUDA Voxelizer to convert polygon meshes into annotated voxel grids
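For a rough sense of the idea behind GPU voxelization (cuda_voxelizer itself operates on polygon meshes), the simplified sketch below marks occupied cells of a dense grid from a point set; all names and parameters are invented for the example.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One thread per point: map the point into voxel coordinates and mark the cell occupied.
__global__ void voxelizePoints(const float3* points, int numPoints,
                               unsigned char* grid, int res,
                               float3 origin, float voxelSize) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numPoints) return;
    int x = (int)((points[i].x - origin.x) / voxelSize);
    int y = (int)((points[i].y - origin.y) / voxelSize);
    int z = (int)((points[i].z - origin.z) / voxelSize);
    if (x < 0 || y < 0 || z < 0 || x >= res || y >= res || z >= res) return;
    grid[(z * res + y) * res + x] = 1;
}

int main() {
    const int res = 64, numPoints = 1024;
    float3 origin = make_float3(0.f, 0.f, 0.f);
    float voxelSize = 1.0f / res;

    // Generate points along a diagonal line inside the unit cube.
    float3* h_points = (float3*)malloc(numPoints * sizeof(float3));
    for (int i = 0; i < numPoints; ++i) {
        float t = (float)i / numPoints;
        h_points[i] = make_float3(t, t, t);
    }

    float3* d_points; unsigned char* d_grid;
    cudaMalloc(&d_points, numPoints * sizeof(float3));
    cudaMalloc(&d_grid, res * res * res);
    cudaMemset(d_grid, 0, res * res * res);
    cudaMemcpy(d_points, h_points, numPoints * sizeof(float3), cudaMemcpyHostToDevice);

    int threads = 256, blocks = (numPoints + threads - 1) / threads;
    voxelizePoints<<<blocks, threads>>>(d_points, numPoints, d_grid, res, origin, voxelSize);

    // Count occupied voxels on the host.
    unsigned char* h_grid = (unsigned char*)malloc(res * res * res);
    cudaMemcpy(h_grid, d_grid, res * res * res, cudaMemcpyDeviceToHost);
    int occupied = 0;
    for (int i = 0; i < res * res * res; ++i) occupied += h_grid[i];
    printf("occupied voxels: %d\n", occupied);

    cudaFree(d_points); cudaFree(d_grid); free(h_points); free(h_grid);
    return 0;
}
```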
ai-lab
All-in-one AI container for rapid prototyping
tensorflow-optimized-wheels
TensorFlow wheels built against the latest CUDA/cuDNN with performance flags enabled: SSE, AVX, FMA, and XLA
CudaRelativeAttention
Custom CUDA kernel for 2D/3D relative attention with a PyTorch wrapper
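To illustrate what such a kernel computes, here is a simplified, self-contained sketch that adds 2D relative-position biases to precomputed attention scores. It is not the repository's kernel, omits the PyTorch wrapper, and all names and shapes are assumptions made for the example.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Adds a learned 2D relative-position bias to attention scores.
// scores: [heads, H*W, H*W]   (query position x key position, per head)
// bias:   [heads, 2H-1, 2W-1] (one entry per relative (row, col) offset)
__global__ void addRelBias2D(float* scores, const float* bias,
                             int heads, int H, int W) {
    int N = H * W;
    int k = blockIdx.x * blockDim.x + threadIdx.x;  // key position
    int q = blockIdx.y;                             // query position
    int h = blockIdx.z;                             // attention head
    if (k >= N) return;
    int qi = q / W, qj = q % W;
    int ki = k / W, kj = k % W;
    int di = qi - ki + H - 1;  // relative row offset shifted into [0, 2H-2]
    int dj = qj - kj + W - 1;  // relative col offset shifted into [0, 2W-2]
    scores[((size_t)h * N + q) * N + k] +=
        bias[((size_t)h * (2 * H - 1) + di) * (2 * W - 1) + dj];
}

int main() {
    const int heads = 4, H = 16, W = 16, N = H * W;
    size_t scoresBytes = (size_t)heads * N * N * sizeof(float);
    size_t biasBytes = (size_t)heads * (2 * H - 1) * (2 * W - 1) * sizeof(float);

    // Zero scores and a constant bias make the result easy to verify.
    float *d_scores, *d_bias;
    cudaMalloc(&d_scores, scoresBytes);
    cudaMalloc(&d_bias, biasBytes);
    cudaMemset(d_scores, 0, scoresBytes);
    float* h_bias = (float*)malloc(biasBytes);
    for (size_t i = 0; i < biasBytes / sizeof(float); ++i) h_bias[i] = 1.0f;
    cudaMemcpy(d_bias, h_bias, biasBytes, cudaMemcpyHostToDevice);

    // One thread per (head, query, key) entry.
    dim3 block(256);
    dim3 grid((N + block.x - 1) / block.x, N, heads);
    addRelBias2D<<<grid, block>>>(d_scores, d_bias, heads, H, W);

    float out;
    cudaMemcpy(&out, d_scores, sizeof(float), cudaMemcpyDeviceToHost);
    printf("scores[0][0][0] = %f (expected 1.0)\n", out);

    cudaFree(d_scores); cudaFree(d_bias); free(h_bias);
    return 0;
}
```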