
Calls to malloc/free from inside HIP kernels

Open pvelesko opened this issue 3 years ago • 9 comments

https://github.com/ROCm-Developer-Tools/HIP/pull/2975

How do we enable this? @pjaaskel @Kerilk

pvelesko avatar Oct 02 '22 13:10 pvelesko

The basic plan so far has been to add a shadow buffer to the kernel that acts as the "heap" when the kernel calls malloc/free, and to implement dynamic memory management by returning chunks from that buffer.

pjaaskel avatar Oct 03 '22 13:10 pjaaskel
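For illustration, a minimal sketch of what such a shadow-buffer allocator could look like, assuming the runtime pre-allocates a per-device buffer and passes it to the kernel. The names (ChipHeap, __chip_malloc, __chip_free) and the simple bump-pointer scheme are hypothetical, not chipStar's actual design:

// Hypothetical sketch of a bump allocator over a pre-allocated "shadow heap".
// ChipHeap, __chip_malloc and __chip_free are illustrative names only.
struct ChipHeap {
    char *base;                    // device buffer allocated by the runtime
    unsigned long long capacity;   // total bytes available
    unsigned long long offset;     // bump pointer, advanced atomically
};

__device__ void *__chip_malloc(ChipHeap *heap, unsigned long long size) {
    // Round the request up to 16-byte alignment.
    size = (size + 15) & ~15ull;
    unsigned long long old = atomicAdd(&heap->offset, size);
    if (old + size > heap->capacity)
        return nullptr;            // heap exhausted
    return heap->base + old;
}

__device__ void __chip_free(ChipHeap *heap, void *ptr) {
    // A pure bump allocator never reuses memory; free is a no-op until a
    // real free-list is added.
    (void)heap; (void)ptr;
}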

Can't we do this in SPIR-V?

pvelesko avatar Oct 03 '22 13:10 pvelesko

OpenCL (and thus SPIR-V in this case) doesn't support device-side dynamic memory allocation. We could define a new OpenCL extension that does, but it's better to provide a portable solution that works with the current Intel drivers.

pjaaskel avatar Oct 03 '22 13:10 pjaaskel

From the SPIR-V specification, 3.32.8 Memory Instructions, OpVariable: "Allocate an object in memory, resulting in a pointer to it, which can be used with OpLoad and OpStore."

Why can't we use this?

pvelesko avatar Oct 03 '22 13:10 pvelesko

It's for static memory allocation, i.e. the size must be known at compile time.

pjaaskel avatar Oct 03 '22 14:10 pjaaskel
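To make the distinction concrete, an illustrative example in CUDA syntax: OpVariable covers only locals whose size is fixed at compile time, whereas device-side malloc needs a size that may only be known at run time.

// Illustrative only: the fixed-size array is what OpVariable can express;
// the run-time-sized allocation is what device-side malloc would require.
__global__ void static_vs_dynamic(int n) {
    int fixed[64] = {0};                        // compile-time size -> OpVariable
    int *dyn = (int *)malloc(n * sizeof(int));  // run-time size -> needs device malloc
    if (dyn) {
        dyn[0] = fixed[0];
        free(dyn);
    }
}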

Ah, I see.

pvelesko avatar Oct 03 '22 14:10 pvelesko

There isn't much of a way around this without an extension. The size of the buffer to allocate will be a problem, though: a hint (or upper bound) on the amount of memory involved would be very useful here, but in the general case determining it is intractable.

Drivers that support device-side enqueue must already have the necessary functionality, so it may be an easy extension for them to implement if we define it right.

Kerilk avatar Oct 03 '22 15:10 Kerilk
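As a rough idea of how such an upper-bound hint could be surfaced on the host side, the existing heap-size limit could carry it before any kernel that allocates is launched. This is only a sketch and assumes chipStar exposes hipDeviceSetLimit / hipLimitMallocHeapSize the same way CUDA does; it is not a decided interface:

#include <iostream>
#include <hip/hip_runtime.h>

int main() {
    // Host-side upper-bound hint: reserve 32 MB for the device-side heap
    // before launching any kernel that calls malloc.
    size_t heap_bytes = 32 * 1024 * 1024;
    hipError_t err = hipDeviceSetLimit(hipLimitMallocHeapSize, heap_bytes);
    if (err != hipSuccess)
        std::cout << "setting heap limit failed: " << hipGetErrorName(err) << std::endl;

    size_t current = 0;
    hipDeviceGetLimit(&current, hipLimitMallocHeapSize);
    std::cout << "device malloc heap size = " << current << " bytes" << std::endl;
    return 0;
}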

I played around with the device malloc implementation in CUDA 11, and here are my observations:

  1. The malloc heap is allocated once per device.
  2. The default size appears to be 8 MB; the user can increase or decrease it with cudaDeviceSetLimit(cudaLimitMallocHeapSize, size).
  3. Once a kernel has been launched, the heap size can't be changed.

Given these observations, I think that, as @pjaaskel mentioned in his response, pre-allocating a fixed-size buffer is a valid approach. My only point is that this is a device-level limit, so the buffer/heap should be tied to the device, not to each kernel. Below is the test I used to check the CUDA behavior:

#include <cstdio>
#include <iostream>
#include <cuda_runtime.h>

// Attempts a device-side malloc of `size` bytes and reports the result.
__global__ void malloc__(int size) {
    int *ptr = (int *)malloc(size);
    if (ptr) {
        printf("1. Passed\n");
        free(ptr);
    } else {
        printf("1. Failed\n");
    }
}

__global__ void malloc__2(int size) {
    int *ptr = (int *)malloc(size);
    if (ptr) {
        printf("2. Passed\n");
        free(ptr);
    } else {
        printf("2. Failed\n");
    }
}

int main() {
    // Query the default device malloc heap size.
    size_t limit_val = 0;
    cudaError_t status = cudaDeviceGetLimit(&limit_val, cudaLimitMallocHeapSize);
    std::cout << "Status : " << cudaGetErrorName(status) << std::endl;
    std::cout << "limit_val = " << limit_val << std::endl;

    // A 7 MB allocation fits within the default 8 MB heap.
    malloc__<<<1, 1>>>(1024 * 1024 * 7);
    cudaDeviceSynchronize();

    // Try to double the limit after a kernel has already been launched;
    // per observation 3 above, the heap size can't be changed at this point.
    status = cudaDeviceSetLimit(cudaLimitMallocHeapSize, limit_val * 2);
    status = cudaDeviceGetLimit(&limit_val, cudaLimitMallocHeapSize);
    std::cout << "Status : " << cudaGetErrorName(status) << std::endl;
    std::cout << "limit_val = " << limit_val << std::endl;

    // An 8 MB allocation exceeds the default heap, so it fails unless the
    // limit increase took effect.
    malloc__2<<<1, 1>>>(1024 * 1024 * 8);
    cudaDeviceSynchronize();
    return 0;
}

Sarbojit2019 avatar Oct 06 '22 06:10 Sarbojit2019

https://reviews.llvm.org/rGa6213088812f — this seems like interesting work to build upon for device-side malloc/free and possibly other services. @linehill

pjaaskel avatar Jun 15 '23 11:06 pjaaskel