Calls to malloc/free from inside HIP kernels
https://github.com/ROCm-Developer-Tools/HIP/pull/2975
How do we enable this? @pjaaskel @Kerilk
The basic plan so far has been to attach a shadow buffer to the kernel that serves as the "heap": when the kernel calls malloc/free, dynamic memory management is implemented by handing out chunks from that buffer.
Can't we do this in SPIR-V?
OpenCL (and thus SPIR-V in this case) doesn't support device-side dynamic memory allocation. We could define a new OpenCL extension that does, but it's better to provide a portable solution that works with the current Intel drivers.
From the SPIR-V specification, 3.32.8 "Memory Instructions":

> **OpVariable**: Allocate an object in memory, resulting in a pointer to it, which can be used with OpLoad and OpStore.
Why can't we use this?
It's for static memory allocation, where the size is known at compile time.
Ah, I see.
There isn't much of a way around this without an extension. The size of the buffer to allocate will be a problem, though: a hint (or upper bound) on the amount of memory involved would be very useful here, but unfortunately in the general case determining it is intractable.
Drivers that support device-side enqueue must already have the necessary functionality, so it may be an easy extension for them to implement if we define it right.
I played around with the device-side malloc implementation in CUDA 11, and here are my observations:
- The malloc heap is allocated once per device.
- The default heap size appears to be 8 MB, which the user can increase or decrease with `cudaDeviceSetLimit(cudaLimitMallocHeapSize, size)`.
- Once a kernel has been launched, the heap size can't be changed.
Given the above, I think the approach @pjaaskel mentioned in his response (allocating a fixed-size chunk of memory as the buffer) is valid. My only point is that this is a device limit, so the buffer/heap should be tied to the device, not to the kernel. Below is the test I used to check the CUDA behavior:
```cpp
#include <iostream>
#include <cuda_runtime.h>

__global__ void malloc__(int size) {
  int* ptr = (int*)malloc(size);
  if (ptr) {
    printf("1. Passed\n");
  } else {
    printf("1. Failed\n");
  }
}

__global__ void malloc__2(int size) {
  int* ptr = (int*)malloc(size);
  if (ptr) {
    printf("2. Passed\n");
  } else {
    printf("2. Failed\n");
  }
}

int main() {
  size_t limit_val = 0;
  cudaError_t status = cudaDeviceGetLimit(&limit_val, cudaLimitMallocHeapSize);
  std::cout << "Status : " << cudaGetErrorName(status) << std::endl;
  std::cout << "limit_val = " << limit_val << std::endl;

  malloc__<<<1, 1>>>(1024 * 1024 * 7);
  cudaDeviceSynchronize();

  // change the limit
  status = cudaDeviceSetLimit(cudaLimitMallocHeapSize, limit_val * 2);
  status = cudaDeviceGetLimit(&limit_val, cudaLimitMallocHeapSize);
  std::cout << "Status : " << cudaGetErrorName(status) << std::endl;
  std::cout << "limit_val = " << limit_val << std::endl;

  malloc__2<<<1, 1>>>(1024 * 1024 * 8);
  cudaDeviceSynchronize();
  return 0;
}
```
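A per-device (rather than per-kernel) heap with the semantics observed above could look roughly like this on the runtime side. This is a hedged sketch: `DeviceHeapRegistry`, `set_limit`, and `on_kernel_launch` are made-up names, and a real runtime would allocate the buffer through the backend API rather than in host memory:

```cpp
#include <cstddef>
#include <map>
#include <vector>

// Hypothetical sketch mimicking the CUDA behavior observed above:
// one heap per device, with a default size of 8 MB, whose size can be
// changed only until the first kernel launch on that device.
class DeviceHeapRegistry {
  struct Heap {
    size_t limit = 8u * 1024 * 1024;  // 8 MB default, matching the observation
    bool frozen = false;              // set once a kernel has launched
    std::vector<unsigned char> buffer;
  };
  std::map<int, Heap> heaps_;  // keyed by device id

 public:
  size_t get_limit(int device) { return heaps_[device].limit; }

  // Returns false after the first kernel launch, when the size is frozen.
  bool set_limit(int device, size_t bytes) {
    Heap& h = heaps_[device];
    if (h.frozen) return false;
    h.limit = bytes;
    return true;
  }

  // Called at first kernel launch: materialize the heap and freeze its size.
  void on_kernel_launch(int device) {
    Heap& h = heaps_[device];
    if (!h.frozen) {
      h.buffer.resize(h.limit);
      h.frozen = true;
    }
  }
};
```

Keying the registry by device id keeps the heap a device limit, matching the CUDA observations rather than a per-kernel resource.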
https://reviews.llvm.org/rGa6213088812f: this seems like interesting work to build upon for device-side malloc/free and possibly other services. @linehill