stable-diffusion.cpp
memsize was hardcoded in preprocess_canny function
I tried to run preprocess_canny on a 1024x1024 bitmap. mem_size is hardcoded in two places:
uint8_t* preprocess_canny(uint8_t* img, int width, int height, float high_threshold, float low_threshold, float weak, float strong, bool inverse) {
    struct ggml_init_params params;
    params.mem_size = static_cast<size_t>(10 * 1024 * 1024); // 10 MB
    ...
}
void convolve(struct ggml_tensor* input, struct ggml_tensor* output, struct ggml_tensor* kernel, int padding) {
    struct ggml_init_params params;
    params.mem_size = 20 * 1024 * 1024; // 20 MB
    ...
}
With an image this size, the following line raises an assert error, because a single 1024x1024x3 F32 tensor alone needs 1024 * 1024 * 3 * 4 bytes = 12 MB, which already exceeds the 10 MB pool:
struct ggml_tensor* image = ggml_new_tensor_4d(work_ctx, GGML_TYPE_F32, width, height, 3, 1);
I tried increasing mem_size to 4x the original hardcoded value, and it can now process the large image. However, I think we should calculate mem_size dynamically from the image dimensions instead of hardcoding it.
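As a starting point, here is a minimal sketch of such a calculation, assuming the F32 RGB working tensors dominate the allocation. canny_mem_size is a hypothetical helper, and the tensor count and slack below are rough assumptions, not measured values:

// Hypothetical helper: derive mem_size from the image dimensions
// instead of using a fixed constant.
size_t canny_mem_size(int width, int height) {
    size_t tensor_size = (size_t)width * height * 3 * sizeof(float);  // one F32 RGB tensor
    size_t n_tensors   = 8;                                           // assumed upper bound on intermediates
    size_t slack       = n_tensors * ggml_tensor_overhead() + 1024 * 1024; // ggml bookkeeping + margin
    return n_tensors * tensor_size + slack;
}

preprocess_canny and convolve could then set params.mem_size = canny_mem_size(width, height); instead of a constant, so a 1024x1024 image would get roughly 100 MB of working memory instead of failing at 10 MB.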
Besides that, I think we should not free the image inside preprocess_canny, because the buffer may have been allocated by a third-party language binding with a different allocator, so ownership should stay with the caller:
free(img); // <---- we should remove this call
uint8_t* output = sd_tensor_to_image(image);
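A minimal sketch of the proposed change at the end of preprocess_canny (the ggml_free(work_ctx) cleanup is an assumption about the surrounding code, not a quote from it):

// at the end of preprocess_canny:
uint8_t* output = sd_tensor_to_image(image);
ggml_free(work_ctx); // release only the memory this function allocated itself
// free(img) removed: the input buffer is owned by the caller, which may have
// allocated it through a foreign-language binding's allocator
return output;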