Diffusion-based-Segmentation
Image size
Why do you set the image_size flag to 256 when the slices are of size 240x240?
This is a left-over of the original diffusion model implementation we built our method on (https://github.com/openai/guided-diffusion). There you need to specify the image dimension to be one of (64, 128, 256, 512). If you rewrite this part of the code, you could also use an image size of 240.
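For context, that restriction comes from how guided-diffusion picks the UNet channel multipliers from image_size. The sketch below paraphrases that logic (the exact tuples may differ between versions) and shows one way a 240 branch could be added; the 240 case is a hypothetical, untested extension, not part of the repository:

```python
# Sketch of how guided-diffusion chooses UNet channel multipliers from
# image_size (paraphrased from script_util.py; tuples may differ by version).
def channel_mult_for(image_size):
    if image_size == 512:
        return (0.5, 1, 1, 2, 2, 4, 4)
    if image_size == 256:
        return (1, 1, 2, 2, 4, 4)
    if image_size == 128:
        return (1, 1, 2, 3, 4)
    if image_size == 64:
        return (1, 2, 3, 4)
    # Hypothetical extension for 240: it is divisible by 16 but not by 32,
    # so a multiplier with four downsampling steps
    # (240 -> 120 -> 60 -> 30 -> 15) keeps all feature maps at integer sizes.
    if image_size == 240:
        return (1, 1, 2, 3, 4)
    raise ValueError(f"unsupported image size: {image_size}")
```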
Hello! Could you give me your e-mail address? I would like to talk about this network model with you; I have some questions about it.
Hello! I still have some questions about the image size. Line 54 of guided_diffusion/bratsloader.py reads "image = image[..., 8:-8, 8:-8]  # crop to a size of (224, 224)". But if the input size is 256x256, this crop gives 240x240, not 224x224. Also, the images in the dataset in this repository are 224x224. Could you please explain this to me?
Hi
Sorry for the confusion. In the original BRATS dataset, the images are of size (240, 240). When you then compute image = image[..., 8:-8, 8:-8], you crop them to a size of (224, 224).
The images in the dataset in this repository should be (240,240).
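To make the arithmetic concrete, here is a small standalone NumPy check of that crop (not part of bratsloader.py): a (240, 240) slice loses 8 pixels on each border, leaving 240 - 16 = 224 per axis.

```python
import numpy as np

# Dummy BRATS-sized slice: 4 modalities, 240 x 240 pixels.
image = np.zeros((4, 240, 240))

# Same crop as in guided_diffusion/bratsloader.py: drop 8 pixels per border.
cropped = image[..., 8:-8, 8:-8]

print(cropped.shape)  # (4, 224, 224) -- i.e. 240 - 8 - 8 = 224 per spatial axis
```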
@JuliaWolleb Could you kindly share the code you used for preprocessing the 3D data into slice-wise folders? The original data are 3D volumes, so could you share how you preprocessed the 3D data (369 patient folders) into 2D slice-wise folders? I have read how it is done in your paper, but if you have the code, please share it if possible. Kindly help me out.
@JuliaWolleb Hello, I am also curious about how to convert the 3D volumes into 2D images. Have you solved this? If so, could you show me how?
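For anyone looking for a starting point while waiting for the official script, below is a minimal sketch of slicing a BRATS NIfTI volume into 2D arrays with nibabel. The folder layout, file-name pattern, modality list, and output format here are assumptions following the common BRATS convention; this is not the authors' actual preprocessing code.

```python
import os
import numpy as np
import nibabel as nib

def slice_patient(patient_dir, out_dir,
                  modalities=("t1", "t1ce", "t2", "flair", "seg")):
    """Save every axial slice of one BRATS patient as a separate .npy file.

    Hypothetical helper: the file naming (<patient>_<modality>.nii.gz) and the
    axial slicing axis follow the usual BRATS layout, not this repo's code.
    """
    patient = os.path.basename(patient_dir.rstrip("/"))
    os.makedirs(out_dir, exist_ok=True)

    # Load all modalities of this patient as (240, 240, 155) arrays.
    volumes = {
        m: nib.load(os.path.join(patient_dir, f"{patient}_{m}.nii.gz")).get_fdata()
        for m in modalities
    }

    depth = next(iter(volumes.values())).shape[2]
    for z in range(depth):
        for m, vol in volumes.items():
            np.save(os.path.join(out_dir, f"{patient}_{m}_slice{z:03d}.npy"),
                    vol[:, :, z])
```

Calling slice_patient once per patient folder would produce one 2D .npy file per modality and slice, which can then be grouped however the data loader expects.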