JuliaWolleb

Results: 49 comments of JuliaWolleb

Hi, I have the same problem. Did you mean that the loss function SupConLoss should be modified? If yes, how? Or did you mean only the input dimensions to [batch size,...

Hi, thanks for your interest. Can you maybe plot one image of your predicted segmentation map? This might help me see where the problem is with your low dice...

Hi, this seems like it is generating random segmentation masks rather than segmentation masks that belong to your input image. Do you properly stack the input image and the noisy...
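For illustration, here is a minimal sketch of what "stacking" refers to in this context: the anatomical image and the noisy segmentation mask are concatenated along the channel dimension before being fed to the denoising U-Net. The shapes and channel counts below are assumptions, not the repository's exact code.

```python
import torch

# Hedged sketch: concatenate the image and the noisy mask channel-wise.
image = torch.randn(1, 4, 224, 224)       # e.g. 4 input modalities (assumption)
noisy_mask = torch.randn(1, 1, 224, 224)  # segmentation mask with noise added at step t

model_input = torch.cat([image, noisy_mask], dim=1)  # shape (1, 5, 224, 224)
# the U-Net's in_channels must then equal image channels + mask channels (here 5)
```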

Hi, oh sorry, yes, if you change the number of diffusion steps, you also need to adapt `maxt` accordingly. But this was only a parameter I played around with,...
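A small sketch of why the two values have to match (the variable names here are illustrative, not the repository's): the timestep drawn during training indexes the noise schedule, so its upper bound must not exceed the number of diffusion steps.

```python
import torch

diffusion_steps = 1000   # length of the noise schedule (assumed value)
maxt = diffusion_steps   # upper bound for the randomly drawn timestep

# t is drawn uniformly from [0, maxt); if maxt exceeded diffusion_steps,
# the noise schedule would be indexed out of range
t = torch.randint(low=0, high=maxt, size=(8,))  # one timestep per batch element
```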

Hi, we sliced the 3D volume into 2D axial slices and cropped them to a size of 224x224 (to cut away some zero values around the brain). We stack the input...
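A rough sketch of that preprocessing step, assuming NIfTI volumes with a 240x240 in-plane size (typical for BraTS); the file name and helper below are illustrative only.

```python
import nibabel as nib

# load one modality of a 3D brain volume, e.g. shape (240, 240, 155)
volume = nib.load("BraTS_case_t1.nii.gz").get_fdata()

def center_crop(slice_2d, size=224):
    # crop away the zero border around the brain
    h, w = slice_2d.shape
    top, left = (h - size) // 2, (w - size) // 2
    return slice_2d[top:top + size, left:left + size]

# slice along the axial direction and crop each slice to 224x224
axial_slices = [center_crop(volume[:, :, z]) for z in range(volume.shape[2])]
```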

Hi, what do you mean by a resolution of 1024? Do you mean the image dimensions are of size (1024, 1024)? What are your channel dimensions, and what is the error message?

I have never trained with images of size 1024x1024. But as long as you do not encounter an "out of memory" error on your GPU, it should work fine.

Oh, sorry about that; you don't need to specify the name. You can change the line to `viz.image(visualize(sample[0, ...]), opts=dict(caption="sampled output"))` in the case of the chexpert dataset, in the file...
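For context, a call like that assumes a Visdom client connected to a running Visdom server. A minimal sketch of that setup (the port is an assumption; `visualize` is the repository's display helper and is not defined here):

```python
import visdom

# start the server separately, e.g. with:  python -m visdom.server
viz = visdom.Visdom(port=8850)  # adjust the port to match your server
```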

Hi, we based our diffusion model on the work [Diffusion Models beat GANs](https://arxiv.org/abs/2105.05233). There, the loss function is defined as an MSE loss between the added and the predicted noise. It has...
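A minimal sketch of that simplified objective (function and argument names are illustrative, not the repository's): noise is added to the clean sample according to the schedule, the model predicts that noise, and the loss is the MSE between the two.

```python
import torch
import torch.nn.functional as F

def diffusion_mse_loss(model, x0, t, alphas_cumprod):
    # forward diffusion: add noise to x0 at timestep t
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)            # \bar{alpha}_t per batch element
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise
    # the network predicts the added noise; model(x_t, t) is an assumed signature
    predicted_noise = model(x_t, t)
    return F.mse_loss(predicted_noise, noise)
```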

Hi, yes, on the Chexpert dataset the names of the `.pt` files should of course not be _brats_. You can change that to _chexpert_ or whatever you like. The error...