Matthew Macy
`page.h` is a really generic name. Could you please call it `zfs_page.h`?
@fxia22 torch.utils.ffi doesn't appear to have any knowledge of nvcc or .cu. I think you need to build your CUDA sources separately (see the example Makefiles that come with the...
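A minimal sketch of the pattern I mean, assuming the old torch.utils.ffi workflow: compile the .cu file with nvcc into an object file first, then hand it to `create_extension` via `extra_objects`. The file names and module name here are hypothetical.

```python
# build.py -- hedged sketch; assumes the kernel was already compiled, e.g.:
#   nvcc -c -o src/mylib_cuda.o src/mylib_cuda.cu -x cu -Xcompiler -fPIC
import os
from torch.utils.ffi import create_extension

this_dir = os.path.dirname(os.path.realpath(__file__))

ffi = create_extension(
    '_ext.mylib',
    headers=['src/mylib.h'],      # C declarations wrapping the kernel launches
    sources=['src/mylib.c'],      # plain C glue only; no .cu files go here
    relative_to=__file__,
    with_cuda=True,
    extra_objects=[os.path.join(this_dir, 'src/mylib_cuda.o')],  # nvcc output
)

if __name__ == '__main__':
    ffi.build()
```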
I'm using VNet with the LUNA16 data set. I've started by just segmenting lungs, which don't have a class imbalance (the CT volumes are 41% lung on average), so I...
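For context, a quick way to sanity-check that balance, assuming binary lung masks already loaded as tensors (how you load them is up to your pipeline):

```python
import torch

def foreground_fraction(mask):
    # mask: binary segmentation volume, 1 = lung voxel, 0 = background
    return mask.float().mean().item()

# Averaging this over the data set gives the class balance;
# roughly 0.41 would match the "41% lung on average" figure above.
```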
I'm translating this to PyTorch and have noticed the same thing, only more so. It might help if you put each layer on a single line, like: http://lmb.informatik.uni-freiburg.de/resources/opensource/3dUnet_miccai2016_with_BN.prototxt Prototxt is...
@prinsherbert have you extracted the data augmentation code and dice loss function from his Caffe fork? I started extracting the former from the 3D Unet Caffe patch and it's rather...
@prinsherbert another irregularity that I'm not completely comfortable with (but is actually not a discrepancy between the prototxt and the paper) is that in the sixth block you're actually reducing...
@faustomilletari I couldn't tell from the paper since your respective datasets were quite different, but how do your results compare with 3D Unet (which appeared to come out almost concomitantly)?...
Thanks for the response. It's really hard for me to evaluate how much a given architecture contributes to the SotA when it's evaluated on a different data set from comparable...
@faustomilletari Just to update you on where I'm at before I get your thoughts on how to cope with the massive data set sizes. I ported the loss function: https://github.com/mattmacy/torchbiomed/blob/master/torchbiomed/loss.py...
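For reference, a minimal soft-Dice sketch along those lines; it is not necessarily identical to the linked torchbiomed implementation, and the smoothing term plus the binary foreground/background setup are my assumptions:

```python
import torch

def soft_dice_loss(probs, target, eps=1e-5):
    # probs:  (N, ...) predicted foreground probabilities in [0, 1]
    # target: (N, ...) binary ground-truth mask
    probs = probs.contiguous().view(probs.size(0), -1)
    target = target.contiguous().view(target.size(0), -1).float()
    intersection = (probs * target).sum(dim=1)
    denom = probs.sum(dim=1) + target.sum(dim=1)
    dice = (2.0 * intersection + eps) / (denom + eps)  # per-sample Dice coefficient
    return 1.0 - dice.mean()                           # loss: minimize 1 - Dice
```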
I now understand the purpose of the argmax: it converts the softmax results to a one-hot encoding. I now appear to be getting reasonable results on...
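To illustrate, a minimal sketch with a hypothetical two-class volume (`F.one_hot` is a newer convenience; the same thing can be done with `scatter_`):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(1, 2, 8, 8, 8)               # (N, C, D, H, W), C = {background, lung}
probs = F.softmax(logits, dim=1)                  # per-voxel class probabilities
pred = probs.argmax(dim=1)                        # (N, D, H, W) hard class indices
one_hot = F.one_hot(pred, num_classes=2)          # (N, D, H, W, C) binary encoding
one_hot = one_hot.permute(0, 4, 1, 2, 3).float()  # back to (N, C, D, H, W)
```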