Erica

Results: 18 comments by Erica

> Yes, you are right, I only fine-tune some of the layers of the modality UNet. Why not use the pre-trained weights for all modalities? Maybe we could get better...

@Lightning980729 Hello, when training on all the data for 20000 epochs, how much time does it take you? I trained on all the data for 20000 epochs with learning rate 0.0001 and patch size 64, and it caused...

> > @Lightning980729 > > Hello, when training on all the data for 20000 epochs, how much time does it take? > > I trained on all the data for 20000 epochs with learning rate 0.0001,...

I also didn't get the same result as the author, so what went wrong? How can it be improved?

Due to limited GPU resources, we must train on patch volumes. Could we instead resize the whole volume to a smaller size and train on that? Would that improve the Dice score?

> You'd better not do that; resizing the volume means resizing the label at the same time, which will cause a lot of problems. Yes, you are right, this task...
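One concrete problem with resizing is interpolation of the label map: linear interpolation blends discrete class IDs into invalid intermediate values. A minimal sketch (my own illustration, not code from this repository) using `scipy.ndimage.zoom`, with nearest-neighbor interpolation reserved for labels:

```python
import numpy as np
from scipy.ndimage import zoom

def resize_volume_and_label(volume, label, target_shape):
    """Resize the intensity volume with trilinear interpolation, but the
    label map with nearest-neighbor, so no new (invalid) class values
    are interpolated into existence."""
    factors = [t / s for t, s in zip(target_shape, volume.shape)]
    vol_small = zoom(volume, factors, order=1)  # order=1: trilinear
    lab_small = zoom(label, factors, order=0)   # order=0: nearest-neighbor
    return vol_small, lab_small

# Hypothetical example: a 64^3 patch downsampled to 32^3
volume = np.random.rand(64, 64, 64).astype(np.float32)
label = np.random.randint(0, 4, size=(64, 64, 64))
v, l = resize_volume_and_label(volume, label, (32, 32, 32))
```

Even with correct interpolation, aggressive downsampling can erase thin structures (e.g. small tumor subregions) from the label entirely, which is likely the "lot of problems" referred to above.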

> @Lightning980729 @zwlshine I found some mistakes in my code and have uploaded the new version; you should now get the right results. Sorry for the mistake. Besides, the...

I'm sure the only effective change is in the function softmax_weighted_loss. The other changes, such as fractal_net in models.py and self.is_global_path in operations.py, are all commented out.
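For context, a `softmax_weighted_loss` in segmentation code usually means class-frequency-weighted softmax cross-entropy, where rare classes (tumor voxels) get larger weights than the dominant background. The repository's exact implementation may differ; this is a minimal NumPy sketch of that common pattern:

```python
import numpy as np

def softmax_weighted_loss(logits, one_hot_labels, eps=1e-8):
    """Class-frequency-weighted softmax cross-entropy (a common pattern;
    the repo's actual softmax_weighted_loss may differ in detail).
    logits, one_hot_labels: shape (..., num_classes)."""
    # numerically stable softmax over the class (last) axis
    exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
    prob = exp / exp.sum(axis=-1, keepdims=True)
    n_voxels = np.prod(one_hot_labels.shape[:-1])
    # per-class frequency in this batch; weight = 1 - frequency,
    # so rare (tumor) classes contribute more to the loss
    freq = one_hot_labels.reshape(-1, one_hot_labels.shape[-1]).sum(0) / n_voxels
    weights = 1.0 - freq
    ce = -(one_hot_labels * np.log(prob + eps))
    return float((ce * weights).sum() / n_voxels)

# Hypothetical example: 2 volumes of 8x8 voxels, 4 classes
np.random.seed(0)
logits = np.random.randn(2, 8, 8, 4)
labels = np.eye(4)[np.random.randint(0, 4, size=(2, 8, 8))]
loss = softmax_weighted_loss(logits, labels)
```

A bug in such a loss (e.g. weighting or normalizing over the wrong axis) would silently distort training, which fits the observation that fixing this one function was what changed the results.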

> Hello, I can't get the best result. My best result is an average Dice of [0.603, 0.62, 0.584]. Do you know how to solve it? Thanks! When using only HGG for training, I can get...
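When comparing Dice numbers like the triple above, it helps to confirm everyone computes the metric the same way. A minimal per-class Dice sketch (my own illustration; the evaluation script in the repository may group classes differently, e.g. into BraTS regions):

```python
import numpy as np

def dice_per_class(pred, gt, classes=(1, 2, 3), eps=1e-8):
    """Per-class Dice: 2|P intersect G| / (|P| + |G|), with eps so a class
    absent from both prediction and ground truth scores 1.0."""
    scores = []
    for c in classes:
        p, g = (pred == c), (gt == c)
        scores.append((2.0 * (p & g).sum() + eps) / (p.sum() + g.sum() + eps))
    return scores

# Hypothetical 2x2 example with classes 1 and 2
pred = np.array([[0, 1], [2, 2]])
gt = np.array([[0, 1], [2, 1]])
scores = dice_per_class(pred, gt, classes=(1, 2))
```

Averaging per-class Dice over cases versus pooling all voxels first can shift the reported numbers noticeably, so that is worth ruling out before blaming the model.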