
Asking about your experience.

CheungDuMing opened this issue 5 years ago · 3 comments

Have you tried using deconvolution instead of upsampling?

CheungDuMing · Sep 24 '18 12:09

For the same architecture we did not have numbers for deconv layers. Generally, deconv layers tend to produce high-frequency artifacts in the early epochs, so my colleagues did not much prefer them. Ultimately, though, I doubt upsampling and deconv would make much difference in the numbers.
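
To make the trade-off concrete, here is a minimal sketch of the two 2x upsampling options in PyTorch. Channel counts and kernel sizes are illustrative, not the AH-Net settings:

```python
import torch
import torch.nn as nn

# Variant A: learned deconvolution (transposed convolution).
deconv_up = nn.ConvTranspose2d(
    in_channels=64, out_channels=64,
    kernel_size=4, stride=2, padding=1)  # 2x spatial upsampling

# Variant B: fixed interpolation followed by a regular convolution,
# which avoids the checkerboard/high-frequency artifacts that deconv
# can produce early in training.
interp_up = nn.Sequential(
    nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
)

x = torch.randn(1, 64, 32, 32)
# Both variants produce the same output shape: (1, 64, 64, 64).
assert deconv_up(x).shape == interp_up(x).shape
```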

lsqshr · Sep 24 '18 12:09

Is the structure of the multi-channel 2D feature encoder described in your paper the same as in the code? I find that the paper implies six GCNs, while the MCFCN code has five. Which is the latest version?

CheungDuMing · Oct 13 '18 12:10

Thanks for your interest. You are right. The 2D network code here is slightly different from what we used in the paper. It is an easy-to-read implementation adapted from https://github.com/ycszen/pytorch-segmentation/blob/master/gcn.py that skips the GCN on the outputs of the first ResNet block. The original GCN paper only uses 4 GCN kernels (https://arxiv.org/pdf/1703.02719.pdf). The empirical performance difference between these implementations would depend on the dataset.
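
For anyone following along, here is a sketch of the GCN block as described in the linked paper (arXiv:1703.02719): a large k x k convolution approximated by two cheaper separable branches. Class and parameter names are our own, not the exact code in this repository:

```python
import torch
import torch.nn as nn

class GCN(nn.Module):
    """Global Convolutional Network block (arXiv:1703.02719).

    Approximates a large k x k convolution with two separable branches,
    (k x 1 then 1 x k) and (1 x k then k x 1), summed together.
    """

    def __init__(self, in_ch, out_ch, k=7):
        super().__init__()
        pad = k // 2
        self.branch_a = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, (k, 1), padding=(pad, 0)),
            nn.Conv2d(out_ch, out_ch, (1, k), padding=(0, pad)),
        )
        self.branch_b = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, (1, k), padding=(0, pad)),
            nn.Conv2d(out_ch, out_ch, (k, 1), padding=(pad, 0)),
        )

    def forward(self, x):
        # Summing the two branches gives an effective k x k receptive
        # field at a fraction of the parameter cost.
        return self.branch_a(x) + self.branch_b(x)
```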

We use this repository to demonstrate how the pre-trained 2D network weights can be transferred to the 3D AH-Net. The concept is indeed independent of any particular network architecture. Feel free to explore more possibilities, e.g. pre-train a 2D FCN-like architecture and transfer both the encoder and decoder features to the 3D AH-Net decoder blocks instead of discarding the trained 2D decoder. You are welcome to discuss further if interested.
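
For reference, a minimal sketch of the weight-transfer idea: copy a trained 2D kernel into a 3D kernel with a singleton depth dimension, so the 3D network initially behaves like the 2D network applied slice by slice. Layer shapes here are illustrative, not the actual AH-Net modules:

```python
import torch
import torch.nn as nn

conv2d = nn.Conv2d(64, 64, kernel_size=3, padding=1)   # pretrained 2D layer
conv3d = nn.Conv3d(64, 64, kernel_size=(1, 3, 3),
                   padding=(0, 1, 1))                  # its 3D counterpart

with torch.no_grad():
    # Reshape (out, in, kH, kW) -> (out, in, 1, kH, kW) and copy,
    # so the 3D layer reproduces the 2D layer on each depth slice.
    conv3d.weight.copy_(conv2d.weight.unsqueeze(2))
    conv3d.bias.copy_(conv2d.bias)
```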

lsqshr · Oct 13 '18 15:10