GuideNet
Implementation details (layer and channel numbers)
Thank you for your interesting work and planning to release your code!
I understand that you plan to release the code after the review of the TIP submission. Meanwhile, I'm trying to implement your method from the descriptions in the paper, but some important information needed to fully understand the method is missing. Could you help me by answering my questions below?
- Layer numbers: I suppose the overall network architecture is exactly the one illustrated in Figure 3 in terms of layer numbers, i.e., single conv (+bn+relu) layers at the input and output, and 5 successive blocks of ResBlock-ResBlock (with downsampling in the first ResBlock) in the encoders. Is this right?
- Places of down-sampling: Where does down-sampling happen? I suppose there is always down-sampling at the first internal layer of each block of two successive ResBlocks. Do the first convolution layers (gray layers) also perform down-sampling?
- Channel numbers: Could you provide the channel number of each layer in Figure 3? It would be very helpful if you could write the numbers directly beside the layers in the figure and upload it here as an image (you can upload an image here by copying image data onto the clipboard and pasting it into the text box).
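To make the hypothesis above concrete, here is a minimal shape walk-through of the encoder as I currently read Figure 3: one input conv, then 5 blocks of two ResBlocks each, with stride-2 down-sampling in the first ResBlock of every block. Note that the base channel count and the doubling scheme are pure assumptions on my part (they are exactly what I'm asking the authors to confirm), and the 352x1216 input size is just the common KITTI crop used for illustration.

```python
def encoder_shapes(h, w, base_channels=32, num_blocks=5):
    """Trace (name, channels, height, width) through the hypothesized encoder.

    Assumptions (not confirmed by the paper): the input conv keeps full
    resolution at `base_channels`; each of the 5 blocks doubles the
    channel count and halves the spatial resolution via a stride-2
    convolution in its first ResBlock.
    """
    shapes = [("input_conv", base_channels, h, w)]
    c = base_channels
    for i in range(num_blocks):
        c *= 2              # assumed channel doubling per block
        h = (h + 1) // 2    # stride-2 down-sampling (ceil division)
        w = (w + 1) // 2
        shapes.append((f"block{i + 1}", c, h, w))
    return shapes

# Example: a 352x1216 input (typical KITTI crop) through the encoder.
for name, c, h, w in encoder_shapes(352, 1216):
    print(f"{name}: {c} x {h} x {w}")
```

With these assumed settings the deepest feature map would be 1024 x 11 x 38; confirming or correcting these numbers is the point of my questions above.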
Thank you.
@t-taniai Could you share an update? Has the author responded, or have you had any success reproducing the method yourself? Thanks.
@saikumar96 I haven't received any response from the author yet, and I've had no success at reproducing the method.
@t-taniai Have you managed to reproduce the paper successfully?