PhyDNet
There are some formula details in the Supplementary Material that I can't understand; maybe others have the same problem.
Q1: What is the k?
I think it should be written as w; am I getting it right?
Q2: What are the p and k in the filter w?
I guess the p in filter w means the channel dimension of filter w; am I getting it right?
I guess the k in filter w just means the size of filter w, not the k-th power of w; am I getting it right?
Q3: We all know that convolution is generally implemented with a bias weight, but the bias weight does not appear in the formula. Could you briefly explain why the bias weight doesn't need to be considered?
Thank you so much!
I have read the whole article; it is really good: clearly organized and shown to be very powerful.
Thanks for your work : )
I have my own understanding of these three questions, but if you could kindly give a brief answer, I would be more confident : )
Thank you : )
And I have one last question, simple but important; I will open a new issue, since others may have the same question.
Thank you @bluove for your feedback!
Q1: You are right, I didn't notice this mistake; it is obviously a w in the formula.
Q2: The p marks the physical part of the parameter vector w (see the beginning of Section 3.3 in the paper), and the k indeed means that w_p is a filter of size k*k.
Q3: You can add a bias term in the convolution layer, but it will not change the order of differentiation controlled by the moment matrix M(w).
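To make Q2/Q3 concrete, here is a minimal NumPy sketch of how I understand the moment matrix M(w) of a k*k filter. The exact normalization (the 1/(i! j!) factor) and the index conventions are my assumptions and should be checked against the Supplementary Material. It also illustrates the Q3 point: a bias only adds a constant to the convolution output and never appears in M(w), so it cannot change the approximated derivative order.

```python
import math
import numpy as np

def moment_matrix(w):
    """Moment matrix M(w) of a k x k filter w (sketch of my
    understanding, not the authors' code): entry (i, j) is the
    (i, j)-th moment of the filter divided by i! * j!.

    A filter whose moment matrix is all zeros except a 1 at
    position (a, b) approximates the partial derivative of
    order a along one axis and b along the other."""
    k = w.shape[0]
    offsets = np.arange(k) - k // 2  # tap offsets centered on the middle pixel
    M = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            # outer(offsets**i, offsets**j)[u, v] = offsets[u]**i * offsets[v]**j
            M[i, j] = np.sum(w * np.outer(offsets**i, offsets**j)) \
                      / (math.factorial(i) * math.factorial(j))
    return M

# Example: a centered-difference filter [-1/2, 0, 1/2] along the
# second axis approximates a first derivative, so M(w) should have
# a 1 at position (0, 1) and zeros in the low-order entries.
w = np.zeros((3, 3))
w[1, 0], w[1, 2] = -0.5, 0.5
M = moment_matrix(w)
```

Note that a bias b would shift w * x + b by a constant but never enters this computation, which is why it leaves the differentiation order untouched.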
First of all, thank you very much for the paper; I find it very inspiring!
While reproducing the results with batch size 32 (rather than the original 64, due to limited GPU memory), I had trouble reaching the experimental results reported in the paper.
Details: I trained for 600 epochs and the algorithm seems to have converged, yet I get MSE = 32.5, MAE = 90.7 and SSIM = 0.924 (the paper reports MSE = 24.4, MAE = 70.3, SSIM = 0.947).
May I know if there is something I missed (e.g., a hyperparameter trick), or must I use batch size 64?
Thank you so much for your help.
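One note on the batch-size question (not an official answer): with plain SGD, accumulating gradients over two mini-batches of 32, each scaled by 1/2, reproduces the batch-64 gradient exactly, so limited GPU memory does not have to change the effective batch size; with momentum-based optimizers such as Adam the equivalence is only approximate. A minimal NumPy check of that identity, with all names illustrative rather than taken from the PhyDNet code:

```python
import numpy as np

# Hedged sketch: for plain SGD, two accumulated mini-batches of 32
# (each gradient scaled by 1/accum_steps) equal one batch of 64.
# Illustrated on a linear least-squares loss with synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
y = rng.normal(size=64)
w = np.zeros(3)

def grad(w, Xb, yb):
    # gradient of the mean squared error 0.5 * mean((Xb @ w - yb)**2)
    return Xb.T @ (Xb @ w - yb) / len(yb)

# gradient over the full batch of 64
g_full = grad(w, X, y)

# same gradient accumulated over two mini-batches of 32
accum_steps = 2
g_accum = np.zeros(3)
for Xb, yb in ((X[:32], y[:32]), (X[32:], y[32:])):
    g_accum += grad(w, Xb, yb) / accum_steps
```

Here `g_full` and `g_accum` coincide, which is why gradient accumulation is a common workaround when the original batch size does not fit in memory.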