
Some questions about implementation details

dadaokanxigua opened this issue · 3 comments

"We optimize parameter layer several times and does not optimize the same parameters. We use one to bm3d and it can improve psnr from 35.+ to 37.+."

  1. Does "optimize the parameter layer several times" mean that you execute train_step2 several times?
  2. What is the final PSNR value for step 2? Is it more than 35.413 (the PSNR on your test dataset)?

dadaokanxigua · May 11 '21

"We optimize parameter layer several times and does not optimize the same parameters. We use one to bm3d and it can improve psnr from 35.+ to 37.+."

  1. Does "optimize parameter layer several times" mean that you execute train_step2 several times ?
  2. How much is the value of final psnr for step2, is it more than 35.413(the value of pnsr on your test dataset)?

I'm sorry for the late reply.

  1. Yes, and the result is not stable.
  2. I checked my previous notes: BM3D with random parameters reaches a PSNR of 35.+, while the parameters selected in step 2 reach 37.+. Note that the test data we chose is not all 40 instances of the competition, but the 23 instances whose scenes correspond to the training instances. Using all 40 instances noticeably lowers the result, because some scenes are never trained on.
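In other words, "several times" means independent restarts of the step-2 search, keeping the best result. A minimal Python sketch of that loop, where `run_train_step2` and `evaluate_psnr` are hypothetical stand-ins for the repo's actual training and evaluation code:

```python
import random

def run_train_step2(seed):
    # Placeholder for the repo's step-2 optimization of the BM3D
    # parameters; the real call and its return type are assumptions.
    random.seed(seed)
    return {"cff": random.uniform(0.5, 1.5)}

def evaluate_psnr(params):
    # Placeholder for PSNR evaluation on the 23 held-out instances.
    return 35.0 + 2.0 * random.random()

best_psnr, best_params = float("-inf"), None
for i in range(5):  # several independent restarts, since one run is unstable
    params = run_train_step2(seed=i)
    psnr = evaluate_psnr(params)
    if psnr > best_psnr:
        best_psnr, best_params = psnr, params

print(f"best PSNR {best_psnr:.3f} with params {best_params}")
```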

Apathetically · May 28 '21

Thank you for your reply. There are some other questions I hope to discuss:

  1. In train step 2, should the discrete parameters (such as color space) use the same optimization method as the continuous parameters (such as cff)? I found that the discrete parameters are hard to move away from their initialization: if the initial value of a discrete parameter is 0.2, its value after train step 2 still fluctuates around 0.2, and train step 2 cannot move it from 0 to 1. Have you met this problem? If not, could you add your convergence plots for the discrete parameters to the README?
  2. There are only 5 parameters to optimize; is it necessary to feed the whole training dataset?

dadaokanxigua · May 28 '21

> Thank you for your reply. There are some other questions I hope to discuss:
>
>   1. In train step 2, should the discrete parameters (such as color space) use the same optimization method as the continuous parameters (such as cff)? I found that the discrete parameters are hard to move away from their initialization: if the initial value of a discrete parameter is 0.2, its value after train step 2 still fluctuates around 0.2, and train step 2 cannot move it from 0 to 1. Have you met this problem? If not, could you add your convergence plots for the discrete parameters to the README?
>   2. There are only 5 parameters to optimize; is it necessary to feed the whole training dataset?

Since over half a year has passed, I may not remember very clearly.

  1. The phenomenon you mentioned does exist, but the discretized variables still have to be handled the way the paper describes. In fact, I ran into a similar problem at the time.
  2. In my opinion, if this is applied to actual mobile phone devices, different phones take different photos, so if you want to optimize for a specific phone, maybe you only need to train on that phone's data. But the method proposed in the paper was meant for the denoising competition, which may require training with all the datasets.
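For context on point 1: one common way to keep a discrete choice inside a gradient-based search is a straight-through softmax relaxation. The PyTorch sketch below illustrates that general technique, not the repo's exact code; the 3-way "color space" choice and the toy target loss are assumptions. Gradients reach the logits only through the soft probabilities, so movement away from the initialization can be slow, which is consistent with the fluctuation described above.

```python
import torch

# Logits over 3 candidate settings of a discrete parameter
# (e.g. a color-space choice); the size 3 is an arbitrary assumption.
logits = torch.zeros(3, requires_grad=True)

def sample_choice(logits, tau=1.0):
    soft = torch.softmax(logits / tau, dim=0)
    hard = torch.zeros_like(soft)
    hard[soft.argmax()] = 1.0
    # Straight-through: the hard one-hot is used in the forward pass,
    # but gradients flow through the soft probabilities.
    return hard + soft - soft.detach()

opt = torch.optim.Adam([logits], lr=0.1)
target = torch.tensor([0.0, 0.0, 1.0])  # toy proxy loss: pretend option 2 is best

for _ in range(200):
    choice = sample_choice(logits)
    loss = ((choice - target) ** 2).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(torch.softmax(logits, dim=0))  # probability mass shifts toward index 2
```

A larger temperature `tau` flattens the soft probabilities and weakens the gradient on the logits, which is one plausible reason a discrete parameter barely moves from its initial value.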

Apathetically · Jun 02 '21