
convergence of preset loss

Open oferidan1 opened this issue 2 years ago • 4 comments

Hi, thanks for your great paper and code. I noticed that in the paper you add a comparison of the L1 loss on preset estimation with and without the preset pair-wise loss (PPL). According to the plot in the paper, even with PPL the preset estimation loss seems flat (value ~0.129) and doesn't go down much. Is this the expected result? Thanks, Ofer

[Image: training curves of the preset estimation (L1) loss from the paper]

oferidan1 avatar Feb 03 '22 10:02 oferidan1

Hi Ofer,

Thank you, and thanks so much for your question. It is not the ideal result, but it is the expected one. The idea of PPL is to minimize the distance, in latent space, between photos adjusted by the same preset: regardless of image content, the model should infer a consistent preset and color style representation vector from photos adjusted by the same preset. This enhances not only the stability of the preset prediction but also the direct color style transfer.
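For intuition, here is a minimal PyTorch-style sketch of that pair-wise idea. It is not the repository's exact implementation; the `encoder` interface (returning a preset vector and a style embedding) is an assumption for illustration:

```python
import torch.nn.functional as F

def preset_pairwise_loss(encoder, photo_a, photo_b):
    """Sketch: photo_a and photo_b differ in content but were adjusted
    by the SAME preset, so the predicted preset vectors and style
    embeddings should match (hypothetical encoder interface)."""
    preset_a, style_a = encoder(photo_a)  # predicted settings + latent style code
    preset_b, style_b = encoder(photo_b)
    # Pull the two predictions together, regardless of image content.
    return F.l1_loss(preset_a, preset_b) + F.l1_loss(style_a, style_b)
```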

Best regards, Man

minhmanho avatar Feb 03 '22 16:02 minhmanho

Hi Man, thanks for your feedback. Let me clarify my question: I understand the idea of the PPL loss, but according to the graph in the paper, even with the PPL loss the preset estimation loss looks flat and does not decrease much. Usually, a loss should decrease over time. Did you notice this? Thanks, Ofer


oferidan1 avatar Feb 04 '22 15:02 oferidan1

Hi Ofer,

Yes, I noticed that. Ideally, the preset loss would decrease gradually. However, estimating an exact preset (predicting accurate settings) for the actual color transformation is a tough problem, as shown by the poor convergence in the figure. Hence, I do not expect an accurate preset prediction, because it is much easier to learn a color style representation and transfer the color style directly with convolutions.

Nevertheless, estimating presets seems more suitable for end users, because we can understand the color transformation through the predicted settings and subsequently adjust the color style. For better convergence of the preset loss, I think the architecture and training framework should be re-designed so that the model can focus more on estimating presets. I did not train the preset prediction independently, but I think that could help.
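As a rough illustration of that re-weighting idea (purely hypothetical code, not from the repository), the preset term could simply be given a larger weight in the total loss, or the preset branch could be optimized on its own:

```python
import torch.nn.functional as F

# Assumed weight on the preset term; not a value from the paper.
LAMBDA_PRESET = 10.0

def training_step(model, content, reference, gt_preset, gt_stylized):
    # Hypothetical two-headed model: predicts the preset settings and the
    # directly stylized photo from a content/reference pair.
    pred_preset, pred_stylized = model(content, reference)
    loss_preset = F.l1_loss(pred_preset, gt_preset)        # settings regression
    loss_transfer = F.l1_loss(pred_stylized, gt_stylized)  # direct color transfer
    return LAMBDA_PRESET * loss_preset + loss_transfer
```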

Best regards, Man

minhmanho avatar Feb 05 '22 05:02 minhmanho

Thanks for the feedback!


oferidan1 avatar Feb 05 '22 06:02 oferidan1