Wenliang Peng
Hi Felix, https://github.com/Brummi/MonoRec/blob/b2b3decc130d9ca333d1350096b9838a12f977a3/model/loss_functions/monorec_loss.py#L334 https://github.com/Brummi/MonoRec/blob/b2b3decc130d9ca333d1350096b9838a12f977a3/model/loss_functions/monorec_loss.py#L339 You use the .detach() method in the sdl term of depth_refinement_loss, which cuts off backpropagation through that term. This differs from the depth_refinement_loss described in the paper. Or did I misunderstand something?
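For context, here is a minimal sketch of what the question is about, using standard PyTorch autograd behaviour; the tensors and names are illustrative and not taken from the MonoRec code:

```python
import torch

# Illustrative sketch, not MonoRec code: .detach() returns a tensor that
# shares data with the original but is excluded from the autograd graph,
# so any loss term built on it contributes no gradient.
pred = torch.tensor([2.0], requires_grad=True)
target = torch.tensor([1.0])

# Normal loss term: gradient flows back to `pred`.
loss = ((pred - target) ** 2).mean()
loss.backward()
print(pred.grad)  # tensor([2.])

# Detached loss term: pred.detach() carries no grad_fn, so this term is a
# constant for autograd and backpropagation through it is cut.
pred.grad = None
detached_term = ((pred.detach() - target) ** 2).mean()
print(detached_term.requires_grad)  # False
```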
Hi, thanks for your awesome work! I have questions regarding weight decay. 1. Why do you set the weight decay to 0.1 during the pretraining stage, which is actually...
### ❓ The question
Hi, thanks for your awesome open-source work! I have a question regarding the loss spikes during training. Do you know why the spikes occur? And from...
Hi, thanks for your awesome work! When will you release the tech report or docs for current work like MiniCPM-Llama3-V 2.5 and MiniCPM-V 2.0? Thanks.