Boundary loss for weak model
Very interesting and useful work, thank you!
My question concerns the Boundary loss: you propose to learn a transformation from one model's embedding space to another's, with an intra-class compactness constraint on the mapped embeddings. Consider the case where the 2nd model has a lightweight architecture and consequently gives worse recognition quality than the 1st model. In that case, it seems hard to learn a transformation (2 -> 1) whose mapped embeddings achieve at least the same intra-class compactness as the 1st model's own embeddings. Could it be useful to relax the boundary loss for models of poor recognition quality?
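For concreteness, here is a minimal NumPy sketch of the kind of relaxation I have in mind. The function name `relaxed_boundary_loss`, the `slack` factor, and the hinge form are my own assumptions for illustration, not your paper's actual formulation:

```python
import numpy as np

def relaxed_boundary_loss(mapped_emb, class_center, radius, slack=1.5):
    """Hinge-style boundary penalty with a slack factor (illustrative sketch).

    mapped_emb:   (N, D) embeddings of model 2 after the learned mapping (2 -> 1)
    class_center: (D,)   class centroid in model 1's embedding space
    radius:       intra-class radius of model 1 for this class
    slack:        > 1 loosens the boundary for a weaker 2nd model
    """
    # Distance of each mapped embedding to the class center in model 1's space
    d = np.linalg.norm(mapped_emb - class_center, axis=1)
    # Penalize only embeddings that fall outside the relaxed radius
    return float(np.maximum(d - slack * radius, 0.0).mean())
```

With `slack = 1.0` this reduces to a strict boundary; increasing `slack` for a lightweight 2nd model would tolerate the looser intra-class spread its mapping can realistically achieve.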
Thank you!