
Loss Functions

Open daquilnp opened this issue 5 years ago • 25 comments

Hey again, I had a few questions about the loss functions you used for the Localization net during training.

  • In the Out Of Image loss calculation you use +/- 1.5 as the bbox bound instead of +/- 1 (as in your paper). Why do you do this?

  • Also, why are you using corner coordinates for the loss calculations?

  • Was the DirectionLoss used in your paper?

daquilnp avatar Dec 10 '19 23:12 daquilnp

Good questions :wink:

  • you are right, we set this value to 1.5 during some of our experiments in order to allow the network to predict values that lie a little outside of the image. This did not change much, so using 1 or 1.5 does not really matter (both points are sketched in the code after this list).
  • using the corner coordinates saves us some computation time but gives us the same result
  • Yes, I think we used DirectionLoss. It is not necessary to achieve the results, but it keeps the network from maneuvering into a bad state where it predicts regions of interest that show a mirrored character.
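
For illustration, a minimal NumPy sketch of both points, assuming the localizer predicts 2x3 affine matrices over an image normalized to [-1, 1]; the function names and the exact margin handling here are simplified and not our exact implementation:

```python
import numpy as np

def corners_from_affine(theta):
    """Map the four corners of the [-1, 1] reference square through a predicted
    2x3 affine matrix (cheaper than building the full sampling grid)."""
    ref = np.array([[-1, -1, 1],
                    [ 1, -1, 1],
                    [-1,  1, 1],
                    [ 1,  1, 1]], dtype=np.float32)  # homogeneous corner coordinates
    return ref @ theta.T                              # shape (4, 2)

def out_of_image_loss(theta, bound=1.5):
    """Penalize corner coordinates that leave the image; with bound=1.5 the
    predicted region may stick out slightly before being punished."""
    corners = corners_from_affine(theta)
    overshoot = np.maximum(np.abs(corners) - bound, 0.0)
    return overshoot.mean()

# Example: an affine matrix that zooms out too far gets penalized.
theta = np.array([[2.0, 0.0, 0.0],
                  [0.0, 2.0, 0.0]], dtype=np.float32)
print(out_of_image_loss(theta, bound=1.0))  # 1.0, corners lie at +/-2
print(out_of_image_loss(theta, bound=1.5))  # 0.5, smaller penalty
```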

Does that answer your questions?

Bartzi avatar Dec 11 '19 11:12 Bartzi

Yes, that answers everything, thank you! :) I assumed using corner coordinates was to save computation time, but I wanted to make sure. Also, what accuracy did you get on the SynthText validation set?

daquilnp avatar Dec 11 '19 15:12 daquilnp

Happy I could answer your questions! We got about 91% accuracy on the SynthText validation set.

Bartzi avatar Dec 11 '19 15:12 Bartzi

Awesome, thank you again :) I'll try to aim for a similar accuracy, although I also cannot get hold of the SynthAdd dataset (the authors of the dataset have not been monitoring their issues :S).

daquilnp avatar Dec 11 '19 15:12 daquilnp

Follow up question. When you say 91% do you mean percentage of correct characters or percentage of correct words? And does that include case sensitivity?

daquilnp avatar Dec 11 '19 21:12 daquilnp

91% is the case-insensitive word accuracy; I should have said that right away :sweat_smile:

Bartzi avatar Dec 12 '19 09:12 Bartzi

Hello @Bartzi, I'm currently looking at the output of the Chainer localization net with the pretrained model.

  • I've noticed that the bounding boxes find characters in images from right to left. Is that what is supposed to happen?

  • I've also noticed there's a lot of overlap between the characters. Do you somehow remove the duplicates?

daquilnp avatar Dec 19 '19 19:12 daquilnp

Predicting the characters from right to left is one of the interesting things the model does on its own. It learns by itself which reading direction to use, so right to left is perfectly acceptable. I also think that this is a better choice for the network, since it essentially is a sequence-to-sequence model and operates like a stack.

Yes, there is a lot of overlap and this is also intended; there is no need to remove the duplicates. This is what the transformer is for: the encoder takes all features from the ROIs and hands them to the decoder, which then predicts the characters without overlap.

Bartzi avatar Dec 20 '19 11:12 Bartzi

Ok, that makes sense. I just wanted to make sure I was running it correctly.

As for the overlapping, I am aware that the transformer's decoder is meant to remove duplicates. However, I was testing the pretrained recognition model on this image from the SynthText validation dataset: xref

And the result from the decoder was: :::::::fffeeeerrrXXXXXX

daquilnp avatar Dec 20 '19 14:12 daquilnp

Interesting... do you have some code that I could have a look at?

Bartzi avatar Dec 20 '19 14:12 Bartzi

Ok, very strange. I cleaned up my code to send it to you, and when I ran it, I got the correct result. I might have introduced an error in my original implementation and fixed it during the clean-up. It looks like everything works as expected; I am now getting the result "Xref" :)

daquilnp avatar Dec 20 '19 16:12 daquilnp

ah, good :wink:

Bartzi avatar Dec 20 '19 16:12 Bartzi

For future reference: the issue arises if you mix up num_chars and num_words. Intuitively, num_chars should be 23 and num_words should be 1, but for some reason they were reversed in my npz.

daquilnp avatar Dec 20 '19 22:12 daquilnp

Yeah, that's right! It is interesting, though, that the model still provides a good prediction if you set those two numbers incorrectly.

Bartzi avatar Dec 23 '19 08:12 Bartzi

@Bartzi First of all, thanks for your code! Regarding num_chars and num_words in *.npz: I checked synthadd.npz and mjsynth.npz, and in both cases num_chars = 1 and num_words = 23. Intuitively these should be swapped; is this correct? I tried swapping them but got an error in the Reshape layer. Thank you!

borisgribkov avatar May 28 '21 13:05 borisgribkov

Yes, this is actually intended :sweat_smile: Our original work came from the idea that we want to extract one box per word, with multiple characters per box. But then we thought: what if we only have a single word, but want to localize individual characters? The simplest solution is to redefine the way you look at it: now we want to find a maximum of 23 "words" (each character is defined to be a single word) with one character each.

This is the way you have to think about it.
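
A minimal NumPy sketch of this reinterpretation; the shapes and variable names are only illustrative, not the exact code in the repository:

```python
import numpy as np

# In the *.npz metadata the values are stored as num_chars = 1 and
# num_words = 23: each character is treated as its own "word".
num_words, num_chars = 23, 1
batch_size = 4

# The localizer predicts one 2x3 affine matrix per region of interest,
# i.e. num_words * num_chars = 23 matrices per image.
transform_params = np.zeros((batch_size, num_words * num_chars, 2, 3), dtype=np.float32)

# Downstream code groups the regions per "word"; with one character per
# word, each group holds exactly one region.
grouped = transform_params.reshape(batch_size, num_words, num_chars, 2, 3)
print(grouped.shape)  # (4, 23, 1, 2, 3)
```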

Bartzi avatar May 28 '21 13:05 Bartzi

I see, it's clear now! Thank you!

borisgribkov avatar May 28 '21 13:05 borisgribkov

Dear @Bartzi, sorry to disturb you, another question. According to your paper, the localization network tries to find and "crop" individual characters, for example the word FOOTBALL in Fig. 1. In my case I see different behavior: it looks like the localization network crops regions containing sets of characters, and these regions overlap significantly. Please see the example below. As far as I understand there is no restriction against this and the whole system can still work, but I'm a bit confused by the different behavior. Thank you! image

PS: training converged with 96% accuracy, so my model works fine!

borisgribkov avatar Jun 02 '21 09:06 borisgribkov

Hmm, it seems to me that the localization network never felt the need to localize individual characters, because the task was already simple enough for the recognition network. You could try a very simple trick: start a new training run, but instead of randomly initializing all parameters, load the pre-trained weights of the localizer. This way the localizer is encouraged to improve again, because the freshly initialized recognition network behaves badly at first.

We did this in previous work and it worked very well in such cases.
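
A minimal sketch of that warm-start using Chainer's serializers; the `Sequential` stand-ins and the file name are just placeholders, not our actual network definitions:

```python
import chainer
import chainer.links as L
from chainer import serializers

# Toy stand-ins for the two sub-networks (placeholders for illustration only).
localizer = chainer.Sequential(L.Linear(256, 6))
recognizer = chainer.Sequential(L.Linear(256, 52))  # stays randomly initialized

# Warm-start only the localizer from the weights of a previous run, then
# train both networks together as usual.
serializers.load_npz("trained_localizer.npz", localizer)
```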

Bartzi avatar Jun 02 '21 11:06 Bartzi

You could also try to lower the learning rate of the recognition network, to encourage the localization network to try harder to make it easier for the recognition network.
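
One simple way to do this in Chainer would be to give each sub-network its own optimizer with a different learning rate; the stand-in networks and the concrete values below are only placeholders:

```python
import chainer
import chainer.links as L
from chainer import optimizers

# Toy stand-ins for the two sub-networks (placeholders for illustration only).
localizer = chainer.Sequential(L.Linear(256, 6))
recognizer = chainer.Sequential(L.Linear(256, 52))

# Separate optimizers let the recognizer learn more slowly than the
# localizer, so the localizer has to carry more of the improvement.
localizer_opt = optimizers.Adam(alpha=1e-4)
localizer_opt.setup(localizer)

recognizer_opt = optimizers.Adam(alpha=1e-5)  # lower learning rate for recognition
recognizer_opt.setup(recognizer)
```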

Bartzi avatar Jun 02 '21 11:06 Bartzi

Thank you! Using pre-trained weights looks very promising, I will try it! Also, I was thinking about the image above: you are right, the recognition task is very simple (a license plate recognition sample), with no curved or otherwise complicated text, so there is basically no need to apply an array of affine matrices; one for the whole image is enough. Maybe this is the reason.

borisgribkov avatar Jun 02 '21 11:06 borisgribkov

Yes, it might not be necessary to use the affine matrices. You could also just train the recognition network on patches you extract with a regular sliding window. So basically our model without the localization network, where you provide the input to the recognition network yourself using a simple, regular sliding-window approach.
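
A minimal NumPy sketch of such a sliding-window patch extraction; the window size and stride are arbitrary example values:

```python
import numpy as np

def sliding_window_patches(image, window=(32, 32), stride=16):
    """Extract fixed-size patches from a regular grid over the image."""
    h, w = image.shape[:2]
    win_h, win_w = window
    patches = []
    for top in range(0, h - win_h + 1, stride):
        for left in range(0, w - win_w + 1, stride):
            patches.append(image[top:top + win_h, left:left + win_w])
    return np.stack(patches)

# Example: a 64x128 grayscale image yields a grid of patches that could be
# fed to the recognition network instead of localized regions.
image = np.random.rand(64, 128).astype(np.float32)
print(sliding_window_patches(image).shape)  # (21, 32, 32)
```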

Bartzi avatar Jun 02 '21 11:06 Bartzi

Thank you!

borisgribkov avatar Jun 02 '21 11:06 borisgribkov

Hi @Bartzi, thank you for the good advice, using pre-trained localizer weights helps a lot! image The final accuracy is about 2% better.

borisgribkov avatar Jun 03 '21 21:06 borisgribkov

Nice, that's good to hear. And the image looks the way it is supposed to :+1:

Bartzi avatar Jun 04 '21 08:06 Bartzi