Loss Functions #9
Good questions 😉
Does that answer your questions?
Yes, that answers everything, thank you! :) I assumed using corner coordinates was to save computation time, but I wanted to make sure. Also, what accuracy did you get on the SynthText validation set?
Happy I could answer your questions!
Awesome. Thank you again :) I'll try and aim for a similar accuracy, although I also cannot get the SynthAdd dataset (the authors of the dataset have not been monitoring their issues :S)
Follow-up question: when you say 91%, do you mean the percentage of correct characters or the percentage of correct words? And is that case sensitive?
91% is the case-insensitive word accuracy; I should have said so immediately 😅
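For reference, case-insensitive word accuracy counts a prediction as correct only when the whole word matches the ground truth after lower-casing both sides. A minimal sketch (function and variable names are illustrative, not from this repository):

```python
def word_accuracy(predictions, labels):
    """Fraction of predicted words that exactly match the ground truth,
    ignoring case. Both inputs are equal-length lists of strings."""
    assert len(predictions) == len(labels)
    correct = sum(
        pred.lower() == label.lower()
        for pred, label in zip(predictions, labels)
    )
    return correct / len(labels)

# e.g. word_accuracy(["Football", "kiss"], ["FOOTBALL", "miss"]) == 0.5
```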
Hello @Bartzi,
Predicting the characters from right to left is one of the interesting things the model does on its own: it learns by itself which reading direction to use, so right to left is perfectly acceptable. Yes, there is a lot of overlap, and this is also intended. There is no need to remove the duplicates; this is what the transformer is for. The encoder takes all features from the ROIs and hands them to the decoder, which then predicts the characters without overlap.
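To illustrate the point about overlap, here is a rough PyTorch sketch of an encoder/decoder over ROI features (all names, shapes, and layer sizes are assumptions for illustration, not this repository's actual code):

```python
import torch
import torch.nn as nn

# Hypothetical sizes: N overlapping ROI feature vectors per image.
num_rois, feature_dim, vocab_size, max_chars = 10, 256, 100, 23

encoder_layer = nn.TransformerEncoderLayer(d_model=feature_dim, nhead=8)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
decoder_layer = nn.TransformerDecoderLayer(d_model=feature_dim, nhead=8)
decoder = nn.TransformerDecoder(decoder_layer, num_layers=2)
classifier = nn.Linear(feature_dim, vocab_size)

roi_features = torch.randn(num_rois, 1, feature_dim)   # (seq, batch, dim)
memory = encoder(roi_features)                         # overlap is fine here

# The decoder attends over *all* ROI features and predicts one character
# per output position, so duplicated evidence from overlapping ROIs does
# not produce duplicated output characters.
char_queries = torch.randn(max_chars, 1, feature_dim)  # stand-in for embedded tokens
logits = classifier(decoder(char_queries, memory))     # (max_chars, 1, vocab_size)
```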
Interesting... do you have some code that I could have a look at?
Ok, very strange. I cleaned up my code to send to you, and when I ran it, I got the correct result. I might have introduced an error in my original implementation and fixed it during the clean-up. It looks like everything works as expected. I am getting the result: Xref: :)
ah, good 😉
For future reference: the issue arises if you mix up num_chars and num_words. Intuitively, num_chars should be 23 and num_words should be 1, but for some reason in my npz they were reversed.
Yeah, that's right! It is interesting, though, that the model still provides a good prediction if you set those two numbers incorrectly.
@Bartzi First of all, thanks for your code! Regarding num_chars and num_words in *.npz: I checked synthadd.npz and mjsynth.npz, and in both cases num_chars = 1 and num_words = 23. Intuitively they should be swapped; is this correct? I tried swapping them, but got an error in a Reshape layer. Thank you!
Yes, this is actually intended 😅 This is the way you have to think about it.
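For anyone else tripped up by this, the convention can be checked directly in the provided .npz files. A quick sketch, assuming the two keys are stored as scalar arrays:

```python
import numpy as np

# Inspect the ground-truth layout of one of the files mentioned above.
data = np.load("mjsynth.npz", allow_pickle=True)
print(int(data["num_chars"]), int(data["num_words"]))  # 1 and 23, as noted above

# Counterintuitive but intended: swapping the two values leads to a shape
# mismatch in a Reshape layer, as reported above.
```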
I see, it's clear now! Thank you!
Dear @Bartzi, sorry to disturb you with another question. According to your paper, the localization network tries to find and "crop" individual characters, for example the word FOOTBALL in Fig. 1. In my case I see different behavior: it looks like the localization network crops regions containing sets of characters, and moreover these regions overlap significantly. Please see the example below. As far as I understand there is no restriction against that, and the whole system can work like this, but I'm a bit confused by the different behavior. Thank you! PS: training converged to 96% accuracy, so my model works fine!
Hmm, it seems to me that the localization network never felt the need to localize individual characters because the task was already simple enough for the recognition network. We did this in previous work, and it worked very well in such cases.
You could also try to lower the learning rate of the recognition network, to encourage the localization network to try harder to make it easier for the recognition network.
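One common way to implement such a selectively lowered learning rate is with per-module parameter groups. A hedged PyTorch sketch, with placeholder module names rather than this repository's actual ones:

```python
import torch
import torch.nn as nn

# Toy stand-in for a model with a localization and a recognition branch;
# the submodule names are placeholders, not this repository's.
class TwoBranchModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.localization_net = nn.Linear(32, 6)    # e.g. predicts affine params
        self.recognition_net = nn.Linear(32, 100)   # e.g. predicts char logits

model = TwoBranchModel()

# A lower learning rate on the recognition branch slows its progress,
# pushing the localization branch to produce cleaner crops instead.
optimizer = torch.optim.Adam([
    {"params": model.localization_net.parameters(), "lr": 1e-4},
    {"params": model.recognition_net.parameters(), "lr": 1e-5},
])
```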
Thank you! Using pre-trained weights looks very promising; I will try it! Also, I was thinking about the image above too. You are right, the recognition task is very simple (a license plate recognition sample), so there is no curved or otherwise complicated text at all. Basically there is no need to apply an array of affine matrices; one for the whole image is enough. Maybe this is the reason.
Yes, it might not be necessary to use the affine matrices. You could also just train the recognition network on patches extracted with a regular sliding window. So basically our model without the localization network, where you provide the input to the recognition network yourself using a simple, regular sliding-window approach.
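A minimal sketch of that sliding-window alternative (window size and stride are arbitrary illustrative choices):

```python
import numpy as np

def sliding_window_patches(image, window_width, stride):
    """Extract fixed-size, regularly spaced patches along the text line.
    `image` is an (H, W, C) array; patches overlap when stride < window_width."""
    height, width = image.shape[:2]
    return [
        image[:, x:x + window_width]
        for x in range(0, width - window_width + 1, stride)
    ]

# These patches would be fed straight into the recognition network,
# replacing the localization network's predicted crops:
image = np.zeros((32, 128, 3), dtype=np.uint8)
patches = sliding_window_patches(image, window_width=32, stride=16)
print(len(patches))  # 7
```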
Thank you!
Hi @Bartzi, thank you for the good advice; using pre-trained localizer weights helps a lot!
Nice, that's good to hear. And the image looks the way it is supposed to 👍
Hey again,
I had a few questions about the loss functions you used for the Localization net during training.
In the out-of-image loss calculation you add/subtract 1.5 to the bbox instead of 1 (as in your paper); why do you do this?
Also, why are you using corner coordinates for the loss calculations?
Was the DirectionLoss used in your paper?
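For context, here is a rough sketch of the kind of out-of-image penalty being asked about, with bbox corners in the normalized [-1, 1] grid coordinates of a spatial transformer; the margin parameter is exactly the ±1 vs. ±1.5 question above. This is an illustration of the concept, not the repository's actual loss:

```python
import torch

def out_of_image_loss(corners, margin=1.5):
    """Penalize predicted bbox corners that stray outside the image,
    which spans [-1, 1] in normalized grid coordinates.
    `corners` has shape (N, 4, 2): x/y coordinates of four corners per box.
    With margin=1.5 the penalty only kicks in once a corner is well beyond
    the border; margin=1.0 would penalize anything outside the image itself."""
    excess = torch.clamp(corners.abs() - margin, min=0)
    return (excess ** 2).mean()

boxes = torch.tensor([[[-1.7, 0.2], [0.9, 0.2], [-1.7, 0.8], [0.9, 0.8]]])
print(out_of_image_loss(boxes))               # > 0: two corners exceed +/-1.5
print(out_of_image_loss(boxes, margin=1.0))   # stricter: penalizes more
```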