
Not getting good results from the pretrained model with real-time images, why?

MuruganR96 opened this issue · 5 comments

I tried the pretrained models:

SigNet and SigNet-F (lambda 0.95)

computing the cosine similarity between the features of two images (e.g., one genuine and one forged signature):

feature1 = base_model(input1)
feature2 = base_model(input2)
similarity = cosine_similarity(feature1, feature2)

real1.png, real2.png -> similarity: 0.31142098
real1.png, fake1.png -> similarity: 0.2714578
real1.png, fake2.png -> similarity: 0.6426417
real2.png, fake1.png -> similarity: 0.18943718
real2.png, fake2.png -> similarity: 0.6238067

Here is how I concluded: a similarity above 0.60 means the pair is verified as the same writer; otherwise it is rejected. With this threshold, only 2 of the 5 cases above were classified correctly, so I got just 40% accuracy from the pretrained model.
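For reference, this is roughly the full pipeline I ran. It is a minimal sketch based on the loading and preprocessing steps shown in this repo's example.py; the model path, file names, and canvas size are assumptions from my setup:

import torch
from skimage.io import imread
from skimage import img_as_ubyte
from sigver.featurelearning.models import SigNet
from sigver.preprocessing.normalize import preprocess_signature

canvas_size = (952, 1360)  # canvas size used for the pretrained models

# load the pretrained weights (the checkpoint also stores the two heads)
state_dict, _, _ = torch.load('models/signet.pth')
base_model = SigNet().eval()
base_model.load_state_dict(state_dict)

def extract_features(path):
    # normalize the signature onto the canvas, then feed it to the network
    img = img_as_ubyte(imread(path, as_gray=True))
    processed = preprocess_signature(img, canvas_size)
    with torch.no_grad():
        inp = torch.from_numpy(processed).view(1, 1, 150, 220).float().div(255)
        return base_model(inp)

feature1 = extract_features('real1.png')
feature2 = extract_features('real2.png')
similarity = torch.nn.functional.cosine_similarity(feature1, feature2).item()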

I have a few doubts, @luizgh, @gonultasbu, and @atinesh-s sir. Please help me resolve them:

  • When I test with real-world noisy images (e.g., signatures written on paper and photographed), I do not get good results. Why is that, and how can I resolve the issue?

  • How can I improve the accuracy of the SigNet model?

Thanks & Regards, Murugan Rajenthiran

MuruganR96 · Jul 23 '20 07:07

@MuruganR96 if the data distribution changes too much, these models will probably not work very well, so you would be better off collecting data and re-training / fine-tuning the models. If you have a few samples per user, you can try the "meta-learning" approach, as it does some fine-tuning for the users.
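If you go the fine-tuning route, it is ordinary transfer learning on top of the base network. A rough sketch only (this is not packaged in sigver; num_users, the learning rate, and train_loader are placeholders for your own setup):

import torch
import torch.nn as nn
from sigver.featurelearning.models import SigNet

state_dict, _, _ = torch.load('models/signet.pth')
model = SigNet()
model.load_state_dict(state_dict)

# new classification head over the 2048-d SigNet features,
# one output per user in your new dataset
num_users = 100  # placeholder
head = nn.Linear(2048, num_users)

optimizer = torch.optim.SGD(list(model.parameters()) + list(head.parameters()),
                            lr=1e-4, momentum=0.9)
criterion = nn.CrossEntropyLoss()

for images, user_ids in train_loader:  # your own DataLoader
    optimizer.zero_grad()
    logits = head(model(images))
    loss = criterion(logits, user_ids)
    loss.backward()
    optimizer.step()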

The datasets used to train these models were collected in a "lab" fashion (you can read about it in the papers that introduced them): people wrote their signatures on a clean piece of paper (no background), usually in a single session (less variability). Unfortunately, we do not have datasets collected under "real-world" conditions available for academic research.

luizgh · Jul 29 '20 12:07

@luizgh, @atinesh-s sir, I guess right now we don't have a re-training / fine-tuning option in sigver.

I will do the training from scratch with more datasets, sir.

I have a doubt, sir; please correct me if I am wrong.

  • I have signature datasets for 14,075 users in total:
    10,000 users from the GPDSsyntheticOffLineSignature CORPUS,
    4,000 users from the GPDSsyntheticOnOffLineSignature CORPUS, and
    75 users from the MCYT-75 OFFLINE SIGNATURE CORPUS.
  • If I follow "Training a CNN for Writer-Independent feature learning" with all 14,075 users, does that make sense? I guess that with more than 1,000 classes the model will probably not work very well.

Please tell me how many users I should use for training, @luizgh, @atinesh-s sir.

MuruganR96 · Jul 30 '20 12:07

I think you can try Euclidean distance instead.
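For example, a quick sketch assuming feature tensors extracted as above:

import torch

distance = torch.dist(feature1, feature2, p=2)  # lower distance = more similar pair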

henryle97 · Oct 05 '20 02:10

I'm also getting bad results, even using the example images and the example.py code. I tried the proposed formula, Euclidean distance, and even a modified version of the proposed formula that uses mean() instead of max().

Could it be a difference in library versions?

ofgagliardi · Sep 30 '21 18:09

@ofgagliardi are you training writer-dependent classifiers as described in the article, or just using the network to extract features and comparing the features of different signatures? If you are doing the latter, I recommend checking out this paper: https://arxiv.org/abs/1807.10755. It uses this network to extract features and trains a single "writer-independent" classifier. I hope this helps.
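For context, the writer-dependent setup in the article trains one binary SVM per user on top of the SigNet features. A simplified sketch (X_genuine, X_others, and query_features are placeholder feature matrices you would extract yourself; the hyper-parameters only roughly follow the paper):

import numpy as np
from sklearn.svm import SVC

# X_genuine: SigNet features of the enrolled user's genuine signatures
# X_others: features of other users' signatures (random forgeries)
X = np.concatenate([X_genuine, X_others])
y = np.concatenate([np.ones(len(X_genuine)), -np.ones(len(X_others))])

# RBF SVM; weight the genuine class higher to compensate for the class skew
clf = SVC(kernel='rbf', gamma=2**-11, C=1,
          class_weight={1: len(X_others) / len(X_genuine), -1: 1})
clf.fit(X, y)

score = clf.decision_function(query_features)  # threshold this to accept/reject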

luizgh · Oct 06 '21 14:10