Angel G.

40 comments of Angel G.

For the 99% model, did you change the optimizer from "adagrad" to something that helps it converge faster? I see that after the 6th epoch, the accuracy improvements from...
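To illustrate what I mean by swapping the optimizer, a minimal sketch assuming a tf.keras model (the tiny model and the learning rate here are hypothetical, not the project's actual setup):

```
import tensorflow as tf

# Hypothetical sketch: recompile with Adam instead of Adagrad.
# Note that recompiling discards the old optimizer's accumulated state.
model = tf.keras.Sequential([tf.keras.layers.Dense(10, activation="softmax")])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # was: "adagrad"
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
```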

OK, to share: I train with 128-D vectors to see the limits of lower-dimensional embeddings. At epoch 24, it achieved 95.9%. 98% seems unattainable for now, but we'll see. Also,...

It went up to 97.6% (on my VGG2). Now I wonder how you managed to reduce the LR of your optimizer - did you delete the optimizer of the...
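What I was wondering about, as a minimal sketch assuming a tf.keras optimizer (the model here is hypothetical): the LR can be lowered in place, without deleting the optimizer and losing its accumulated state:

```
import tensorflow as tf

# Hypothetical sketch: decay the learning rate of an already-compiled model
# in place, keeping the optimizer object and its internal state.
model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss="mse")

model.optimizer.learning_rate.assign(1e-4)   # lower the LR in place
print(float(model.optimizer.learning_rate))  # 0.0001
```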

Thanks for Glint360k. I suggested it, but then I didn't train on it because of its bigger size. I did manage to download it. Maybe when I train for production. I'll try...

Before we compute the embeddings, it is not known whether the negative in the selected triplet is hard, semi-hard or easy. The random generation before a pass might yield many...
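For reference, the distinction I mean, as a small sketch assuming L2 distances and a margin alpha (the function name and margin value are illustrative, not the repo's API):

```
import numpy as np

def triplet_category(anchor, positive, negative, alpha=0.2):
    # Classify a triplet as hard / semi-hard / easy; this is only knowable
    # AFTER the embeddings have been computed.
    d_ap = np.linalg.norm(anchor - positive)  # anchor-positive distance
    d_an = np.linalg.norm(anchor - negative)  # anchor-negative distance
    if d_an < d_ap:
        return "hard"       # negative sits closer than the positive
    if d_an < d_ap + alpha:
        return "semi-hard"  # negative falls inside the margin
    return "easy"           # loss is zero; contributes no gradient
```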

We may work on this as well. I noticed that triplet generation is not a very fast process. Probably data-frames are not that fast for this kind of usage.
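As an illustration of the alternative I have in mind, random triplet sampling over plain NumPy arrays instead of row-by-row DataFrame lookups (a sketch under an assumed labels array, not the repo's actual generator):

```
import numpy as np

def sample_triplets(labels, n_triplets, rng=None):
    # Sample (anchor, positive, negative) index triplets from a label array.
    rng = rng or np.random.default_rng()
    labels = np.asarray(labels)
    anchors = rng.integers(0, len(labels), size=n_triplets)
    positives = np.empty(n_triplets, dtype=np.int64)
    negatives = np.empty(n_triplets, dtype=np.int64)
    for i, a in enumerate(anchors):
        same = np.flatnonzero(labels == labels[a])  # same identity
        diff = np.flatnonzero(labels != labels[a])  # different identity
        positives[i] = rng.choice(same)
        negatives[i] = rng.choice(diff)
    return anchors, positives, negatives
```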

It is easy, but because the code was written for old Python, it needs some fixes to run on 3.8 or 3.9. In fact, you can run it on CPU as well.

It turned out that the simple compile guide is missing a lot. To get 'pHash.h' to be "found", we also need: `sudo apt-get install libavformat-dev libmpg123-dev libsamplerate-dev libsndfile-dev` `sudo apt-get install...`

This is big to download over HTTP, so did you try to find it on torrents?

I wonder whether I did it right by defining conv2d_ABN as:

```
def conv2d_ABN(ni, nf, stride, activation="leaky_relu", kernel_size=3, activation_param=1e-2, groups=1):
    if activation == "leaky_relu":
        return nn.Sequential(
            nn.Conv2d(ni, nf, kernel_size=kernel_size, stride=stride, padding=kernel_size //...
```
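For context, the full version looks roughly like this, a sketch assuming the goal is to stand in for InPlaceABN with plain BatchNorm2d + LeakyReLU; everything past the truncation above is my reconstruction, not a quote:

```
import torch.nn as nn

def conv2d_ABN(ni, nf, stride, activation="leaky_relu", kernel_size=3,
               activation_param=1e-2, groups=1):
    # Conv + BatchNorm + activation block standing in for InPlaceABN.
    if activation == "leaky_relu":
        return nn.Sequential(
            nn.Conv2d(ni, nf, kernel_size=kernel_size, stride=stride,
                      padding=kernel_size // 2, groups=groups, bias=False),
            nn.BatchNorm2d(nf),
            nn.LeakyReLU(negative_slope=activation_param, inplace=True),
        )
    raise NotImplementedError(f"activation {activation!r} not handled in this sketch")
```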