Olivier Moindrot

Results 42 comments of Olivier Moindrot

The pretraining approach is just to get a good embedding with a softmax loss, since this loss is very stable and you should be able to converge. Once you have...

Yes so you have two steps: 1. Train with softmax loss. You have the network computing the embedding, then a linear layer with softmax activation 2. Remove the linear layer....
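The two steps above could be sketched roughly as follows in Keras. This is a minimal illustration, not the tutorial's actual code: the layer sizes, input shape, and number of classes are all hypothetical placeholders.

```python
import tensorflow as tf

NUM_CLASSES = 10    # hypothetical number of classes
EMBEDDING_DIM = 64  # hypothetical embedding size

# Step 1: network computing the embedding, then a linear layer with softmax activation.
inputs = tf.keras.Input(shape=(28, 28, 1))
x = tf.keras.layers.Flatten()(inputs)
x = tf.keras.layers.Dense(128, activation="relu")(x)
embeddings = tf.keras.layers.Dense(EMBEDDING_DIM, name="embedding")(x)
probs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(embeddings)

classifier = tf.keras.Model(inputs, probs)
classifier.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# classifier.fit(...)  # pretrain with the softmax loss

# Step 2: remove the linear layer and keep only the embedding output.
embedding_model = tf.keras.Model(inputs, classifier.get_layer("embedding").output)
```

The `embedding_model` can then be fine-tuned with the triplet loss.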

It’s a good point. I think it makes more sense without a ReLU, since you want the embedding to be able to take negative values.

This usually means that all the embeddings have collapsed on a single point. One solution that might work is to lower your learning rate so that this collapse doesn't happen.
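One way to check for this collapse is to look at the spread of a batch of embeddings around their mean: if every embedding sits at (almost) the same point, the spread is near zero. A small NumPy sketch, with a hypothetical tolerance:

```python
import numpy as np

def embeddings_collapsed(embeddings, tol=1e-6):
    """Heuristic check: have all embeddings collapsed onto a single point?"""
    # Distance of each embedding to the batch mean; ~0 everywhere means collapse.
    center = embeddings.mean(axis=0)
    spread = np.linalg.norm(embeddings - center, axis=1)
    return bool(spread.max() < tol)

healthy = np.random.randn(32, 64)
collapsed = np.tile(np.random.randn(1, 64), (32, 1))
print(embeddings_collapsed(healthy))    # False
print(embeddings_collapsed(collapsed))  # True
```

Running such a check during training makes it easy to tell a collapsed model from one that is still learning.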

You could just duplicate the loss so that it has the right shape?

```python
loss = ...  # scalar
loss = tf.ones([batch_size, 1]) * loss
```

I'm starting a new job soon so I'm not sure how much time I'll have. You can maybe try to build a working solution in a fork to see how...

@fursovia: something like that would work

```python
from tensorflow.contrib.data.python.ops.interleave_ops import DirectedInterleaveDataset
import model.mnist_dataset as mnist_dataset

# Define the data pipeline
mnist = mnist_dataset.train(args.data_dir)
datasets = [mnist.filter(lambda img, lab: tf.equal(lab,...
```

Hi @TengliEd, if you are using the ArcFace loss, I think you don't need these balanced batches. Correct me if I'm wrong, but you should be able to...

My code above is very slow because of the `dataset.filter(...)` used to build the datasets. The `filter` method will go through all examples until it finds one with the correct...

If you are working with images stored in jpg files for instance, you can apply `tf.data.experimental.choose_from_datasets` only on the filenames and labels (which should be very fast), and then load...
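This idea could be sketched as below: each per-class dataset holds only filenames and labels, `tf.data.experimental.choose_from_datasets` picks classes cheaply, and the (expensive) image decoding is mapped on afterwards. The filenames and class list here are hypothetical placeholders.

```python
import tensorflow as tf

# Hypothetical filename/label lists, one dataset per class.
files_by_class = [
    (["a0.jpg", "a1.jpg"], 0),
    (["b0.jpg", "b1.jpg"], 1),
]
datasets = [
    tf.data.Dataset.from_tensor_slices((fns, [lab] * len(fns))).repeat()
    for fns, lab in files_by_class
]

# Pick classes round-robin; the choice step only touches filenames, so it is fast.
choice = tf.data.Dataset.range(len(datasets)).repeat()
balanced = tf.data.experimental.choose_from_datasets(datasets, choice)

def load_image(filename, label):
    # Decoding happens only after the cheap per-filename selection.
    image = tf.io.decode_jpeg(tf.io.read_file(filename))
    return image, label

# balanced = balanced.map(load_image).batch(...)
```

Because no `filter` is involved, building a balanced batch no longer requires scanning through unrelated examples.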