tutorial_custom_augmentations: make evaluation of embeddings more intuitive
Description:
The tutorial evaluates whether the embeddings are good by inspecting each image's nearest neighbours: for every example image it computes the class distribution of its neighbour images, which amounts to a form of kNN validation.
However, this kind of evaluation has two disadvantages, as already discussed in https://github.com/lightly-ai/lightly/pull/577#discussion_r754065240
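For reference, the evaluation described above can be sketched roughly as follows. This is an illustrative sketch, not the tutorial's actual code: the embeddings, labels, and `k` below are placeholder assumptions, and the kNN lookup is done with plain NumPy.

```python
import numpy as np

# Placeholder data standing in for real embeddings and class labels.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(100, 32))
labels = rng.integers(0, 5, size=100)
k = 5

# Pairwise squared distances between all embeddings; after argsort,
# column 0 of each row is the point itself, so we skip it.
dists = ((embeddings[:, None, :] - embeddings[None, :, :]) ** 2).sum(axis=-1)
indices = np.argsort(dists, axis=1)[:, 1:k + 1]
neighbour_labels = labels[indices]

# Class distribution among neighbours, summarised as the fraction of
# neighbours that share the query image's class (averaged over queries).
same_class = (neighbour_labels == labels[:, None]).mean()
print(f"mean same-class neighbour fraction: {same_class:.2f}")
```

A high same-class fraction suggests the embedding space groups semantically similar images together; for random embeddings it stays near chance level (here 1/5 with five classes).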
Task:
- Add a textual interpretation of the plots answering the following questions: Are the embeddings good or bad? How can you see that in the plots?