Support evaluation during finetuning of the EmbeddingRetriever
Is your feature request related to a problem? Please describe.
In this tutorial, we can finetune a DPR model using a train set, evaluate it on a dev set, and then see the metrics printed out when retriever.train() runs.
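For context, the DPR training call from that tutorial looks roughly like this (paths and hyperparameters below are illustrative, not taken from the tutorial verbatim):

```python
from haystack.nodes import DensePassageRetriever

retriever = DensePassageRetriever(document_store=document_store)

# dev_filename enables evaluation out of the box:
# dev-set metrics are reported during training.
retriever.train(
    data_dir="data/",
    train_filename="train.json",
    dev_filename="dev.json",
    n_epochs=1,
    batch_size=16,
    save_dir="saved_models/dpr",
)
```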
However, if we want to finetune an EmbeddingRetriever, it uses sentence-transformers under the hood, and retriever.train() does not have a dev_filename argument as an explicit parameter the user can set. It would be useful to incorporate the evaluation code into retriever.train() here as an out-of-the-box solution, similar to the DPR code.
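In the meantime, one possible workaround (only a sketch, not an official Haystack API) is to drop down to the underlying sentence-transformers model and pass an evaluator to fit(), which runs it periodically during training. The dev_samples, train_samples, and model name below are made up for illustration:

```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

# Hypothetical data; in practice, build these from your own train/dev sets.
train_samples = [
    InputExample(texts=["How do I reset my password?", "Password reset steps"]),
]
dev_samples = [
    InputExample(texts=["How do I reset my password?", "Password reset steps"], label=1.0),
    InputExample(texts=["How do I reset my password?", "Pricing overview"], label=0.0),
]

model = SentenceTransformer("sentence-transformers/multi-qa-mpnet-base-dot-v1")
train_dataloader = DataLoader(train_samples, shuffle=True, batch_size=16)
train_loss = losses.MultipleNegativesRankingLoss(model)
evaluator = EmbeddingSimilarityEvaluator.from_input_examples(dev_samples, name="dev")

# The evaluator is invoked every `evaluation_steps` and after each epoch,
# so dev metrics are logged throughout training.
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    evaluator=evaluator,
    epochs=1,
    evaluation_steps=100,
    warmup_steps=100,
    output_path="saved_models/embedding_retriever",
)
```

Having retriever.train() accept something like a dev set argument and wire it up to such an evaluator internally is exactly what this request asks for.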
Thanks.
@rnyak Thank you for this feature suggestion. I completely agree that the parameters should be more similar for DPR and EmbeddingRetriever training to make the training easier to use. 👍 Among other parts of Haystack, we are also reworking the evaluation for Haystack 2.0. Stay tuned! 🙂
thanks!