daegon Yu
Is there any code that evaluates the MIRACL dataset with the nDCG@10 metric? Alternatively, I know that in order to evaluate nDCG on the MIRACL dataset, the similarity between positives...
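For reference, a minimal sketch of computing nDCG@10 per query from candidate similarity scores and relevance labels (the function and data layout are illustrative assumptions, not MIRACL's official evaluation, which typically goes through pytrec_eval):

```python
import numpy as np

def ndcg_at_10(scores, labels):
    # scores: similarity score for each candidate passage of one query
    # labels: relevance label (e.g. 1 = positive, 0 = negative) per candidate
    order = np.argsort(scores)[::-1][:10]                     # top-10 by similarity
    gains = np.asarray(labels, dtype=float)[order]
    discounts = 1.0 / np.log2(np.arange(2, len(order) + 2))
    dcg = float(np.sum(gains * discounts))

    ideal = np.sort(np.asarray(labels, dtype=float))[::-1][:10]
    idcg = float(np.sum(ideal * (1.0 / np.log2(np.arange(2, len(ideal) + 2)))))
    return dcg / idcg if idcg > 0 else 0.0

# One query, five candidates; the 1st and 4th candidates are relevant.
print(ndcg_at_10([0.9, 0.8, 0.5, 0.3, 0.1], [1, 0, 0, 1, 0]))   # ~0.877
```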
In Section 4.3 (Unsupervised Contrastive Pretraining) of the paper Nomic Embed: Training a Reproducible Long Context Text Embedder, the learning rate is set to 2e-5. In the contrastors/src/contrastors/configs/train/contrastive_pretrain.yaml file,...
**Describe** Model I am using (UniLM, MiniLM, LayoutLM ...): E5. Hello. I am a student studying sentence similarity. While reading the paper “Text Embeddings by Weakly-Supervised Contrastive Pre-training”, a...
Can I load a model trained by unsloth's CPT (Continued Pre-Training) method, change only the saved LoRA parameters to learnable parameters, and then proceed with CPT on a different data...
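A minimal sketch of one way to re-attach saved LoRA weights as trainable parameters using plain PEFT (the model name and adapter path are placeholders; unsloth's FastLanguageModel wrapper may provide its own equivalent):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Placeholder names: substitute your actual base model and saved adapter directory.
base = AutoModelForCausalLM.from_pretrained("unsloth/gemma-2-9b-it")

# is_trainable=True loads the saved LoRA weights with requires_grad=True,
# so a second round of continued pre-training updates only the adapter.
model = PeftModel.from_pretrained(base, "path/to/saved_lora", is_trainable=True)

trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(f"{len(trainable)} trainable tensors (LoRA only)")
```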
In FlagEmbedding, hard negatives are mined based on ranking (FlagEmbedding/baai_general_embedding/finetune/hn_mine.py). Is there code that does hard negative mining based on similarity score?
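A minimal sketch of mining by similarity score instead of rank, assuming a bi-encoder from sentence-transformers (the model name and score thresholds are illustrative assumptions, not FlagEmbedding's hn_mine.py logic):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-base-en-v1.5")  # any bi-encoder works

def mine_hard_negatives(query, positives, corpus, low=0.4, high=0.85, k=15):
    # Keep passages whose cosine similarity falls in (low, high): similar enough
    # to be hard, but below the band where they are likely unlabeled positives.
    q_emb = model.encode([query], normalize_embeddings=True)
    c_emb = model.encode(corpus, normalize_embeddings=True)
    sims = (q_emb @ c_emb.T)[0]                       # cosine similarity per passage

    candidates = [(corpus[i], float(s)) for i, s in enumerate(sims)
                  if low < s < high and corpus[i] not in positives]
    candidates.sort(key=lambda x: x[1], reverse=True)
    return candidates[:k]
```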
I have fine-tuned the reranker through this GitHub repository. I would like to continue training from the saved checkpoint. Can you give me some instructions on how to do that?
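If the fine-tuning script is built on the Hugging Face Trainer, one general pattern is to resume from the checkpoint directory; this is a sketch with placeholder paths and model names, not the repository's exact launch command:

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Placeholder names; reuse the original output_dir so the checkpoint-XXXX folders
# (which also store optimizer and scheduler state) can be found.
model = AutoModelForSequenceClassification.from_pretrained("BAAI/bge-reranker-base")
tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-reranker-base")
train_dataset = ...  # the same tokenized dataset used for the original fine-tuning run

args = TrainingArguments(output_dir="reranker_output",
                         num_train_epochs=2,
                         per_device_train_batch_size=8)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset, tokenizer=tokenizer)

# Pass True to resume from the latest checkpoint, or an explicit path
# such as "reranker_output/checkpoint-5000".
trainer.train(resume_from_checkpoint=True)
```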
There is no problem when training with the "unsloth/gemma-2-9b-it" model, but if I merge the LoRA layers of the trained model and continue training, the reward value becomes 0. The...
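For context, the merge step in question, sketched with plain PEFT rather than unsloth's own save helpers (the adapter path and output directory are placeholders):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("unsloth/gemma-2-9b-it")
model = PeftModel.from_pretrained(base, "path/to/lora_checkpoint")

# Fold the LoRA deltas into the base weights and drop the adapter modules,
# then save the merged model for the next training run.
merged = model.merge_and_unload()
merged.save_pretrained("gemma-2-9b-it-merged")
```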
Hello. I am trying to quantize the gemma2 model using AWQ, but is the gemma2 model not supported? I am sharing the error below. ``` { "name": "AttributeError", "message": "'Gemma2AWQForCausalLM' object...
Is it possible to merge embedding models such as bi-encoders? Thank you.
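One common reading of "merging" bi-encoders is weight averaging (a model soup) of two fine-tunes that share the same base architecture; a minimal sketch, with placeholder checkpoint paths:

```python
from sentence_transformers import SentenceTransformer

# Placeholder checkpoints; both must be fine-tunes of the same base architecture.
model_a = SentenceTransformer("checkpoints/bi-encoder-a")
model_b = SentenceTransformer("checkpoints/bi-encoder-b")

state_a = model_a.state_dict()
state_b = model_b.state_dict()

# Element-wise average of every floating-point parameter tensor.
merged_state = {k: (state_a[k] + state_b[k]) / 2 if state_a[k].is_floating_point()
                else state_a[k]
                for k in state_a}

model_a.load_state_dict(merged_state)
model_a.save("checkpoints/bi-encoder-merged")
```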