gritlm
Generative Representational Instruction Tuning
When I run the script for Training.embedding_model, an error occurs. How can I fix it? File "/gritlm/training/run.py", line 166, in main: else: raise NotImplementedError (NotImplementedError)
Thanks -- I am using Grit for document embeddings that will be used to score doc-to-doc similarity. Should I add an instruction or leave it blank? Thank you, Griffin
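For reference, a minimal sketch of encoding documents with an empty instruction, following the `GritLM` wrapper usage shown in the repo README (the `gritlm_instruction` helper and `mode="embedding"` argument are taken from that example and may differ in your version):

```python
from gritlm import GritLM
from scipy.spatial.distance import cosine

# Load the model in embedding-only mode, as in the README example.
model = GritLM("GritLM/GritLM-7B", torch_dtype="auto", mode="embedding")

# Helper from the README: wraps an instruction in the chat template,
# or yields only the embed token when the instruction is empty.
def gritlm_instruction(instruction):
    return "<|user|>\n" + instruction + "\n<|embed|>\n" if instruction else "<|embed|>\n"

docs = ["First document ...", "Second document ..."]

# For symmetric doc-to-doc similarity, an empty instruction (or the same
# instruction on both sides) keeps the two embeddings comparable.
d_reps = model.encode(docs, instruction=gritlm_instruction(""))
sim = 1 - cosine(d_reps[0], d_reps[1])
```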
Hi guys, we have tried your model on a larger number of documents (than used in the example code) and found that the model does not use the provided documents...
Generally we will have several docs for reference when doing RAG; wondering if you have tested this setting with the doc cache?
I would like to run embedding as a service using something like vLLM in a Docker container on a different host. How would one go about doing this?
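One hedged sketch of such a service: a small FastAPI wrapper around the repo's `GritLM` class that could run inside a container (the endpoint name and payload shape are illustrative, not an official API; whether vLLM itself supports this architecture for embeddings would need checking separately):

```python
# embed_service.py -- illustrative HTTP embedding service, not an official API
from typing import List
from fastapi import FastAPI
from pydantic import BaseModel
from gritlm import GritLM

app = FastAPI()
model = GritLM("GritLM/GritLM-7B", torch_dtype="auto", mode="embedding")

class EmbedRequest(BaseModel):
    texts: List[str]
    instruction: str = ""  # optional embedding instruction

def gritlm_instruction(instruction: str) -> str:
    return "<|user|>\n" + instruction + "\n<|embed|>\n" if instruction else "<|embed|>\n"

@app.post("/embed")
def embed(req: EmbedRequest):
    # encode() returns an array; convert to plain lists for JSON.
    reps = model.encode(req.texts, instruction=gritlm_instruction(req.instruction))
    return {"embeddings": reps.tolist()}

# Run with: uvicorn embed_service:app --host 0.0.0.0 --port 8000
# then expose port 8000 from the Docker container.
```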
Thank you for your contribution! I have encountered some issues. 1. Full training. Here is my training script:
```
CUDA_VISIBLE_DEVICES="0,5" torchrun --nproc_per_node 2 \
  -m training.run \
  --output_dir ./output/7-2_full \
  --model_name_or_path...
```
Thank you for your contribution. I encountered the following error when training with toy data: TypeError: TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]] I read online that the following reasons may...
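That tokenizer error usually means a `None` or other non-string value slipped into a batch. A quick sketch for scanning the toy data for such entries, assuming JSONL training files with `query`/`pos`/`neg` fields (the field names are an assumption; adjust to your data format):

```python
import json

# Flag missing or non-string fields that would make the tokenizer raise
# "TextEncodeInput must be Union[...]" (field names assumed, not verified).
with open("toy_data.jsonl") as f:
    for i, line in enumerate(f):
        ex = json.loads(line)
        for key in ("query", "pos", "neg"):
            val = ex.get(key)
            items = val if isinstance(val, list) else [val]
            for item in items:
                if not isinstance(item, str):
                    print(f"line {i}: field {key!r} has non-string value {item!r}")
```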
```python
def __call__(self, q_reps, p_reps):
    if self.negatives_cross_device:
        # This gathers both negatives and positives.
        # It could likely be optimized by only gathering negatives.
        q_reps = self._dist_gather_tensor(q_reps)
        p_reps = self._dist_gather_tensor(p_reps)
    scores...
```
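For context, `_dist_gather_tensor` in this kind of contrastive loss typically all-gathers the local representations from every rank while keeping gradients flowing through the local shard. A minimal sketch of that standard pattern (not necessarily the repo's exact implementation):

```python
import torch
import torch.distributed as dist

def dist_gather_tensor(t: torch.Tensor) -> torch.Tensor:
    """All-gather `t` across ranks, keeping gradients for the local shard."""
    if t is None or not dist.is_initialized():
        return t
    gathered = [torch.empty_like(t) for _ in range(dist.get_world_size())]
    dist.all_gather(gathered, t.contiguous())
    # all_gather returns detached tensors; re-insert the local tensor so
    # gradients still flow through this rank's own representations.
    gathered[dist.get_rank()] = t
    return torch.cat(gathered, dim=0)
```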
Is there a way to load this through the HF sentence-transformers library?
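A hedged sketch of what a direct load would look like; whether the hub checkpoint actually ships the sentence-transformers config files this relies on is an assumption to verify:

```python
from sentence_transformers import SentenceTransformer

# Untested: if the checkpoint lacks sentence-transformers configs
# (modules.json, pooling config), the library falls back to default
# mean pooling, which may not match GritLM's own pooling or prompt format.
model = SentenceTransformer("GritLM/GritLM-7B")
emb = model.encode(["example text"])
```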