scaling_sentemb
About the usage of your prompt.
Hi, I am interested in the prompt proposed in your paper for getting sentence embeddings. I wonder if I can directly use your prompt with other open-source LLMs, e.g., LLAMA 3, to get meaningful sentence embeddings?
For example, could I use code like the following to get the sentence embedding of `input_text` from the open-source LLAMA 3, without the training part of your paper?
```python
from transformers import AutoTokenizer, LlamaModel

# model_id, input_text, and device are defined elsewhere
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = LlamaModel.from_pretrained(model_id).to(device)

# Wrap the input sentence in the prompt from the paper
prompt_template = 'This sentence: "{text}" means in one word:'
text = prompt_template.format(text=input_text)

t_input = tokenizer(text, return_tensors="pt").to(device)
last_hidden_state = model(**t_input, output_hidden_states=True, return_dict=True).hidden_states[-1].cpu()
```
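And then, if I understand the prompt-based approach correctly, I would read the embedding off the last token position of the prompt, something like this (just my guess at the read-out step):

```python
import torch.nn.functional as F

# Use the hidden state of the final prompt token as the sentence embedding
# (my assumption of how the embedding is extracted)
sentence_embedding = last_hidden_state[:, -1, :]  # shape: (1, hidden_size)

# e.g., two sentences embedded this way could then be compared with
# sim = F.cosine_similarity(embedding_a, embedding_b)
```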
Thank you.