kongds

44 comments by kongds

Hi, we use PyTorch version 1.7.1.

Yes, you can also test sentence embeddings with our prompt on LLaMA3 using the following script:

```sh
MODEL_PATH="meta-llama/Meta-Llama-3-8B"
TEMPLATE='This_sentence_:_"*sent_0*"_means_in_one_word:"'
python evaluation.py \
    --model_name_or_path $MODEL_PATH \
    --mode test --mask_embedding_sentence \
    --mask_embedding_sentence_template...
```
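To make the template concrete, here is a hypothetical Python sketch of prompt-based embedding extraction (my own illustration, not the repo's `evaluation.py`): the `*sent_0*` placeholder is replaced by the input sentence (underscores in the shell template stand for spaces), and the hidden state of the final token is assumed to serve as the sentence embedding.

```python
# Hypothetical sketch; the actual evaluation.py logic may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Meta-Llama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

# Underscores in the shell template stand for spaces.
template = 'This sentence : "*sent_0*" means in one word:"'
prompt = template.replace("*sent_0*", "A man is playing guitar.")

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# Assume the last token's hidden state is the sentence embedding.
embedding = outputs.hidden_states[-1][0, -1]  # shape: [hidden_size]
```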

Thanks for sharing the results and advice. I have tested adding the decompressed MoRA to the weight before, but it can be slow for large language models, since it needs to copy...
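As a rough illustration of the cost (my own sketch, not code from the MoRA repository): adding a decompressed update back into the base weight materializes a full-size matrix per layer, which is expensive at LLM scale.

```python
# Illustrative only: merging a decompressed update copies a full weight matrix.
import torch

d = 4096                    # hidden size of a typical LLM layer (example value)
W = torch.randn(d, d)       # frozen base weight
update = torch.randn(d, d)  # decompressed update, same shape as W

# This allocates and fills a full d x d matrix; repeated across dozens of
# layers, the copies dominate the merge time.
W_merged = W + update
```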

Hello, can you provide the full traceback of the error message?

Thank you for your interest in our work. I think contrastive learning on image pairs should also work. However, it presents several challenges compared to contrastive learning on text pairs...

We compared multimodal training in our paper and found that it performs worse than single-modality training. And I don't think that contrastive learning on image pairs can achieve better performance...

Thanks for your interest in our work. MoRA has been implemented with Linear4bit in peft-mora's bnb.py, so it can run with a 4-bit model, like LoRA. (But I haven't...
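For context, the sketch below shows the generic peft + bitsandbytes pattern for attaching an adapter to a 4-bit model; it uses standard LoRA for illustration and does not reproduce peft-mora's MoRA-specific config options.

```python
# Generic 4-bit + PEFT adapter pattern (standard LoRA shown for illustration;
# peft-mora's MoRA-specific options are not reproduced here).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",  # example model; any causal LM works
    quantization_config=bnb_config,
)

# The adapter wraps the quantized Linear4bit modules it targets.
peft_config = LoraConfig(r=8, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
```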

We use `Similarity` to compute cosine similarity between tensors of shape [128, 1, 768] and [1, 128, 768]; broadcasting over the first two dimensions yields a matrix of shape [128, 128] https://github.com/kongds/Prompt-BERT/blob/24c9c20f1b3957f7de833ed2479d29488ad6a791/prompt_bert/models.py#L38-L49
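Here is a minimal sketch of that broadcasting (my own illustration; the linked `Similarity` class may additionally apply a temperature scaling):

```python
# Broadcast cosine similarity: all pairwise similarities in one call.
import torch
import torch.nn as nn

cos = nn.CosineSimilarity(dim=-1)

z1 = torch.randn(128, 1, 768)  # batch A embeddings, unsqueezed at dim 1
z2 = torch.randn(1, 128, 768)  # batch B embeddings, unsqueezed at dim 0

# Both tensors broadcast to [128, 128, 768]; cosine similarity over the
# last dim then produces the full [128, 128] pairwise matrix.
sim = cos(z1, z2)
print(sim.shape)  # torch.Size([128, 128])
```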

Thank you for your interest in our work. Our experiments are conducted directly with DeepSpeed across multiple nodes using the provided script (without Slurm). Maybe you need some configs to...

Thank you for your interest in our work. You can find the parameter `group_y_candidate_topk` for Amazon-670k in https://github.com/kongds/LightXML/blob/b9af9443004d3bce8b9116edfe038b702d1b295c/run.sh#L35