LLLLucensus
I'm not sure whether this is a bug in pytorch_geometric. Has anybody seen the same error? > m1=self.propagate(edge_index1, size=(x1.size(0), x1.size(0)), x=x1, edge_weight=edge_weight1) File "/opt/conda/lib/python3.7/site-packages/torch_geometric/nn/conv/message_passing.py", line 257, in propagate msg_kwargs = self.__distribute__(self.__msg_params__, kwargs)...
Is there example code for running fine-tuning on multiple GPUs? What are the memory and core requirements? Thanks.
How can I resolve an OOM error? During training, I observed that only GPU 0 is used; the other GPUs stay idle. Hardware: 4× Tesla-V100-16G. Launch configuration: torchrun --nproc_per_node 1 \ -m FlagEmbedding.finetune.embedder.decoder_only.base \ --model_name_or_path BAAI/bge-multilingual-gemma2 \ --cache_dir ./cache/model \ --use_lora True \ --lora_rank 32 \ --lora_alpha 64 \ --target_modules q_proj k_proj v_proj o_proj...
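One thing worth noting about the launch command above: with torchrun, `--nproc_per_node` sets how many worker processes are spawned, and distributed data parallel assigns one process per GPU, so `--nproc_per_node 1` will drive only GPU 0 regardless of how many cards are installed. A minimal sketch of a multi-GPU launch, assuming the same FlagEmbedding module and flags as the post (whether this alone resolves the OOM depends on the per-GPU batch size and model footprint):

```shell
# Sketch: spawn 4 processes so torch.distributed places one on each V100.
# Flags below are copied from the original post and assumed unchanged;
# truncated options at the end of the original command are omitted here.
torchrun --nproc_per_node 4 \
    -m FlagEmbedding.finetune.embedder.decoder_only.base \
    --model_name_or_path BAAI/bge-multilingual-gemma2 \
    --cache_dir ./cache/model \
    --use_lora True \
    --lora_rank 32 \
    --lora_alpha 64 \
    --target_modules q_proj k_proj v_proj o_proj
```

You can confirm all four cards are active with `nvidia-smi` while the job runs.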
I have installed faiss-gpu-cu11 but hit the error below; how can I fix it? Thanks! Machine: 4× V100 16G. Command: python hn_mine.py \ --input_file toy_finetune_data.jsonl \ --output_file toy_finetune_data_minedHN.jsonl \ --range_for_sampling 5-8 \ --negative_number 2 \ --use_gpu_for_searching \ --embedder_name_or_path ../../BAAI/bge-m3 inferencing embedding for corpus (number=80)-------------- initial target...