drewskidang
lora_target_modules='["query_key_value"]' fails with "not part of this model"
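A hedged guess at the cause: bge-base is a BERT-style encoder whose attention projections are separate modules named `query`, `key`, and `value`, whereas `query_key_value` is the fused projection name used by architectures such as Falcon/BLOOM. A minimal PEFT sketch (model name and hyperparameters are illustrative, not the repo's official config):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModel

# BERT-style encoders (bge-base) expose separate attention projections, so the
# LoRA target modules are "query"/"key"/"value", not the fused "query_key_value".
base = AutoModel.from_pretrained("BAAI/bge-base-en-v1.5")
lora_cfg = LoraConfig(
    r=16,                                     # illustrative rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["query", "key", "value"],
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()
```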
I just finished pretraining BAAI/bge-base ... is it possible to use the llm-embedder training script on the same model, or is the embedder model different?
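Not an authoritative answer, but one quick check is whether the freshly pretrained checkpoint exposes the same architecture the llm-embedder code expects; both bge-base and BAAI/llm-embedder are BERT-style encoders. A short sketch using a hypothetical local path:

```python
from transformers import AutoConfig

# "./my-pretrained-bge-base" is a hypothetical path to the checkpoint described above.
local = AutoConfig.from_pretrained("./my-pretrained-bge-base")
reference = AutoConfig.from_pretrained("BAAI/llm-embedder")
print(local.model_type, reference.model_type)  # both should print "bert"
```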
Multi-GPU
Is there multi-GPU support? I don't know how to set it up without running a script.
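The repo's training examples are typically launched with torchrun to spread work across GPUs. If the question is about multi-GPU encoding without a launcher script, a sentence-transformers process pool is one option; the model name and sentences below are illustrative:

```python
from sentence_transformers import SentenceTransformer

if __name__ == "__main__":                               # required for the multiprocessing pool
    sentences = ["first document", "second document"]    # illustrative inputs
    model = SentenceTransformer("BAAI/bge-base-en-v1.5")
    pool = model.start_multi_process_pool()              # one worker per visible GPU
    embeddings = model.encode_multi_process(sentences, pool)
    model.stop_multi_process_pool(pool)
    print(embeddings.shape)
```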
Not sure how to generate sparse embeddings when using LangChain.
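As far as I know, LangChain's BGE embedding wrapper returns dense vectors only, so one workaround is to compute the sparse (lexical) weights directly with the FlagEmbedding package and pass them into the pipeline yourself. A minimal sketch:

```python
from FlagEmbedding import BGEM3FlagModel

model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)
out = model.encode(
    ["What is BGE M3?"],          # illustrative query
    return_dense=True,
    return_sparse=True,           # token-level lexical weights (the sparse embedding)
    return_colbert_vecs=False,
)
print(out["dense_vecs"].shape)
print(out["lexical_weights"][0])  # dict mapping token id -> weight
```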
Is there a way for the model to be uploaded to Hugging Face so we can use GPU accelerators like vLLM?
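The BGE checkpoints are already hosted on the Hugging Face Hub, so they load directly with transformers and run on a GPU; vLLM is mainly aimed at generative models, so an encoder like BGE is usually served with transformers or sentence-transformers instead. A minimal sketch using CLS pooling and normalization, as the BGE model cards describe:

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

name = "BAAI/bge-base-en-v1.5"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, torch_dtype=torch.float16).to("cuda").eval()

batch = tok(["an example passage"], padding=True, truncation=True, return_tensors="pt").to("cuda")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state
embeddings = F.normalize(hidden[:, 0], dim=-1)   # CLS pooling + L2 normalization
```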
Is it possible to train on more than one GPU? https://huggingface.co/BAAI/bge-m3 is currently OOMing for me. BGE allows loading in fp16, but I can't do it with sefit.
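For the fp16 part of the question, bge-m3 can be loaded in half precision through FlagEmbedding, which roughly halves inference memory; for the training OOM, the repo's fine-tuning examples are typically launched with torchrun across several GPUs. A minimal fp16 loading sketch:

```python
from FlagEmbedding import BGEM3FlagModel

# use_fp16=True loads the model in half precision and roughly halves GPU memory.
model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)
emb = model.encode(["an example document"], max_length=8192)["dense_vecs"]
print(emb.shape)
```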
Trying to get the original document together with the annotated LLM task in a single string; is that possible?
Pretrain
Is there a way to pretrain the M3 models?
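Not the official recipe (the BGE models were reportedly pretrained with a RetroMAE-style objective), but a simple way to continue pretraining the bge-m3 backbone, an XLM-RoBERTa encoder, is plain masked-language-model training with transformers; the corpus path and hyperparameters below are placeholders.

```python
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

name = "BAAI/bge-m3"
tok = AutoTokenizer.from_pretrained(name)
# The checkpoint is a plain encoder, so the MLM head is freshly initialised (a warning is expected).
model = AutoModelForMaskedLM.from_pretrained(name)

ds = load_dataset("text", data_files={"train": "corpus.txt"})["train"]   # placeholder corpus
ds = ds.map(lambda x: tok(x["text"], truncation=True, max_length=512),
            batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments("m3-mlm-pretrain", per_device_train_batch_size=8, fp16=True),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm_probability=0.15),
)
trainer.train()
```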