LaMP
Code for the papers on Large Language Model Personalization (LaMP)
As explained in the README, we need to sort the data for the test set as well; however, while sorting, the code tries to map the "output" parameter from the corresponding output...
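A hedged sketch of the failure mode described above, under assumed file layouts (the function name and the `questions`/`golds` shapes here are illustrative, not the repository's exact schema): if sorting keys off each question's gold output, the lookup raises a KeyError on the test split, which ships without outputs, so the lookup needs a fallback.

```python
import json

def sort_questions_by_output(questions, outputs):
    """Sort questions using their gold outputs when available.

    `questions`: list of dicts with an "id" key.
    `outputs`: dict whose "golds" key holds {"id", "output"} records
    (an assumed layout for illustration); it may be empty for the
    test split, which has no gold outputs.
    """
    gold_by_id = {g["id"]: g["output"] for g in outputs.get("golds", [])}
    # Fall back to "" for ids with no gold output instead of raising KeyError.
    return sorted(questions, key=lambda q: gold_by_id.get(q["id"], ""))
```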
feebacks -> feedbacks
Hi there! Thanks so much for your work on the LaMP benchmark. I've been working on reproducing the results but found myself a bit puzzled about the retriever. Facebook has...
Hi, thanks so much for your work on personalized LLMs. I am currently trying to reproduce this work on my side. If anyone else is also interested, you are welcome to join...
Is the text taken from the first sentence of the article?
I tried to run RSPG/utils/create_data.py, and the --retrievers_data_addr parameter requires two files to be present: scores.json and data.json. The scores.json file was obtained in the previous step, but...
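For anyone hitting the same wall, a minimal sketch of producing a `data.json` alongside `scores.json`. The schema below (one record per example id with its retrieved entries) is an assumption for illustration; check the repository's create_data.py for the authoritative layout.

```python
import json
from pathlib import Path

def write_retriever_data(retriever_dir, examples):
    """Write a data.json next to scores.json in the retriever directory.

    `examples` is assumed to be a list of dicts with an "id" key and an
    optional "retrieved" list; this shape is hypothetical and only meant
    to show where the missing file is expected to live.
    """
    path = Path(retriever_dir) / "data.json"
    records = [
        {"id": ex["id"], "retrieved": ex.get("retrieved", [])}
        for ex in examples
    ]
    path.write_text(json.dumps(records, indent=2))
    return path
```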
Below is an example prompt from User-based LaMP-1 using two arbitrarily retrieved profiles: *For an author who has written the paper with the title, and "convex drawings of internally triconnected...
In your evaluate_llm.py, you wrote:

```python
opts = parser.parse_args()
model = AutoModelForSeq2SeqLM.from_pretrained(opts.model_addr, cache_dir=opts.cache_dir)
tokenizer = AutoTokenizer.from_pretrained(opts.model_addr, cache_dir=opts.cache_dir)
collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=model, max_length=opts.max_length)
```

But what if...
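For context on what that collator does at batch time, here is a dependency-free mimic of the padding convention used by Hugging Face's `DataCollatorForSeq2Seq`: inputs are padded to the longest sequence with the pad token, and labels are padded with -100 so the loss ignores those positions. This is an illustrative sketch, not the transformers implementation.

```python
def collate_seq2seq(features, pad_token_id, label_pad_token_id=-100):
    """Pad a batch of seq2seq features to the longest sequence.

    Mimics the seq2seq padding convention: input_ids get the
    tokenizer's pad token, labels get -100 so the loss function
    skips padded positions.
    """
    max_in = max(len(f["input_ids"]) for f in features)
    max_lab = max(len(f["labels"]) for f in features)
    batch = {"input_ids": [], "labels": []}
    for f in features:
        ids, labs = f["input_ids"], f["labels"]
        batch["input_ids"].append(ids + [pad_token_id] * (max_in - len(ids)))
        batch["labels"].append(labs + [label_pad_token_id] * (max_lab - len(labs)))
    return batch
```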
According to the README, this tool should use the "golds" attribute, not the root attribute.
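As a hedged illustration of that fix (the key names follow the LaMP output format with a top-level "golds" list of {"id", "output"} records; the helper itself is hypothetical): read the records from the "golds" list rather than treating the root JSON object as the list itself.

```python
import json

def load_golds(path):
    """Load gold outputs from a LaMP-style output file.

    The file is a JSON object whose "golds" key holds a list of
    {"id": ..., "output": ...} records; iterating the root object
    directly, as if it were that list, is the bug described above.
    """
    with open(path) as f:
        data = json.load(f)
    return {g["id"]: g["output"] for g in data["golds"]}
```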