
How do you train KE and MEND with CounterFact?


As described in your paper, "To encourage fair comparison on both zsRE and COUNTERFACT tasks, we additionally train KE-zsRE and KE-CF models on size-10,000 subsets of the respective training sets" and "Again, for fair comparison, we train new versions of MEND (MEND-zsRE, MEND-CF) on the same sets that KE-zsRE and KE-CF were trained on".

Which 10,000 records do you use to train KE-CF and MEND-CF?

Additionally, the paper states: "Table 4 showcases quantitative results on GPT-2 XL (1.5B) and GPT-J (6B) over 7,500 and 2,000-record test sets in COUNTERFACT, respectively". Which 7,500 and which 2,000 records do you use to evaluate all the baselines?
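For reference, this is how I am currently building the subsets. It is a minimal sketch that assumes the records are simply taken in file order from `counterfact.json`; the file path, the ordering, and the slice boundaries are all my guesses rather than anything the paper confirms:

```python
import json

# Load the full CounterFact dataset. The path follows the repo's default
# data layout; adjust it to wherever counterfact.json lives on your machine.
with open("data/counterfact.json") as f:
    records = json.load(f)

# Guess: the first 10,000 records form the KE-CF / MEND-CF training subset.
train_subset = records[:10_000]

# Guess: the first 7,500 records (GPT-2 XL) and the first 2,000 records
# (GPT-J) form the evaluation sets reported in Table 4.
eval_gpt2_xl = records[:7_500]
eval_gpt_j = records[:2_000]

print(len(train_subset), len(eval_gpt2_xl), len(eval_gpt_j))
```

As written, these slices would overlap, which is part of why I am asking. If the subsets were instead sampled randomly or drawn from disjoint splits, could you share the seed or the record IDs so the baselines can be reproduced exactly?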

Thank you :-)

Zce1112zslx · Dec 06 '22 07:12