Robert Washbourne

Results 32 comments of Robert Washbourne

+1 — this is still an issue on the latest version, seemingly for most multiple-choice evals. It consistently fails by exceeding the configured model length by one token: `Sampled token IDs exceed the max model...`

This broke several lm-eval-harness workflows for me, and reverting to older versions of `datasets` did not fix the issue. Does anyone have a workaround?