
Fix the clm-prompt-tuning preprocessing that was causing unequal lengths in the label token ids

Open bpkapkar opened this pull request 9 months ago • 2 comments

The code snippet needs a correction. The line

labels["input_ids"][i] = [-100] * (max_length - len(sample_input_ids)) + label_input_ids

should be changed to

labels["input_ids"][i] = [-100] * (max_length - len(label_input_ids)) + label_input_ids

so that the label token ids are padded or truncated based on their own length. This aligns with Hugging Face's recommended practice and avoids unequal lengths between the input and label token ids. The same correction is needed in the documentation at https://huggingface.co/docs/peft/main/en/task_guides/prompt_based_methods and https://huggingface.co/docs/peft/main/en/task_guides/clm-prompt-tuning
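For context, here is a minimal sketch of the left-padding loop in which this line appears, with the proposed change applied. The wrapper function name pad_inputs_and_labels is illustrative, and the loop body is reproduced from the linked PEFT guides from memory, so surrounding details may differ.

```python
import torch

def pad_inputs_and_labels(model_inputs, labels, pad_token_id, max_length):
    """Left-pad input ids with pad_token_id and label ids with -100, then truncate to max_length."""
    for i in range(len(model_inputs["input_ids"])):
        sample_input_ids = model_inputs["input_ids"][i]
        label_input_ids = labels["input_ids"][i]
        model_inputs["input_ids"][i] = (
            [pad_token_id] * (max_length - len(sample_input_ids)) + sample_input_ids
        )
        model_inputs["attention_mask"][i] = (
            [0] * (max_length - len(sample_input_ids)) + model_inputs["attention_mask"][i]
        )
        # Proposed fix: pad the labels based on their own length rather than the input length.
        labels["input_ids"][i] = (
            [-100] * (max_length - len(label_input_ids)) + label_input_ids
        )
        model_inputs["input_ids"][i] = torch.tensor(model_inputs["input_ids"][i][:max_length])
        model_inputs["attention_mask"][i] = torch.tensor(model_inputs["attention_mask"][i][:max_length])
        labels["input_ids"][i] = torch.tensor(labels["input_ids"][i][:max_length])
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs
```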

What does this PR do?

This PR corrects the code snippet so that the label token ids are padded or truncated based on their own length, in line with the Hugging Face documentation for prompt-based methods and CLM prompt tuning. The change ensures compatibility with transformer models and resolves the unequal lengths between input and label token ids.
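A toy illustration of the length mismatch described above, assuming the labels hold only the target token ids (all values below are made up for the example):

```python
max_length, pad_token_id = 8, 0
sample_input_ids = [11, 12, 13, 14, 21]  # toy prompt + target token ids
label_input_ids = [14, 21]               # toy target token ids only

input_ids = [pad_token_id] * (max_length - len(sample_input_ids)) + sample_input_ids

# Original line: pads the labels by the *input* length, so len(labels_old) == 5 here, not 8.
labels_old = [-100] * (max_length - len(sample_input_ids)) + label_input_ids
# Proposed fix: pads the labels by their *own* length, so len(labels_new) == max_length.
labels_new = [-100] * (max_length - len(label_input_ids)) + label_input_ids

assert len(labels_old) != len(input_ids)                  # the reported mismatch
assert len(labels_new) == len(input_ids) == max_length    # lengths agree after the fix
```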

  • PyTorch NLP & Accelerate: @sgugger
  • Tokenizers: @n1t0, @Narsil
  • huggingface_hub: @muellerzr, @LysandreJik

bpkapkar avatar Apr 29 '24 04:04 bpkapkar

Check out this pull request on ReviewNB to see visual diffs & provide feedback on Jupyter Notebooks.



Has anyone had a chance to check and review this PR? PyTorch NLP & Accelerate: @sgugger; Tokenizers: @n1t0, @Narsil; huggingface_hub: @muellerzr, @LysandreJik

bpkapkar avatar May 23 '24 17:05 bpkapkar