Content of 'labels' when prompt tuning LLaMA-2 on QA
I saw that on classification tasks, 'labels' holds the target values. When using a CausalLM model to tune it on a QA dataset, which format should be used:
- input_ids: Q; labels: A, or
- input_ids: Q+A; labels: Q+A, or
- input_ids: Q+A; labels: IGNORE_TOKENs+A?
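For context, here is a minimal sketch of the third option, which is the common convention for causal-LM fine-tuning: the full Q+A sequence goes into input_ids, and labels is a copy with the question tokens masked by -100 (the default ignore_index of torch.nn.CrossEntropyLoss, which Hugging Face models use), so loss is computed only on the answer. The function name and the toy token ids below are illustrative, not from any library:

```python
IGNORE_INDEX = -100  # tokens with this label are skipped by the loss

def build_labels(question_ids, answer_ids):
    # input_ids: the full question+answer sequence.
    input_ids = list(question_ids) + list(answer_ids)
    # labels: mask the question part so only answer tokens contribute to loss.
    labels = [IGNORE_INDEX] * len(question_ids) + list(answer_ids)
    return input_ids, labels

# Toy token ids standing in for a tokenized question and answer.
q = [101, 102, 103]
a = [201, 202]
input_ids, labels = build_labels(q, a)
print(input_ids)  # [101, 102, 103, 201, 202]
print(labels)     # [-100, -100, -100, 201, 202]
```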
Thank you!