
A survey on a line of work following (Qi et al., 2023)

[Open] huangtiansheng opened this issue 4 months ago • 0 comments

Hi authors,

Thanks for the wonderful initial work on harmful fine-tuning. We recently noticed a large number of papers coming out on harmful fine-tuning attacks for LLMs, and we have pre-printed a survey summarizing the existing follow-up papers on this issue.

Harmful Fine-tuning Attacks and Defenses for Large Language Models: A Survey

Repo: https://github.com/git-disl/awesome_LLM-harmful-fine-tuning-papers

It would be nice if you could incorporate our survey into the README, as this could attract more people to work on this important topic. But no pressure if you feel it is inappropriate.

Thanks, Tiansheng Huang

huangtiansheng · Oct 05 '24 20:10