chatgpt-comparison-detection
Human ChatGPT Comparison Corpus (HC3), Detectors, and more! 🔥
I'm trying to reproduce the results in the paper. I load chatgpt-detector-roberta from Hugging Face directly as the model and tokenizer; according to the model page it was trained on the mixed dataset. The paper reports an F1 score of 99.44 on raw-full, but I can't reach that number. I'm using the dataset downloaded from the Google Drive link in the HC3 README. Here are the results I got:
{'0': {'precision': 0.9994103425909546, 'recall': 0.9951852504256943, 'f1-score': 0.9972933215651661, 'support': 17031.0}, '1': {'precision': 0.9898640296662546, 'recall': 0.9987528061860813, 'f1-score': 0.994288552272163, 'support': 8018.0}, 'accuracy': 0.9963271986905665, 'macro avg': {'precision': 0.9946371861286046, 'recall': 0.9969690283058878, 'f1-score':...
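The metrics dict above looks like the output of scikit-learn's `classification_report` with `output_dict=True` (labels: 0 = human, 1 = ChatGPT). A minimal sketch of how such numbers are produced, using toy labels rather than real model predictions, which may help pin down whether the gap comes from the metric computation or the data split:

```python
# Hypothetical sketch: per-class and macro-averaged F1 via
# sklearn.metrics.classification_report, matching the dict format in the
# issue. y_true / y_pred here are toy values, not real detector output.
from sklearn.metrics import classification_report

y_true = [0, 0, 0, 1, 1, 1, 0, 1]   # toy gold labels (0 = human, 1 = ChatGPT)
y_pred = [0, 0, 1, 1, 1, 1, 0, 1]   # toy predictions

report = classification_report(y_true, y_pred, output_dict=True)
print(report["0"]["f1-score"])          # per-class F1 for the human class
print(report["macro avg"]["f1-score"])  # macro-averaged F1
```

Note that the paper's single F1 number could be the macro average, the positive-class (ChatGPT) F1, or a micro average; comparing against the wrong one of these would explain a small discrepancy.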
 Could you please re-upload the link?
hi, may I know what is the license of the released dataset? We may use it for commercial production so need to know what the license is exactly. Thanks!
Hello, may I ask where the code for your detection model (the second model in the paper, based on RoBERTa) can be found?
Hi dear developers, I am wondering whether the data splits (e.g. train/val/test) have been released; I saw an issue 3 weeks ago with an official reply saying "We will release the...


The basic RoBERTa classifier limits its input to 512/256 tokens, so how do you process long input texts? Thank you!
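One common workaround (not necessarily what the authors do, which may simply be truncation via the tokenizer's `truncation=True`) is to split a long token sequence into overlapping windows, classify each window, and aggregate the scores. A minimal sketch with tokenization and the classifier stubbed out:

```python
# Hypothetical sliding-window sketch for the 512-token input limit.
# In practice the token ids would come from the RoBERTa tokenizer and each
# window would be passed to the classifier, with the per-window scores
# averaged (or max-pooled) into one document-level prediction.
def chunk_tokens(tokens, max_len=512, stride=256):
    """Yield overlapping windows of at most max_len tokens."""
    if len(tokens) <= max_len:
        yield tokens
        return
    start = 0
    while start < len(tokens):
        yield tokens[start:start + max_len]
        if start + max_len >= len(tokens):
            break  # last window already covers the tail
        start += stride

tokens = list(range(1200))  # stand-in for a long sequence of token ids
windows = list(chunk_tokens(tokens))
print([len(w) for w in windows])  # window sizes covering the full input
```

The stride of 256 gives 50% overlap so that no span of text is only ever seen at a window boundary; both `max_len` and `stride` are assumptions to tune.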