[Research] Add toxicity detection pipeline
Did you ever consider the addition of a toxicity detection bot to this project?
We’re Nathan Cassee and Alexander Serebrenik (Eindhoven University of Technology, The Netherlands), Nicole Novielli (University of Bari, Italy), Christian Kastner (Carnegie Mellon University, USA), and Bogdan Vasilescu (Carnegie Mellon University, USA). We are conducting research to understand the effectiveness of toxicity detection bots on GitHub. As part of our research we want to understand the impact of a state-of-the-art toxicity detection bot in your project. We hope that a better understanding of how these toxicity bots operate can be used to further improve the health of open-source projects.
To participate in this experiment we ask you to adopt a toxicity bot in lf-edge/eve. You can adopt the bot by merging this pull-request. This bot will monitor issues and pull-requests for comments containing toxicity, and will post a comment if it detects toxicity. Additionally, the bot will securely store comments and edits or deletions made to those comments. This will allow us to study the frequency of toxic comments, and whether and how toxic comments are edited or deleted.
We expect that the toxicity bot will reduce toxicity in issues and pull-requests. However, there might be cases where the bot responds in issues or pull-requests where there is no toxicity (a false positive), and this might disrupt on-topic discussions.
Practicalities
Your participation in this study is completely voluntary; at any point in time you can withdraw your project from the study. This can be done by disabling the toxicity bot, or by sending us an email. Additionally, if you don’t want us to use the results from your project in the analysis of the study, you can always email us to let us know that you want to withdraw your data from the study.
The study itself will run for roughly three months; at the end of this period we will open a PR in your project to remove the toxicity bot. If after the experiment you want to keep using the toxicity bot, we can also provide you with a version that does not record telemetry.
The data collected for this study will be stored securely on a private server, and the raw data will only be available to the researchers involved in this study. When we release or publicize results of the study, the results will be anonymized or aggregated. Additionally, we will not list the projects that participated in this study, nor will we release or report any information that can be used to compare whether one project is more toxic than another.
This study has been approved by the Ethical Review Board of the Eindhoven University of Technology.
Closing and Survey
If you have any questions about the bot feel free to ask them here, or mail them to [email protected].
If you are not interested in participating we would really appreciate it if you would let us know why you are not participating.
Secondly, it would be really helpful if everyone involved in the project could respond to the following survey about your expectations of the bot, especially if you are not interested in adopting it (https://docs.google.com/forms/d/e/1FAIpQLSePeJGI-8Q_T52WgnFmw4Ag4Ufmc70sYXf_8dbdDwpNNlkOVw/viewform)!
Note: We are not sure how best to approach projects to participate in this study. If you thought this PR was spammy or unhelpful, please let us know so we can modify how we invite projects!
Thanks for your response.
- Apologies, verification should be fixed right now.
- I fully understand. I've been reading a bit more into permissions for GitHub Actions, and I can modify the pipeline to only require permissions on the issues and pull_requests of the repository (see). However, to post comments the bot still requires write permission on both issues and pull_requests; these write permissions are needed so that the bot can post a comment on an issue or pull_request when it detects toxicity. However, with a write token on issues and pull_requests you can also delete other comments or pull_requests (see). So I can understand that you would still be hesitant to give the bot write permissions, and I am not aware of any alternatives or more granular permissions.
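For reference, GitHub Actions lets a workflow scope its `GITHUB_TOKEN` with a top-level `permissions` block. The fragment below is a hypothetical least-privilege sketch for a bot like this one; the job name and step are illustrative, not taken from the actual pipeline:

```yaml
# Hypothetical workflow sketch: grant the token only what a comment-posting
# bot needs. Names below are illustrative assumptions, not the real config.
name: toxicity-bot
on:
  issue_comment:
    types: [created, edited]

permissions:
  issues: write         # needed to post a comment on an issue
  pull-requests: write  # needed to post a comment on a pull request
  contents: read        # no write access to the repository contents

jobs:
  check-comment:
    runs-on: ubuntu-latest
    steps:
      - name: Run toxicity check (placeholder)
        run: echo "classify the comment here"
```

As the comment above notes, GitHub's permission model stops at this granularity: `issues: write` covers both posting and deleting comments, so there is no narrower scope that allows only posting.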
We all know that EVE is absolutely toxicity-free ;) @eriknordmark, can we close this? There has been no interest in this PR for quite a while.
Given the security implications of the bot implementation, it doesn't make sense for us to participate in this study. And I wouldn't be surprised if the study has completed, since the PR is from over a year ago.