Kurt Shuster
We offer several safety classifiers in our model zoo; see [this project](https://github.com/facebookresearch/ParlAI/tree/main/projects/dialogue_safety) and [this one](https://github.com/facebookresearch/ParlAI/tree/main/projects/safety_recipes).
A threshold of 0.5 is generally a good starting point. I would then instantiate with the [BAD model](https://parl.ai/projects/safety_recipes/): `OffensiveLanguageClassifier(custom_model_file='zoo:bot_adversarial_dialogue/multi_turn/model')`
The threshold you pick is entirely up to you. And yes, the classifier can be used on just a single utterance.
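To make the thresholding step concrete, here is a minimal sketch. The `flag_unsafe` helper is hypothetical (not part of ParlAI); the commented-out lines show roughly how it would pair with `OffensiveLanguageClassifier` from the safety recipes above.

```python
# Hypothetical helper illustrating the threshold check; only the
# commented-out lines refer to the actual ParlAI API.

def flag_unsafe(prob: float, threshold: float = 0.5) -> bool:
    """Treat an utterance as unsafe when the classifier's offensive-class
    probability meets or exceeds the chosen threshold."""
    return prob >= threshold

# Real usage would look roughly like (requires ParlAI installed):
#   from parlai.utils.safety import OffensiveLanguageClassifier
#   clf = OffensiveLanguageClassifier(
#       custom_model_file='zoo:bot_adversarial_dialogue/multi_turn/model'
#   )
#   unsafe, prob = clf.contains_offensive_language("some utterance")
#   if flag_unsafe(prob, threshold=0.5):
#       ...  # filter or regenerate the response
```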
These commands should work if you have installed directly from source. We have not yet made a formal `pip` release that includes BB3, but one is imminent.
This is still not done; however, it seems like it would be a good change to make.
We have not; that sounds useful!
You can follow the general format of other zoo models:
1. Add an entry to [`model_list.py`](https://github.com/facebookresearch/ParlAI/blob/main/parlai/zoo/model_list.py)
2. Add a build script; see [this for an example](https://github.com/facebookresearch/ParlAI/blob/main/parlai/zoo/blenderbot2/blenderbot2_400M.py)
3. (Highly encouraged) Add...
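A build script following that pattern might look roughly like the sketch below. The model name and file names are placeholders, and the import is guarded so the sketch stands on its own without ParlAI installed; the real scripts use `download_models` from `parlai.core.build_data`.

```python
# Hypothetical zoo build script; 'my_model' and 'model.tar.gz' are
# placeholders, following the shape of parlai/zoo/blenderbot2/blenderbot2_400M.py.
try:
    from parlai.core.build_data import download_models
except ImportError:  # lets the sketch be read/run without ParlAI installed
    download_models = None

def download(datapath):
    """Download the (hypothetical) my_model files into the ParlAI data path."""
    if download_models is None:
        raise RuntimeError('ParlAI is required to download zoo models')
    opt = {'datapath': datapath}
    fnames = ['model.tar.gz']  # archive(s) hosted with the other zoo models
    download_models(opt, fnames, 'my_model', version='v1.0')
```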
cc @moyapchen maybe?
I'm able to repro on my end, so I'll try to look into it a bit more and report back here with findings.

### Edit Update 1

The model is...
> Should we try w/ slurm to rule out it being multiprocessing?

Tried this, still fails. Something is hanging somewhere...