
Using your own data

Open griff4692 opened this issue 3 years ago • 2 comments

Hey David -

Thanks for this repo (the code is clean and easy to read; I also like PLightning!)

If I want to quickly predict a label for text A given text B, is there a script to get raw text data into the right format? The predict script seems to rely on dataset-specific input files, but I just want to use it on new text pairs (the use case is factuality verification of generated scientific abstracts, given the body of the article). I might have to feed each abstract sentence by sentence to make it more in line with the SciFact hypotheses.

Make predictions using the convenience wrapper script [script/predict.sh](https://github.com/dwadden/multivers/blob/main/script/predict.sh). This script accepts a dataset name as an argument, and makes predictions **using the correct input files** and model checkpoints for that dataset.
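For new text pairs, one option is to write the raw data into the two jsonl files the SciFact-style pipeline expects. This is a minimal sketch, assuming the claims/corpus field names (`id`, `claim`, `doc_ids`, `doc_id`, `title`, `abstract`) follow the SciFact data format; check the repo's README section on new datasets for the authoritative schema.

```python
import json

def make_scifact_style_inputs(pairs, claims_path, corpus_path):
    """Write raw (claim_text, evidence_text) pairs into two jsonl files
    shaped like SciFact-style claims/corpus inputs.

    Field names are assumptions based on the SciFact format, not a
    guarantee of what the predict script expects.
    """
    with open(claims_path, "w") as claims_f, open(corpus_path, "w") as corpus_f:
        for i, (claim, evidence) in enumerate(pairs):
            # One "document" per evidence text; sentence splitting of the
            # abstract would happen before this step.
            corpus_f.write(json.dumps(
                {"doc_id": i, "title": f"doc_{i}", "abstract": [evidence]}) + "\n")
            claims_f.write(json.dumps(
                {"id": i, "claim": claim, "doc_ids": [i]}) + "\n")
```

With the files written, you would point the predict script at them in place of one of the built-in datasets.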

griff4692 avatar Jul 22 '22 01:07 griff4692

Hi,

I'm sorry for the delay responding. I've added a section to the README covering this, see here: https://github.com/dwadden/multivers#making-predictions-for-new-datasets.

Let me know if you've still got questions!

Dave

dwadden avatar Aug 15 '22 21:08 dwadden

Re your question about feeding generated abstracts line-by-line, this is an interesting question. Seems like there are two obvious ways to do this:

  • Just treat the whole generated abstract as the claim. Not ideal because the abstract probably makes multiple claims, and MultiVerS isn't really designed to deal with this.
  • Split the generated abstract into sentences and verify one at a time. Without trying it, my guess is that this would work better. One hurdle here is that some sentences probably don't make claims, and others probably aren't self-contained due to coreference, anaphora, etc.

I can imagine a couple ways to maybe make the "split-then-verify" approach a bit better.

  • Run a claim detection system like this or this on the generated abstracts to filter out sentences that don't make any claims worth checking.
  • Run a decontextualization system or claim generation system on the sentences that are worth checking to make them stand-alone.
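Chaining those two steps, the "split-then-verify" preprocessing could look like the sketch below. Both callables are placeholders: `is_check_worthy` stands in for a claim-detection model and `decontextualize` for a decontextualization or claim-generation model; neither is part of this repo.

```python
def prepare_claims(sentences, is_check_worthy, decontextualize):
    """Pipeline sketch: keep only the sentences worth checking, then
    rewrite each survivor to be self-contained.

    is_check_worthy: callable sentence -> bool (claim detector stand-in).
    decontextualize: callable sentence -> sentence (rewriter stand-in).
    """
    return [decontextualize(s) for s in sentences if is_check_worthy(s)]
```

The output sentences would then become the claims fed to the verifier, one at a time.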

Let me know if you end up trying any of this, I'm curious to see if it works!

dwadden avatar Aug 15 '22 22:08 dwadden

@griff4692 are you good for me to close this?

dwadden avatar Jan 27 '23 04:01 dwadden

> @griff4692 are you good for me to close this?

Yes - sorry for not replying to your comment.
(If you're interested, I can open a pull request with a simple Python class I wrote at some point that allows single-sentence scoring.)
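For context, a hypothetical shape such a helper might take (this is not the actual class from the comment, and the label set is an assumption based on the SUPPORT / NEI / CONTRADICT scheme used in SciFact-style verification):

```python
class SentenceScorer:
    """Hypothetical single-sentence scoring wrapper.

    Wraps a predict function mapping (claim, evidence) -> a tuple of
    label probabilities, and returns them keyed by label name.
    """

    LABELS = ("SUPPORT", "NEI", "CONTRADICT")  # assumed label set

    def __init__(self, predict_fn):
        self.predict_fn = predict_fn  # e.g. a loaded model's forward pass

    def score(self, claim: str, evidence: str) -> dict:
        probs = self.predict_fn(claim, evidence)
        return dict(zip(self.LABELS, probs))
```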

griff4692 avatar Jan 27 '23 12:01 griff4692

No worries, thanks for closing. And sure, I'd gladly accept a PR to do single sentence scoring!

dwadden avatar Jan 28 '23 06:01 dwadden