Paragraph-level Simplification of Medical Texts
Code and data for our NAACL 2021 paper "Paragraph-level Simplification of Medical Texts" (https://www.aclweb.org/anthology/2021.naacl-main.395). If you have any questions about the code or the paper, feel free to email me at [email protected]. If you find our data and/or code useful in your work, please include the following citation:
@inproceedings{devaraj-etal-2021-paragraph,
    title = "Paragraph-level Simplification of Medical Texts",
    author = "Devaraj, Ashwin and Marshall, Iain and Wallace, Byron and Li, Junyi Jessy",
    booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics",
    month = jun,
    year = "2021",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2021.naacl-main.395",
    pages = "4972--4984",
}
Dependencies
pytorch
pytorch-lightning==0.9.0
transformers==3.3.1
rouge_score
nltk
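To install the dependencies with pip, a minimal sketch (note that pytorch is published on PyPI as torch, and its version is unpinned above):

pip install torch pytorch-lightning==0.9.0 transformers==3.3.1 rouge_score nltk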
Data
The Cochrane dataset that we used can be found in the data directory. To scrape your own dataset from the Cochrane website, run the following command, which creates a directory called scraped_data and populates it with the raw dataset data.json.
python prepare_data/scrape.py
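As a quick sanity check (assuming data.json is a single JSON file of scraped records), you can print the number of top-level entries:

python -c "import json; print(len(json.load(open('scraped_data/data.json'))))"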
Then run the following command to process the raw dataset generated by the above script. It creates a cleaned-up and length-filtered version of the original dataset, which can be found at scraped_data/data_final_1024.json.
python prepare_data/process.py
Finally, create a train-val-test split of this dataset by running the following command, which creates the directory scraped_data/data-1024 containing the split data.
python prepare_data/split_dataset.py
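To verify that the split was created, list the contents of the new directory:

ls scraped_data/data-1024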
To use this dataset for training or text generation, copy the directory into the data folder.
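For example, from the repository root:

cp -r scraped_data/data-1024 data/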
Train Model
There are 4 different training settings explored in the paper: no unlikelihood training, and unlikelihood training with 3 different sets of weights used in the loss function (Cochrane, Newsela, and both). To train a model under one of these settings, run one of the following scripts: scripts/train/bart-no-ul.sh or scripts/bart-ul_{cochrane, newsela, both}.sh.
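For example, to train without unlikelihood loss (assuming the script is run from the repository root):

bash scripts/train/bart-no-ul.sh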
Pretrained Models
The pretrained models corresponding to the 4 training settings can be found here. To use these models, unzip the directories and place them in the trained_models directory.
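For example, assuming a downloaded archive named bart-no-ul.zip (the actual archive names may differ):

unzip bart-no-ul.zip -d trained_models/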
Generate Text
To generate text using trained models, run one of the following scripts: scripts/gen/bart_gen_{no-ul, cochrane, newsela, both}.sh. Decoding hyperparameters can be controlled by modifying the command-line arguments listed in the scripts. Both a text and a JSON version of the generated text will be written to the directory containing the model (e.g. trained_models/bart-ul_cochrane).
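For example, to generate with the model trained with Cochrane-derived unlikelihood weights:

bash scripts/gen/bart_gen_cochrane.sh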