[REVIEW]: tipr: An R package for sensitivity analyses for unmeasured confounders

editorialbot opened this issue 2 years ago • 43 comments

Submitting author: @LucyMcGowan (Lucy D'Agostino McGowan)
Repository: https://github.com/LucyMcGowan/tipr
Branch with paper.md (empty if default branch): joss
Version: v1.0.0
Editor: @fabian-s
Reviewers: @gcskoenig, @MichaelSchomaker
Archive: 10.5281/zenodo.6958926

Status

Status badge code:

HTML: <a href="https://joss.theoj.org/papers/44597f0b3015b6efd8de2edbe7894641"><img src="https://joss.theoj.org/papers/44597f0b3015b6efd8de2edbe7894641/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/44597f0b3015b6efd8de2edbe7894641/status.svg)](https://joss.theoj.org/papers/44597f0b3015b6efd8de2edbe7894641)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@gcskoenig & @MichaelSchomaker, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If you have any questions/concerns, please let @fabian-s know.

Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest.

Checklists

📝 Checklist for @gcskoenig

📝 Checklist for @MichaelSchomaker

editorialbot avatar Jun 22 '22 06:06 editorialbot

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf

editorialbot avatar Jun 22 '22 06:06 editorialbot

Software report:

github.com/AlDanial/cloc v 1.88  T=0.03 s (1035.7 files/s, 110678.1 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
R                               19            188            740           1460
Markdown                         4             88              0            312
TeX                              1             14              0            154
YAML                             4              8              6             51
Rmd                              1             28             38             12
-------------------------------------------------------------------------------
SUM:                            29            326            784           1989
-------------------------------------------------------------------------------


gitinspector failed to run statistical information for the repository

editorialbot avatar Jun 22 '22 06:06 editorialbot

Wordcount for paper.md is 1940

editorialbot avatar Jun 22 '22 06:06 editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1093/jnci/22.1.173 is OK
- 10.1093/oxfordjournals.aje.a112581 is OK
- 10.1016/0021-9681(66)90062-2 is OK
- 10.1097/EDE.0000000000000457 is OK
- 10.1111/j.2517-6161.1983.tb01242.x is OK
- 10.2307/2533848 is OK
- 10.7326/M16-2607 is OK
- 10.1111/rssb.12348 is OK
- 10.1007/978-0-387-87959-8 is OK
- 10.1002/(SICI)1097-0258(19981015)17:19<2265::AID-SIM918>3.0.CO;2-B is OK
- 10.1037/h0037350 is OK
- 10.1214/12-aos1058 is OK
- 10.1159/000315883 is OK
- 10.3102/10769986011003207 is OK

MISSING DOIs

- None

INVALID DOIs

- None

editorialbot avatar Jun 22 '22 06:06 editorialbot

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

editorialbot avatar Jun 22 '22 06:06 editorialbot

👋🏼 @LucyMcGowan @gcskoenig @MichaelSchomaker this is the review thread for the paper. All of our communications will happen here from now on.

As a reviewer, the first step is to create a checklist for your review by entering

@editorialbot generate my checklist

at the top of a new comment in this thread.

These checklists contain the JOSS requirements. As you go over the submission, please check any items that you feel have been satisfied. The first comment in this thread also contains links to the JOSS reviewer guidelines.

The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please mention openjournals/joss-reviews#REVIEW_NUMBER so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.

We aim for reviews to be completed within about 2-4 weeks. Please let me know if any of you require some more time. We can also use EditorialBot (our bot) to set automatic reminders if you know you'll be away for a known period of time.

Please feel free to ping me (@fabian-s) if you have any questions/concerns. Sorry this spent so much time in pre-review.

fabian-s avatar Jun 22 '22 06:06 fabian-s

Review checklist for @gcskoenig

Conflict of interest

  • [x] I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • [x] Repository: Is the source code for this software available at https://github.com/LucyMcGowan/tipr?
  • [ ] License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • [x] Contribution and authorship: Has the submitting author (@LucyMcGowan) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • [x] Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • [x] Installation: Does installation proceed as outlined in the documentation?
  • [x] Functionality: Have the functional claims of the software been confirmed?
  • [x] Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • [ ] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • [x] Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • [x] Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • [ ] Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • [ ] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • [x] Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • [x] A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • [ ] State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • [x] Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • [x] References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

Comments

  • In the repository README it says that the license is MIT, but the license file only contains copyright information (https://github.com/LucyMcGowan/tipr/blob/master/LICENSE).
  • In the documentation on your website the CRAN installation could be mentioned. Did you actively decide not to do so?
  • It would be nice to run the tests with Travis or GitHub Actions. There is a tests folder and a testthat script with which the tests can be run. All tests passed.
  • The paper includes an overview of all implemented functions and their respective functionality (Section Functions). According to the review criteria, the API should not be detailed in the paper, but in the software documentation. I would encourage you to shorten or remove this section.
    • If you want to detail the API in the paper (which is not encouraged), it would be helpful to have a table accompanying the text that explains each component in the grammar, i.e. what is smd, exposed_p, exposure_r2, ....
  • Other methods are mentioned, but the paper does not provide an overview of existing R packages and what they do.
  • Spaces are missing before the citations (e.g. line 85), which does not seem to be the JOSS convention.
  • Line 86, should be "for"
  • In the paper, the functionality may be visualized with a figure, e.g. a simple causal graph that illustrates the aspects of the confounder that can be modified and an overview of the methods which can be used.

gcskoenig avatar Jun 22 '22 07:06 gcskoenig

Review checklist for @MichaelSchomaker

Conflict of interest

  • [x] I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • [x] Repository: Is the source code for this software available at https://github.com/LucyMcGowan/tipr?
  • [ ] License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • [x] Contribution and authorship: Has the submitting author (@LucyMcGowan) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • [x] Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • [x] Installation: Does installation proceed as outlined in the documentation?
  • [x] Functionality: Have the functional claims of the software been confirmed?
  • [x] Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [ ] Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • [x] Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • [x] Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • [ ] Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • [ ] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • [x] Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • [x] A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • [x] State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • [x] Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • [x] References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

MichaelSchomaker avatar Jun 29 '22 14:06 MichaelSchomaker

  • The dependencies are not explicitly stated in the paper.
  • The License file needs more information.
  • There are no details on how to report issues/problems with the software.

Comments about content: while the space in JOSS is limited, I have a few smaller suggestions and questions that could be addressed in the paper in 1-2 sentences each:

  • is the method also applicable to more complex back-door paths, e.g. A <- L_1 -> L_2 -> Y? Or, to frame it differently: do the calculations require direct effects of the confounder on the outcome and on the exposure, respectively, and how does the method apply to confounder-exposure/outcome relationships that are mediated through other variables? If a direct-effect relationship is required, it should be mentioned.
  • the package contains 3 types of functions, the second for "Functions that calculate the magnitude of an unmeasured confounder needed to tip an observed effect to cross the null, rendering it inconclusive". In the abstract, the author mentions "we can use these quantities to assess how an unmeasured confounder may tip our result to insignificance, rendering the study inconclusive." The wording and functions represent a dichotomous viewpoint that may be similarly dangerous as binary p-value cutoffs or as looking at overlaps of lower/upper confidence limits with null values. A CI shows the values of the parameter which, when combined with the background assumptions, produce a test model that is highly compatible with the data in the sense of having less than a certain amount of bits of information against it; see for example Rafi and Greenland, "Semantic and cognitive tools to aid statistical science: replace confidence and significance by compatibility and surprise". This issue is also discussed in the original VanderWeele article, which is a main reference in this JOSS article: "We do not propose any threshold cutoff for the E-value. Enough mischief has been done by the arbitrary 0.05 P-value cutoff. The E-value, like the P-value, is a continuous measure." Long story short: I would, in the abstract and function description, emphasize the continuous nature of the approach and the potential danger of applying the (second set of) functions in a way that looks only at when "an observed effect [would] cross the null, rendering it inconclusive". If I had the choice, I would actually strongly favor an E-value example over the tipping point example. (A minimal sketch of the E-value computation follows this list.)
  • I would clearly define standardized mean difference, to avoid misunderstandings and misinterpretations.
  • Other than that, I think this is a really useful R package with a well-written summary.
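
For readers weighing the E-value suggestion above, here is a minimal sketch of the computation from VanderWeele & Ding (2017); the helper name e_value is illustrative only and not necessarily the package's API:

# E-value for a risk ratio (VanderWeele & Ding, 2017): the minimum strength
# of association an unmeasured confounder would need with both the exposure
# and the outcome to fully explain away the observed RR. Estimates below 1
# are inverted before applying the formula.
e_value <- function(rr) {
  rr <- ifelse(rr < 1, 1 / rr, rr)
  rr + sqrt(rr * (rr - 1))
}

e_value(1.8)  # = 3.0: a confounder with RR >= 3 for both associations
              # could explain away an observed RR of 1.8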

MichaelSchomaker avatar Jun 29 '22 16:06 MichaelSchomaker

👋🏼 @LucyMcGowan could you update us on your timeline to adress the reviewers' comments?

fabian-s avatar Jul 18 '22 08:07 fabian-s

Thank you so much! I hope to have these addressed by next week.

LucyMcGowan avatar Jul 23 '22 19:07 LucyMcGowan

Thank you so much for the reviews, they certainly helped make the paper better! Please find my point-by-point responses below.

Reviewer 1

  • In the repository README it says that the license is MIT, but the license file only contains copyright information (https://github.com/LucyMcGowan/tipr/blob/master/LICENSE).

I have added a license.md file to the repository.

  • In the documentation on your website the CRAN installation could be mentioned. Did you actively decide not to do so?

I have updated the README to include this.
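
For reference, the two installation routes now documented look roughly like this (the GitHub route assumes the remotes package is available):

install.packages("tipr")                      # released version from CRAN
remotes::install_github("LucyMcGowan/tipr")   # development version from GitHub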

  • It would be nice to run the tests with Travis or GitHub Actions. There is a tests folder and a testthat script with which the tests can be run. All tests passed.

I have added the tests via GitHub Actions.
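
For anyone verifying locally, the suite can also be run outside CI; a minimal sketch assuming devtools/testthat are installed:

devtools::test()        # run the testthat suite from the package root
testthat::test_local()  # equivalent, without devtools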

  • The paper includes an overview of all implemented functions and their respective functionality (Section Functions). According to the review criteria, the API should not be detailed in the paper, but in the software documentation. I would encourage you to shorten or remove this section.

I have removed this section and amended the previous one to list the various methods (rather than all function names).

  • If you want to detail the API in the paper (which is not encouraged), it would be helpful to have a table accompanying the text that explains each component in the grammar, i.e. what is smd, exposed_p, exposure_r2, ....

I have removed this section.

  • Other methods are mentioned, but the paper does not provide an overview of existing R packages and what they do.

I have added the following text to describe the current (single package) landscape rather than provide a full package overview:

There are several related methods for conducting sensitivity analyses for unmeasured confounders [@Cornfield; @Bross; @Schlesselman; @Rosenbaum:1983; @Lin; @lash2009applying; @rosenbaum1986dropping; @cinelli2020making; @VanderWeele:2017ki; @Ding], some of which have their own R packages: for example, methods in @cinelli2020making can be implemented using the sensemakr R package, obsSens implements methods from @Lin, and EValue implements methods in @VanderWeele:2017ki. However, there is not currently a single R package that has a unified grammar allowing the user to conduct appropriate sensitivity analyses for their study.
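
To illustrate the unified-grammar claim, a hedged sketch of a tip-style call; the function and argument names follow the revised package and should be checked against the current documentation:

library(tipr)

# Given a hypothetical confounder-outcome risk ratio of 2, solve for the
# exposure-confounder imbalance needed to tip an observed RR of 1.5
tip_rr(effect_observed = 1.5, confounder_outcome_effect = 2)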

  • Spaces are missing before the citations (e.g. line 85), which does not seem to be the JOSS convention.
  • Line 86, should be "for".

I have corrected this spacing.

  • In the paper, the functionality may be visualized with a figure, e.g. a simple causal graph that illustrates the aspects of the confounder that can be modified and an overview of the methods which can be used.

I have added a DAG as well as an additional figure to the final example
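
A figure along these lines can be sketched with the ggdag package (not a tipr dependency; shown only as one way to draw such a graph):

library(ggdag)  # attaches ggplot2

# exposure x, outcome y, and an unmeasured confounder u on the back-door path
dag <- dagify(y ~ x + u,
              x ~ u,
              exposure = "x", outcome = "y", latent = "u")
ggdag(dag) + theme_dag()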

Reviewer 2

  • The dependencies are not explicitly stated in the paper.

The dependencies are in the DESCRIPTION file of the package itself (and handled by CRAN on installation). I would be happy to include this information in the paper; however, I couldn't find any other R papers that do -- is there a standard way to include this information?
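
For completeness, the declared dependencies can be inspected directly in R, e.g.:

# read the installed package's DESCRIPTION fields
utils::packageDescription("tipr", fields = c("Depends", "Imports", "Suggests"))

# or resolve them against CRAN metadata
tools::package_dependencies("tipr", db = available.packages())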

  • The License file needs more information.

I have added a license.md file to the repository

  • There are no details on how to report issues/problems with the software.

I have added a bugreports link to the DESCRIPTION file as well as a code of conduct to the repository (and linked in the README)

Comments about content: while the space in JOSS is limited, I have a few smaller suggestions and questions that could be addressed in the paper in 1-2 sentences each:

  • is the method also applicable to more complex back-door paths, e.g. A <- L_1 -> L_2 -> Y? Or, to frame it differently: do the calculations require direct effects of the confounder on the outcome and on the exposure, respectively, and how does the method apply to confounder-exposure/outcome relationships that are mediated through other variables? If a direct-effect relationship is required, it should be mentioned.

Mathematically, this is a bias adjustment, so it would account for any direct effects (if several correlated confounders are missing, you can think of this as measuring a summary score of their independent effects). I have changed the word "confounder" to "confounding" in several places to indicate that this is accounting for all "confounding". For example, the Normally distributed U could be a linear combination of several confounders.
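
A toy simulation (illustrative only, not package code) of the point that a linear combination of correlated Normal confounders is itself Normal, and so can stand in as a single summary U:

set.seed(1)
n  <- 1e4
l1 <- rnorm(n)
l2 <- 0.6 * l1 + rnorm(n, sd = 0.8)  # correlated with l1
u  <- 0.5 * l1 + 0.5 * l2            # single Normal summary of both confounders
c(mean = mean(u), sd = sd(u))        # U is Normal with these moments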

  • the package contains 3 types of functions, the second for "Functions that calculate the magnitude of an unmeasured confounder needed to tip an observed effect to cross the null, rendering it inconclusive". In the abstract, the author mentions "we can use these quantities to assess how an unmeasured confounder may tip our result to insignificance, rendering the study inconclusive." The wording and functions represent a dichotomous viewpoint that may be similarly dangerous as binary p-value cutoffs or as looking at overlaps of lower/upper confidence limits with null values. A CI shows the values of the parameter which, when combined with the background assumptions, produce a test model that is highly compatible with the data in the sense of having less than a certain amount of bits of information against it; see for example Rafi and Greenland, "Semantic and cognitive tools to aid statistical science: replace confidence and significance by compatibility and surprise". This issue is also discussed in the original VanderWeele article, which is a main reference in this JOSS article: "We do not propose any threshold cutoff for the E-value. Enough mischief has been done by the arbitrary 0.05 P-value cutoff. The E-value, like the P-value, is a continuous measure." Long story short: I would, in the abstract and function description, emphasize the continuous nature of the approach and the potential danger of applying the (second set of) functions in a way that looks only at when "an observed effect [would] cross the null, rendering it inconclusive". If I had the choice, I would actually strongly favor an E-value example over the tipping point example.

Thank you for this important comment. I completely agree with the concern about dichotomizing. A main motivation for performing these tipping point analyses is that they suggest not only a different effect, but potentially a completely different direction of the effect itself, which could completely alter conclusions. I have added the following line to the summary:

The adjust functions allow an investigator to examine how a specific confounder (or set of specific confounders) would change a result, while the tip functions provide sensitivity analyses that allow an investigator to examine how extreme an unmeasured confounder would need to be in order to change the direction of the effect, and thus often the conclusions of the study.
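
A hedged sketch contrasting the two families (names follow the revised package and are worth verifying against the docs):

# adjust_*: how would a confounder of *specified* strength change the estimate?
adjust_coef(effect_observed = 0.5,
            exposure_confounder_effect = 0.2,
            confounder_outcome_effect = 0.3)

# tip_*: how strong must the confounder-outcome effect be to tip the estimate
# across the null, given the specified exposure-confounder imbalance?
tip_coef(effect_observed = 0.5, exposure_confounder_effect = 0.2)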

  • I would clearly define standardized mean difference, to avoid misunderstandings and misinterpretations.

I have added the following text to specify what I mean by "standardized" as well as explained the difference in means:

If quantifying the impact of a standardized Normally distributed confounder, the impact of the unmeasured confounder on the exposure is parameterized as a difference in means between the unmeasured confounder in the exposed population and the unexposed population. By "standardized Normally distributed" we mean that the unmeasured confounder is Normally distributed with mean $\mu_1$ for the exposed and $\mu_0$ for the unexposed and unit variance. (Note that a standardized Normally distributed confounder can be created from a Normally distributed confounder by dividing by its standard deviation.)
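
Illustrative arithmetic for the definition above (not package code): dividing a Normal confounder by its standard deviation gives unit variance, putting the exposed/unexposed difference in means on the standardized scale:

set.seed(1)
u_exposed   <- rnorm(5000, mean = 1, sd = 2)
u_unexposed <- rnorm(5000, mean = 0, sd = 2)
s <- sd(c(u_exposed, u_unexposed))   # pooled standard deviation

mean(u_exposed / s) - mean(u_unexposed / s)  # approx (1 - 0) / 2 = 0.5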

I have also changed the name of the parameter since smd can have different meanings

  • Other than that, I think this is a really useful R package with a well-written summary.

Thank you! Thank you again for reading through this paper, I really appreciate the comments.

LucyMcGowan avatar Aug 03 '22 00:08 LucyMcGowan

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

editorialbot avatar Aug 03 '22 00:08 editorialbot

@editorialbot generate pdf

LucyMcGowan avatar Aug 03 '22 01:08 LucyMcGowan

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

editorialbot avatar Aug 03 '22 01:08 editorialbot

@editorialbot check references

fabian-s avatar Aug 03 '22 07:08 fabian-s

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1093/jnci/22.1.173 is OK
- 10.1093/oxfordjournals.aje.a112581 is OK
- 10.1016/0021-9681(66)90062-2 is OK
- 10.1097/EDE.0000000000000457 is OK
- 10.1111/j.2517-6161.1983.tb01242.x is OK
- 10.2307/2533848 is OK
- 10.7326/M16-2607 is OK
- 10.1111/rssb.12348 is OK
- 10.1007/978-0-387-87959-8 is OK
- 10.1002/(SICI)1097-0258(19981015)17:19<2265::AID-SIM918>3.0.CO;2-B is OK
- 10.1037/h0037350 is OK
- 10.1214/12-aos1058 is OK
- 10.1159/000315883 is OK
- 10.3102/10769986011003207 is OK

MISSING DOIs

- None

INVALID DOIs

- None

editorialbot avatar Aug 03 '22 07:08 editorialbot

thank you @LucyMcGowan, excellent. I added some very minor comments on formatting / language directly in the diffs of your commits.

@MichaelSchomaker @gcskoenig are you satisfied with these changes as well?

re:

The dependencies are not explicitly stated in the paper.

that's fine for a JOSS paper on an R package since the dependencies are explicit in the package's DESCRIPTION

fabian-s avatar Aug 03 '22 07:08 fabian-s

Thank you so much! I tried to add a line space again and I think it fixed the caption?

LucyMcGowan avatar Aug 03 '22 12:08 LucyMcGowan

@editorialbot generate pdf

LucyMcGowan avatar Aug 03 '22 12:08 LucyMcGowan

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

editorialbot avatar Aug 03 '22 12:08 editorialbot

Thank you so much! I tried to add a line space again and I think it fixed the caption?

looks like it :)

I think I'll just go ahead and make an editorial decision to accept this now; seems fine to me. (Sorry Gunnar & Michael, I want this done before my vacation.)

@LucyMcGowan At this point could you please:

  • [ ] Make a tagged release of your software, and list the version tag of the archived version here.
  • [ ] Archive the reviewed software in Zenodo or a similar service (e.g., figshare, an institutional repository)
  • [ ] Check the archival deposit (e.g., in Zenodo) has the correct metadata. This includes the title (should match the paper title) and author list (make sure the list is correct and people who only made a small fix are not on it). You may also add the authors' ORCID.
  • [ ] Please list the DOI of the archived version here.

I can then move forward with accepting the submission.

fabian-s avatar Aug 03 '22 15:08 fabian-s

  • [x] Make a tagged release of your software, and list the version tag of the archived version here: https://github.com/LucyMcGowan/tipr/releases/tag/v1.0.0
  • [x] Archive the reviewed software in Zenodo or a similar service (e.g., figshare, an institutional repository): 10.5281/zenodo.6958926
  • [x] Check the archival deposit (e.g., in Zenodo) has the correct metadata. This includes the title (should match the paper title) and author list (make sure the list is correct and people who only made a small fix are not on it). You may also add the authors' ORCID.
  • [x] Please list the DOI of the archived version here. 10.5281/zenodo.6958926

LucyMcGowan avatar Aug 03 '22 16:08 LucyMcGowan

@editorialbot set 10.5281/zenodo.6958926 as archive

fabian-s avatar Aug 04 '22 06:08 fabian-s