[REVIEW]: Castellum: A participant management tool for scientific studies
Submitting author: @jagnobli (Timo Göttel)
Repository: https://git.mpib-berlin.mpg.de/castellum/castellum
Branch with paper.md (empty if default branch): joss-submission
Version: v0.82.0
Editor: @oliviaguest
Reviewers: @samhforbes, @htwangtw
Archive: 10.5281/zenodo.7177876
Status
Status badge code:
HTML: <a href="https://joss.theoj.org/papers/ee9206f83dd18bdab2062f3e23d778b0"><img src="https://joss.theoj.org/papers/ee9206f83dd18bdab2062f3e23d778b0/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/ee9206f83dd18bdab2062f3e23d778b0/status.svg)](https://joss.theoj.org/papers/ee9206f83dd18bdab2062f3e23d778b0)
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@samhforbes & @htwangtw, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all you need to run this command in a separate comment to create the checklist:
@editorialbot generate my checklist
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If you have any questions or concerns, please let @oliviaguest know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
github.com/AlDanial/cloc v 1.88 T=0.36 s (1302.9 files/s, 103336.0 lines/s)
-------------------------------------------------------------------------------
Language files blank comment code
-------------------------------------------------------------------------------
Python 296 4543 2890 16674
HTML 125 667 1 4966
PO File 2 756 1208 2236
Markdown 12 427 0 1231
JSON 8 0 0 664
JavaScript 12 55 7 479
CSS 1 25 3 161
YAML 2 5 3 140
make 1 14 0 47
Bourne Shell 2 12 5 32
Dockerfile 1 11 0 25
SVG 6 0 0 22
INI 1 3 0 19
TeX 1 1 0 16
PHP 1 0 0 9
-------------------------------------------------------------------------------
SUM: 471 6519 4117 26721
-------------------------------------------------------------------------------
gitinspector failed to run statistical information for the repository
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- None
MISSING DOIs
- 10.17487/rfc6238 may be a valid DOI for title: TOTP: Time-Based One-Time Password Algorithm
INVALID DOIs
- None
Wordcount for paper.md is 974
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Review checklist for @samhforbes
Conflict of interest
- [x] I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.
Code of Conduct
- [x] I confirm that I read and will adhere to the JOSS code of conduct.
General checks
- [x] Repository: Is the source code for this software available at https://git.mpib-berlin.mpg.de/castellum/castellum?
- [x] License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
- [x] Contribution and authorship: Has the submitting author (@jagnobli) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
- [x] Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
Functionality
- [x] Installation: Does installation proceed as outlined in the documentation?
- [x] Functionality: Have the functional claims of the software been confirmed?
- [x] Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)
Documentation
- [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
- [ ] Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
- [x] Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
- [x] Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
- [x] Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
- [x] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support
Software paper
- [x] Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
- [x] A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
- [x] State of the field: Do the authors describe how this software compares to other commonly-used packages?
- [x] Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
- [x] References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?
Review checklist for @htwangtw
Conflict of interest
- [x] I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.
Code of Conduct
- [x] I confirm that I read and will adhere to the JOSS code of conduct.
General checks
- [x] Repository: Is the source code for this software available at https://git.mpib-berlin.mpg.de/castellum/castellum?
- [x] License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
- [x] Contribution and authorship: Has the submitting author (@jagnobli) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
- [x] Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
Functionality
- [x] Installation: Does installation proceed as outlined in the documentation?
- [x] Functionality: Have the functional claims of the software been confirmed?
- [x] Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)
Documentation
- [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
- [x] Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
- [x] Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
- [x] Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
- [x] Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
- [x] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support
Software paper
- [x] Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
- [x] A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
- [x] State of the field: Do the authors describe how this software compares to other commonly-used packages?
- [x] Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
- [x] References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?
@jagnobli can you check out that missing DOI when you get a sec — no big rush?
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
👋 @samhforbes, @htwangtw: thank you both so much for agreeing to review this! ☺️
For any code-related questions, feel free to open issues on the package's repository itself and then link from here to whatever issue you open. That way we can all keep track of what is being asked of the authors from this thread. For anything important about the PDF or the package, such as high-level questions to me or @jagnobli, discussion, and feedback, just leave a comment directly here. ✨ 🌷
@jagnobli, this looks exciting. As far as I can see, we would need an MPIB git account to log in and create issues, so I currently can't raise issues on the repository. Is this the case, or have I missed something? (The latter is very possible...)
> @jagnobli can you check out that missing DOI when you get a sec — no big rush?
Sure thing, @oliviaguest, will do that next week...
> @jagnobli, this looks exciting. As far as I can see we would need a mpib git account to log in and create issues, so I currently can't raise issues on the repository. Is this the case or have I missed something (the latter is very possible...)
@samhforbes I wasn't aware that issues should be opened over there - this is a bit annoying, sorry. I will ask a colleague to set up a guest account for you (and @htwangtw as well). Could you please PM me the email address that should be used for the account?
> @jagnobli, this looks exciting. As far as I can see we would need a mpib git account to log in and create issues, so I currently can't raise issues on the repository. Is this the case or have I missed something (the latter is very possible...)
Ah, true. Sorry and thanks! I actually noticed that and then forgot. @openjournals/joss-eics (see above comment by @samhforbes) shall we just post here in this thread — thoughts? Would that work, @jagnobli, even though it probably makes things harder for you to keep track of?
This isn't just a problem for the reviewer; it's also a problem for meeting the Community guidelines criterion ("Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support"). In some cases, an option has been to create a GitHub mirror of the software for the purpose of issues and contributions, but it's up to the author to figure out how the software will meet this criterion.
> @jagnobli, this looks exciting. As far as I can see we would need a mpib git account to log in and create issues, so I currently can't raise issues on the repository. Is this the case or have I missed something (the latter is very possible...)
> @samhforbes I wasn't aware that issues should be opened over there - this is a bit annoying, sorry. Will ask a colleague to set up a guest account for you (and @htwangtw as well). Could you please pm me your mail address that should be used as account?
Thanks, though that would likely only be a first step: given @danielskatz's comment above, your code currently does not meet our guidelines.
hmmm, perhaps I am a little small-minded here (sorry):
I actually think that we do provide clear "guidelines for third parties wishing to [...] Contribute [...] Report issues [...] Seek support" in the CONTRIBUTING markdown file of our repo.
So, from my point of view the community guidelines should be more explicit if our approach is insufficient...
What do you think?
@jagnobli this is a bit of an unusual case, so I will discuss it with the other @openjournals/joss-eics. For the time being, can you give @samhforbes and @htwangtw access to the repo, please?
The paper should now include the missing DOI:
@editorialbot generate pdf
@editorialbot generate pdf
@jagnobli the @editorialbot command has to be a comment on its own to work 😊
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.17487/rfc6238 is OK
MISSING DOIs
- None
INVALID DOIs
- None
I see @jagnobli and @samhforbes are discussing how to make the installation documentation clearer, here: https://git.mpib-berlin.mpg.de/castellum/castellum/-/issues/158#note_26799. It might be useful for @htwangtw to check out too.
In general, please link to any issues you all open relating to this, so we can all keep track.
Hi @jagnobli, from the example deployment this looks exciting. I appreciate there are merge requests where extensive additions are being made to the docs; however, I will comment on the main branch at this stage.
To avoid talking at cross purposes on the project repo, I'm going to highlight some issues as I see them here, and possibly open isolated issues where appropriate on the project repo. The reason is that some of these are wider-scope issues or relate more to the paper than to the software; others may want the watchful and wise input of @oliviaguest.
Installation: https://git.mpib-berlin.mpg.de/castellum/castellum/-/issues/158#note_26799. I currently can't verify installation, only an example deployment, which works. Note this works on AMD64 but not ARM64, which might be worth documenting.
Documentation is where I have a few things to highlight.
- The README could point the various audiences to the correct documentation (this would fulfil the aim of clearly stating the target audience), e.g. those interested in running an example see xxx, those interested in setting up a test server see yyy.
- Installation instructions. I can't currently do a standalone install, so I have no idea if these are complete, but from the errors I get, a Java runtime appears to be required, which I don't see mentioned in your documentation (though I may have missed something). Again, information on which systems can run this would be welcome here.
- Example usage: please make sure the documentation clearly explains how to log in to the test deployment.
- Automated tests: I can see you have tests and CI here
- Community guidelines. From what I can see, the CONTRIBUTING markdown document contains this; however, the contact link takes me to the MPIB lifespan psychology homepage, and from there I can't easily find links to Castellum or contact details for the authors.
Software paper:
- Some discussion of the state of the field, comparison with existing platforms would be good to meet these guidelines.
Hi @samhforbes,
thanks for your valuable feedback. We will work on your requests. For some of your points, though, I have direct answers or clarification questions:
> - Installation instructions.
Not sure if this helps: we currently see our main target group as facilities that provide this service to their researchers (rather than individual researchers installing it locally on their machines). Therefore, to us "installation" means "setting up a local test deployment to check if it is worth it", and you were able to set this up... maybe you could elaborate a little more on what's missing from your point of view?
(your technical question will be discussed in the other repo issue board)
> - Automated tests: I can see you have tests and CI here
This statement doesn't seem to contain a request. Should we do something about this?
> - Community guidelines. From what I can see the contributing md document contains this, _however_ the link given to contact takes me to the mpib lifespan psychology homepage, and I can't see clear links to castellum from there, nor the contact details for the authors with any great ease.
We simply forgot to update the link to the new location. This will be fixed soon. Do you think that would be enough to tick it off in the checklist?
> Software paper:
> - Some discussion of the state of the field, comparison with existing platforms would be good to meet these guidelines.
I am a little unsure here, as the original checklist says "packages" (maybe a question for @oliviaguest?): does this mean we should only present other open source tools with a similar feature set, or should we also present commercial online services with similar feature sets? I would have trouble finding the latter helpful: there are some, but reviewing them would be a different scope entirely. To me it comes down to "make or buy", and it feels somehow strange to put this in a paper about an open source tool.
Hi @jagnobli Excellent stuff.
- Installation instructions. Yes, some of this is at your discretion of course, since the audience is specific, and I can understand that facilities / centres / departments would be running this, not individuals; I think you are reasonably clear about this in the docs. However, the local test deployment works only on some machines, and the login instructions were not well documented, so perhaps instructions could be added for the test deployment. For the actual installation as intended, my point is more that I can't test it as it stands (since I am not the intended installer). So as long as the instructions are adequate for a facility manager to set this up, that meets the remit.
- Automated tests. Yup, sorry for the confusion; I was just working my way through the list and noted that your testing seemed good!
- Community guidelines. I see this is now updated and links correctly to contact details. Ticked and done.
- Software paper. I'm happy for @oliviaguest to advise here as well, but (and you know the state of the field better than me) Castellum seems to be fairly unique in what it provides. If no other platform offers a comprehensive, open-source, GDPR-compliant system, then I would say this should definitely be mentioned as a strength of Castellum in the paper :)
From where I stand, I like the platform and the way it runs, and am impressed with the work done on it by the three authors. I have tested installation and functionality of the example deployment and am satisfied, pending the documentation of installation details and other issues mentioned above and in the linked issue. Obviously I have only tested the example deployment, so some details of a full install I have to take at face value.
@editorialbot generate pdf