[REVIEW]: Learning Machine Learning with Lorenz-96
Submitting author: @dhruvbalwada (Dhruv Balwada)
Repository: https://github.com/m2lines/L96_demo
Branch with paper.md (empty if default branch):
Version: v1.0.3
Editor: @magsol
Reviewers: @micky774, @AnonymousFool
Archive: 10.5281/zenodo.13357587
Paper kind: learning module
Status
Status badge code:
HTML: <a href="https://jose.theoj.org/papers/c644a0264f445698f212a051d8ace6e8"><img src="https://jose.theoj.org/papers/c644a0264f445698f212a051d8ace6e8/status.svg"></a>
Markdown: [![status](https://jose.theoj.org/papers/c644a0264f445698f212a051d8ace6e8/status.svg)](https://jose.theoj.org/papers/c644a0264f445698f212a051d8ace6e8)
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@micky774 & @AnonymousFool, your review will be checklist-based. Each of you will have a separate checklist that you should update when carrying out your review. First of all, you need to run this command in a separate comment to create the checklist:
@editorialbot generate my checklist
The reviewer guidelines are available here: https://openjournals.readthedocs.io/en/jose/reviewer_guidelines.html. If you have any questions or concerns, please let @magsol know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
```
github.com/AlDanial/cloc v 1.90  T=0.08 s (584.4 files/s, 331163.2 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Jupyter Notebook                28              0          16704           7734
Python                           6            405            581           1335
TeX                              2             37              1            388
Markdown                         6             71              0            290
YAML                             5             10             27            183
SVG                              2              0              0              2
-------------------------------------------------------------------------------
SUM:                            49            523          17313           9932
-------------------------------------------------------------------------------
```
Commit count by author:
```
58  Shubham Gupta
57  Alistair Adcroft
45  Ryan Abernathey
29  Shantanu Acharya
25  pre-commit-ci[bot]
23  dhruvbalwada
20  Dhruv Balwada
17  Mohamed Aziz Bhouri
16  Johanna Goldman
14  Laure Zanna
 9  Brandon Reichl
 7  Feiyu Lu
 7  Yani Yoval
 5  Nora Loose
 5  Pierre Gentine
 4  lesommer
 3  Andrew Ross
 3  Arthur
 3  Lorenzo Zampieri
 3  Ziwei Li
 2  Mitch Bushuk
 2  Sara Shamekh
 1  Alex Connolly
 1  William-gregory
 1  chzhangudel
```
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- None
MISSING DOIs
- 10.1017/cbo9780511617652.004 may be a valid DOI for title: Predictability: a problem partly solved
INVALID DOIs
- None
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Paper file info:
📄 Wordcount for paper.md is 1350
🔴 Failed to discover a Statement of need section in paper
Review checklist for @Micky774
Conflict of interest
- [x] As the reviewer I confirm that I have read the JOSE conflict of interest policy and that there are no conflicts of interest for me to review this work.
Code of Conduct
- [x] I confirm that I read and will adhere to the JOSE code of conduct.
General checks
- [x] Repository: Is the source for this learning module available at https://github.com/m2lines/L96_demo?
- [x] License: Does the repository contain a plain-text LICENSE file with the contents of a standard license? (OSI-approved for code, Creative Commons for content)
- [x] Version: Does the release version given match the repository release?
- [x] Authorship: Has the submitting author (@dhruvbalwada) made visible contributions to the module? Does the full list of authors seem appropriate and complete?
Documentation
- [x] A statement of need: Do the authors clearly state the need for this module and who the target audience is?
- [x] Installation instructions: Is there a clearly stated list of dependencies?
- [x] Usage: Does the documentation explain how someone would adopt the module, and include examples of how to use it?
- [x] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the module 2) Report issues or problems with the module 3) Seek support
Pedagogy / Instructional design (Work-in-progress: reviewers, please comment!)
- [x] Learning objectives: Does the module make the learning objectives plainly clear? (We don't require explicitly written learning objectives; only that they be evident from content and design.)
- [x] Content scope and length: Is the content substantial for learning a given topic? Is the length of the module appropriate?
- [x] Pedagogy: Does the module seem easy to follow? Does it observe guidance on cognitive load? (working memory limits of 7 +/- 2 chunks of information)
- [x] Content quality: Is the writing of good quality, concise, engaging? Are the code components well crafted? Does the module seem complete?
- [x] Instructional design: Is the instructional design deliberate and apparent? For example, exploit worked-example effects; effective multi-media use; low extraneous cognitive load.
JOSE paper
- [x] Authors: Does the `paper.md` file include a list of authors with their affiliations?
- [x] A statement of need: Does the paper clearly state the need for this module and who the target audience is?
- [x] Description: Does the paper describe the learning materials and sequence?
- [x] Does it describe how it has been used in the classroom or other settings, and how someone might adopt it?
- [x] Could someone else teach with this module, given the right expertise?
- [x] Does the paper tell the "story" of how the authors came to develop it, or what their expertise is?
- [x] References: Do all archival references that should have a DOI list one (e.g., papers, datasets, software)?
Hey @Micky774 @AnonymousFool 👋 Wanted to check in on the status of your reviews, see if you needed anything or if there are any roadblocks I can help troubleshoot. Thanks!
Oh my god, well this fell off my radar somehow. That was irresponsible of me. Mea culpa.
I've got too much scheduled today to work on it, so I'll start work in earnest tomorrow.
Review checklist for @AnonymousFool
Conflict of interest
- [x] As the reviewer I confirm that I have read the JOSE conflict of interest policy and that there are no conflicts of interest for me to review this work.
Code of Conduct
- [x] I confirm that I read and will adhere to the JOSE code of conduct.
General checks
- [x] Repository: Is the source for this learning module available at https://github.com/m2lines/L96_demo?
- [x] License: Does the repository contain a plain-text LICENSE file with the contents of a standard license? (OSI-approved for code, Creative Commons for content)
- [x] Version: Does the release version given match the repository release?
- [x] Authorship: Has the submitting author (@dhruvbalwada) made visible contributions to the module? Does the full list of authors seem appropriate and complete?
Documentation
- [x] A statement of need: Do the authors clearly state the need for this module and who the target audience is?
- [x] Installation instructions: Is there a clearly stated list of dependencies?
- [x] Usage: Does the documentation explain how someone would adopt the module, and include examples of how to use it?
- [x] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the module 2) Report issues or problems with the module 3) Seek support
Pedagogy / Instructional design (Work-in-progress: reviewers, please comment!)
- [x] Learning objectives: Does the module make the learning objectives plainly clear? (We don't require explicitly written learning objectives; only that they be evident from content and design.)
- [x] Content scope and length: Is the content substantial for learning a given topic? Is the length of the module appropriate?
- [x] Pedagogy: Does the module seem easy to follow? Does it observe guidance on cognitive load? (working memory limits of 7 +/- 2 chunks of information)
- [x] Content quality: Is the writing of good quality, concise, engaging? Are the code components well crafted? Does the module seem complete?
- [x] Instructional design: Is the instructional design deliberate and apparent? For example, exploit worked-example effects; effective multi-media use; low extraneous cognitive load.
JOSE paper
- [x] Authors: Does the `paper.md` file include a list of authors with their affiliations?
- [x] A statement of need: Does the paper clearly state the need for this module and who the target audience is?
- [x] Description: Does the paper describe the learning materials and sequence?
- [x] Does it describe how it has been used in the classroom or other settings, and how someone might adopt it?
- [x] Could someone else teach with this module, given the right expertise?
- [x] Does the paper tell the "story" of how the authors came to develop it, or what their expertise is?
- [x] References: Do all archival references that should have a DOI list one (e.g., papers, datasets, software)?
Sorry for the delay, and thank you for your patience. I will be performing the first part of my review today, and hope to complete a full round by tomorrow evening, circumstances permitting.
Once again, sorry for the delay @dhruvbalwada. The good news is that the vast majority of the non-pedagogical components are already in a fantastic state, and there is no core content missing. If anything, most of these suggestions are meant to round out the existing content and offer more concrete and explicit communication that future learners can benefit from. Below is my first pass over the non-pedagogical sections.
If you have any questions about the feedback, please feel free to let me know! In particular, if there is something you'd like a more detailed discussion and dissection of, it would probably be best to open an issue in your repository corresponding to the specific piece of feedback that needs clarification. We can continue a more detailed discussion there and simply link back to it in this thread for brevity/clarity.
Non-pedagogical components review
General checks
- Please create an initial release in the repository. For details, see the GitHub docs. This should match the version provided in your application, i.e. `v1.0`.
Documentation
- Your `README.md` lacks a clear statement of need. The easiest resolution would be to add a small section describing a specific (but perhaps non-exhaustive) list of folks that may benefit from this content. You describe this a bit in your paper, albeit slightly scattered, so it should be fairly easy to add. In particular, it would be beneficial to specify whether there is any prior knowledge required for making full use of this module.
- In a similar vein, while you provide instructions for building/serving the content, the README lacks a discussion of the contextual use of the repository. Please add some words offering instructions or recommendations for using the repository as a teaching tool itself, e.g. a recommended pace/timeline, or potential adaptation of the content to suit specific needs (this is less obvious and may not be appropriate).
- While your documentation includes instructions for contributing to the module, it does not provide instructions for reporting problems or obtaining support. This could be as simple as directing users to open an issue in the repository, and perhaps including a code of conduct if appropriate. Optionally, you may provide either individual or organizational contact information if there is a commitment to maintenance / support, but this is not strictly necessary.
JOSE paper
- Your paper lacks a clear statement of need. Most of the content that would comprise the statement of need is present in the submission, however it is scattered and should instead be explicitly included in a separate section.
- Please source any data or external models you may be using as a core part of the module (as opposed to transient or one-time use).
- Please include some more context regarding the tooling your module covers and its role in the field. Specifically, please explicitly mention and cite some other models/solutions that accomplish similar tasks to the L96 model your module focuses on (a brief sketch of the model follows below for context). It is reasonable to expect future users to gain much value from a submission that includes relevant citations, as they can use those citations as further reading.
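(For thread readers unfamiliar with the model: below is a minimal sketch of the single-level Lorenz-96 system, the usual starting point; the module itself builds up to the two-timescale variant with coupled fast variables. The function names and parameter choices here are illustrative only, not the module's actual API.)

```python
import numpy as np

def l96_tendency(x, F=8.0):
    # Single-level Lorenz-96: dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F,
    # with cyclic indexing handled by np.roll.
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt, F=8.0):
    # Classical fourth-order Runge-Kutta step, a standard choice for L96.
    k1 = l96_tendency(x, F)
    k2 = l96_tendency(x + 0.5 * dt * k1, F)
    k3 = l96_tendency(x + 0.5 * dt * k2, F)
    k4 = l96_tendency(x + dt * k3, F)
    return x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Example: integrate K=40 variables from a slightly perturbed rest state;
# with F=8 the trajectory becomes chaotic.
x = 8.0 * np.ones(40)
x[0] += 0.01
for _ in range(1000):
    x = rk4_step(x, dt=0.01)
```

See the repository notebooks for the implementations the module actually uses.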
Alright, I've done a run through of all the required material for the review. I agree with Meekail's feedback thus far, and I found one additional issue with respect to the non-pedagogical requirements that I've documented here.
With respect to the pedagogical content, I think that the structure, ordering, and pacing of ideas throughout the notebooks is impeccable. I think though that there are a lot of small edits I could make to various sentences and formulae to improve their precision and clarity.
I think the most productive and easiest way to deliver and discuss the feedback would be for me to make a new branch of the repository in which I commit the edit ideas as changes to the notebooks. Then I can open a pull request, and we can use GitHub's comment and suggestion infrastructure to organize discussion of the feedback. If, on review, you find the feedback valuable, you can just merge the changes in.
I've also noticed a lot of small typos and grammatical errors throughout the notebooks, none of which affected my ability to understand the ideas the notebooks communicate. But as part of my editing feedback, I could include spelling and grammatical fixes. Or I could just ignore them if you prefer.
Thoughts @dhruvbalwada?
@AnonymousFool - If you have the time to make the edits in a new branch, it would be great and very much appreciated.
@AnonymousFool let us know how the review is progressing.
If you face any further technical difficulties, reach out to me here or open an issue, and I'll address it.
Hi @AnonymousFool and @Micky774, thanks so much for your help so far! I still see some items in your checklists that haven't been addressed. Are you waiting for feedback, or would you be able to continue your reviews?
@magsol I'll be updating my review this upcoming week, but AFAIK we're still waiting on changes in the repository to address the current feedback as well.
Hi @dhruvbalwada, the reviewers are indicating that they're waiting on changes on your end. Can you provide an update on how that's going?
Hi @dhruvbalwada , @IamShubhamGupto: I saw you working on the feedback from @AnonymousFool, but I'm not clear on whether you have addressed the feedback from @Micky774 yet. I'd like to see if we can wrap this up soon; are you waiting on anything from the reviewers?
Hi @magsol @Micky774 @AnonymousFool - we have made all the appropriate changes to the repo and the paper according to your suggestions. Please let us know what else to address and how to proceed.
@magsol @Micky774 thank you for reviewing our work so far and waiting for the new changes. I believe as of today all the remaining requested changes have been published except for releasing version v1.0. Since we would have to recreate the release to incorporate newer commits, I would keep this as the last step.
Let me know if the current version of the repository is ready, and the release will be created subsequently.
Alright, yeah, I think the latest round of edits has covered the whole checklist without problems.
I hope at some point to get around to those edit suggestions I want to do, but I seem to have bogged myself down in other problems, and I see no reason to prevent publishing what I already believe is a well-functioning educational resource.
@magsol and @Micky774 - would you like to make any more changes before this can be published?
@dhruvbalwada @magsol all good on my end -- so sorry for the delay, and thank you for your patience and work!
Post-Review Checklist for Editor and Authors
Additional Author Tasks After Review is Complete
- [x] Double check authors and affiliations (including ORCIDs)
- [x] Make a release of the software with the latest changes from the review and post the version number here. This is the version that will be used in the JOSE paper.
- [x] Archive the release on Zenodo/figshare/etc and post the DOI here.
- [x] Make sure that the title and author list (including ORCIDs) in the archive match those in the JOSE paper.
- [x] Make sure that the license listed for the archive is the same as the software license.
Editor Tasks Prior to Acceptance
- [x] Read the text of the paper and offer comments/corrections (as either a list or a PR)
- [x] Check the references in the paper for corrections (e.g. capitalization)
- [x] Check that the archive title, author list, version tag, and the license are correct
- [x] Set archive DOI with `@editorialbot set <DOI here> as archive`
- [x] Set version with `@editorialbot set <version here> as version`
- [x] Double check rendering of paper with `@editorialbot generate pdf`
- [x] Specifically check the references with `@editorialbot check references` and ask author(s) to update as needed
- [ ] Recommend acceptance with `@editorialbot recommend-accept`
@magsol - Are the to-do items in the above list meant to be clickable?
@dhruvbalwada Clickable for me and the reviewers, yes :) Don't worry if they appear grayed-out to you.
@magsol - regarding the third and last todos:

> Archive the release on Zenodo/figshare/etc and post the DOI here.

Does GitHub count? As of now we have v1.0 released, and with the changes we will most likely make it v1.1. Should we archive / delete v1.0?

> Make sure that the license listed for the archive is the same as the software license.

Is the archive referred to here the same as the one in the third todo?
I have checked the list of authors and their orcids in the draft, and made edits as needed.
version number v1.0
https://github.com/m2lines/L96_demo/releases/tag/v1.0