Plenoptic
Submitting Author: Billy Broderick (@billbrod)
All current maintainers: @billbrod (there are several other authors / contributors, but I'm the only one committed to maintaining)
Package Name: plenoptic
One-Line Description of Package: a Python library for model-based synthesis of perceptual stimuli
Repository Link: https://github.com/plenoptic-org/plenoptic
Version submitted: v1.0.2
EiC: @sneakers-the-rat
Editor: @sneakers-the-rat
Reviewer 1: @NickleDave
Reviewer 2: @DanielaPamplona
Reviewers assigned: 2023-12-13
Reviews due: 2024-01-19
Archive: TBD
JOSS DOI: TBD
Version accepted: TBD
Date accepted (month/day/year): TBD
Code of Conduct & Commitment to Maintain Package
- [X] I agree to abide by pyOpenSci's Code of Conduct during the review process and in maintaining my package after the review, should it be accepted.
- [X] I have read and will commit to package maintenance after the review as per the pyOpenSci Policies Guidelines.
Description
- Include a brief paragraph describing what your package does:
plenoptic provides tools to help researchers understand their models by synthesizing novel informative stimuli (e.g., images, movies, or sound clips, depending on the model), which help build intuition for what features the model ignores and what it is sensitive to.
Scope
- Please indicate which category or categories. Check out our package scope page to learn more about our scope. (If you are unsure of which category you fit, we suggest you make a pre-submission inquiry):
- [ ] Data retrieval
- [ ] Data extraction
- [ ] Data processing/munging
- [ ] Data deposition
- [ ] Data validation and testing
- [ ] Data visualization[^1]
- [ ] Workflow automation
- [ ] Citation management and bibliometrics
- [ ] Scientific software wrappers
- [ ] Database interoperability
Domain Specific & Community Partnerships
- [ ] Geospatial
- [ ] Education
- [ ] Pangeo
Community Partnerships
If your package is associated with an existing community please check below:
- [ ] Pangeo
- [ ] My package adheres to the Pangeo standards listed in the pyOpenSci peer review guidebook
[^1]: Please fill out a pre-submission inquiry before submitting a data visualization package.
- For all submissions, explain how and why the package falls under the categories you indicated above. In your explanation, please address the following points (briefly, 1-2 sentences for each):
I am not sure which of the above categories I fit under, but the discussion in my pre-submission inquiry (#97) assured me that I'm in scope.
- Who is the target audience and what are scientific applications of this package?
Researchers in vision science, neuroscience, and machine learning. The goal is to generate novel stimuli (images, videos, audio) that researchers can then use in new experiments to better understand their computational models.
- Are there other Python packages that accomplish the same thing? If so, how does yours differ?
Not that I'm aware of. There are several bits of related research code, but no packages. Research code includes: https://github.com/ArturoDeza/Fast-Texforms, https://github.com/brialorelle/texformgen, https://github.com/ProgramofComputerGraphics/PooledStatisticsMetamers, https://github.com/freeman-lab/metamers/, among others.
- If you made a pre-submission enquiry, please paste the link to the corresponding issue, forum post, or other discussion, or @tag the editor you contacted:
#97, @NickleDave
Technical checks
For details about the pyOpenSci packaging requirements, see our packaging guide. Confirm each of the following by checking the box. This package:
- [X] does not violate the Terms of Service of any service it interacts with.
- [X] uses an OSI approved license.
- [X] contains a README with instructions for installing the development version.
- [X] includes documentation with examples for all functions.
- [X] contains a tutorial with examples of its essential functions and uses.
- [X] has a test suite.
- [X] has continuous integration setup, such as GitHub Actions, CircleCI, and/or others.
Publication Options
- [X] Do you wish to automatically submit to the Journal of Open Source Software? If so:
JOSS Checks
- [X] The package has an obvious research application according to JOSS's definition in their submission requirements. Be aware that completing the pyOpenSci review process does not guarantee acceptance to JOSS. Be sure to read their submission requirements (linked above) if you are interested in submitting to JOSS.
- [X] The package is not a "minor utility" as defined by JOSS's submission requirements: "Minor ‘utility’ packages, including ‘thin’ API clients, are not acceptable." pyOpenSci welcomes these packages under "Data Retrieval", but JOSS has slightly different criteria.
- [ ] The package contains a `paper.md` matching JOSS's requirements with a high-level description in the package root or in `inst/`.
- [ ] The package is deposited in a long-term repository with the DOI:
Note: JOSS accepts our review as theirs. You will NOT need to go through another full review. JOSS will only review your paper.md file. Be sure to link to this pyOpenSci issue when a JOSS issue is opened for your package. Also be sure to tell the JOSS editor that this is a pyOpenSci reviewed package once you reach this step.
Does the JOSS and pyOpenSci review happen simultaneously? I was going to add the paper.md and submit to JOSS after going through this review.
Are you OK with Reviewers Submitting Issues and/or pull requests to your Repo Directly?
This option will allow reviewers to open smaller issues that can then be linked to PR's rather than submitting a more dense text based review. It will also allow you to demonstrate addressing the issue via PR links.
- [x] Yes I am OK with reviewers submitting requested changes as issues to my repo. Reviewers will then link to the issues in their submitted review.
Confirm each of the following by checking the box.
- [x] I have read the author guide.
- [x] I expect to maintain this package for at least 2 years and can help find a replacement for the maintainer (team) if needed.
Please fill out our survey
- [x] Last but not least, please fill out our pre-review survey. This helps us track submissions and improve our peer review process. We will also ask our reviewers and editors to fill this out.
P.S. Have feedback/comments about our review process? Leave a comment here
Editor and Review Templates
Hi @billbrod, thank you for your detailed submission.
Just letting you know we are working on getting the review started.
@sneakers-the-rat will be the editor and they are going to do the initial checks.
We're still looking for one reviewer (the other one is me :slightly_smiling_face: -- sorry @sneakers-the-rat for stealing your thunder :stuck_out_tongue_closed_eyes: )
Editor response to review:
Editor comments
:wave: Hi @NickleDave and @DanielaPamplona Thank you for volunteering to review for pyOpenSci!
Please fill out our pre-review survey
Before beginning your review, please fill out our pre-review survey. This helps us improve all aspects of our review and better understand our community. No personal data will be shared from this survey - it will only be used in an aggregated format by our Executive Director to improve our processes and programs.
- [x] reviewer 1 survey completed.
- [ ] reviewer 2 survey completed.
- [ ] reviewer 3 (if applicable)
The following resources will help you complete your review:
- Here is the reviewers guide. This guide contains all of the steps and information needed to complete your review.
- Here is the review template that you will need to fill out and submit here as a comment, once your review is complete.
Please get in touch with any questions or concerns! Your review is due: Friday, January 5th, 2024
Reviewers: @NickleDave and @DanielaPamplona Due date: Friday, January 5th, 2024
We have located reviewers! Thank you @NickleDave and @DanielaPamplona for your time, and looking forward to digging into this package with you all!
Please see the above post with links to the reviewer's guide and next steps.
The next step will be to copy the review checklist template as a comment here, and as you work through the checklist the authors have indicated it is ok to raise issues and make pull requests with problems and questions as they arise. Since github issues don't have threading, we will use issues raised in the repo as "threads" to keep discussion orderly :). When you raise an issue or PR, please link back to this issue so that we can track issues raised in relationship to the review. It is up to the authors if they would like to tag the issues or name them in any specific way :).
The reviewers should feel free to split up different focuses or roles depending on what plays to your strengths - both will complete the basic checklist, but you can decide if you'd rather focus on docs, tests, internal architecture, onboarding, and so on for more detailed focus. The authors can also let the reviewers know if there are any specific areas they would like feedback on or close attention on.
Let's shoot for reviews by Friday, January 5th, 2024 - gl;hf to all ♥️
Thanks all, looking forward to this process! While I'm looking forward to all kinds of feedback, I am specifically interested in feedback on the docs, especially the motivation for the included methods and explanations of what they can be used for scientifically -- I think that's the weakest (and most important?) part of our documentation right now. Since all the primary authors of the project are from the same lab, we've talked a lot offline and share a similar background.
Hi, Just to let you know that I am looking forward to this. I am silent because I did not manage to start the review, but as soon as I start, I am sure I will have tons of things to say. Cheers,
This is ur friendly neighborhood editor here to say that I hope the authors and reviewers are taking a much deserved break, and to keep a reasonable and humane timeline I am making the call to reset the review window to start Jan 1 - let's say reviews by January 19th, 2024
Have a lovely rest of the holiday
Hello dear reviewers @NickleDave @DanielaPamplona - checking in and seeing how things are going, if any guidance is needed or if i can be of any help getting reviews rolling!
Hi,
Thank you for checking, that is kind of you!
I do have a question...
What is a "vignette" ? I believe I never saw/use it... Or it is implicit in the code...?
Cheers,
Daniela Pamplona
@DanielaPamplona
I do have a question... What is a "vignette" ?
Completely fair question. That's a term from R (see Vignettes), so more than reasonable to have not seen it before. They are sort of like longer, narrative code examples, but they don't have a strict definition - if it's easier, you can think of them as synonymous with examples.
If one were to draw the distinction, an example might look like something you'd find in a docstring like:
```python
def myFunction(arg):
    """
    Do a function thing

    Args:
        arg: The thing we give this function

    Example:
        To use this function, give it an argument!

        >>> arg = 1
        >>> print(myFunction(arg))
        5

        Any argument will do!

        >>> arg = 'new argument'
        >>> print(myFunction(arg))
        5
    """
    # TODO: use the argument
    return 5
```
Where a vignette would be a longer-form demonstration of major functionality of the package, often with real-world uses that show multiple things working together.
From a quick look, everything I see in the examples directory could be called a vignette, but I also don't think you need to split hairs too-too finely there :)
Got it! Thanks!
Thank you for checking in @sneakers-the-rat and keeping us on track -- no questions from me yet!
I just started a new job but have this on my to-do list and expect to get through the checklist this week, will raise issues with @billbrod if needed as I do and link them to this
Hi, I have a small question for the authors (@billbrod), but it is a bit important, so please answer as soon as possible; I will submit a detailed review later. I am trying to reproduce the "quick start" and I cannot find the image "einstein.pgm". As a matter of fact, I cannot find the "data" folder at all. Is it somewhere in the package, or did you forget to include it? Without it, I cannot continue the review of this section... Thank you! Daniela Pamplona
Hi @DanielaPamplona I find it here: https://github.com/LabForComputationalVision/plenoptic/blob/main/data/256/einstein.pgm (I searched on GitHub)
I think the data folder is this one: https://github.com/LabForComputationalVision/plenoptic/tree/main/data
So the data folder would be in the root of the project folder (/home/you/somewhere/plenoptic/) if you clone the project with git.
Looks like there's a couple of fixtures in the tests that expect it to live there (GitHub search found this for me too) https://github.com/LabForComputationalVision/plenoptic/blob/de8f9308d0fcd24050a93b326390eb4444b9aff2/tests/conftest.py#L11 https://github.com/LabForComputationalVision/plenoptic/blob/de8f9308d0fcd24050a93b326390eb4444b9aff2/tests/conftest.py#L33
Does that help?
Hi @DanielaPamplona sorry about that! @NickleDave is right, the data folder lives in the git repo.
I do have a question about that -- it lives there because the development of the tutorials was done by folks who all had cloned the full repo, but users generally don't have that. I don't think the data/ directory should be included in the installation (the files are only used in the notebooks and tests), so do either of you have advice as to how we should provide the directory to users? Should we just explain it in the installation instructions and add a little blurb at the top of each notebook that uses those files?
I would suggest adding a data or examples folder to the project with functionality for accessing it from plenoptic itself. I'll raise an issue on the repo with more detail and link it back here
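Roughly, something like this minimal sketch using only the standard library (the `fetch_data` name and the `plenoptic.data` subpackage are placeholders I'm making up here, not existing API):

```python
from importlib import resources  # resources.files needs Python >= 3.9


def fetch_data(name: str):
    """Return a path-like object for a bundled data file.

    Assumes the image files are moved into a ``plenoptic.data`` subpackage
    so that they ship with the wheel instead of living only in the git repo.
    """
    file = resources.files("plenoptic.data") / name
    if not file.is_file():
        raise FileNotFoundError(f"no bundled data file named {name!r}")
    return file
```

Notebooks could then call something like `fetch_data("einstein.pgm")` instead of relying on a hardcoded relative path.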
Thanks y'all - yes this is the kind of thing worth raising an issue about (with a link back to this issue). As reviewers, you are in the position of being an "average user," so if you can't figure out how something works, then some improvement can likely be made in the docs, code, or in this case packaging! Go review team go <3
Hi all, please find below my review.
@billbrod I would like to go through some of the docs in more detail, to give feedback on how domain-specific material is presented; I will raise additional issues then. But I want to get this in on time, and raise the initial issues for required items below. Boxes that are not yet checked will be explained in "required", and I will raise an issue for each item.
Package Review
Please check off boxes as applicable, and elaborate in comments below. Your review is not limited to these topics, as described in the reviewer guide
- [x] As the reviewer I confirm that there are no conflicts of interest for me to review this work (If you are unsure whether you are in conflict, please speak to your editor before starting your review).
Documentation
The package includes all the following forms of documentation:
- [x] A statement of need clearly stating problems the software is designed to solve and its target audience in README.
- [x] Installation instructions: for the development version of the package and any non-standard dependencies in README.
- [x] Vignette(s) demonstrating major functionality that runs successfully locally.
- [x] Function Documentation: for all user-facing functions.
- [ ] Examples for all user-facing functions.
- [x] Community guidelines including contribution guidelines in the README or CONTRIBUTING.
- [x] Metadata including author(s), author e-mail(s), a url, and any other relevant metadata e.g., in a `pyproject.toml` file or elsewhere.
Readme file requirements
The package meets the readme requirements below:
- [x] Package has a README.md file in the root directory.
The README should include, from top to bottom:
- [x] The package name
- [ ] Badges for:
- [x] Continuous integration and test coverage,
- [x] Docs building (if you have a documentation website),
- [ ] A repostatus.org badge,
- [x] Python versions supported,
- [x] Current package version (on PyPI / Conda).
NOTE: If the README has many more badges, you might want to consider using a table for badges: see this example. Such a table should be more wide than high. (Note that a badge for pyOpenSci peer-review will be provided upon acceptance.)
- [x] Short description of package goals.
- [x] Package installation instructions
- [x] Any additional setup required to use the package (authentication tokens, etc.)
- [x] Descriptive links to all vignettes. If the package is small, there may only be a need for one vignette which could be placed in the README.md file.
- [x] Brief demonstration of package usage (as it makes sense - links to vignettes could also suffice here if package description is clear)
- [x] Link to your documentation website.
- [ ] If applicable, how the package compares to other similar packages and/or how it relates to other packages in the scientific ecosystem.
- [x] Citation information
Usability
Reviewers are encouraged to submit suggestions (or pull requests) that will improve the usability of the package as a whole. Package structure should follow general community best-practices. In general please consider whether:
- [x] Package documentation is clear and easy to find and use.
- [x] The need for the package is clear
- [ ] All functions have documentation and associated examples for use
- [x] The package is easy to install
Functionality
- [x] Installation: Installation succeeds as documented.
- [x] Functionality: Any functional claims of the software have been confirmed.
- [x] Performance: Any performance claims of the software have been confirmed.
- [x] Automated tests:
- [ ] All tests pass on the reviewer's local machine for the package version submitted by the author. Ideally this should be a tagged version making it easy for reviewers to install.
- [x] Tests cover essential functions of the package and a reasonable range of inputs and conditions.
- [x] Continuous Integration: Has continuous integration setup (We suggest using Github actions but any CI platform is acceptable for review)
- [ ] Packaging guidelines: The package conforms to the pyOpenSci packaging guidelines.
A few notable highlights to look at:
- [x] Package supports modern versions of Python and not End of life versions.
- [ ] Code format is standard throughout package and follows PEP 8 guidelines (CI tests for linting pass)
For packages also submitting to JOSS
- [x] The package has an obvious research application according to JOSS's definition in their submission requirements.
Note: Be sure to check this carefully, as JOSS's submission requirements and scope differ from pyOpenSci's in terms of what types of packages are accepted.
The package contains a paper.md matching JOSS's requirements with:
- [ ] A short summary describing the high-level functionality of the software
- [ ] Authors: A list of authors with their affiliations
- [ ] A statement of need clearly stating problems the software is designed to solve and its target audience.
- [ ] References: With DOIs for all those that have one (e.g. papers, datasets, software).
Final approval (post-review)
- [ ] The author has responded to my review and made changes to my satisfaction. I recommend approving this package.
Estimated hours spent reviewing: 7
Review Comments
Required
In this section I will put to-dos for items in the checklist above that are not checked.

- Documentation
  - [ ] Examples for all user-facing functions https://github.com/LabForComputationalVision/plenoptic/issues/237
    - I find examples for some methods on main classes, e.g. `plenoptic.synthesize.Metamer.load` - but many functions don't have examples, and some of these functions end up being core to the package, e.g. `plenoptic.tools.remove_grad`
- README
  - [ ] Badges for:
    - [ ] A repostatus.org badge https://github.com/LabForComputationalVision/plenoptic/issues/236
  - [ ] If applicable, how the package compares to other similar packages and/or how it relates to other packages in the scientific ecosystem.
    - I think it would be worth (1) stating clearly the dependency on torch, here and in the installation page (see below) and (2) linking related libraries that are dependencies, e.g. https://github.com/LabForComputationalVision/pyrtools
- Usability
  - [ ] All functions have documentation and associated examples for use
    - this repeats the to-do from documentation above; I know it can be really tedious to write examples for literally every user-facing function, but I would suggest that for frequently used functions such as `plenoptic.tools.remove_grad` there should be an example that passes when doctests are run (see the sketch below)
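To make that concrete, here is a sketch of what a doctest-style example could look like (I am guessing at `remove_grad`'s exact signature and behavior from its name, so treat this as illustrative only):

```python
def remove_grad(model):
    """Set ``requires_grad=False`` on all of ``model``'s parameters.

    Examples
    --------
    >>> import torch
    >>> model = torch.nn.Linear(2, 2)
    >>> _ = remove_grad(model)
    >>> any(p.requires_grad for p in model.parameters())
    False
    """
    for p in model.parameters():
        p.requires_grad_(False)
    return model
```

Running `pytest --doctest-modules` over the package in CI would then keep examples like this from silently going stale.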
- Functionality
  - [ ] Automated tests:
    - [ ] All tests pass on the reviewer's local machine for the package version submitted by the author. Ideally this should be a tagged version making it easy for reviewers to install.
      - [ ] this test failed for me locally: tests/test_metamers.py::TestMetamers::test_stop_criterion https://github.com/LabForComputationalVision/plenoptic/issues/238
- [ ] Packaging guidelines: The package conforms to the pyOpenSci packaging guidelines.
  - [ ] Code format is standard throughout package and follows PEP 8 guidelines (CI tests for linting pass)
    - currently there's no code formatting or linting (e.g. flake8), AFAICT -- left a comment on an existing issue re: formatters here https://github.com/LabForComputationalVision/plenoptic/issues/51#issuecomment-1900731361
Suggested
In this section I will put suggested changes. Some of these start to get very nit-picky, so I won't raise issues for all of them.
- documentation
  - [ ] be clear about what "model" means in the context of plenoptic: these are models of visual perception, although there are some models of audio perception as well. This should be the second sentence of explaining what plenoptic does, right after saying "it's for synthesizing stimuli using models" https://github.com/LabForComputationalVision/plenoptic/issues/239
- README
  - Badge section
  - citation
    - There is a citation section but it just links to a list of citations in the docs. I would suggest the following:
      - Directly state in this section of the README how the code should be cited. E.g., "If you use the code, please cite both the code by DOI as well as the JOV paper. If you are not using the code directly then cite the paper". I would also take advantage of the citation.cff: immediately after, say "you can also click on 'cite this repository' on the right side of the page to get a copyable citation". I would then include both the Zenodo badge with the code DOI and a bibtex citation for the paper in this section, just in case someone misses the part about clicking on the right side of the page. Finally I would say "for more in-depth citation lists go here (link to citation page)" https://github.com/LabForComputationalVision/plenoptic/issues/240
- API
  - In general, accessing core classes and functions is verbose. I have been guilty of this too and I know this would in a sense be a breaking change, but I would strongly suggest importing core classes and functions at the top level if possible. I want to be able to just type `po.Metamer` and `po.remove_grad`. It could be fine to leave them where they are but just import them - remember, file structure is an implementation detail and flat is better than nested (see the sketch below)
  - Reasonable people can disagree about this but I prefer to avoid namespaces with names like `tools` or `util`, because everything can go there, and as it stands now, you are just forcing users to type `tools` a lot. I would suggest moving modules in tools up a level and importing the crucial functions directly into the `plenoptic` namespace from those modules. E.g. `plenoptic.remove_grad`. Or maybe `plenoptic.grad.remove` if you must have a module for it (seems weird that this function lives in `validate`).
  - it's also confusing and non-standard to internally import a sub-package at the root package level and change its name: e.g. `synthesize` -> `synth` and `simulate` -> `simul`. I kept thinking I missed an import statement with aliases, and I wasn't sure if there was something else going on with the code organization that I didn't know about. I would strongly suggest removing these internal aliases. If they are just too long, rename the module itself.
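For example, the top-level `__init__.py` could re-export the workhorses (import locations below are my guesses from the paths mentioned in this review, so adjust as needed):

```python
# plenoptic/__init__.py -- sketch only; exact module paths may differ
from .synthesize import Metamer, MADCompetition, Eigendistortion
from .tools import remove_grad

__all__ = ["Metamer", "MADCompetition", "Eigendistortion", "remove_grad"]
```

so that `po.Metamer` and `po.remove_grad` just work, while the longer paths can keep working for backwards compatibility.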
installation
- [ ] I would not suggest installing with pip into a conda environment
- [ ] because conda recommends only installing with pip last after everything else
- [ ] would it be possible to make a conda-forge package?
- existing issue: https://github.com/LabForComputationalVision/plenoptic/issues/104
- my comment: https://github.com/LabForComputationalVision/plenoptic/issues/104#issuecomment-1898993294
- [ ] I would not suggest installing with pip into a conda environment
- documentation
  - consider switching to a newer theme like pydata-sphinx?
  - As stated above, I would strongly suggest making it as obvious as possible that most functionality requires torch
    - in install instructions
  - Some thoughts on organization, as per the diataxis framework: https://github.com/LabForComputationalVision/plenoptic/issues/21#issuecomment-1017933312, https://github.com/LabForComputationalVision/plenoptic/issues/21#issuecomment-1900755529
    - put install + quickstart in a "getting started"
    - put model requirements and citation guide in a "reference"
    - most everything else should go in a "user guide"
      - except API should have its own section
      - reproducibility should go in reference
  - API docs
    - formatting looks off here in some places -- I know that sphinx-autodoc is not the easiest thing to work with but some clean-up here would help it look more approachable
    - I reported a typo on this issue: https://github.com/LabForComputationalVision/plenoptic/issues/102#issuecomment-1900764245, will add more there if I find them
- functionality / presentation
  - I tested by running notebooks, mostly. I kept notes for the key notebooks as I read through them. This starts to get quite nitpicky, please know I am trying to help with presentation of material. Going to add here for now but I could move this to individual issues.
  - quickstart
    - the function `load_images` in the notebook did not work for me at first
      - I read the function but it uses `op` -- I would suggest making the code as readable as possible by not abbreviating and using absolute names instead, esp for standard library functions; code is read much more than it's written
      - The problem was that the notebook expects to be run from inside the notebooks directory and so it has a hardcoded path
        - a fix for this would be to run notebooks from root
        - and/or to include images in the package as discussed in (link issue)
      - I would also suggest re-writing this with pathlib; `pathlib.Path().exists()`
      - and raising an explicit error like "file does not exist: {}" (see the sketch below)
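Something like this, for instance (the `DATA_DIR` constant and the function shape are illustrative only, not plenoptic's actual code):

```python
from pathlib import Path

# resolve the data directory relative to this file, not the caller's cwd
DATA_DIR = Path(__file__).resolve().parent.parent / "data"


def load_image(name: str) -> Path:
    """Return the path to a bundled image, failing loudly if it is missing."""
    path = DATA_DIR / name
    if not path.exists():
        raise FileNotFoundError(f"file does not exist: {path}")
    return path
```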
- packaging guidelines
  - pyproject.toml
    - I would change trove classifiers
      - the package is not in beta anymore if you are using it in the lab
      - you can also specify specific pythons: 3.7, etc
    - You probably want to raise the minimum python requires and adhere to SPEC0 since you are depending on core scientific python libraries
      - right now core packages are at 3.10-3.12, but if you need numba then you might need to raise it to 3.9 and test on 3.9-3.11 since numba is not out for 3.12 just yet
Reading notes

conceptual introduction

- "Synthesis is a framework for exploring models" -- what kind of models? (Answer: cognitive/neural!)

index

- "descarded" -> "discarded"
quickstart

- "Models can be really simple, as this demonstrates."
  - I think you want to say something here like "Models can be really simple, as we show in the next section"
- "It needs to inherit torch.nn.Module and just needs two methods: init (so it’s an object) and forward (so it can take an image)."
  - I would explicitly say here "In plenoptic, all models are implemented using the library PyTorch. This means that every model must subclass the `torch.nn.Module` class and implement `forward`" (a minimal model sketch follows these notes)
  - At the risk of sounding even more pedantic than usual, I think you don't strictly need an `__init__` method -- if you write `class MyModel(torch.nn.Module)` then you will already have `Module`'s `__init__` method
- As per API notes above, I would suggest moving `plenoptic.simulate.canonical_computations.filters` to `plenoptic.filters` or even to `plenoptic` if it is frequently used. If you don't want to move code around you could just add a `filters` package and import there from the lengthy namespace
- explain in models why inputs are 4d, like you do in conceptual -- define the dimensions
- suggest adding headings throughout quickstart
  - "Stimuli"
  - "Models"
  - "Synthesis"
    - state briefly here how we optimize / compute loss, since this is a key concept. Link to relevant papers + additional tutorial/how-to
  - "Turn off gradient"
    - If this function is used frequently why not have it at package level? `plenoptic.remove_grad`
    - Is it absolutely necessary to `detach` all parameters in the model? Doesn't this limit the ability to use the library with arbitrary models, because a user would be surprised to find that some model they trained and wanted to continue training suddenly had all the parameters detached? Would it be sufficient to just set requires_grad of the parameters to False, e.g. by calling `Model.eval()`? Or what about using a context manager inside the Metamer class that sets requires_grad of model parameters to false just during the optimization step? (a sketch of this idea also follows these notes)
  - "Metamers"
  - "Eigendistortions"
- It was really unclear to me that we start with a randomly-initialized image. The part of the quickstart that uses the Marie Curie image makes this clearer, but I would say it explicitly in the first part
- I would also explicitly say something like "at every step of optimization we compare the image, which begins as randomly initialized, with a representation provided by the model, and we use the loss function to minimize the difference between the representation of the metamer and the representation of the reference image"
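Two quick sketches to make the notes above concrete. First, the minimal model shape I would show in the quickstart (this assumes only what the docs already state: subclass `torch.nn.Module` and implement `forward` on a 4d tensor):

```python
import torch


class MyModel(torch.nn.Module):
    # no explicit __init__ needed: Module's __init__ is inherited

    def forward(self, image):
        # image: 4d tensor of shape (batch, channel, height, width);
        # a toy "model" that just computes each image's mean luminance
        return image.mean(dim=(-2, -1))
```

Second, the context-manager idea for freezing parameters only during the optimization step (hypothetical, not plenoptic's current behavior):

```python
from contextlib import contextmanager


@contextmanager
def frozen_parameters(model):
    """Temporarily set requires_grad=False on all parameters, then restore."""
    saved = [p.requires_grad for p in model.parameters()]
    for p in model.parameters():
        p.requires_grad_(False)
    try:
        yield model
    finally:
        for p, state in zip(model.parameters(), saved):
            p.requires_grad_(state)
```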
Excellent, thank you David! @DanielaPamplona checking in if I can be of any assistance, or if you have a rough estimate of timeline? No rush! It looks like the authors have a few things to work on in the meantime :)
Hi,
Sorry I got a cold so I could not finish on time. If I don't get worse, I expect to finish this on Monday.
Best
Please take your time to get well, I only ask so we keep each other appraised, not trying to rush you!
Thanks all! I've got a workshop in 2 weeks (for a different software package) that I'm frantically preparing for, so I most likely won't make much progress on these revisions before then. So no rush, Daniela!
Hi, So here is my review. Sorry for the delay.
Legend: Y = Yes, N = No, ~ = Not applicable, +/- = more or less / needs improvement
Package Review
Please check off boxes as applicable, and elaborate in comments below. Your review is not limited to these topics, as described in the reviewer guide
- [Y] As the reviewer I confirm that there are no conflicts of interest for me to review this work (If you are unsure whether you are in conflict, please speak to your editor before starting your review).
Documentation
The package includes all the following forms of documentation:
- [Y] A statement of need clearly stating problems the software is designed to solve and its target audience in README.
- [Y] Installation instructions: for the development version of the package and any non-standard dependencies in README.
- [~] Vignette(s) demonstrating major functionality that runs successfully locally.
- [Y] Function Documentation: for all user-facing functions. [Did not search everything; random sampled 50% and it was always documented]
- [+/-] Examples for all user-facing functions. [Did not search everything; random sampled 50% and examples were sometimes missing.]
- [Y] Community guidelines including contribution guidelines in the README or CONTRIBUTING.
- [Y] Metadata including author(s), author e-mail(s), a url, and any other relevant metadata e.g., in a `pyproject.toml` file or elsewhere.
Readme file requirements
The package meets the readme requirements below:
- [Y] Package has a README.md file in the root directory.
The README should include, from top to bottom:
- [Y] The package name
- [ ] Badges for:
- [Y] Continuous integration and test coverage,
- [Y] Docs building (if you have a documentation website),
- [N] A repostatus.org badge,
- [Y] Python versions supported,
- [Y] Current package version (on PyPI / Conda).
NOTE: If the README has many more badges, you might want to consider using a table for badges: see this example. Such a table should be more wide than high. (Note that a badge for pyOpenSci peer-review will be provided upon acceptance.)
- [Y] Short description of package goals. [I would emphasize the fact that the package is not only for generating images, but also for analysing them. Also use the term "generate/generative", which is more appealing.]
- [Y] Package installation instructions.
- [~] Any additional setup required to use the package (authentication tokens, etc.)
- [+/-] Descriptive links to all vignettes. If the package is small, there may only be a need for one vignette which could be placed in the README.md file.
- [Y] Brief demonstration of package usage (as it makes sense - links to vignettes could also suffice here if package description is clear) [The quick start is not that "brief"]
- [N] Link to your documentation website.
- [~] If applicable, how the package compares to other similar packages and/or how it relates to other packages in the scientific ecosystem.
- [Y] Citation information
Usability
Reviewers are encouraged to submit suggestions (or pull requests) that will improve the usability of the package as a whole. Package structure should follow general community best-practices. In general please consider whether:
- [+/-] Package documentation is clear and easy to find and use. [The documentation webpage is not clear. See suggestion below]
- [+] The need for the package is clear
- [+/-] All functions have documentation and associated examples for use [Did not search everything; random sampled 50% and examples were sometimes missing.]
- [Y] The package is easy to install
Functionality
- [Y] Installation: Installation succeeds as documented.
- [Y] Functionality: Any functional claims of the software have been confirmed.
- [~] Performance: Any performance claims of the software have been confirmed.
- [ ] Automated tests:
- [Y] All tests pass on the reviewer's local machine for the package version submitted by the author. Ideally this should be a tagged version making it easy for reviewers to install. [Did not automate this, but ran the tests and all passed]
- [Y] Tests cover essential functions of the package and a reasonable range of inputs and conditions.
- [Y] Continuous Integration: Has continuous integration setup (We suggest using Github actions but any CI platform is acceptable for review)
- [ ] Packaging guidelines: The package conforms to the pyOpenSci packaging guidelines.
A few notable highlights to look at:
- [Y] Package supports modern versions of Python and not End of life versions.
- [Y] Code format is standard throughout package and follows PEP 8 guidelines (CI tests for linting pass)
For packages also submitting to JOSS
- [~] The package has an obvious research application according to JOSS's definition in their submission requirements.
Note: Be sure to check this carefully, as JOSS's submission requirements and scope differ from pyOpenSci's in terms of what types of packages are accepted.
The package contains a paper.md matching JOSS's requirements with:
- [N] A short summary describing the high-level functionality of the software
- [N] Authors: A list of authors with their affiliations
- [N] A statement of need clearly stating problems the software is designed to solve and its target audience.
- [N] References: With DOIs for all those that have one (e.g. papers, datasets, software).
Final approval (post-review)
- [ ] The author has responded to my review and made changes to my satisfaction. I recommend approving this package.
Estimated hours spent reviewing: 8H
Review Comments
plenoptic is a python package for the generation and analysis of model-based visual stimuli. The package is helpful and contributes to better and FAIR science, and it is especially useful for researchers who are not experts in programming. I found the package well-organized and clean. Please see the "checklist". Some things can be improved, especially on the documentation webpage. In general, I think the package is almost ready to be "published"; the comments below are minor and easy to fix.
- README:
  - The word "help" is used too many times; you can use "support", "contribute", or another synonym. I also think you should refer to the analysis/interpretation of the models here.
  - On the GitHub page, some images have transparent backgrounds, so they look strange, like this one:
- Documentation webpage: https://plenoptic.readthedocs.io/

  Observations on the form

  1) The documentation webpage is confusing and not well organized, but it can be easily improved. A possible organization would be:

  - Presentation
    - Introduction
    - Installation
    - Quick start
    - Citation Guide
  - Modules
    - Metric
    - Simulate
    - Synthesize
    - tools
    - Display and animate functions
  - Tutorials and examples
    - Eigendistortions
    - ...
    - Reproducing Wang and Simoncelli
    - Extending existing synthesis objects
  - Advanced Usage
    - Model requirements
    - Synthesis object design
    - Extending...
    - API
    - Tips and tricks
    - Reproducibility

  In the "Modules" section, the general goal and content of each module would be presented and, eventually, some code snapshots. Not all the modules need to be presented, only the most relevant and/or difficult to use. The code snapshots can be taken from the already existing tutorials and examples; in addition, there can be links between the module and tutorial sections. The "Tutorials and examples" section would contain all the tutorials already on the website.
- The tutorials/examples are too verbose. I would prefer something sharper. For instance, in "Extending existing synthesis objects" it is written:

  "Once you are familiar with the existing synthesis objects included in plenoptic, you may wish to change some aspect of their function. For example, you may wish to change how the po.synth.MADCompetition initializes the MAD image or alter the objective function of po.synth.Metamer. While you could certainly start from scratch or copy the source code of the object and alter them directly, an easier way to do so is to create a new sub-class: an object that inherits the synthesis object you wish to modify and over-writes some of its existing methods."

  It could be written (this is an example, it is not to be taken literally):

  "Functions of synthesis objects can be changed without altering the source code directly. For that, it is just necessary to create a new sub-class and over-write the existing methods. Below follows an example of changing the po.synth.MADCompetition initialization of the MAD image and the objective function of po.synth.Metamer."

- The "Eigendistortions" section starts with "In this tutorial we will cover:...". I think this is a great way of starting, and all the tutorials should start like that.
Observations on the contents

- Installation: One should not mix pip and conda.
- Installation: There is a problem when using plenoptic with jupyter notebook, see issue: https://github.com/LabForComputationalVision/plenoptic/issues/242
- Model requirements: Use a numbered list instead of a bulleted one, and link to an example. The citation method is not consistent with the one used in the conceptual introduction.
- Quick start: it is not really quick. It could have finished after cell 9 of the code.
- Code: many modules are initialized by importing all the functions from the submodules; for instance, in `plenoptic/tools/__init__.py` it is written:

```python
from .data import *
from .conv import *
from .signal import *
from .stats import *
from .display import *
from .straightness import *
from .optim import *
from .external import *
from .validate import remove_grad
from . import validate
```

You can use lazy_loader (https://pypi.org/project/lazy_loader/) to automate the process (see the sketch below).
- If "all synthesis methods will raise a ValueError if given a model with any learnable parameters", then the helper function that you provide should be run automatically when a model is created.
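For example, here is a sketch of what `plenoptic/tools/__init__.py` could look like with the lazy_loader mentioned above (the submodule list just mirrors the imports shown; I have not tested this against plenoptic itself, so treat it as illustrative):

```python
import lazy_loader as lazy

# attach() returns module-level __getattr__, __dir__, and __all__, so the
# submodules and the selected attribute are only imported on first access
__getattr__, __dir__, __all__ = lazy.attach(
    __name__,
    submodules=[
        "data", "conv", "signal", "stats",
        "display", "straightness", "optim", "external", "validate",
    ],
    submod_attrs={"validate": ["remove_grad"]},
)
```

Note that, unlike the star imports, this only re-exports what is listed explicitly.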
For now, that's it! I will look in more detail at the tutorials in the second round, if there is one.
Cheers!
Thanks Daniela! Like I said, it'll be a bit before I can dedicate time to most of these, but I wanted to try and solve the jupyter issue. However, I can't reproduce the issue and I'm not sure how to fix this -- @NickleDave or @sneakers-the-rat, do you have any ideas?
Thank you so much to our reviewers for the amount of time and care they've given to this package! There are a lot of very helpful suggestions here, and it seems relatively clear to me which of these are review blockers and which are just tips, so I think we have a clear path forward once @billbrod has some time to address things. Please take your time responding to the issues, and also feel free to pop back in here if anything is unclear.
I took a look at the jupyter issue and think there are a few possible routes there - specify -c conda-forge, reinstalling requests/charset-normalizer, or specifying jupyterlab seem like they would all fit the bill.
I'll check back in a few weeks to see how progress is going, thanks again to the reviewers for all their work <3
Hi @sneakers-the-rat and @DanielaPamplona, I'm just commenting here for @billbrod that he is on paternity leave, as of last night when his wife went into labor!
So I think we will need to put this review "pending maintainer response" while he's out.
@sneakers-the-rat I will abuse my power as editor and add the label, just to save you the work :grin:
edit: I put "on hold" but that's the wrong label. Maybe we shouldn't add a label here--I'm just trying to give context for somebody who wants the overview of where reviews are at. Happy to let the actual editor do whatever they think is appropriate
edit edit: for context, I needed to message Billy about other stuff related to US-RSE, that's how I found out and told him I would comment here since clearly he's busy with other things
Thank you for the update :). Congratulations @billbrod and please take the time you need and feel no hurry. We're here to help your work, and that certainly doesn't include rushing you away from your newborn. Best of luck and let us know when you're ready!
Congratulations! Have fun with the baby!
Hi all, just an update that I'm back from parental leave and will start working on this again, though my overall capacity is still reduced.
Ahahah! Been there, no worries!
Get some sleep!
Daniela Pamplona
Fabulous!!! Hope you and the family are well. Just letting you know I'm on strike and will not be working until the strike is resolved. Whether someone else picking up the review would be crossing a picket line is hard to say, because this is volunteer work not for my employer, but I won't be proceeding with editorial duties until the University of California resolves its unfair labor practices with UAW 4811.