
distionary: Create and Evaluate Probability Distributions

Open vincenzocoia opened this issue 9 months ago • 39 comments

Submitting Author Name: Vincenzo Coia
Submitting Author Github Handle: @vincenzocoia
Repository: https://github.com/probaverse/distionary
Version submitted: 0.1.0
Submission type: Stats
Badge grade: silver
Editor: @helske
Reviewers: @dutangc, @katrinabrock

Due date for @dutangc: 2025-06-29
Due date for @katrinabrock: 2025-08-01
Archive: TBD
Version accepted: TBD
Language: en

  • Paste the full DESCRIPTION file inside a code block below:
Package: distionary
Title: Create and Evaluate Probability Distributions
Version: 0.1.0
Authors@R: c(
    person("Vincenzo", "Coia", , "[email protected]", 
           role = c("aut", "cre")),
    person("Amogh", "Joshi", role = "ctb"),
    person("Shuyi", "Tan", role = "ctb"),
    person("Zhipeng", "Zhu", role = "ctb")
    )
Description: Create probability distribution objects from a list of
    common distribution families, or make your own distribution.
    Evaluate distribution properties, even if it is not specified
    in the distribution's definition.
License: MIT + file LICENSE
Suggests: 
    covr,
    knitr,
    rmarkdown,
    testthat (>= 3.0.0),
    tibble
Config/testthat/edition: 3
Encoding: UTF-8
LazyData: true
Roxygen: list(markdown = TRUE)
RoxygenNote: 7.3.2
Imports: 
    ellipsis,
    rlang,
    stats,
    vctrs
VignetteBuilder: knitr
URL: https://distionary.probaverse.com/, https://github.com/probaverse/distionary
BugReports: https://github.com/probaverse/distionary/issues

Scope

  • Please indicate which of our statistical package categories this package falls under. (Please check one or more appropriate boxes below):

    Statistical Packages

    • [ ] Bayesian and Monte Carlo Routines
    • [ ] Dimensionality Reduction, Clustering, and Unsupervised Learning
    • [ ] Machine Learning
    • [ ] Regression and Supervised Learning
    • [ ] Exploratory Data Analysis (EDA) and Summary Statistics
    • [ ] Spatial Analyses
    • [ ] Time Series Analyses
    • [x] Probability Distributions

Pre-submission Inquiry

  • [ ] A pre-submission inquiry has been approved in issue#<issue_num>

General Information

  • Who is the target audience and what are scientific applications of this package?

Lots of people work with probability distributions. Lots of people don't work with probability distributions but should, whether because they don't see the value or because distributions are too clumsy to work with under existing infrastructure. And there are lots of people learning about probability distributions who would have an easier time if they could "feel" distributions and their multifaceted nature. distionary is for all of these people.

More accurately, the probaverse is for all of these people, and distionary powers the probaverse. The probaverse is a suite of packages providing an intuitive API for easily making distributions that are representative of your data, noting that typical treatments of distributions are usually not enough for advanced probabilistic modelling.

  • Paste your responses to our General Standard G1.1 here, describing whether your software is:

    • The first implementation of a novel algorithm; or
    • The first implementation within R of an algorithm which has previously been implemented in other languages or contexts; or
    • An improvement on other implementations of similar algorithms in R.

    Please include hyperlinked references to all other relevant software.

I'd say distionary is "an improvement on other implementations of similar algorithms in R". Lots of software exists for building and evaluating probability distributions, but I've always found the existing packages to lack features I need for my work, and packages with different strengths don't talk to each other to form a cohesive API. Over the past 5 years, I developed distionary and the wider probaverse on an as-needed basis so I could more easily make and evaluate distributions for my work. I secured funding from the R Consortium so that I can make distionary complete and publish it (so that the rest of the probaverse can be launched).

Here are some example packages that already exist that handle distributions. I'll only list three, but there are lots.

  • distributional is a similar package that began development around the same time as the probaverse (back then it was just distplyr, of which distionary was a part). We even converged on similar syntax unknowingly. distributional has since expanded from its more limited earlier days, but its scope is still limited compared to the goals of the probaverse (simple examples: you cannot make your own distribution families, and not enough operations are available, at least for my applications). Also, distributional vectorises distributions, which conflicts with the probaverse philosophy of vectorising evaluation arguments (like CDF arguments) first and letting the user be more deliberate if they also want to work with a vector of distributions (see the short sketch after this list).
  • distributions3 is similar to distributional but looks to be rather limited. Similar comments apply.
  • fitdistrplus focusses on estimating distribution parameters and is lighter on the distribution specification and evaluation scope. It doesn't talk to other packages like distributional or distributions3 to leverage their strengths.
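
To make the vectorised-evaluation point above concrete, here is a minimal sketch of the intended distionary usage. It assumes dst_norm(0, 1) constructs a standard normal and that the eval_*() functions accept a vector of evaluation points via an `at` argument; treat the exact argument names as assumptions rather than documented API.

```r
# Minimal sketch (argument names assumed): one distribution object,
# vectorised evaluation arguments rather than a vector of distributions.
library(distionary)

d <- dst_norm(0, 1)                      # a single distribution object
eval_cdf(d, at = c(-1.96, 0, 1.96))      # CDF evaluated at a vector of points
eval_quantile(d, at = c(0.1, 0.5, 0.9))  # same idea for quantiles
```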

This is only tangentially relevant but these practices have been considered in the development of distionary.

Badging

I think silver for now. I have lots of plans for building more functionality into this package: improving algorithms, improving how properties are calculated when not specified, specifying family objects in addition to distributions, distributions beyond univariate, reparameterisation, etc., and I already have a design laid out for most of these features. Right now, my objective is to get a basic version of distionary reviewed and on CRAN, as a basis for getting the rest of the probaverse up. For instance, I'm OK with placing more onus on the user to make sure inputs are correct at this stage. Eventually I do plan to seek a gold badge, because I want to ensure quality as the probaverse develops.

I'd say the one component extending beyond the minimal requirements is distionary's generality to multiple use cases. I use it for statistical hydrology and climate modelling; I have also used it when putting together a business proposal to simulate projected profit. Anyone doing risk analysis, such as in the actuarial sciences, would find the probaverse useful, too. Financial companies would find it useful for simulating and evaluating uncertainty. Epidemiologists can use it to model the spread of disease. I foresee lots of exciting applications.

Technical checks

Confirm each of the following by checking the box.

  • [x] I have read the rOpenSci packaging guide.

  • [x] I have read the author guide and I expect to maintain this package for at least 2 years or have another maintainer identified.

  • [x] I/we have read the Statistical Software Peer Review Guide for Authors.

  • 👀 I/we have run autotest checks on the package, and ensured no tests fail.

    • I suspect these checks pass, but I also think there's a bug, possibly in {autotest} itself. I get the error Error in tools::Rd_db(gsub("^package:", "", i)) : installed help of package ‘distionary’ is corrupt, yet the help files all seem fine. Maybe you won't get the error.
  • 👀 The srr_stats_pre_submit() function confirms this package may be submitted.

    • All standards are present and have been appropriately handled, but I think there's a bug here too, possibly in {srr}. After running srr_stats_pre_submit() I get the message ! This package has no 'srr' standards, despite following the directions. Maybe you won't get the error. (The check commands are sketched after this list.)
  • [x] The pkgcheck() function confirms this package may be submitted - alternatively, please explain reasons for any checks which your package is unable to pass.
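
For reference, the checks above were run locally with commands along these lines (a sketch; default arguments shown, and the exact calls may differ from what was actually used):

```r
# Sketch of the local pre-submission checks referred to above,
# run from the package root. Exact arguments may differ.
autotest::autotest_package(".")   # rOpenSci {autotest} checks
srr::srr_stats_pre_submit(".")    # {srr} standards pre-submission check
pkgcheck::pkgcheck(".")           # full rOpenSci package check
```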

This package:

Publication options

  • [x] Do you intend for this package to go on CRAN?
  • [ ] Do you intend for this package to go on Bioconductor?

Code of conduct

  • [x] I agree to abide by rOpenSci's Code of Conduct during the review process and in maintaining my package should it be accepted.

vincenzocoia, Feb 19 '25

Thanks for submitting to rOpenSci. Our editors and @ropensci-review-bot will reply soon. Type @ropensci-review-bot help for help.

ropensci-review-bot, Feb 19 '25

:rocket:

The following problem was found in your submission template:

  • 'statsgrade' variable must be one of [bronze, silver, gold]

Editors: Please ensure these problems with the submission template are rectified. Package checks have been started regardless.

:wave:

ropensci-review-bot, Feb 19 '25

Checks for distionary (v0.1.0)

git hash: 148b577e

  • :heavy_check_mark: Package name is available
  • :heavy_check_mark: has a 'codemeta.json' file.
  • :heavy_check_mark: has a 'contributing' file.
  • :heavy_check_mark: uses 'roxygen2'.
  • :heavy_check_mark: 'DESCRIPTION' has a URL field.
  • :heavy_check_mark: 'DESCRIPTION' has a BugReports field.
  • :heavy_check_mark: Package has at least one HTML vignette
  • :heavy_check_mark: All functions have examples.
  • :heavy_check_mark: Package has continuous integration checks.
  • :heavy_check_mark: Package coverage is 81.2%.
  • :heavy_check_mark: R CMD check found no errors.
  • :heavy_check_mark: R CMD check found no warnings.
  • :eyes: Function names are duplicated in other packages

(Checks marked with :eyes: may be optionally addressed.)

Package License: MIT + file LICENSE


1. rOpenSci Statistical Standards (srr package)

:heavy_check_mark: All applicable standards [v0.2.0] have been documented in this package (26 complied with; 56 N/A standards)


2. Package Dependencies

Details of Package Dependency Usage (click to open)

The table below tallies all function calls to all packages ('ncalls'), both internal (r-base + recommended, along with the package itself), and external (imported and suggested packages). 'NA' values indicate packages to which no identified calls to R functions could be found. Note that these results are generated by an automated code-tagging system which may not be entirely accurate.

| type | package | ncalls |
|---|---|---|
| internal | distionary | 283 |
| internal | base | 265 |
| internal | grDevices | 1 |
| internal | methods | 1 |
| imports | stats | 49 |
| imports | rlang | 7 |
| imports | vctrs | 2 |
| imports | ellipsis | NA |
| suggests | covr | NA |
| suggests | knitr | NA |
| suggests | rmarkdown | NA |
| suggests | testthat | NA |
| suggests | tibble | NA |
| linking_to | NA | NA |

Click below for tallies of functions used in each package. Locations of each call within this package may be generated locally by running 's <- pkgstats::pkgstats(<path/to/repo>)', and examining the 'external_calls' table.

distionary

distribution (88), parameters (61), eval_cdf (9), eval_pmf (9), eval_survival (6), eval_quantile (4), variance (4), eval_density (3), gev_range (3), gev_t_function (3), stdev (3), bracket_parameters (2), dst_binom (2), dst_pearson3 (2), encapsulate_p (2), kurtosis (2), kurtosis_exc (2), prob_left (2), sf2 (2), vtype (2), cdf_lower (1), convert_dataframe_to_tibble (1), dgev (1), dgpd (1), directional_inverse (1), dst_bern (1), dst_beta (1), dst_cauchy (1), dst_chisq (1), dst_degenerate (1), dst_exp (1), dst_f (1), dst_gamma (1), dst_geom (1), dst_gev (1), dst_gpd (1), dst_hyper (1), dst_lnorm (1), dst_lp3 (1), dst_nbinom (1), dst_norm (1), dst_null (1), dst_pois (1), dst_t (1), dst_unif (1), dst_weibull (1), enframe_cdf (1), enframe_chf (1), enframe_density (1), enframe_general (1), enframe_hazard (1), enframe_odds (1), enframe_pmf (1), enframe_quantile (1), enframe_return (1), enframe_survival (1), eval_chf (1), eval_chf_from_network (1), eval_density_from_network (1), eval_hazard (1), eval_hazard_from_network (1), eval_kurtosis_exc_from_network (1), eval_kurtosis_from_network (1), eval_mean_from_network (1), eval_median_from_network (1), eval_odds (1), eval_odds_from_network (1), eval_pmf_from_network (1), eval_property (1), eval_quantile_from_network (1), eval_range_from_network (1), eval_realise_from_network (1), eval_return (1), eval_return_from_network (1), eval_skewness_from_network (1), eval_stdev_from_network (1), eval_survival_from_network (1), eval_variance_from_network (1), is_distribution (1), is.distribution (1), mean.dst (1), median.dst (1), new_distribution (1), pgev (1), pgpd (1), plot.dst (1), pretty_name (1), print.dst (1), prob_right (1), qgev (1), representation_as_function (1), skewness (1)

base

scale (62), list (60), max (15), mean (14), min (10), range (10), exp (8), t (7), which (7), length (6), q (6), try (5), abs (4), by (4), gamma (4), seq (4), attributes (3), log (3), vapply (3), c (2), for (2), inherits (2), is.na (2), lapply (2), rep (2), seq_len (2), all (1), append (1), as.data.frame (1), as.numeric (1), character (1), class (1), exists (1), match.arg (1), order (1), paste0 (1), setdiff (1), signif (1), sqrt (1), suppressWarnings (1), tolower (1), unlist (1)

stats

df (14), sd (9), integrate (6), sigma (5), qf (3), dgamma (1), pbeta (1), pchisq (1), pexp (1), pf (1), pgeom (1), ppois (1), pt (1), punif (1), quantile (1), runif (1), var (1)

rlang

eval_tidy (2), enexprs (1), enquos (1), list2 (1), names2 (1), quos (1)

vctrs

vec_as_names (2)

grDevices

pdf (1)

methods

representation (1)

NOTE: Some imported packages appear to have no associated function calls; please ensure with author that these 'Imports' are listed appropriately.


3. Statistical Properties

This package features some noteworthy statistical properties which may need to be clarified by a handling editor prior to progressing.

Details of statistical properties (click to open)

The package has:

  • code in R (100% in 71 files) and
  • 1 authors
  • 2 vignettes
  • no internal data file
  • 4 imported packages
  • 70 exported functions (median 8 lines of code)
  • 145 non-exported functions in R (median 10 lines of code)

Statistical properties of package structure as distributional percentiles in relation to all current CRAN packages. The following terminology is used:

  • loc = "Lines of Code"
  • fn = "function"
  • exp/not_exp = exported / not exported

All parameters are explained as tooltips in the locally-rendered HTML version of this report generated by the checks_to_markdown() function

The final measure (fn_call_network_size) is the total number of calls between functions (in R), or more abstract relationships between code objects in other languages. Values are flagged as "noteworthy" when they lie in the upper or lower 5th percentile.

| measure | value | percentile | noteworthy |
|---|---|---|---|
| files_R | 71 | 97.7 | |
| files_vignettes | 2 | 81.7 | |
| files_tests | 10 | 87.2 | |
| loc_R | 1999 | 81.8 | |
| loc_vignettes | 188 | 45.0 | |
| loc_tests | 697 | 77.1 | |
| num_vignettes | 2 | 85.4 | |
| n_fns_r | 215 | 89.4 | |
| n_fns_r_exported | 70 | 91.7 | |
| n_fns_r_not_exported | 145 | 88.2 | |
| n_fns_per_file_r | 2 | 28.6 | |
| num_params_per_fn | 2 | 8.2 | |
| loc_per_fn_r | 9 | 24.7 | |
| loc_per_fn_r_exp | 8 | 16.8 | |
| loc_per_fn_r_not_exp | 10 | 31.7 | |
| rel_whitespace_R | 2 | 29.5 | |
| rel_whitespace_vignettes | 31 | 41.3 | |
| rel_whitespace_tests | 3 | 38.1 | |
| doclines_per_fn_exp | 26 | 26.9 | |
| doclines_per_fn_not_exp | 0 | 0.0 | TRUE |
| fn_call_network_size | 172 | 86.0 | |

3a. Network visualisation

Click to see the interactive network visualisation of calls between objects in package


4. goodpractice and other checks

Details of goodpractice checks (click to open)

3a. Continuous Integration Badges

R-CMD-check.yaml

GitHub Workflow Results

| id | name | conclusion | sha | run_number | date |
|---|---|---|---|---|---|
| 13404032609 | pages build and deployment | success | 52720d | 13 | 2025-02-19 |
| 13404010117 | pkgcheck | success | 148b57 | 8 | 2025-02-19 |
| 13404010124 | pkgdown.yaml | success | 148b57 | 55 | 2025-02-19 |
| 13404010111 | R-CMD-check.yaml | success | 148b57 | 67 | 2025-02-19 |
| 13404010105 | test-coverage.yaml | success | 148b57 | 67 | 2025-02-19 |

3b. goodpractice results

R CMD check with rcmdcheck

rcmdcheck found no errors, warnings, or notes

Test coverage with covr

Package coverage: 81.19

Cyclocomplexity with cyclocomp

The following functions have cyclocomplexity >= 15:

| function | cyclocomplexity |
|---|---|
| eval_kurtosis_from_network | 29 |
| plot.dst | 27 |
| eval_skewness_from_network | 23 |
| encapsulate_p | 20 |
| dst_gev | 19 |
| dst_hyper | 19 |
| dst_pearson3 | 16 |
| directional_inverse | 15 |
| dst_lp3 | 15 |

Static code analyses with lintr

lintr found the following 46 potential issues:

message number of times
Avoid library() and require() calls in packages 2
Lines should not be more than 80 characters. This line is 100 characters. 1
Lines should not be more than 80 characters. This line is 101 characters. 2
Lines should not be more than 80 characters. This line is 102 characters. 1
Lines should not be more than 80 characters. This line is 111 characters. 1
Lines should not be more than 80 characters. This line is 119 characters. 1
Lines should not be more than 80 characters. This line is 120 characters. 2
Lines should not be more than 80 characters. This line is 121 characters. 1
Lines should not be more than 80 characters. This line is 126 characters. 1
Lines should not be more than 80 characters. This line is 129 characters. 1
Lines should not be more than 80 characters. This line is 130 characters. 1
Lines should not be more than 80 characters. This line is 131 characters. 1
Lines should not be more than 80 characters. This line is 133 characters. 1
Lines should not be more than 80 characters. This line is 135 characters. 1
Lines should not be more than 80 characters. This line is 137 characters. 1
Lines should not be more than 80 characters. This line is 139 characters. 2
Lines should not be more than 80 characters. This line is 141 characters. 1
Lines should not be more than 80 characters. This line is 142 characters. 1
Lines should not be more than 80 characters. This line is 145 characters. 1
Lines should not be more than 80 characters. This line is 152 characters. 1
Lines should not be more than 80 characters. This line is 161 characters. 1
Lines should not be more than 80 characters. This line is 164 characters. 1
Lines should not be more than 80 characters. This line is 173 characters. 1
Lines should not be more than 80 characters. This line is 176 characters. 1
Lines should not be more than 80 characters. This line is 181 characters. 1
Lines should not be more than 80 characters. This line is 190 characters. 1
Lines should not be more than 80 characters. This line is 211 characters. 1
Lines should not be more than 80 characters. This line is 215 characters. 1
Lines should not be more than 80 characters. This line is 225 characters. 1
Lines should not be more than 80 characters. This line is 81 characters. 3
Lines should not be more than 80 characters. This line is 84 characters. 1
Lines should not be more than 80 characters. This line is 86 characters. 1
Lines should not be more than 80 characters. This line is 87 characters. 3
Lines should not be more than 80 characters. This line is 88 characters. 1
Lines should not be more than 80 characters. This line is 96 characters. 1
Lines should not be more than 80 characters. This line is 98 characters. 1
Lines should not be more than 80 characters. This line is 99 characters. 2

5. Other Checks

Details of other checks (click to open)

:heavy_multiplication_x: The following 14 function names are duplicated in other packages:

    • dgev from CoSMoS, evd, extraDistr, FAdist, fExtremes, GEVcdn, LMoFit, revdbayes, ROOPSD, SpatialExtremes, texmex, TLMoments, VaRES, VGAM
    • dgpd from eva, evd, evir, evmix, extraDistr, fExtremes, hurdlr, LaplacesDemon, OpVaR, POT, ReIns, ROOPSD, smoothtail, SpatialExtremes, tea, texmex, TLMoments, VGAM
    • distribution from arakno, bayestestR, BLCOP, distr, EvalEst, experDesign, greta, lava, mistr, modeltime, rfishbase, spidR, TSDT
    • is_distribution from distributional, distributions3, dynparam
    • kurtosis from agricolae, AMR, analyzer, confintr, cumstats, datawizard, Davies, descstat, distributional, distributions3, DistributionUtils, dlookr, EnvStats, exploratory, extras, fastmatrix, GARCHSK, GLDEX, kim, LambertW, lessR, misty, moments, normalp, npde, PerformanceAnalytics, phenofit, propagate, rapportools, rockchalk, saemix, semTools, Sim.DiffProc, timeDate, TSA, utilities
    • parameters from bamlss, CVXR, dials, distributional, elliptic, EmiR, fixedincome, gnm, ipmr, JointAI, kgrams, mcmcr, Mediana, mistr, mixAR, modeltools, multiverse, mwcsr, nonmemica, oppr, parameters, pop, prioritizr, psychonetrics, quickpsy, rmutil, semtree, sglOptim, SourceSet, SpaDES.core, spatstat.core, stabs, term, tidybayes, TLMoments, umx, urltools
    • pgev from CoSMoS, eva, evd, evir, extraDistr, FAdist, fExtremes, GEVcdn, LMoFit, revdbayes, ROOPSD, SpatialExtremes, texmex, TLMoments, VaRES, VGAM
    • pgpd from eva, evd, evir, evmix, extraDistr, fExtremes, hurdlr, OpVaR, POT, ReIns, ROOPSD, smoothtail, SpatialExtremes, tea, texmex, TLMoments, VGAM
    • qgev from CoSMoS, eva, evd, evir, extraDistr, FAdist, fExtremes, GEVcdn, LMoFit, revdbayes, ROOPSD, SpatialExtremes, texmex, TLMoments, VGAM
    • qgpd from eva, evd, evir, evmix, extraDistr, fExtremes, hurdlr, OpVaR, POT, ReIns, ROOPSD, smoothtail, SpatialExtremes, tea, texmex, TLMoments, VGAM
    • skewness from agricolae, AMR, analyzer, confintr, cumstats, datawizard, Davies, descstat, distributional, distributions3, DistributionUtils, dlookr, EnvStats, exploratory, extras, fastmatrix, GARCHSK, GLDEX, kim, LambertW, liver, misty, mltools, modeest, moments, npde, optimStrat, PerformanceAnalytics, phenofit, propagate, qacBase, quantileDA, rapportools, rockchalk, s20x, saemix, Sim.DiffProc, timeDate, transx, TSA, utilities
    • stdev from descstat, mlogit, splus2R
    • variance from bspec, DAKS, Davies, descstat, distributional, distributions3, extras, fitscape, HyRiM, laeken, lava, lestat, molic, randomUniformForest, ROCket, soc.ca
    • vtype from vtype

Package Versions

| package | version |
|---|---|
| pkgstats | 0.2.0.50 |
| pkgcheck | 0.1.2.119 |
| srr | 0.1.3.26 |

Editor-in-Chief Instructions:

This package is in top shape and may be passed on to a handling editor

ropensci-review-bot, Feb 19 '25

Thanks for the submission @vincenzocoia. I trust you will have seen the statement in our stats-devguide regarding Probability Distributions software that,

Unlike most other categories of standards, packages which fit in this category will also generally be expected to fit into at least one other category of statistical software. Reflecting that expectation, standards for probability distributions will be expected to only pertain to some (potentially small) portion of code in any package.

I also immediately note that "distionary" truly is a pure probability distributions package. Given that, I'm confident that we would make an exception here, but would nevertheless ask you first to consider whether our standards for Exploratory Data Analysis and Summary Statistics might also apply? "distionary" is also very much about providing summary statistics, and it seems to me that at least half of those standards might also apply. Could you please have a quick look and provide feedback on:

  1. Whether you think at least 50% of those standards might be applicable? and
  2. Whether complying with those standards might improve your package?

Thanks!


Separately, I can't reproduce your {srr} issue, and the bot checks above seem to find all standards okay. I can, however, reproduce your {autotest} issue - can you please file an issue over there, and we'll investigate further. Thanks

mpadge, Feb 19 '25

Hi @mpadge, I did see that statement in the stats-devguide, but I didn't think it was relevant for distionary. Thanks for suggesting I look at the Exploratory Data Analysis and Summary Statistics section.

  1. Whether you think at least 50% of those standards might be applicable? and

No, I estimate that 35% of these standards are applicable.

  1. Whether complying with those standards might improve your package?

The standards EA4.2 and EA6.0 would improve the package. I don't think the others would.

vincenzocoia, Feb 19 '25

Thanks @vincenzocoia, 35% means those standards shouldn't be considered applicable. Feel free regardless to incorporate ideas from EA4.2 and EA6.0, but please don't put any {srr} tags on those. We'll then be happy to consider it a probability distributions package alone. But then we do need to address the requirement that stats packages comply with a minimum of 50% of all standards. {distionary} currently complies with only 26 / 82, mainly through complying with only 15 out of 68 general standards. Many of your srrstatsNA statements could actually be compliance statements, notably including:

  • G1.0 - we require that this be complied with regardless, and in your case simply explaining the absence of a primary reference would suffice, maybe with an additional note that (as far as I can tell) all other CRAN packages for distributional representations also lack primary citations.
  • G1.3 - again, just converting to srrstats with the current statement is sufficient to comply

Many other of your srrstatsNA tags could and arguably should be complied with, for example:

  • G2.1, G2.2
  • G2.3, which is directly applicable, for example in fn_prefix or sep parameters
  • G2.4

... and many others. In more general terms:

  • Simply implementing more extensive assertions on input data would enable many more standards to be complied with. This blog post has helpful advice. I personally use the {checkmate} package, with which each input can be checked, matched, or asserted with a single additional line of code (see the sketch after this list).
  • Missing data, NA values, and control of error responses should be included.
  • Parameter recovery tests are indeed valid, as many distributional specifiers effectively convert inputs into parameters of a distribution, and distributional evaluators are a form of "parameter recovery". This test in test-eval_quantile_from_network.R is effectively a parameter recovery test. Complying with these standards would then likely just require extending that kind of test to the full range of eval_ functions (a recovery-style sketch appears at the end of this comment).
  • Noise susceptibility tests could also readily be implemented, and those standards complied with.
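
To illustrate the {checkmate} suggestion above, here is a hypothetical sketch of one-line input assertions; the function name and its arguments are made up for illustration and are not taken from {distionary}:

```r
# Hypothetical example of single-line input assertions with {checkmate};
# the function name and arguments are illustrative only.
eval_cdf_example <- function(distribution, at) {
  checkmate::assert_class(distribution, "dst")        # must be a distribution
  checkmate::assert_numeric(at, any.missing = FALSE)  # reject NA / non-numeric
  # ... evaluate the cdf at `at` ...
}
```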

I appreciate that this may require quite a bit of work on your part, but the minimum 50% compliance is a hard requirement from our side, so we ask that you at least go through your current list of N/A standards and try to comply with at least 16 more of those, to ensure > 50% in total. Please let us know whether you think that will be possible.
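
And to illustrate the parameter-recovery idea mentioned above, extending the existing test style to more of the eval_ functions could look roughly like the following sketch (the distionary API details, such as the `at` argument, are assumed rather than checked):

```r
# Sketch of a recovery-style test: the quantile function should invert the
# CDF, so the input probabilities are "recovered". API details are assumed.
test_that("eval_quantile() recovers probabilities passed through eval_cdf()", {
  d <- dst_norm(0, 1)
  p <- c(0.05, 0.25, 0.5, 0.75, 0.95)
  x <- eval_quantile(d, at = p)
  expect_equal(eval_cdf(d, at = x), p, tolerance = 1e-8)
})
```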

mpadge, Feb 20 '25

@vincenzocoia thanks for sharing your work with rOpenSci, and thanks @mpadge for discussing scope.

I'll share what I quickly see, so as to support the discussion with the editorial board.

@vincenzocoia if any of this helps as feedback, it's a good time to make changes.

Preliminary checks:

  • [x] Documentation: The package has sufficient documentation available online (README, pkgdown docs) to allow for an assessment of functionality and scope without installing the package. In particular,
    • [x] Is the case for the package well made?
    • [x] Is the reference index page clear (grouped by topic if necessary)?

      It's clear but not grouped. The names sort well but grouping by topic would be nice.

    • [x] Are vignettes readable, sufficiently detailed and not just perfunctory?
  • [x] Fit: The package meets criteria for fit and overlap.

    Comments by @mpadge suggest it's a good fit but the specific category may need to be defined.

  • [x] Installation instructions: Are installation instructions clear enough for human users?

    ! NEWS.md suggests it's on CRAN but isn't.

  • [x] Tests: If the package has some interactivity / HTTP / plot production etc. are the tests using state-of-the-art tooling?

    The R/f.R files aren't aligned one-to-one with the tests/testthat/test-f.R files, but each test-*.R file groups similar functions meaningfully. Maybe the same groups could also help structure the Reference section of the website?

  • [x] Contributing information: Is the documentation for contribution clear enough e.g. tokens for tests, playgrounds?
  • [x] License: The package has a CRAN or OSI accepted license.

    Note the license says 2021. If you submit to CRAN, they may require the year to be updated to the year of submission.

  • [ ] Project management: Are the issue and PR trackers in a good shape, e.g. are there outstanding bugs, is it clear when feature requests are meant to be tackled?

    I see a few issues labeled [bug] that have remained open for 1+ years.

maurolepore, Feb 20 '25

@mpadge

try to comply with at least 16 more of those, to ensure > 50% in total. Please let us know whether you think that will be possible.

This is possible and I'm currently working on it.

@maurolepore

Is the reference index page clear (grouped by topic if necessary)?

It's clear but not grouped. The names sort well but grouping by topic would be nice.

This is done.

! NEWS.md suggests it's on CRAN but isn't.

Deleted "initial CRAN submission" until the time is right.

Note the license says 2021. If you submit to CRAN they may requrie the year to be updated to the year of submission.

Thanks, this is updated.

I see a few issues labeled [bug] that have remained open for 1+ years.

These have all been fixed in v0.1.0. Those issues are now closed, and only enhancements remain for future versions.

vincenzocoia, Feb 24 '25

@mpadge I have updated the distionary package to comply with >50% of the SRR standards. The test scripts should also be easier to follow.

I can, however, reproduce your {autotest} issue - can you please file an issue over there, and we'll investigate further. Thanks

I finally got around to doing this as well, here it is: https://github.com/ropensci-review-tools/autotest/issues/90

I see a few issues labeled [bug] that have remained open for 1+ years.

@maurolepore These have been addressed; the remaining Issues are enhancements.

Let me know if anything else needs clarifying at this point.

vincenzocoia, Apr 07 '25

That's great @vincenzocoia, thanks for putting in the work there. You can call @ropensci-review-bot check srr to confirm compliance here, and then it'll be over to @maurolepore to guide things through to the next stage.

mpadge, Apr 07 '25

@ropensci-review-bot check srr

vincenzocoia, Apr 08 '25

'srr' standards compliance:

  • Complied with: 55 / 82 = 67.1% (general: 44 / 68; distributions: 11 / 14)
  • Not complied with: 27 / 82 = 32.9% (general: 24 / 68; distributions: 3 / 14)

:heavy_check_mark: This package complies with > 50% of all standards.

:exclamation: Standards should be documented in most package files, yet are mostly only documented in one file.

ropensci-review-bot, Apr 08 '25

@vincenzocoia Great work increasing the compliance there, thank you for all the effort!

You'll note the final ❗ from the bot above, which should be addressed as a final step before proceeding. Looking at the output of srr_report(), I see the following still documented in srr-stats-standards.R which should be moved to more appropriate places:

  • Standards on input assertions and the like should be moved to the locations where those assertions are implemented - and that will mean repeating the same standard several times, which is okay (see the illustrative sketch after this list).
  • All standards related to testing should be somewhere in tests
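
For example (assuming the usual {srr} roxygen-style tags; the function, file names, and the G5.x standard number are illustrative), an input-assertion standard would sit next to the code that implements it, and a testing standard would sit in the relevant test file:

```r
# In R/eval_cdf.R (illustrative): document the standard where it is implemented.
#' @srrstats {G2.1} Input types are asserted with {checkmate} below.
eval_cdf <- function(distribution, at) {
  checkmate::assert_numeric(at)
  # ...
}

# In tests/testthat/test-eval_cdf.R (illustrative): testing standards live
# alongside the tests that satisfy them.
#' @srrstats {G5.4} Correctness is checked against stats::pnorm().
test_that("eval_cdf() matches stats::pnorm() for a standard normal", {
  d <- dst_norm(0, 1)
  expect_equal(eval_cdf(d, at = 0), stats::pnorm(0))
})
```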

There are likely others too, but hopefully you'll get the idea from those. Thanks

mpadge, Apr 08 '25

@ropensci-review-bot check srr

vincenzocoia, Apr 09 '25

'srr' standards compliance:

  • Complied with: 56 / 82 = 68.3% (general: 45 / 68; distributions: 11 / 14)
  • Not complied with: 26 / 82 = 31.7% (general: 23 / 68; distributions: 3 / 14)

:heavy_check_mark: This package complies with > 50% of all standards. :exclamation:

ropensci-review-bot, Apr 09 '25

@vincenzocoia and @mpadge please ping me when you think you're ready for me to announce the package to our editors. I'm watching, but I'd appreciate a reminder to avoid delays.

maurolepore, Apr 09 '25

Hi @maurolepore, the package is ready to be announced to the editors 🎉

vincenzocoia, Apr 11 '25

Thanks @vincenzocoia, it's now announced and we're waiting for a handling editor to become available. I scheduled a reminder but if you don't hear back from us in a week feel free to ping me.

maurolepore, Apr 14 '25

Hi @maurolepore, I'm wondering if you've been able to find a handling editor for distionary.

vincenzocoia, Apr 21 '25

@vincenzocoia thanks for following up. I'm expecting one editor to confirm their availability tomorrow. I'm sorry it's taking so long :-(

maurolepore, Apr 22 '25

@ropensci-review-bot assign @helske as editor

maurolepore, Apr 24 '25

Assigned! @helske is now the editor

ropensci-review-bot, Apr 24 '25

@ropensci-review-bot check package

helske, Apr 29 '25

Thanks, about to send the query.

ropensci-review-bot, Apr 29 '25

:rocket:

Editor check started

:wave:

ropensci-review-bot, Apr 29 '25

Checks for distionary (v0.1.0)

git hash: dd101f09

  • :heavy_check_mark: Package name is available
  • :heavy_check_mark: has a 'codemeta.json' file.
  • :heavy_check_mark: has a 'contributing' file.
  • :heavy_check_mark: uses 'roxygen2'.
  • :heavy_check_mark: 'DESCRIPTION' has a URL field.
  • :heavy_check_mark: 'DESCRIPTION' has a BugReports field.
  • :heavy_check_mark: Package has at least one HTML vignette
  • :heavy_check_mark: All functions have examples.
  • :heavy_check_mark: Package has continuous integration checks.
  • :heavy_check_mark: Package coverage is 86.4%.
  • :heavy_check_mark: R CMD check found no errors.
  • :heavy_check_mark: R CMD check found no warnings.
  • :eyes: Function names are duplicated in other packages

(Checks marked with :eyes: may be optionally addressed.)

Package License: MIT + file LICENSE


1. rOpenSci Statistical Standards (srr package)

:heavy_check_mark: All applicable standards [v0.2.0] have been documented in this package (437 complied with; 26 N/A standards)


2. Package Dependencies

Details of Package Dependency Usage (click to open)

The table below tallies all function calls to all packages ('ncalls'), both internal (r-base + recommended, along with the package itself), and external (imported and suggested packages). 'NA' values indicate packages to which no identified calls to R functions could be found. Note that these results are generated by an automated code-tagging system which may not be entirely accurate.

| type | package | ncalls |
|---|---|---|
| internal | distionary | 264 |
| internal | base | 260 |
| internal | methods | 6 |
| internal | grDevices | 1 |
| imports | checkmate | 46 |
| imports | stats | 41 |
| imports | vctrs | 12 |
| imports | rlang | 7 |
| imports | ellipsis | NA |
| suggests | covr | NA |
| suggests | knitr | NA |
| suggests | rmarkdown | NA |
| suggests | testthat | NA |
| suggests | tibble | NA |
| linking_to | NA | NA |

Click below for tallies of functions used in each package. Locations of each call within this package may be generated locally by running 's <- pkgstats::pkgstats(<path/to/repo>)', and examining the 'external_calls' table.

distionary

distribution (67), parameters (62), eval_cdf (8), integrand (6), distionary_integrate (5), eval_survival (5), gev_t_function (5), representation_as_function (5), eval_pmf (4), gev_lower (3), gev_upper (3), gpd_upper (3), vtype (3), bracket_parameters (2), dst_binom (2), dst_pearson3 (2), encapsulate_p (2), eval_density (2), intrinsics (2), stdev (2), algorithm_kurtosis (1), algorithm_variance (1), convert_dataframe_to_tibble (1), dgev (1), dgpd (1), directional_inverse (1), dst_bern (1), dst_beta (1), dst_cauchy (1), dst_chisq (1), dst_degenerate (1), dst_exp (1), dst_f (1), dst_gamma (1), dst_geom (1), dst_gev (1), dst_gpd (1), dst_hyper (1), dst_lnorm (1), dst_lp3 (1), dst_nbinom (1), dst_norm (1), dst_null (1), dst_pois (1), dst_t (1), dst_unif (1), dst_weibull (1), enframe_cdf (1), enframe_chf (1), enframe_density (1), enframe_general (1), enframe_hazard (1), enframe_odds (1), enframe_pmf (1), enframe_quantile (1), enframe_return (1), enframe_survival (1), eval_chf (1), eval_chf_from_network (1), eval_density_from_network (1), eval_hazard (1), eval_hazard_from_network (1), eval_kurtosis_exc_from_network (1), eval_kurtosis_from_network (1), eval_mean_from_network (1), eval_median_from_network (1), eval_odds (1), eval_odds_from_network (1), eval_pmf_from_network (1), eval_property (1), eval_quantile (1), eval_quantile_from_network (1), eval_range_from_network (1), eval_realise_from_network (1), eval_return (1), eval_return_from_network (1), eval_skewness_from_network (1), eval_stdev_from_network (1), eval_survival_from_network (1), eval_variance_from_network (1), is_distribution (1), is_intrinsic (1), is.distribution (1), kurtosis (1), kurtosis_exc (1), mean.dst (1), median.dst (1), new_distribution (1), pgev (1), pgpd (1), variance (1)

base

scale (82), list (60), mean (13), min (11), max (10), q (8), t (8), exp (6), length (6), range (6), try (5), gamma (4), attributes (3), ifelse (3), names (3), which (3), c (2), for (2), inherits (2), lapply (2), rep (2), seq_len (2), vapply (2), all (1), as.data.frame (1), character (1), class (1), exists (1), is.na (1), log (1), match.arg (1), order (1), paste0 (1), setdiff (1), signif (1), sqrt (1), tolower (1), unlist (1)

checkmate

assert_numeric (23), assert_integerish (9), assert_character (5), assert_list (4), assert_logical (4), check_integerish (1)

stats

df (16), sd (9), sigma (4), dgamma (1), integrate (1), pbeta (1), pchisq (1), pexp (1), pf (1), pgeom (1), ppois (1), pt (1), punif (1), quantile (1), runif (1)

vctrs

vec_recycle_common (10), vec_as_names (2)

rlang

eval_tidy (2), enexprs (1), enquos (1), list2 (1), names2 (1), quos (1)

methods

representation (6)

grDevices

pdf (1)

NOTE: Some imported packages appear to have no associated function calls; please ensure with author that these 'Imports' are listed appropriately.


3. Statistical Properties

This package features some noteworthy statistical properties which may need to be clarified by a handling editor prior to progressing.

Details of statistical properties (click to open)

The package has:

  • code in R (100% in 67 files) and
  • 1 authors
  • 2 vignettes
  • no internal data file
  • 5 imported packages
  • 69 exported functions (median 10 lines of code)
  • 137 non-exported functions in R (median 9 lines of code)

Statistical properties of package structure as distributional percentiles in relation to all current CRAN packages. The following terminology is used:

  • loc = "Lines of Code"
  • fn = "function"
  • exp/not_exp = exported / not exported

All parameters are explained as tooltips in the locally-rendered HTML version of this report generated by the checks_to_markdown() function

The final measure (fn_call_network_size) is the total number of calls between functions (in R), or more abstract relationships between code objects in other languages. Values are flagged as "noteworthy" when they lie in the upper or lower 5th percentile.

| measure | value | percentile | noteworthy |
|---|---|---|---|
| files_R | 67 | 97.5 | |
| files_vignettes | 2 | 81.9 | |
| files_tests | 31 | 97.5 | |
| loc_R | 1648 | 78.0 | |
| loc_vignettes | 192 | 45.9 | |
| loc_tests | 1220 | 86.6 | |
| num_vignettes | 2 | 85.4 | |
| n_fns_r | 206 | 89.0 | |
| n_fns_r_exported | 69 | 91.7 | |
| n_fns_r_not_exported | 137 | 87.5 | |
| n_fns_per_file_r | 2 | 28.7 | |
| num_params_per_fn | 2 | 8.2 | |
| loc_per_fn_r | 9 | 24.6 | |
| loc_per_fn_r_exp | 10 | 22.8 | |
| loc_per_fn_r_not_exp | 9 | 27.3 | |
| rel_whitespace_R | 3 | 30.6 | |
| rel_whitespace_vignettes | 32 | 43.0 | |
| rel_whitespace_tests | 4 | 55.2 | |
| doclines_per_fn_exp | 26 | 25.6 | |
| doclines_per_fn_not_exp | 0 | 0.0 | TRUE |
| fn_call_network_size | 169 | 85.8 | |

3a. Network visualisation

Click to see the interactive network visualisation of calls between objects in package


4. goodpractice and other checks

Details of goodpractice checks (click to open)

3a. Continuous Integration Badges

R-CMD-check.yaml

GitHub Workflow Results

| id | name | conclusion | sha | run_number | date |
|---|---|---|---|---|---|
| 14396490802 | pages build and deployment | success | abe740 | 34 | 2025-04-11 |
| 14396467797 | pkgcheck | success | dd101f | 29 | 2025-04-11 |
| 14396467790 | pkgdown.yaml | success | dd101f | 77 | 2025-04-11 |
| 14396467784 | R-CMD-check.yaml | success | dd101f | 89 | 2025-04-11 |
| 14396467787 | test-coverage.yaml | success | dd101f | 89 | 2025-04-11 |

3b. goodpractice results

R CMD check with rcmdcheck

rcmdcheck found no errors, warnings, or notes

Test coverage with covr

Package coverage: 86.4

Cyclocomplexity with cyclocomp

The following functions have cyclocomplexity >= 15:

| function | cyclocomplexity |
|---|---|
| plot.dst | 25 |
| encapsulate_p | 20 |
| directional_inverse | 15 |

Static code analyses with lintr

lintr found the following 5 potential issues:

message number of times
Avoid library() and require() calls in packages 2
Lines should not be more than 80 characters. This line is 81 characters. 1
Lines should not be more than 80 characters. This line is 88 characters. 2

5. Other Checks

Details of other checks (click to open)

:heavy_multiplication_x: The following 14 function names are duplicated in other packages:

    • dgev from CoSMoS, evd, extraDistr, FAdist, fExtremes, GEVcdn, LMoFit, revdbayes, ROOPSD, SpatialExtremes, texmex, TLMoments, VaRES, VGAM
    • dgpd from eva, evd, evir, evmix, extraDistr, fExtremes, hurdlr, LaplacesDemon, OpVaR, POT, ReIns, ROOPSD, smoothtail, SpatialExtremes, tea, texmex, TLMoments, VGAM
    • distribution from arakno, bayestestR, BLCOP, distr, EvalEst, experDesign, greta, lava, mistr, modeltime, rfishbase, spidR, TSDT
    • is_distribution from distributional, distributions3, dynparam
    • kurtosis from agricolae, AMR, analyzer, confintr, cumstats, datawizard, Davies, descstat, distributional, distributions3, DistributionUtils, dlookr, EnvStats, exploratory, extras, fastmatrix, GARCHSK, GLDEX, kim, LambertW, lessR, misty, moments, normalp, npde, PerformanceAnalytics, phenofit, propagate, rapportools, rockchalk, saemix, semTools, Sim.DiffProc, timeDate, TSA, utilities
    • parameters from bamlss, CVXR, dials, distributional, elliptic, EmiR, fixedincome, gnm, ipmr, JointAI, kgrams, mcmcr, Mediana, mistr, mixAR, modeltools, multiverse, mwcsr, nonmemica, oppr, parameters, pop, prioritizr, psychonetrics, quickpsy, rmutil, semtree, sglOptim, SourceSet, SpaDES.core, spatstat.core, stabs, term, tidybayes, TLMoments, umx, urltools
    • pgev from CoSMoS, eva, evd, evir, extraDistr, FAdist, fExtremes, GEVcdn, LMoFit, revdbayes, ROOPSD, SpatialExtremes, texmex, TLMoments, VaRES, VGAM
    • pgpd from eva, evd, evir, evmix, extraDistr, fExtremes, hurdlr, OpVaR, POT, ReIns, ROOPSD, smoothtail, SpatialExtremes, tea, texmex, TLMoments, VGAM
    • qgev from CoSMoS, eva, evd, evir, extraDistr, FAdist, fExtremes, GEVcdn, LMoFit, revdbayes, ROOPSD, SpatialExtremes, texmex, TLMoments, VGAM
    • qgpd from eva, evd, evir, evmix, extraDistr, fExtremes, hurdlr, OpVaR, POT, ReIns, ROOPSD, smoothtail, SpatialExtremes, tea, texmex, TLMoments, VGAM
    • skewness from agricolae, AMR, analyzer, confintr, cumstats, datawizard, Davies, descstat, distributional, distributions3, DistributionUtils, dlookr, EnvStats, exploratory, extras, fastmatrix, GARCHSK, GLDEX, kim, LambertW, liver, misty, mltools, modeest, moments, npde, optimStrat, PerformanceAnalytics, phenofit, propagate, qacBase, quantileDA, rapportools, rockchalk, s20x, saemix, Sim.DiffProc, timeDate, transx, TSA, utilities
    • stdev from descstat, mlogit, splus2R
    • variance from bspec, DAKS, Davies, descstat, distributional, distributions3, extras, fitscape, HyRiM, laeken, lava, lestat, molic, randomUniformForest, ROCket, soc.ca
    • vtype from vtype

Package Versions

| package | version |
|---|---|
| pkgstats | 0.2.0.54 |
| pkgcheck | 0.1.2.126 |
| srr | 0.1.4.4 |

Editor-in-Chief Instructions:

This package is in top shape and may be passed on to a handling editor

ropensci-review-bot, Apr 29 '25

Editor checks:

  • [x] Documentation: The package has sufficient documentation available online (README, pkgdown docs) to allow for an assessment of functionality and scope without installing the package. In particular,
    • [x] Is the case for the package well made?
    • [x] Is the reference index page clear (grouped by topic if necessary)?
    • [x] Are vignettes readable, sufficiently detailed and not just perfunctory?
  • [x] Fit: The package meets criteria for fit and overlap.
  • [x] Installation instructions: Are installation instructions clear enough for human users?
  • [x] Tests: If the package has some interactivity / HTTP / plot production etc. are the tests using state-of-the-art tooling?
  • [x] Contributing information: Is the documentation for contribution clear enough e.g. tokens for tests, playgrounds?
  • [x] License: The package has a CRAN or OSI accepted license.
  • [x] Project management: Are the issue and PR trackers in a good shape, e.g. are there outstanding bugs, is it clear when feature requests are meant to be tackled?

Editor comments

Hi, I'm your handling editor. The package is very interesting and seems to be in good shape. I have just a few minor comments at this point:

  • Based on the automated checks, it seems that you have aimed for an 80-character line width limit in the code, but lintr notes that there are a few cases where this is exceeded by a few characters.
  • The articles page on the distionary website orders the topics as "Evaluate..." and "Specify...", but the Evaluate vignette starts with "This vignette covers the second goal..", so perhaps the order should be switched. I'm not sure how to do this though, as my guess is that the order is automatic?
  • In the "Specify" vignette, there is one node without any label in the network
  • While you do say that the network presents the inner logic of the package, it might be good to note that this is not equivalent to the theoretical connections between these concepts. For example, at first glance I started to wonder the following: why is odds -> pmf separate from all the rest? If you know the pmf, you also know the cdf. Similarly, if you know the density, you should be able to compute the cdf. And of course from the cdf you should be able to define the pmf and pdf (see the base-R sketch after this list).
  • Also continuing with the diagram, in my opinion it would be clearer if the arrows were reversed, i.e. pmf "causes" odds, etc. But this is more a matter of taste.
  • Why do you use pmf, cdf, and density, and not pmf, cdf, and pdf?
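
To make the point about theoretical connections concrete, here is a small base-R sketch (independent of distionary's internal network) showing that the pmf and cdf of a discrete distribution determine each other, and that a density determines the cdf by integration:

```r
# Discrete case: pmf and cdf determine each other.
x   <- 0:10
pmf <- dbinom(x, size = 10, prob = 0.3)
cdf <- cumsum(pmf)           # cdf on the support, from the pmf
pmf_back <- diff(c(0, cdf))  # pmf recovered from the cdf
all.equal(pmf, pmf_back)     # TRUE

# Continuous case: the cdf is the integral of the density.
cdf_from_density <- function(q) {
  integrate(dnorm, lower = -Inf, upper = q)$value
}
cdf_from_density(1.5)        # approximately pnorm(1.5)
```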

As a broader comment, if you haven't already thought about it, it might be a good idea to check the ggdist package to see whether its visualization tools could be useful for distionary as well.

None of the things above are in any way critical, so I will now start looking for reviewers.

helske, Apr 29 '25

@ropensci-review-bot seeking reviewers

helske, Apr 29 '25

Please add this badge to the README of your package repository:

[![Status at rOpenSci Software Peer Review](https://badges.ropensci.org/688_status.svg)](https://github.com/ropensci/software-review/issues/688)

Furthermore, if your package does not have a NEWS.md file yet, please create one to capture the changes made during the review process. See https://devguide.ropensci.org/releasing.html#news

ropensci-review-bot, Apr 29 '25

Dear all, my EiC rotation is coming to an end, so I'm writing a few notes to hand things over to the next EiC.

I'm glad we now have a handling editor and we're seeking reviewers. Thanks for your work!

maurolepore avatar May 04 '25 22:05 maurolepore