Setup / Teardown / helper
Is there a plan to support `setup.R`, `teardown.R` and `helper_*.R` files, similar to testthat? I use these quite often to set/reset options or to create functions that are used in many different unit tests.
I'm not familiar with those testthat features (never used them), but I'll have a look. One thing you can do is add a `setup.R` to your tinytest directory and `source()` it from any test file.
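A minimal sketch of that workaround (the file and function names here are just examples):

```r
## inst/tinytest/setup.R (example name): shared helper functions
make_fixture <- function() data.frame(x = 1:3, y = letters[1:3])

## inst/tinytest/test_example.R: pull the helpers in explicitly
source("setup.R")  # run_test_file() sets the working directory to the test dir
dat <- make_fixture()
expect_equal(nrow(dat), 3)
```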
Also, there are probably some subtleties that need to be taken care of when running in parallel mode. In tinytest, files are supposed to be independent and should in principle run in random order.
Oh, by the way: if you call `options()` literally in your test file, those options are reset automatically when the test file is finished (see the vignette). The same holds for environment variables. If you set them in a `setup.R` file that you source, the options/envvars will stick for the session.
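To illustrate the remark above (the option and variable names are arbitrary, and the automatic reset applies when the file is run through tinytest's runners):

```r
## inst/tinytest/test_side_effects.R
options(warn = 2)                   # reset by tinytest after this file finishes
Sys.setenv(MYPKG_VERBOSE = "true")  # likewise reset after this file finishes

expect_equal(getOption("warn"), 2)
expect_equal(Sys.getenv("MYPKG_VERBOSE"), "true")
```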
@markvanderloo, what's your current view on this wish?
I'm considering a "soft" transition from the (now way too heavy for my taste) testthat to tinytest. It would be nice to be able to keep the existing `test_that()` bundles of expectations in my test scripts by simply defining a test-internal replacement:

```r
# Drop-in shim: ignore the description and evaluate the code in a fresh
# environment, so the inner expectations run as plain tinytest calls.
test_that <- function(desc, code) {
  eval(substitute(code), new.env(parent = parent.frame()))
  invisible()
}
```
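With that shim in scope, an existing bundle runs unchanged, the inner expectations resolving to tinytest's (a hypothetical example):

```r
test_that("basic arithmetic still works", {
  expect_equal(1 + 1, 2)  # tinytest::expect_equal
  expect_true(2 > 1)      # tinytest::expect_true
})
```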
If I include this definition in `tests/tinytest.R`, R CMD check is covered, but user-level `tinytest::test_package()` after package installation wouldn't work (because `test_that` is undefined there).
I know that I could paste the above definition in a file `inst/tinytest/startup.R`, say, and `source("startup.R")` in each of my test scripts. Before I do that, it would be nice to know whether you are planning to support a startup script. This would simplify migration; of course, some expectations still need to be adjusted, e.g., `expect_is` to `expect_true(inherits(...))`.
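For instance, a migrated expectation could look like this (a sketch; `fit` is just a placeholder object):

```r
# testthat:
# expect_is(fit, "lm")

# tinytest equivalent:
expect_true(inherits(fit, "lm"))
```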
Of note: even R CMD check itself uses a startup script for the test sessions (from `file.path(R.home("share"), "R", "tests-startup.R")`). `tools:::.runPackageTests()` configures this script (via the `R_TESTS` environment variable) to be sourced in each R CMD BATCH session launched for the R scripts in `tests/`.
That said, I do want tinytest to be kept as simple and clean as possible. :smile:
I think I can scan the folder where the test file resides for a file called `setup.R` and afterwards for a file called `teardown.R`. Because of the parallelization and the design in general, it would have to be done for each file. If a startup file creates a file or connection, this may yield conflicts that are hard to trace from within tinytest. And there could be some subtleties in measuring the side effects. I'll put it on the list. An `expect_is` function could be added without trouble, although I think that `expect_inherits` is a better name.
I see. A setup file which could do anything doesn't really fit into the design of tinytest. It is indeed attractive to view test scripts as independent R code that I can just run from top to bottom in my favorite R session (with just tinytest and my package attached). If there were a separate setup script automagically taken into account by tinytest, the installed test scripts would become less reproducible (without `tinytest::test_package()`). So in the end, if I want to embrace tinytest, I should just visibly `source("setup.R")` in each test script (as I would do if I used only base R test infrastructure).
A different solution for a "lazy" transition to tinytest just came to my mind, where I switch only to satisfy the non-interactive R CMD check scenario, replacing testthat expectations by tinytest equivalents. I could simply keep the testthat test directory under `tests/`, set up the `test_that` replacement in `tests/tinytest.R`, and call `run_test_dir("testthat")` (not `test_package`, as this requires an installed test directory). Of course, I would also have to replicate what `test_package` does to throw an error on failure. I'll try whether this works.
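A minimal sketch of such a `tests/tinytest.R` (assuming top-level definitions are visible to the test files, and using `tinytest::all_pass()` to turn failures into a check error):

```r
if (requireNamespace("tinytest", quietly = TRUE)) {
  # Shim so existing test_that() bundles keep working:
  test_that <- function(desc, code) {
    eval(substitute(code), new.env(parent = parent.frame()))
    invisible()
  }
  # Run the (non-installed) testthat directory directly:
  out <- tinytest::run_test_dir("testthat")
  print(out)
  # Mimic test_package(): make R CMD check fail on any failed expectation.
  if (!tinytest::all_pass(out)) stop("Not all tests passed")
}
```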
Such a lazy solution would be particularly attractive for packages in maintenance mode that only used basic testthat features and want to get rid of the nowadays heavy testthat dependency by simple means... Of course, such a setup would go against the spirit of tinytest (e.g., always install tests) and would ignore much of its functionality.
I don't have a strong opinion on the need for `expect_inherits`. It is easy to emulate using `expect_true`, but being a very common test (I guess), a dedicated expectation for it could be worthwhile. Just make sure to keep the package simple and clean. :)
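For completeness, one way to emulate it today (the wrapper and its name are hypothetical, and unlike a dedicated expectation it only reports TRUE/FALSE):

```r
expect_inherits <- function(current, class) {
  expect_true(inherits(current, class))
}

expect_inherits(data.frame(), "data.frame")
```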
Hi Sebastian.
Regarding a setup and teardown script: if it were to be implemented in tinytest, it would be sourced automatically by `run_test_file()` to ensure that a file run remains 'atomic'.
Regarding your soft transition, your remark uncovered a bug. It should be easy to put your test files under `tests/` and point `test_package` to that. At the moment something goes wrong with `test_package`. Will fix.
> Because of the parallelization & design in general, it would have to be done for each file.
Wouldn't it be sufficient to `clusterCall()` a function that sources the setup/helper scripts and populates the environment in which the tests are then run, once for each parallelization node?
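A rough sketch of that idea using the parallel package (the setup file name is hypothetical, and this ignores how tinytest would wire it into its own cluster handling):

```r
library(parallel)

cl <- makeCluster(2)
# Source the setup script once per worker, so its objects exist in every
# worker session before any test file is run there.
clusterCall(cl, function() {
  source("setup.R")  # hypothetical shared setup script
  invisible(NULL)
})
# ... run the test files on the cluster ...
stopCluster(cl)
```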
> Regarding a setup and teardown script: if it were to be implemented in tinytest, it would be sourced automatically by `run_test_file()` to ensure that a file run remains 'atomic'.
That would be nice.
Just a gentle ping. Is this feature still on the roadmap? Would you be open to PRs that try to implement it?
I think a `setup.R` and `teardown.R` would be handy for creating common resources at the start of testing (like connections or mock DBs) and tearing them down afterwards.
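As an illustration, such scripts might look like this (a hypothetical sketch using DBI/RSQLite; tinytest has no such mechanism yet):

```r
## setup.R: create a shared in-memory database for the test files
con <- DBI::dbConnect(RSQLite::SQLite(), ":memory:")
DBI::dbWriteTable(con, "mtcars", mtcars)

## teardown.R: release the resource after the tests have run
DBI::dbDisconnect(con)
```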
In the case of parallelization, clusters can be on different processes/machines, so it seems there is no way around performing the setup and teardown per parallel test in order to have the resources correctly available. That somewhat defeats the purpose of setup and teardown, but I think it is fine, and it could be clearly documented that this is currently a limitation of setup and teardown when used in combination with clusters.
Hi there,
for now I don't have the bandwidth to work on this. Realistically, I can possibly look into it sometime next year. That also includes PRs, because those take time as well.