tinytest
[feature suggestion] Ruby rspec-like structuring of tests
First of all, thanks for this great package. I greatly appreciate working with it. I have a suggestion for structuring tests, although I do not know how difficult it would be to implement in a bullet-proof way. It follows the Ruby test framework rspec, and having worked with Ruby, Python and R quite a bit, I have the feeling this is the gold standard of what human-readable tests can look like.
Basically it consists of three things:

- `describe` states the subject of your test, i.e. the name of the function. E.g. if the function for connecting to a database is called `connect`, you'd write `describe('connect', {...})`.
- `context` states the data environment for the test: you provide specific data to a function, you mock things, or you set an environment variable. A call would look like e.g. `context('when a database url is specified in the environment', {...})`.
- `it` basically wraps one or more `tinytest::expect_*` statements. This is the innermost layer, and it just provides a descriptive text about what you want to happen, e.g. `it('connects to the database', {...})` or `it('raises error', {...})`.
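For illustration, the three layers nest like this. The helpers below are throwaway stand-ins made up so the snippet runs on its own; in the actual proposal they would hook into tinytest:

```r
# Throwaway stand-ins: each helper just prints its description
# and evaluates the block in the caller's environment.
describe <- context <- it <- function(description, block) {
  message(description)
  eval.parent(substitute(block))
}

describe('connect', {
  context('when a database url is specified in the environment', {
    it('connects to the database', {
      # here you'd call tinytest::expect_*() on connect()
      stopifnot(TRUE)
    })
  })
})
```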
At its most basic, this provides a great way to visually parse test files: what is tested, which setups there are, and under which circumstances you expect what. This creates a logical structure that's easier to read than a flat sequence of `data = data.frame(...); expect_equal(...)` calls.
A `context` might modify the environment, and such modifications should be undone afterwards.
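For environment variables, one way a `context` implementation could undo its changes is to snapshot the old value and restore it with `on.exit()`. A base-R sketch; `with_env_var` is a made-up helper, not part of tinytest:

```r
# Hypothetical helper: set an environment variable for the duration
# of a block, then restore the previous state, even on error.
with_env_var <- function(name, value, block) {
  old <- Sys.getenv(name, unset = NA)
  on.exit(
    if (is.na(old)) Sys.unsetenv(name)
    else do.call(Sys.setenv, stats::setNames(list(old), name))
  )
  do.call(Sys.setenv, stats::setNames(list(value), name))
  eval.parent(substitute(block))
}

with_env_var('APP_ENV', 'test', {
  stopifnot(Sys.getenv('APP_ENV') == 'test')
})
```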
For example, I have a database connection function which depends on the environment variable `APP_ENV` (which is set to `"test"` when running the tests). Based on that, a section of a configuration list is used to provide parameters. In YAML:
```yaml
development:
  dbname: 'r_example_development'
  database_url: 'DATABASE_URL'
test:
  dbname: 'r_example_test'
production:
  database_url: 'DATABASE_URL'
```
For the development case, I have a fork: if there is an environment variable called `DATABASE_URL`, it tries that first, otherwise it uses default values, e.g. `host: localhost` and `port: 5432`. In the `test` environment it always uses the defaults; in `production` it always relies on a `DATABASE_URL` being set.
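The fork just described could be sketched roughly as follows. This is not the actual implementation: `db_params`, `parse_database_url`, and the default values are assumptions made up for illustration, with the config list mirroring the YAML above:

```r
# Config list mirroring the YAML sections above.
config <- list(
  development = list(dbname = 'r_example_development'),
  test        = list(dbname = 'r_example_test'),
  production  = list()
)

# Minimal parser for postgresql://user:password@host:port/dbname
# (made up for this sketch; a real implementation would be stricter).
parse_database_url <- function(url) {
  pattern <- '^postgresql://(?:([^:@/]+):([^@/]+)@)?([^:/]+):?([0-9]*)/(.+)$'
  m <- regmatches(url, regexec(pattern, url, perl = TRUE))[[1]]
  if (length(m) == 0) stop('invalid database url: ', url)
  list(dbname = m[6], host = m[4], port = as.integer(m[5]),
       user = m[2], password = m[3])
}

# Hypothetical sketch of the parameter fork described above.
db_params <- function(env_name, config) {
  url <- Sys.getenv('DATABASE_URL')
  if (env_name == 'production' || (env_name == 'development' && url != '')) {
    # production always relies on DATABASE_URL;
    # development only tries it first when it is set
    parse_database_url(url)
  } else {
    # test (and development without a url) uses the defaults
    list(dbname = config[[env_name]]$dbname, host = 'localhost', port = 5432)
  }
}
```

With this sketch, `db_params('test', config)` always yields the defaults, regardless of whether `DATABASE_URL` is set.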
A naive implementation of the helpers (using the `box` package):
```r
# Only print the describe/context/it descriptions when the user asks for it.
report_description <- Sys.getenv('TINYTEST_VERBOSE') != ''
whitespace <- ' '
indentation <- paste(rep(whitespace, 2), collapse = '')

# A closure tracking the current nesting depth.
.new_counter <- function() {
  value <- 0
  function(action) {
    if (action == 'increase') {
      value <<- value + 1
    } else if (action == 'decrease') {
      value <<- value - 1
      if (value < 0) {
        stop('value must not be less than zero.')
      }
    } else if (action == 'status') {
      return(value)
    } else {
      stop('action ', action, ' not understood.')
    }
  }
}

counter <- .new_counter()

# Print the description indented by the current depth, then evaluate the block.
.run_block <- function(description, block) {
  if (report_description) {
    message(rep(indentation, counter('status')), description)
    counter('increase')
    on.exit(counter('decrease'))
  }
  local(block)
}

#' @export
describe <- .run_block
#' @export
context <- .run_block
#' @export
it <- .run_block
```
And this is what the test file would look like:
```r
expected_driver <- RPostgres::Postgres()

describe('connect', {
  # <...other setups...>

  context('with development environment', {
    context('without url', {
      mock_do_call <- mockery::mock('connection')
      mockery::stub(where = connect, what = 'do.call', how = mock_do_call, depth = 2)

      it('connects', {
        expect_equal(connect('development'), 'connection')
        call_args <- mockery::mock_args(mock_do_call)
        expect_equal(length(call_args), 1)
        first_call <- call_args[[1]]
        expect_identical(first_call[["what"]], DBI::dbConnect)
        expect_equal(
          first_call[["args"]],
          list(
            dbname = "r_example_development",
            host = "localhost",
            port = 5432,
            drv = expected_driver
          )
        )
      })
    })

    context('when database url is in environment', {
      context('when url is valid', {
        mock_do_call <- mockery::mock('connection')
        mockery::stub(where = connect, what = 'do.call', how = mock_do_call, depth = 2)
        remote_url <- 'postgresql://readonly:secret@192.192.0.0.1:9999/the_production_database'
        Sys.setenv('DATABASE_URL' = remote_url)

        it('connects', {
          expect_equal(connect('development'), 'connection')
          call_args <- mockery::mock_args(mock_do_call)
          expect_equal(length(call_args), 1)
          first_call <- call_args[[1]]
          expect_identical(first_call[["what"]], DBI::dbConnect)
          expect_equal(
            first_call[["args"]],
            list(
              dbname = "the_production_database",
              host = "192.192.0.0.1",
              port = 9999,
              user = "readonly",
              password = "secret",
              drv = expected_driver
            )
          )
        })
      })

      context('when url is invalid', {
        remote_url <- 'postgresql://this/is/invalid'
        Sys.setenv('DATABASE_URL' = remote_url)

        it('raises error', {
          expect_error(connect('production'))
        })
      })
    })
  })
})
```
This leads to a full test output like:

```
with test environment
  connects
with development environment
  without url
    connects
  when database url is in environment
    when url is valid
      connects
    when url is invalid
      raises error
with production environment
  when database url is not provided
    raises error
  when database url is in environment
    when url is valid
      connects
    when url is invalid
      raises error

test_database.R............... 19 tests OK 0.6s
All ok, 19 results (0.6s)

Process finished with exit code 0
```
It would also be cool to snapshot the number of failed tests before an `it` call and append `[FAILED]` if that number changes during the execution of the block. That would make it very easy to visually grasp what is going wrong.
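A rough sketch of how that could work, assuming the runner can read the current failure count before and after each block. The counter here is a stand-in invented for this sketch; tinytest's real bookkeeping would look different:

```r
failed <- 0  # stand-in for the runner's real failure bookkeeping

it <- function(description, block) {
  before <- failed
  expr <- substitute(block)
  caller <- parent.frame()
  # A real implementation would let tinytest record failures;
  # here we simply count errors thrown by the block.
  tryCatch(eval(expr, caller),
           error = function(e) failed <<- failed + 1)
  suffix <- if (failed > before) ' [FAILED]' else ''
  message(description, suffix)
}

it('adds numbers', stopifnot(1 + 1 == 2))  # prints "adds numbers"
it('fails on purpose', stop('boom'))       # prints "fails on purpose [FAILED]"
```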
What do you think about an addition like that? rspec has many more features, e.g. variables/calls that are lazily evaluated and can be defined for a whole `context` as a base setup but overridden in individual cases, but this might be a first step if it is something you'd consider.
Hi there, thanks for the extensive explanation!
In the example there is a test declaration in the form of a function call spanning 61 lines. For someone not used to this framework, that is a lot harder to edit and understand than a short sequence of imperative programming statements. This means it would probably hamper the learnability of tinytest.
Moreover, one of the core design ideas of tinytest is that a test script is just an R script that should be runnable with `source()` (or `run_test_file()`). So I feel this is out of scope for tinytest.