Regenerate all test fixtures
In ropensci/vcr#255 @sckott suggested it is good practice to regenerate all the test fixtures on occasion. We definitely have not done that, mostly because I (the maintainer) don't have access to many of the features we have tested. These fixtures were created by the individuals who did the original PRs (so they involve those contributors' surveys).
What is the best way forward from here?
vcr does have the re_record_interval config option https://docs.ropensci.org/vcr/reference/vcr_configure.html#cassette-options - it is tested and all, but I've not used it, and I don't see anyone else using it in a GitHub search. Other than that, you can just re-record whenever works for you: e.g., whenever making a change to a function whose tests use vcr, or whenever there's a Qualtrics API update - in which case it's a good idea to re-record the cassette to make sure it reflects what you're getting back from the current version of the API.
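For reference, a minimal sketch of what that configuration might look like (the fixture directory and the 7-day interval here are illustrative choices, not values from this package):

```r
# In tests/testthat/setup-qualtRics.R (or wherever vcr is configured)
library(vcr)

vcr_configure(
  dir = "../fixtures",
  # Re-record any cassette older than 7 days; the interval is in seconds.
  # Untested here -- treat this as a sketch of the documented option.
  re_record_interval = 7L * 24L * 60L * 60L
)
```

Note that even with this set, re-recording still requires valid credentials at test time, so it doesn't by itself solve the feature-access problem discussed above.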
Yes, but unfortunately I as a maintainer can't update many (maybe even most?) of the cassettes because I don't have access to those API features.
Got it, okay
I agree that the cassettes for existing tests should probably be re-generated regularly, but I have the same issue with feature availability.
How does Qualtrics itself view open-source support tools like ours? Assuming they view our existence at least somewhat positively, perhaps someone there would be open to giving us a full-featured account just for testing?
Referencing #297, which brought to mind an additional challenge here:
We currently have the situation where all the recorded tests use "www." in the credentials so as not to be bound to a particular datacenter (and also so they don't expose anything about the accounts individual developers rely on for their work). This actually happens manually after test generation: the vcr fixtures for individual tests are first generated using some specific developer's datacenter ID, then converted after the fact to "www." in both the test R code and the fixture YAML.
Obviously this won't easily work with any kind of regular test regeneration scheme, automated or otherwise. Realistically we'd probably need something like a package-linked Qualtrics account that contains all the surveys used for tests, or at least someone willing and able to do that with their existing account.
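For concreteness, the manual post-processing step described above could be scripted along these lines (a rough sketch: the helper name, datacenter ID, and fixture path are all placeholders, not the package's actual values):

```r
# Hypothetical helper: swap a developer's datacenter subdomain for "www."
# in all recorded fixtures, so cassettes don't leak account details.
sanitize_fixtures <- function(dir = "tests/fixtures",
                              datacenter = "ca1") {  # placeholder ID
  yml_files <- list.files(dir, pattern = "\\.yml$", full.names = TRUE)
  for (f in yml_files) {
    txt <- readLines(f)
    txt <- gsub(paste0(datacenter, ".qualtrics.com"),
                "www.qualtrics.com", txt, fixed = TRUE)
    writeLines(txt, f)
  }
}
```

It may also be worth checking whether vcr's filter_sensitive_data option in vcr_configure() could do this substitution at record time, which would remove the manual step entirely; I haven't verified that it handles hostnames in our fixtures.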
Looking at the docs, I'm seeing a "mock server" for each endpoint (example) located at their API dev platform, stoplight.io. Could that be useful here somehow?
@jmobrien Oh maybe! That would be a really high impact thing to check out.
If that is something that it is OK to ping during CI, then we could feasibly rip out all the vcr/etc from the package and run tests against that mock server on CI, skipping everything that uses the mock server on CRAN.
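As a sketch, the skip logic for such a test might look like this (the mock-server URL and the specific expectation are placeholders, not the real Stoplight address or our actual test):

```r
library(testthat)

test_that("all_surveys() output parses against the mock server", {
  skip_on_cran()  # don't ping the mock server from CRAN machines
  # Placeholder URL -- the real Stoplight mock endpoint would go here.
  res <- httr::GET("https://example.stoplight.io/mocks/qualtrics/surveys")
  expect_equal(httr::status_code(res), 200)
})
```

skip_on_cran() (or an environment-variable guard) keeps CRAN checks network-free while CI still exercises the live mock endpoints.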
Right. I wasn't even thinking ahead quite that far, but it does make sense that we could really simplify tests that way.
After a few tests, it looks like the mocking server doesn't care about API keys, and will also accept basically anything that looks like a proper survey ID in the request (or, presumably, any other form of ID). I'm not sure whether/how additional parameters are respected (e.g., not sure whether requesting a QSF from get survey does anything). So, maybe some caveats, but I think this has real potential.
If we use this, the biggest challenge might be to rework the internals around URL building somehow, since the mocking URLs aren't even in the ballpark of what the endpoint URLs look like currently. But it might still be worth it.
In #350 I started on moving from vcr to using Qualtrics' mock servers, removing the need for 4 vcr fixtures. So far so good!
I had to do a CRAN release today and found that some/most of the mock servers were returning 500s, so I added skip_on_ci() to be able to check on different platforms before releasing. This may be worth keeping an eye on, i.e., how often the mock servers are down.
I just did a bit more work on this, motivated by the breaking release of vcr for us, and it's going to be tough to mock any of these API calls:
https://github.com/ropensci/qualtRics/blob/97f1d1a1008af8ff0bd55a658affa9bea49d5481/R/utils.R#L147
IIUC the mock server returns an empty result there, so there's nothing really to check against.
Bummer. Seems like maybe modifying the fixture files by hand may be worth a try?
I did some on this, because of the new version of vcr which was breaking for us. To sum up where I landed:
- I was able to get the mock servers to work for the features we currently support but I don't have access to via my account. Those mock servers will return 500 for reasons unknown to me for days at a time, so there are downsides to this, for sure!
- I was able to regenerate fixtures using the new vcr for an additional set of features that I do have access to in my account. I had to change quite a bit about our testing setup to do this (including adding my own personal credentials to CI), to get the new vcr to think that the API calls were the same.
- I could never get the new vcr to work with the main function in the package, fetch_survey(). I would see errors like "Failed to find matching request in active cassette". For these, I changed over to live testing of the API.
I'm not sure what the best long term solution is for testing in this package, but at least I am unblocked to re-submit to CRAN when it opens again later this month.
Thanks for the notes on what you did @juliasilge
If you could make a reprex that I can run too that duplicates that error with fetch_survey then I can try to debug that issue