
Enhancements to better compete with requests/unittest combination

Open svanoort opened this issue 9 years ago • 10 comments

Currently there are a lot of users who are torn between PyRestTest (YAML) and the requests/unittest combination (pure Python) for their REST API testing needs.

Story: As a PyRestTest user, I would like features that make it easier to use PyRestTest within a Python environment. Specifically, I would like to offer a Python builder API similar to frisby.js for constructing and running tests: something in a fluent style that generates the same test/testset/testconfig objects as the existing YAML.
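
For illustration only, here is a minimal sketch of what such a fluent builder could look like. The TestBuilder class and all of its methods are hypothetical (no such API exists in pyresttest today), and it calls requests directly just to keep the sketch self-contained:

# Hypothetical fluent builder sketch; none of these names exist in pyresttest.
# It uses requests directly only to keep the example self-contained.
import requests


class TestBuilder(object):
    def __init__(self, name, base_url=""):
        self.name = name
        self.base_url = base_url
        self.method = "GET"
        self.path = "/"
        self.json_body = None
        self.expected_status = 200

    def get(self, path):
        self.method, self.path = "GET", path
        return self

    def post(self, path, json_body=None):
        self.method, self.path, self.json_body = "POST", path, json_body
        return self

    def expect_status(self, code):
        self.expected_status = code
        return self

    def run(self):
        # Fire the request and check the one expectation we collected.
        resp = requests.request(self.method, self.base_url + self.path,
                                json=self.json_body)
        assert resp.status_code == self.expected_status, (
            "%s: expected status %d, got %d"
            % (self.name, self.expected_status, resp.status_code))
        return resp


# Example usage against a hypothetical local API:
#   TestBuilder("Create person", "http://localhost:8000") \
#       .post("/api/person/", {"first_name": "Gaius"}) \
#       .expect_status(201) \
#       .run()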

There are a couple outstanding issues that would build in this direction:

https://github.com/svanoort/pyresttest/issues/29 - better logging when invoking directly from Python

https://github.com/svanoort/pyresttest/issues/43 - provide xUnit compatible output, preferably by wrapping unittest compatibility.

https://github.com/svanoort/pyresttest/issues/146 - setup/teardown (as part of v2.0)

Current PyRestTest strengths:

  • Extensibility (easy to add some custom validation options, do extraction & comparisons on data)
  • Dynamic variable handling, allows for setting up more complex scenarios
  • Supports extraction & more complex validation of data
  • Tests are linear, so you can build up complex scenarios

Current PyRestTest weaknesses:

  • Test object structure and properties are kind of complex
  • Log output is not always easy to work with/clean
  • Configuration has a distinction between command-line / testconfig / test level (being addressed in v1.8/v1.9)
  • Logging isn't always helpful, and it can be hard to determine how to get more information in logs (request bodies, headers, etc)
  • Tests do not have a true setup/teardown functionality

Thoughts on what is particularly helpful / unhelpful here for your uses? @nitrocode @lerrua @jewzaam (I'm aware you guys were using a legacy fork based off a pre-0.x version) @MorrisJobke @jeanCarloMachado

Edit: Scope limits

Unix philosophy: one tool for one task, not One Tool To Rule Them All™ - the goal is to keep PyRestTest focused on what it does well, while letting it grow.

  • Top-level execution options should only support the most common, global settings
    • Failfast, parallel options, logging verbosity, HTTPS options, interactive mode, input files, URL, and top-level variables
  • No conditionals or executable code in YAML: more complex custom scenarios need to be implemented in Python via extensions, or by writing Python code to compose from the libraries here (a rough extension sketch follows after this list).
    • Macros allow for custom execution pipelines and special scenarios
  • Every new function needs to make sense for both YAML syntax and Python APIs, nothing that is usage-specific
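
For example, a custom comparator could live in a small extension module. The sketch below assumes the COMPARATORS registry name from the extensions guide; treat the exact hook name (and the module name) as an assumption and check the docs before relying on it:

# my_extensions.py (arbitrary name) - sketch of a custom comparator extension.
# Assumption: extension modules expose a COMPARATORS dict mapping a comparator
# name to a two-argument comparison function; verify against extensions.md.

def str_eq_insensitive(actual, expected):
    # Case-insensitive string equality for use in validator comparisons.
    return str(actual).lower() == str(expected).lower()

COMPARATORS = {'str_eq_insensitive': str_eq_insensitive}

The module would then be loaded with the extension-import option on the pyresttest command line (see the extensions guide for the exact flag and the corresponding YAML usage).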

svanoort avatar Feb 09 '16 17:02 svanoort

Here are some suggestions. Some of these may overlap.

  1. Faster YAML parsing, or whatever is taking a while before the initial curl request starts
  2. Stronger documentation and many more use cases
  3. I'm having on-again, off-again issues with pycurl and SSL for some reason
  4. It may be just me, but I'm having trouble wrapping my head around building extractors / validators / generators.
  5. Maybe a worked example of using requests / unittest2 to verify fields in Python, then converting it to YAML, along with benchmarks comparing the two, would be a good addition (see the sketch after this list).
  6. If pycurl issues continue, maybe add an option to switch from pycurl to requests.
  7. I agree about the logging issues. Perhaps output to TAP or xUnit, or integrate with py.test or something else that can already emit standard output formats
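
For point 5, this is roughly what the pure-Python side of such a comparison might look like; a minimal requests + unittest sketch where the endpoint and expected fields are hypothetical stand-ins:

# Minimal requests + unittest sketch; the endpoint and expected fields below
# are hypothetical stand-ins for a real API under test.
import unittest

import requests


class PersonApiTest(unittest.TestCase):
    BASE = "http://localhost:8000"

    def test_person_fields(self):
        resp = requests.get(self.BASE + "/api/v1/person/1.json")
        self.assertEqual(resp.status_code, 200)
        body = resp.json()
        # Field checks of the kind pyresttest expresses with extract/validate
        # blocks in YAML.
        self.assertEqual(body["name"], "Doe")
        self.assertIn("country", body)


if __name__ == "__main__":
    unittest.main()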

Thank you

nitrocode avatar Feb 09 '16 18:02 nitrocode

@nitrocode Thanks for your feedback!

  1. Yes, I think it may be worth doing a little benchmarking here (easy to add).
  2. I'd love PRs to help build out documentation. Do you feel like the docs changes here help: https://github.com/svanoort/pyresttest/issues/100 and https://github.com/svanoort/pyresttest/issues/151
  3. Ugh. This one is tricky, because I don't have a lot of familiarity with the different SSL implementations and issues. I know there are some subtle issues around the SSL libs pycurl is built with (I don't have a ref handy, but it's in the issues if you look). Unfortunately, I don't have an easy way to replicate or debug these at this time (it may be worth building out the functional test harnesses, but that requires a lot of different certs & config for the Django test app).
  4. I was hoping the test flow diagram would help a little bit, but what would you like to see in advanced_guide? I know more examples will be helpful (part of the PRs in 2), but I am not sure how to explain this better.
  5. Hmm. Will have to think about that one.
  6. Perhaps. I don't want to get too far down the path of comparison (since it's kind of an apples-to-oranges comparison here); my main concern is to remove any significant regressions and see if there are optimizations to be had.
  7. Yeah, it's coming, I've kicked some of the dynamic features back again to slip this in the next release.

svanoort avatar Feb 09 '16 18:02 svanoort

I'd like it if this could support the following as a testing tool (not for benchmarks):

Suppose I have 4 filters (as GET params) on an API endpoint. I want all of them, and their combinations, to be tested with their possible values as well as values that are not allowed.

So if I had filter params:

max_age (integers), name (regex, string), country (regex, string), employed (boolean, true, false)

I want a way to generate all or some combinations of these from a provided list (return value checks are not necessary).

Maybe something like:

- test:
  - combinator: {name: 'filter', type: 'form', params: {max_age: [10, 20, 30, 40, 50], name: ['Doe', 'Jane']}}
  - url: '/api/v1/person.json?$filter'

# `type` can be json/xml/form so it could also be used to generate test data for body
# This would in turn create test values for `$filter` like:
# max_age=10&name=Doe
# max_age=20&name=Doe
# ... and so on ... 

This may not always be necessary, because the above example looks like something that a testing suite's DB layer itself should handle. But there are other areas where this can be quite helpful (params that have less to do with the database and more to do with business logic, and that may be affected by the values of the others). I looked hard in the documentation, but couldn't find anything that can do this.
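
As a rough illustration of what such a combinator could do under the hood (plain Python, not an existing pyresttest feature), the expansion itself is just a cartesian product of the parameter values:

# Sketch of expanding a combinator spec into query strings via a cartesian
# product; the spec mirrors the hypothetical YAML above.
from itertools import product

try:
    from urllib.parse import urlencode  # Python 3
except ImportError:
    from urllib import urlencode       # Python 2

params = {'max_age': [10, 20, 30, 40, 50], 'name': ['Doe', 'Jane']}

keys = sorted(params)
for combo in product(*(params[k] for k in keys)):
    query = urlencode(dict(zip(keys, combo)))
    print('/api/v1/person.json?' + query)
    # -> max_age=10&name=Doe, max_age=10&name=Jane, ...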

Also thank you for the work on this great tool!

n9986 avatar Feb 15 '16 13:02 n9986

@n9986 Thank you for your feedback! I'll look into how something like this could be incorporated.

svanoort avatar Feb 17 '16 03:02 svanoort

Thought: perhaps the best way to approach this is as a piece-at-a-time utility from the Python side.
This is easy enough to build out: rather than trying to replace unittest + requests for pure-Python use, offer up the extensible components as ways to decorate and ease writing test cases.

For example: validating response bodies/headers, extracting data from HTTP responses, doing templating/generation on request bodies, etc.

This is complementary to the effort to provide unittest outputs and better Python APIs for working with tests/testsets/etc.
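
As a rough, pure-Python illustration of the piece-at-a-time idea, here is the kind of standalone extractor helper that could be reused inside an ordinary unittest case (illustrative only, not an actual pyresttest API):

# Hypothetical standalone extractor helper; plain Python, not a pyresttest API.
def extract(body, dotted_path):
    """Walk a decoded JSON body by a dotted path, e.g. 'person.addresses.0.city'."""
    value = body
    for key in dotted_path.split('.'):
        value = value[int(key)] if isinstance(value, list) else value[key]
    return value


body = {'person': {'name': 'Doe', 'addresses': [{'city': 'Springfield'}]}}
print(extract(body, 'person.addresses.0.city'))  # -> Springfield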

svanoort avatar Feb 23 '16 19:02 svanoort

Sorry for taking so long to answer; here it goes. I'm coming from an outside context: pyresttest is the only Python piece in my project. Anyway, there are some features that I would like to see in pyresttest.

  • As you said, xUnit-compatible output; currently I end up writing my own pyresttest manager in bash which does this;
  • Better differentiation between error messages and context; the debug flag is too simplistic, so when I get a failure the dump is huge and hard to debug;
  • A way to run it in parallel batches; I have many tests and they are taking too long for developers who are waiting to merge their pull requests. I'll probably add a way to run batches to the manager I wrote;
  • A way to set all the things I mentioned above through a configuration file.

jeanCarloMachado avatar Mar 06 '16 20:03 jeanCarloMachado

@jeanCarloMachado Thank you for your feedback, and feedback at any time is welcome. To give some idea of timelines, the current full roadmap runs about a year out, and I aim for a major release with a couple of big changes + several smaller feature additions & bugfixes every 3-6 months.

Between this and the other feedback, I've bumped the priority of xUnit compatible output up several notches, and it is planned for the next release. The dynamic/flexible variable binding got bumped down in priority to allow for this.

Logging is getting some extra love as well with https://github.com/svanoort/pyresttest/pull/171 and I'll factor in ways to improve the signal-to-noise ratio here.

Config files: I've added https://github.com/svanoort/pyresttest/issues/177 to include this. It depends on several planned features in future releases, so it will be a while out, but it's factored into design planning now.

Parallel: yes, I agree this would improve speed a lot. It is a rather complex, multi-step process to get there, but here are the rough design plans. I'm open to ideas or examples if people have good samples that may help:

  • Parallel execution depends on tests/benchmarks handling their own execution (and a resttest runner that is smart enough to fire them async). The first part is already about halfway there with https://github.com/svanoort/pyresttest/pull/171 and will get there within the next release.
  • Using the CurlMulti object instead of individual Curl handles allows for concurrent requests (and if we do requests compatibility, they have something similar using gevent); a rough sketch of the CurlMulti pattern follows after this list.
  • It's probably easiest to implement parallel execution a batch at a time, rather than true async, initially.
  • Unfortunately, the dynamic variable binding mechanism (variables/generators/extractors/validators/templating) will cause problems if tests are not executed in the correct order.
    • For this reason, v1 will be something crude, where you have to explicitly mark testsets or tests for parallel execution. Only consecutive tests marked as parallelizable will be run in parallel (to preserve correctness).
    • Then we worry about tracking what can safely run in parallel and automatically doing so.
    • v2 of the feature would only run where no generator/variable re-binding/extractor binding is being used (all configuration uses an unchanged Context).
    • v3 would evaluate predictable variable bindings before each test and pass the results in, then fall back to serial execution where extractors are being used at all.
    • v4 would look at which variables are changing, and create a directed acyclic graph of variable modifications, and only disable parallel where variables are changing in unpredictable ways.
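
For reference, the basic CurlMulti batching pattern looks roughly like this (an illustrative pycurl sketch, not pyresttest code; the URLs are placeholders):

# Rough sketch of batch-parallel requests with pycurl's CurlMulti; the URLs
# are placeholders and no pyresttest objects are involved.
from io import BytesIO

import pycurl

urls = ['http://localhost:8000/api/v1/ping'] * 4
multi = pycurl.CurlMulti()
batch = []  # (curl handle, response buffer) pairs

for url in urls:
    buf = BytesIO()
    handle = pycurl.Curl()
    handle.setopt(pycurl.URL, url)
    handle.setopt(pycurl.WRITEFUNCTION, buf.write)
    multi.add_handle(handle)
    batch.append((handle, buf))

# Drive all transfers until every handle in the batch has finished.
num_active = len(batch)
while num_active:
    while True:
        ret, num_active = multi.perform()
        if ret != pycurl.E_CALL_MULTI_PERFORM:
            break
    if num_active:
        multi.select(1.0)

for handle, buf in batch:
    print(handle.getinfo(pycurl.RESPONSE_CODE), len(buf.getvalue()))
    multi.remove_handle(handle)
    handle.close()
multi.close()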

Happy wall-of-text! :-)

svanoort avatar Mar 17 '16 18:03 svanoort

It's been over a year since I saw any activity. @svanoort Are you still actively maintaining this?

rgarg1 avatar Jul 12 '17 07:07 rgarg1

@jeanCarloMachado

  • As you said, xUnit-compatible output; currently I end up writing my own pyresttest manager in bash which does this;
  • Better differentiation between error messages and context; the debug flag is too simplistic, so when I get a failure the dump is huge and hard to debug;
  • A way to run it in parallel batches; I have many tests and they are taking too long for developers who are waiting to merge their pull requests. I'll probably add a way to run batches to the manager I wrote;
  • A way to set all the things I mentioned above through a configuration file.

Have you done anything on this for your project?

Anjimeduri avatar Jan 04 '18 09:01 Anjimeduri

@rgarg1 The last commit to master was May 2016 and the last commit to refactor-execution was April 2017. I don't think this is maintained anymore.

nitrocode avatar Jan 04 '18 17:01 nitrocode