
Mock API calls instead of calling the API directly

Open deepbrook opened this issue 8 years ago • 13 comments

The tests currently send requests to the live API. This exposes the test stage to 500 and 400 status codes, which may be caused by dropped connections or rate limiting.

Use mock classes to simulate API calls.
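As a minimal sketch of the proposal, `unittest.mock.patch` can replace `requests.request` with a canned response so no live HTTP call is ever made (the URL below is hypothetical):

```python
from unittest.mock import patch, Mock

import requests

# Patch requests.request so no live HTTP call happens; the test can no
# longer fail with a 500/400 caused by the exchange or rate limiting.
fake = Mock()
fake.status_code = 200

with patch('requests.request', return_value=fake) as mock_request:
    resp = requests.request('GET', 'https://api.example.com/ticker')

assert resp.status_code == 200
assert mock_request.call_count == 1
```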

deepbrook avatar Nov 24 '17 06:11 deepbrook

For mocking API Responses, see this SO thread: https://stackoverflow.com/a/28507806/5413116

deepbrook avatar Dec 04 '17 13:12 deepbrook

Hi there! Have you checked out vcrpy? I use this for all of my tests, and it works pretty great (even with aiohttp).

I'm willing to help out with this approach if you're interested in doing it this way!

carpntr avatar Dec 06 '17 16:12 carpntr

@AndrewLCarpenter, thanks for the pointer! I hadn't heard of this, but it looks really neat! I'll check it out in detail.

Thanks!

deepbrook avatar Dec 06 '17 21:12 deepbrook

@AndrewLCarpenter,

perhaps you could shed some light on the advantages of the above library. I've had a chance to look through it, and I'm interested in giving it a shot - I like trying out new approaches - but I'm not sure how vcrpy improves on other approaches. Here's how I would implement it without it:

Interface tests

For the interface classes, a mock would be sufficient IMO - we only need to assert that requests.request was called and that the payload has all expected keys and values.
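That interface check could look something like the following sketch. `FakeInterface` and its URL are hypothetical stand-ins for a bitex interface class; the point is only that the test inspects `call_args` instead of hitting the network:

```python
from unittest.mock import patch

import requests

class FakeInterface:
    """Hypothetical stand-in for a bitex interface class."""
    def ticker(self, pair):
        return requests.request('GET', 'https://example.com/ticker',
                                params={'pair': pair})

@patch('requests.request')
def test_ticker_payload(mock_request):
    FakeInterface().ticker('btcusd')
    # Assert requests.request was called exactly once...
    mock_request.assert_called_once()
    args, kwargs = mock_request.call_args
    # ...and that the payload has all expected keys and values.
    assert args == ('GET', 'https://example.com/ticker')
    assert kwargs['params'] == {'pair': 'btcusd'}
    return kwargs['params']

result = test_ticker_payload()
assert result == {'pair': 'btcusd'}
```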

REST API tests

Here, the biggest issue for me is validating the signature - since a nonce is required for each request, asserting the payload is difficult without knowing the nonce, but that's possible to achieve. I could simply override the nonce method (by mocking it) and thus always generate a static timestamp. I'd send an actual request to the API with a timestamp, and if that succeeds, save that timestamp and the layout of the payload and use them as a validator.
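The static-nonce idea can be sketched like this. `FakeRESTAPI`, its `nonce` method, and the HMAC scheme are hypothetical simplifications; the point is that freezing the nonce makes the signature deterministic, so it can be compared against a known-good value captured once from a real request:

```python
import hashlib
import hmac
import time
from unittest.mock import patch

class FakeRESTAPI:
    """Hypothetical stand-in for a bitex REST API class."""
    def __init__(self, secret):
        self.secret = secret

    def nonce(self):
        # Normally a fresh timestamp per request.
        return str(int(time.time() * 1000))

    def sign(self, payload):
        message = (self.nonce() + payload).encode()
        return hmac.new(self.secret, message, hashlib.sha256).hexdigest()

# Freeze the nonce so the signature becomes reproducible.
with patch.object(FakeRESTAPI, 'nonce', return_value='1513589744'):
    api = FakeRESTAPI(b'secret')
    sig_a = api.sign('pair=btcusd')
    sig_b = api.sign('pair=btcusd')

assert sig_a == sig_b      # static nonce -> deterministic signature
assert len(sig_a) == 64    # sha256 hexdigest length
```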

Let me know what you think - and of course any help is welcome!

deepbrook avatar Dec 17 '17 09:12 deepbrook

I was just thinking about mocked API tests vs. live API tests, the difference between them, and which is more appropriate. Having a static 'example' that the tests are compared against is great for quickly checking whether a code change broke a previous test, but it won't tell you if the exchange made a breaking change to their API. If they do, you'll need to manually update the unit tests' mocked responses, which could be tedious. That's not to say it isn't worthwhile - it would definitely make development easier, since passing tests tell you a change didn't break anything.

I would argue that end-to-end tests would also be highly valuable, especially if they could be run on a CI server somewhere to check that the exchange hasn't modified its API, and to be notified quickly if it has. Perhaps this is a different feature completely, though. Anyway, just thought I'd mention it.

MatthewCochrane avatar Dec 21 '17 01:12 MatthewCochrane

I have an implementation of this, if you like it I can make a pull request. It works like this: Imports:

from unittest.mock import patch, Mock
import requests

MockResponse object

class MockResponse(requests.Response):
    """A requests.Response whose json() returns canned data."""

    def __init__(self, json_data, status_code):
        super().__init__()
        self.json_data = json_data
        self.status_code = status_code

    def json(self):
        return self.json_data

Test:

class BitfinexFormattingTests(unittest.TestCase):
    @patch('requests.request')
    def test_ticker(self, mock_request):
        mock_request.side_effect = [MockResponse(['btcusd', 'ltcusd'], 200),  # supported pairs
                                    MockResponse({'bid': '18629.0',
                                                  'timestamp': '1513589744.5260189',
                                                  'ask': '18630.0',
                                                  'mid': '18629.5',
                                                  'low': '18010.0',
                                                  'volume': '63690.87027664',
                                                  'last_price': '18630.0',
                                                  'high': '19891.0'}, 200)]

        formatted_response = bitex.Bitfinex().ticker(bitex.BTCUSD)

        check_ticker_format(formatted_response)
        self.assertDictEqual(formatted_response.formatted,
                             {
                                 'timestamp': datetime(2017, 12, 18, 20, 35, 44, 526019),
                                 'bid': 18629.0,
                                 'ask': 18630.0,
                                 'low': 18010.0,
                                 'volume': 63690.87027664,
                                 'last': 18630.0,
                                 'high': 19891.0
                             })

mock_request.side_effect is assigned a list containing the return values for requests.request. The first time requests.request is called, the first value in the list is returned; the second time it's called, the second value is returned; and so on.

We need to specify two return values because requests.request is called twice: once on the initialisation of the Bitfinex class to retrieve the list of supported pairs, and then again to request the ticker info when we call ticker().
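That consume-one-value-per-call behaviour is easy to see in isolation with a bare Mock (the argument strings below are just placeholders):

```python
from unittest.mock import Mock

mock_request = Mock()
# Each call to the mock consumes the next item in the list.
mock_request.side_effect = ['pairs-response', 'ticker-response']

first = mock_request('GET', '/symbols')          # first call
second = mock_request('GET', '/ticker/btcusd')   # second call

assert first == 'pairs-response'
assert second == 'ticker-response'
```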

MockResponse({'bid': '18629.0',
              'timestamp': '1513589744.5260189',
              'ask': '18630.0',
              'mid': '18629.5',
              'low': '18010.0',
              'volume': '63690.87027664',
              'last_price': '18630.0',
              'high': '19891.0'}, 200)

The mock response contains the json that will be returned when we call response.json().

This is using a wrapper class to format responses, which I'll explain in the relevant thread. But in the current dev branch, calling bitex.Bitfinex().ticker(bitex.BTCUSD) with requests.request patched as in the test would result in the json

{'bid': '18629.0',
 'timestamp': '1513589744.5260189',
 'ask': '18630.0',
 'mid': '18629.5',
 'low': '18010.0',
 'volume': '63690.87027664',
 'last_price': '18630.0',
 'high': '19891.0'}

being returned if we called bitex.Bitfinex().ticker(bitex.BTCUSD).json()

Hope that all makes sense. Let me know if I should clarify anything.

MatthewCochrane avatar Dec 22 '17 01:12 MatthewCochrane

Yup, that's about what I had in mind.

As for the end-to-end tests, we'd need to assert these with caution, but generally I agree. They'd be a good early indicator of any API changes that we may have missed and that haven't yet been reported by a user. The problem lies mostly with server faults, 500 status codes, and dropped connections, which make these tests quite fragile. But perhaps we can integrate them as a nice-if-successful test stage, where failures are reported but have no consequence for the CI process?

deepbrook avatar Dec 22 '17 16:12 deepbrook

Ok great! Would it be possible to do some kind of rolling window statistic on E2E test results? For example, run the tests every hour and report how many passed out of the last Y runs (e.g. passed 95/100).
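A minimal sketch of such a rolling window, using a bounded deque so only the last Y results are kept (the class name and window size are hypothetical):

```python
from collections import deque

class RollingPassRate:
    """Track pass/fail outcomes of the last `window` E2E runs."""

    def __init__(self, window=100):
        # maxlen makes the deque drop the oldest result automatically.
        self.results = deque(maxlen=window)

    def record(self, passed):
        self.results.append(bool(passed))

    def summary(self):
        return f'passed {sum(self.results)}/{len(self.results)}'

stats = RollingPassRate(window=100)
for outcome in [True] * 95 + [False] * 5:
    stats.record(outcome)

print(stats.summary())  # -> passed 95/100
```

A CI job could call record() after each hourly run and post summary() somewhere visible.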

MatthewCochrane avatar Dec 22 '17 21:12 MatthewCochrane

hm. Perhaps? I'm not sure how to go about it, if you have an idea - shoot!

deepbrook avatar Dec 27 '17 18:12 deepbrook

I've replaced calls to the API with mocks for all API objects; the tests now only check that the defined signature is generated as expected.

deepbrook avatar Jan 12 '18 15:01 deepbrook

I've replaced all calls to the API with mocked objects for the interface tests - sample payloads have been added as well in tests/payloads.py. We're only missing the APIResponse from PR #140 to add the finishing touches to the test suite.

deepbrook avatar Jan 17 '18 19:01 deepbrook

APIResponse was implemented and the tests can now be run. There are still quite a few errors, but work is progressing.

deepbrook avatar Jan 21 '18 18:01 deepbrook

Commit 1332b0812c7ef765ada70aa9ae054af16974de45 eliminated the last error in the unit tests. They're all failing as expected now for the interface tests.

deepbrook avatar Jan 29 '18 09:01 deepbrook