
asyncio support

Open argaen opened this issue 8 years ago • 16 comments

Are there any plans on adding support for benchmarking coroutines?

argaen avatar Jan 23 '17 22:01 argaen

Not yet ;-)

But why do you need to benchmark coroutines? Aren't those I/O-bound?

Let's say you could do benchmarks on two levels:

  • Micro. And then you need to "remove" the io parts from your tests somehow (how?)
  • Macro. Why include ioloop-time in your benchmark?

pytest-benchmark could support benchmarking the total time of a coroutine, and that time would include time spent outside the coroutine (like the ioloop). Would that be enough?

ionelmc avatar Jan 23 '17 22:01 ionelmc

Yup, but in my tests I'm mocking the I/O part and wanted to check the impact of the code around the I/O part. For example, imagine the following code:

async def test_my_awesome_fn():
    await my_awesome_fn()

async def my_awesome_fn():
    do_some_stuff()
    do_other_stuff()
    await actual_io_call()
    do_post_stuff()
    clean_up_and_so()

In my tests I'm mocking the coroutine but would like to check the impact of the rest of the calls/code. The solution, of course, would be to benchmark the non-coroutine functions directly, but given the way I currently have the tests, it felt more natural to just wrap my_awesome_fn() rather than each of the individual calls. Also, in some cases we may not be that lucky: not everything is wrapped in functions and there is logic directly inside my_awesome_fn.

Hope I explained it well
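
For illustration, a minimal sketch of that setup as it could be written today, driving the coroutine through a plain event loop and stubbing the I/O with unittest.mock.AsyncMock. The module path mymodule is a placeholder, and this relies only on pytest-benchmark's regular benchmark fixture since there is no native coroutine support:

import asyncio
from unittest.mock import AsyncMock, patch

from mymodule import my_awesome_fn  # hypothetical module holding the code above


def test_my_awesome_fn_overhead(benchmark):
    loop = asyncio.new_event_loop()
    try:
        # Replace the real I/O call with an async stub so the benchmark mostly
        # measures the surrounding synchronous work (plus loop overhead).
        with patch("mymodule.actual_io_call", new=AsyncMock(return_value=None)):
            benchmark(lambda: loop.run_until_complete(my_awesome_fn()))
    finally:
        loop.close()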

argaen avatar Jan 23 '17 22:01 argaen

Are you mocking/stubbing the actual_io_call()? My initial proposal assumes you do, otherwise the timing is going to be inflated/unreliable.

ionelmc avatar Jan 23 '17 23:01 ionelmc

Yeah, I am :)

argaen avatar Jan 28 '17 18:01 argaen

Any examples? I need something for integration tests anyway.

ionelmc avatar Jan 28 '17 18:01 ionelmc

My idea would be to benchmark the API calls from the cache class: https://github.com/argaen/aiocache/blob/master/aiocache/cache.py#L147 and I would benchmark it with tests similar to https://github.com/argaen/aiocache/blob/master/tests/ut/test_cache.py#L152.

One for each API call testing different serializers and plugins.

argaen avatar Jan 28 '17 18:01 argaen

This might sound strange, but what if I need to benchmark the end-to-end loop + function + network time for a given function?

For now I haven't found any other solution than either measuring it "manually" or doing this:

def run():
    # `loop` is an existing asyncio event loop, `fn` is the coroutine function under test
    return loop.run_until_complete(fn())

benchmark(run)

but I assume (correct me if I am wrong) that there is extra time taken to spin the loop up and shut it down, compared to just waiting for a given coroutine to complete.
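
One way to gauge how large that loop overhead actually is (a sketch, not a pytest-benchmark feature): benchmark a coroutine that does nothing, so the measurement is almost entirely the cost of driving it through run_until_complete.

import asyncio


async def _noop():
    # Does no work: any measured time is loop dispatch overhead.
    pass


def test_loop_dispatch_overhead(benchmark):
    loop = asyncio.new_event_loop()
    try:
        benchmark(lambda: loop.run_until_complete(_noop()))
    finally:
        loop.close()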

So in my opinion there should be an option to run the benchmark on a live loop, where each run is simply (pseudo-code):

measure_start()
await coro(...)
measure_stop()

with an API like:

benchmark_coro(coro, *args, **kwargs, _loop:asyncio.AbstractEventLoop)

or

await benchmark_coro(coro, *args, **kwargs)

The second one integrates nicely with pytest-asyncio.

A case where this is applicable is testing end-to-end interaction with third-party systems like databases. For example, it is known in MongoDB that if you want to check whether a document exists, it is faster to use find and see whether the cursor has any data to fetch than to use find_one, which always fetches the data.

In short: for when you don't have the time/resources to develop and run the entire app under a load-testing framework to see whether an approach is good.

Also, regarding what I said above, it would be interesting to know the top (stripped-down) performance of a given coroutine when it runs as it would "in real life", i.e. min/max/avg served results (ops/sec) for "parallel" runs.
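
A rough way to approximate that today (a sketch, not an existing pytest-benchmark feature; the concurrency level of 100 and the placeholder coroutine are assumptions): run a batch of concurrent invocations per round and derive ops/sec from the reported round time.

import asyncio


async def _coro():
    await asyncio.sleep(0)  # stand-in for the coroutine under test


async def _run_batch(n):
    # Schedule n "parallel" invocations on the running loop.
    await asyncio.gather(*(_coro() for _ in range(n)))


def test_parallel_throughput(benchmark):
    loop = asyncio.new_event_loop()
    try:
        # ops/sec is roughly 100 divided by the mean round time reported by pytest-benchmark.
        benchmark(lambda: loop.run_until_complete(_run_batch(100)))
    finally:
        loop.close()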

dikderoy avatar Jun 15 '18 16:06 dikderoy

Also, not all coroutines are I/O-bound: there are plenty of asyncio-based constructions where the code works without any I/O and coroutines are used to run cooperative programs on a single thread, e.g. a complex queue-based conveyor (pipeline) calculation.

dikderoy avatar Jun 15 '18 16:06 dikderoy

@dikderoy it's not for coroutines specifically, but you can provide a custom timer to @pytest.mark.benchmark that allows more fine-grained control over what gets timed.

import time

import pytest


class Stopwatch:
    """A pausable clock: while stopped, the reported time does not advance."""

    def __init__(self, timer=time.perf_counter):
        self._timer = timer
        self._offset = 0
        self._stopped = False
        self._stop_time = 0
        self._stop_real_time = 0

    def __call__(self):
        if self._stopped:
            return self._stop_time
        return self._timer() - self._offset

    def stop(self):
        if self._stopped:
            return
        self._stopped = True
        self._stop_real_time = self._timer()
        self._stop_time = self._stop_real_time - self._offset

    def start(self):
        if not self._stopped:
            return
        self._stopped = False
        # Accumulate the paused duration so it is excluded from future readings.
        self._offset += self._timer() - self._stop_real_time


stopwatch = Stopwatch()


@pytest.mark.benchmark(timer=stopwatch)
def test_part_of_run(benchmark):
    stopwatch.stop()

    @benchmark
    def runner():
        # setup code (not timed)
        # ...
        stopwatch.start()
        # do work (timed)
        time.sleep(0.1)
        stopwatch.stop()

It's probably a common-enough use case that something should be included in the library.
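
Applied to the asyncio case discussed earlier in this thread, the same stopwatch idea could keep loop setup and teardown out of the measurement. A sketch, assuming the Stopwatch class above is available in the same module; work() is a placeholder for the coroutine under test:

import asyncio

import pytest

coro_stopwatch = Stopwatch()  # the Stopwatch class defined above


async def work():
    await asyncio.sleep(0)  # stand-in for the coroutine under test


@pytest.mark.benchmark(timer=coro_stopwatch)
def test_coroutine_body_only(benchmark):
    coro_stopwatch.stop()

    @benchmark
    def runner():
        loop = asyncio.new_event_loop()      # not timed
        try:
            coro_stopwatch.start()
            loop.run_until_complete(work())  # timed
            coro_stopwatch.stop()
        finally:
            loop.close()                     # not timed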

chrahunt avatar Jan 06 '19 18:01 chrahunt

Ok, I came here (among several other places) looking for a solution. I found a way to make it work and would like to share here for anyone coming later on.

The fixture below, called aio_benchmark, wraps benchmark and can be used with both sync and async functions. It works for me as is.

================================================

import pytest


@pytest.fixture(scope='function')
def aio_benchmark(benchmark):
    import asyncio
    import threading

    class Sync2Async:
        def __init__(self, coro, *args, **kwargs):
            self.coro = coro
            self.args = args
            self.kwargs = kwargs
            self.custom_loop = None
            self.thread = None

        def start_background_loop(self) -> None:
            asyncio.set_event_loop(self.custom_loop)
            self.custom_loop.run_forever()

        def __call__(self):
            evloop = None
            awaitable = self.coro(*self.args, **self.kwargs)
            try:
                evloop = asyncio.get_running_loop()
            except RuntimeError:
                pass
            if evloop is None:
                # No loop is running: drive the coroutine directly.
                return asyncio.run(awaitable)
            else:
                # A loop is already running (e.g. under pytest-asyncio): submit
                # the coroutine to a dedicated loop in a background thread.
                if not self.custom_loop or not self.thread or not self.thread.is_alive():
                    self.custom_loop = asyncio.new_event_loop()
                    self.thread = threading.Thread(target=self.start_background_loop, daemon=True)
                    self.thread.start()

                return asyncio.run_coroutine_threadsafe(awaitable, self.custom_loop).result()

    def _wrapper(func, *args, **kwargs):
        if asyncio.iscoroutinefunction(func):
            benchmark(Sync2Async(func, *args, **kwargs))
        else:
            benchmark(func, *args, **kwargs)

    return _wrapper

mbello avatar Jan 18 '20 01:01 mbello

Can you elaborate how you use it?

gammazplaud avatar Jul 27 '20 19:07 gammazplaud

To anyone who ends up here with the same issue, I found mbello's solution to be very good. To anyone wondering how to make it work, here is an example test:

import pytest


@pytest.mark.asyncio
async def test_something(aio_benchmark):
    @aio_benchmark
    async def _():
        await your_async_function()  # the coroutine under test

A few notes:

  • This requires the pytest-asyncio library
  • I didn't give the function a name because otherwise my linter complained
  • I am not sure if this is the exact intended way of using it, but it did work for me.

monkeyman192 avatar Jul 21 '21 08:07 monkeyman192

@monkeyman192 , tested on Python 3.9 still works. Thanks.

ghost avatar Sep 28 '21 04:09 ghost

I couldn't get the proposed solution to work. I think it had to do with the fact that I use async fixtures, which makes pytest-asyncio manage an event loop as well. So I fiddled around and figured out that you can request that event loop as a fixture. This is what I ended up with.

import asyncio

import pytest_asyncio


@pytest_asyncio.fixture
async def aio_benchmark(benchmark, event_loop):
    # `event_loop` is the loop fixture provided by pytest-asyncio.
    def _wrapper(func, *args, **kwargs):
        if asyncio.iscoroutinefunction(func):
            @benchmark
            def _():
                return event_loop.run_until_complete(func(*args, **kwargs))
        else:
            benchmark(func, *args, **kwargs)

    return _wrapper

Usage:

async def some_async_function_to_test(some_async_fixture):
    ...

def test_benchmark_please(some_async_fixture, aio_benchmark):
    aio_benchmark(some_async_function_to_test, some_async_fixture)

robsdedude avatar May 25 '22 09:05 robsdedude

@robsdedude Thank you for the snippet. Here is a very slight modification addressing a deprecation warning that the fixture raised.

import asyncio

import pytest_asyncio


@pytest_asyncio.fixture
async def aio_benchmark(benchmark):
    async def run_async_coroutine(func, *args, **kwargs):
        return await func(*args, **kwargs)

    def _wrapper(func, *args, **kwargs):
        if asyncio.iscoroutinefunction(func):

            @benchmark
            def _():
                future = asyncio.ensure_future(
                    run_async_coroutine(func, *args, **kwargs)
                )
                return asyncio.get_event_loop().run_until_complete(future)
        else:
            benchmark(func, *args, **kwargs)

    return _wrapper

jan-kubica avatar Apr 16 '24 06:04 jan-kubica

Is there anything pytest-asyncio can do to simplify this? I'm not a heavy pytest-benchmark user, but if someone has an idea to simplify the integration of both plugins from the pytest-asyncio side, I encourage them to file an issue in the pytest-asyncio tracker.

seifertm avatar Apr 16 '24 07:04 seifertm