nose
TimedOutException happens too frequently
I use the 'multiprocess' plugin when my tests are too slow. However I keep getting a TimedOutException almost every time.
The current default value for --process-timeout (10 seconds) is too low. I wouldn't need multiprocess if my tests were fast.
So, I think that:
1. either the timeout should be disabled by default;
2. or the default value should be a bit higher (e.g. 60 seconds).
I personally think that (1) would be the best solution.
I also think that I should be able to specify --process-timeout=-1, meaning that no timeout should be set.
I don't think changing the default to something higher is a good idea (10 seconds is a long time), and I'm not sure I like the idea of disabling it by default. On one hand, disabling it would provide the same behavior you have without the multiprocess module: namely, if a test hangs, then nosetests does too. On the other hand, I'm not sure how much I trust the current multiprocess implementation, so the timeouts are kind of a safeguard against that.
I think we could reuse 0 for disabling the process timeouts, because a timeout of 0 is just nonsensical otherwise.
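To illustrate the idea, here is a minimal sketch of how a sentinel value of 0 could be mapped to "no timeout". The `effective_timeout` helper is hypothetical, not nose's actual implementation; it relies on the fact that `multiprocessing.Queue.get(timeout=None)` blocks indefinitely:

```python
import multiprocessing


def effective_timeout(raw):
    """Hypothetical helper: treat a configured timeout of 0 (or less)
    as 'wait forever'. Returns None, which Queue.get() interprets as
    blocking indefinitely; any positive value is passed through."""
    return None if raw <= 0 else raw


# Example: a configured --process-timeout=0 becomes an indefinite wait.
q = multiprocessing.Queue()
q.put('test result')
result = q.get(timeout=effective_timeout(0))  # blocks until an item arrives
```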
I also ran into this issue recently. I thought I'd put an example here that triggers the timeout even with a set of simple tests that all do HTTP requests. The test is the following:
import unittest

import requests


def _run_test():
    print('running this parallel test')
    for _ in range(0, 10):
        print('also going to print stuff to the console')
        requests.get('http://www.yahoo.com')
    print('all done running')


class ParallelTests(unittest.TestCase):
    def test_parallel_1(self):
        _run_test()

    def test_parallel_2(self):
        _run_test()

    def test_parallel_3(self):
        _run_test()
When you try to run the above with --processes=2, you'll find it times out every time until you specify --process-timeout=60.
To run the above, also install:
nose==1.3.0 requests==2.0.1
I don't understand enough about the multiprocess plugin's use of the SIG_ILL signal, but it seems that it can go unhandled for quite some time when there's I/O happening (as is the case with the requests.get() calls above), which then results in the early timeout.
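The general failure mode described above can be reproduced without nose at all. The sketch below (my own illustration, not nose's code) shows a worker process busy with blocking work that takes longer than the parent's per-result timeout; the parent gives up waiting on the queue, which is analogous to the TimedOutException. `time.sleep` stands in for a slow HTTP request:

```python
import multiprocessing
import queue
import time


def slow_worker(q):
    # Simulate a test doing blocking I/O (e.g. an HTTP request) that
    # takes longer than the parent's per-result timeout.
    time.sleep(1)
    q.put('done')


def wait_with_timeout(timeout):
    """Return True if the parent timed out before the worker reported."""
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=slow_worker, args=(q,))
    p.start()
    try:
        q.get(timeout=timeout)
        timed_out = False
    except queue.Empty:
        timed_out = True
    p.join()
    return timed_out


if __name__ == '__main__':
    print(wait_with_timeout(0.3))  # True: parent gives up before the worker reports
```

With a generous timeout (say 5 seconds), the same worker reports in time and no timeout occurs, which matches the observation that raising --process-timeout makes the errors go away.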
Feel free to ask me for any other input on how I reproduced this issue, but I could only run the tests with a much larger timeout.
I also ran into this issue; like andrea-corbellini suggests, I wouldn't be running this if I didn't have big tests. In my case the default of 10 seconds failed, but 11 seconds worked. A more reasonable default would be nice.
Have you set up a setup.cfg at all? You could put the default that you desire in there:
[nosetests]
process-timeout=60
Then you don't have to specify it on the command line. You could also put it in a nose.cfg or .noserc file. The documentation for this feature is here: https://nose.readthedocs.org/en/latest/usage.html#configuration
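For illustration, the [nosetests] section of such a file can be read with the standard library's plain INI parser. This is just a sketch of how the format works (using Python 3's configparser, not nose's own config loader), with the option values from the snippet above:

```python
import configparser
import io

# The same [nosetests] section you would put in setup.cfg / nose.cfg / .noserc
cfg_text = """
[nosetests]
process-timeout = 60
processes = 2
"""

parser = configparser.ConfigParser()
parser.read_file(io.StringIO(cfg_text))

# Option names keep their hyphens, exactly as on the command line.
timeout = parser.getint('nosetests', 'process-timeout')
print(timeout)  # -> 60
```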
GREAT post, and I had just about given up when I found this. I'm using nosetests to execute integration tests which make lots of SSH calls (via Paramiko). It's an integration env that gets reset, but never destroyed.
Anyway, I was certain a set of simple tests could be run concurrently (no shared resources and no setup/teardown), but my tests randomly raised TimedOutException()... EVEN if I specified "--processes=1 --process-restartworker".
(If someone could explain why processes=1 is different than not using the argument, that would be a helpful aside..)
I did not find my answer (--process-timeout=60) in the Nose docs, but Google led me to this issue. I hope this helps someone else.
@sprive re: "If someone could explain why processes=1 is different than not using the argument, that would be a helpful aside.." According to the docs, the default is zero, which leaves the multiprocess plugin disabled, so tests run in the main nosetests process. With --processes=1 your tests still run in a separate worker process (just not in parallel), and that is what activates the timeout machinery.