Tried to stop a LoopingCall that was not running
I also added two custom periodic monitors and imported these monitors:
```python
from spidermon.contrib.scrapy.monitors import (
    ErrorCountMonitor,
    FinishReasonMonitor,
    ItemValidationMonitor,
    UnwantedHTTPCodesMonitor,
)
```
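For context, these built-in monitors are typically wired into a suite and enabled in `settings.py` roughly like this. This is only a configuration sketch: the module path `monitors.py`, the suite name, and the 60-second interval are assumptions, not taken from the original report.

```python
# monitors.py -- a sketch of a suite using Spidermon's built-in monitors
from spidermon.core.suites import MonitorSuite
from spidermon.contrib.scrapy.monitors import (
    ErrorCountMonitor,
    FinishReasonMonitor,
    ItemValidationMonitor,
    UnwantedHTTPCodesMonitor,
)


class SpiderCloseMonitorSuite(MonitorSuite):
    monitors = [
        ErrorCountMonitor,
        FinishReasonMonitor,
        ItemValidationMonitor,
        UnwantedHTTPCodesMonitor,
    ]


# settings.py -- enable Spidermon and register the suites
SPIDERMON_ENABLED = True
EXTENSIONS = {"spidermon.contrib.scrapy.extensions.Spidermon": 500}
SPIDERMON_SPIDER_CLOSE_MONITORS = ("monitors.SpiderCloseMonitorSuite",)
# Periodic suites run every N seconds while the spider is running:
SPIDERMON_PERIODIC_MONITORS = {"monitors.SpiderCloseMonitorSuite": 60}
```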
I couldn't find the cause of this error.
```
Traceback (most recent call last):
  File "/app/python/lib/python3.8/site-packages/scrapy/utils/defer.py", line 157, in maybeDeferred_coro
    result = f(*args, **kw)
  File "/usr/local/lib/python3.8/site-packages/pydispatch/robustapply.py", line 55, in robustApply
    return receiver(*arguments, **named)
  File "/app/python/lib/python3.8/site-packages/spidermon/contrib/scrapy/extensions.py", line 131, in spider_closed
    task.stop()
  File "/app/python/lib/python3.8/site-packages/twisted/internet/task.py", line 197, in stop
    assert self.running, "Tried to stop a LoopingCall that was " "not running."
AssertionError: Tried to stop a LoopingCall that was not running.
```
Any update on this issue? I also faced the same issue today.
I got this error when my spider had a few timeout requests. The spider close action was triggered but failed with the following traceback, and the error above appeared later:
```
  File "/app/python/lib/python3.9/site-packages/spidermon/runners.py", line 29, in run
    return self.run_suite()
  File "/app/python/lib/python3.9/site-packages/spidermon/runners.py", line 45, in run_suite
    self.run_actions()
  File "/app/python/lib/python3.9/site-packages/spidermon/runners.py", line 73, in run_actions
    self.run_monitors_failed()
  File "/app/python/lib/python3.9/site-packages/spidermon/runners.py", line 93, in run_monitors_failed
    action.run(self.result, self.data)
TypeError: run() missing 1 required positional argument: 'data'
```
Hello, could you please provide more details about this so we can replicate it?
What Spidermon and Scrapy versions were you using? Which reactor was installed?
I see some weird URLs like:
- http://runners.py
- http://action.run
- http://self.data/
Are you sure the monitor and/or actions were properly configured?
Hi @VMRuiz
I actually added one "close spider" action dynamically to the monitor suite that runs periodically, if a certain condition is fulfilled:

```python
self.monitors_failed_actions.append(CloseSpiderAction)
```
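For what it's worth, the `TypeError: run() missing 1 required positional argument: 'data'` above is consistent with the action sitting in `monitors_failed_actions` as a bare class rather than an instance, so that `run` is called unbound and the first argument is consumed as `self`. A minimal, self-contained Python sketch of that failure mode (the `CloseSpiderAction` stub here is hypothetical, not Spidermon's real class):

```python
class CloseSpiderAction:
    """Hypothetical stand-in for a Spidermon action class (illustration only)."""

    def run(self, result, data):
        return (result, data)


monitors_failed_actions = []
# The suspected mistake: appending the class itself instead of an instance.
monitors_failed_actions.append(CloseSpiderAction)

action = monitors_failed_actions[0]
try:
    # With an unbound `run`, "result" is consumed as `self` and
    # "data" as `result`, so the `data` parameter is reported missing.
    action.run("result", "data")
except TypeError as exc:
    print(exc)  # e.g. "run() missing 1 required positional argument: 'data'"

# Appending an instance instead makes the same call succeed:
monitors_failed_actions[0] = CloseSpiderAction()
assert monitors_failed_actions[0].run("result", "data") == ("result", "data")
```

If this is what is happening, checking whether Spidermon expects dynamically appended actions to be classes or instances would be the next step.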
We are using scrapy == 2.5.1 and spidermon == 1.17.0.
The Twisted asyncio reactor is installed:

```python
from scrapy.utils.reactor import install_reactor

install_reactor("twisted.internet.asyncioreactor.AsyncioSelectorReactor")
```