Improvements for spider management
Currently, the Crawly.Engine APIs are lacking for spider monitoring and management, especially when there is no access to logs.
I think some critical areas are:
- spider crawl stats (scraped item count, dropped request/item count, scrape speed)
- `stop_all_spiders` to stop all running spiders
The stopping of spiders should be easy to implement.
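For illustration, here is a rough sketch of what it could look like inside Crawly.Engine. The `list_started_spiders/0` helper and the `started_spiders` map in the engine state are assumptions made for illustration, not the actual implementation:

```elixir
# Hypothetical sketch inside Crawly.Engine. Assumes the engine keeps a
# `started_spiders` map of spider_name => pid in its GenServer state.
def list_started_spiders() do
  GenServer.call(__MODULE__, :list_started_spiders)
end

def stop_all_spiders() do
  # Reuse the existing per-spider stop for each running spider
  Enum.each(list_started_spiders(), &stop_spider/1)
end

def handle_call(:list_started_spiders, _from, state) do
  {:reply, Map.keys(state.started_spiders), state}
end
```

Iterating in the client function (rather than inside a `handle_call`) avoids the engine process calling back into itself.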
For the spider stats, since some of the data is nested quite deep in the supervision tree, I'm not so sure how to get it to "bubble up" to the Crawly.Engine level.
@oltarasenko thoughts?
Actually, I think it's possible to get the stats from the DataStorage, e.g. Crawly.DataStorage.stats(spider_name).
https://github.com/oltarasenko/crawly/blob/master/lib/crawly/manager.ex#L83
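If that works, callers wouldn't need to traverse the supervision tree at all. A usage sketch; the `{:stored_items, n}` return shape is an assumption for illustration:

```elixir
# Assumed usage: ask the data storage for a running spider's item count.
iex> Crawly.DataStorage.stats(MySpider)
{:stored_items, 1234}
```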
I see, the crawl speed seems to be calculated based on the previous state's crawl count, so a separate callback would be necessary to obtain the crawl speed from the manager.
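One possible shape for that callback, sketched with made-up state fields (`tick_interval`, `crawl_speed`, and `prev_scraped_cnt` as stored fields are assumptions; the linked manager code currently only logs the delta during its periodic tick):

```elixir
# Hypothetical sketch for Crawly.Manager: store the computed delta in
# state during the periodic tick instead of only logging it...
def handle_info(:operations, state) do
  {:stored_items, items_count} = Crawly.DataStorage.stats(state.name)
  delta = items_count - state.prev_scraped_cnt

  Process.send_after(self(), :operations, state.tick_interval)
  {:noreply, %{state | prev_scraped_cnt: items_count, crawl_speed: delta}}
end

# ...and expose the last computed speed through a synchronous call:
def handle_call(:crawl_speed, _from, state) do
  {:reply, {:items_per_tick, state.crawl_speed}, state}
end
```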
As for the drop count, the manager doesn't seem to be tracking it. Neither are the requests/data storage workers:
- https://github.com/oltarasenko/crawly/blob/master/lib/crawly/requests_storage/requests_storage_worker.ex#L87
- https://github.com/oltarasenko/crawly/blob/master/lib/crawly/data_storage/data_storage_worker.ex#L40
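A hedged sketch of what tracking could look like in the data storage worker. It assumes the pipeline run returns `{false, state}` when an item is dropped (Crawly's pipeline drop convention) and that the worker state already tracks `stored_items`; the `drop_count` field and exact callback shape are made up for illustration:

```elixir
# Hypothetical sketch: count dropped items in the worker's state.
def handle_cast({:store, item}, state) do
  pipelines = Application.get_env(:crawly, :pipelines, [])

  new_state =
    case Crawly.Utils.pipe(pipelines, item, state) do
      # A pipeline dropped the item
      {false, state} -> Map.update(state, :drop_count, 1, &(&1 + 1))
      # Item passed all pipelines and was stored
      {_item, state} -> %{state | stored_items: state.stored_items + 1}
    end

  {:noreply, new_state}
end
```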
Tentative scope:
- start spider (implemented)
- stop spider (implemented)
- start all spiders
- stop all spiders
- spider stats (crawl count, overridden settings, request count, storage count, crawl speed, drop count)
- list all spiders
- schedule spider to start at a specific time (maybe cron-style scheduling? see the sketch after this list)
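For the scheduling item, a minimal sketch of time-based scheduling using a plain GenServer and Process.send_after. This is not part of Crawly today; a cron-style interface (e.g. via the quantum library) could be layered on top:

```elixir
defmodule Crawly.Scheduler do
  @moduledoc """
  Hypothetical sketch, not part of Crawly: starts a spider at a given time.
  """
  use GenServer

  def start_link(_opts) do
    GenServer.start_link(__MODULE__, %{}, name: __MODULE__)
  end

  # Schedule `spider` to start at the given UTC datetime.
  def schedule(spider, %DateTime{} = at) do
    delay_ms = max(DateTime.diff(at, DateTime.utc_now(), :millisecond), 0)
    Process.send_after(__MODULE__, {:start_spider, spider}, delay_ms)
  end

  @impl true
  def init(state), do: {:ok, state}

  @impl true
  def handle_info({:start_spider, spider}, state) do
    Crawly.Engine.start_spider(spider)
    {:noreply, state}
  end
end
```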
@Ziinc I am thinking of making a major release of Crawly (aka v1.0.0). I think after a year of development and releases it's time to do that. I see that our competitors (https://github.com/fredwu/crawler and https://github.com/Anonyfox/elixir-scrape) have already reached their stable state and version, so I am tempted to do the same.
That said, I think this is the last ticket needed to round out the scope of the 1.0.0 version of Crawly.
After 0.11.0? I think the major version should only be bumped once the API scope has stabilized. Right now there are still quite a few incomplete areas that may result in API changes.
There's not much use comparing to other projects, as they have been around longer.
@Ziinc yes, we need to aim for the 1.0.0 release. It's a bit hard to push Crawly into production for larger products at the moment. The fact that we don't have a first stable major release suggests that the framework is still in the testing stage. People are constantly saying that it's not stable.
I agree regarding API stability. We need to achieve it; however, psychologically speaking, it looks like we need to be able to state that we have a 1.0.0, i.e. stable, version.
We probably need to somehow define the scope of things to do before we can approach 1.0.0; however, it's even more important to get more production usage. If we fail to convince people to use Crawly in production, we will die as a project :(