modm
Maybe you can view the Swagger doc at ```http://localhost:5000/api.html```; there you will find the API to run the spider. (English is not my first language ^ ^)
Thanks for your advice. I think a multiple-select checkbox will be better :)
Try the script below; execute it after the egg file has been generated:

```python
import requests

# upload
upload_url = 'http://localhost:5000/project/1/spider/upload'  # 1 is the project id
egg_path = 'output.egg'
auth_info...
```
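A completed sketch of the upload snippet above, for anyone who wants something runnable. Note the assumptions: the form-field name `file` and the `('admin', 'admin')` credentials are guesses, not confirmed here; check the Swagger doc at `/api.html` for the actual contract.

```python
import requests

PROJECT_ID = 1  # assumed project id, as in the snippet above


def build_upload_url(host: str, project_id: int) -> str:
    """Build the spider-upload endpoint URL for a given project."""
    return f'{host}/project/{project_id}/spider/upload'


def upload_egg(url: str, egg_path: str, auth: tuple) -> int:
    """POST the egg file as multipart form data; return the HTTP status code.

    The form-field name 'file' is an assumption; adjust if the API
    expects a different key.
    """
    with open(egg_path, 'rb') as f:
        resp = requests.post(url, files={'file': f}, auth=auth)
    return resp.status_code


if __name__ == '__main__':
    url = build_upload_url('http://localhost:5000', PROJECT_ID)
    # ('admin', 'admin') is SpiderKeeper's documented default login;
    # replace with your own credentials.
    print(upload_egg(url, 'output.egg', ('admin', 'admin')))
```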
wow, what a clever implementation, let me try it.
I'm afraid a large log will make the service very slow.
@jxltom Very good suggestions, thanks. These issues will be improved in a later release.
@bosbyj SpiderKeeper is built on the scrapyd service, so it requires scrapyd. It should work on Windows, but that has not been tested.
@PythonYXY Currently the priority corresponds to the number of machines running the spider: the higher the priority, the more machines will run the same spider (distributed execution requires scrapy-redis).
It's the same as passing the params on the command line, e.g. ```foo=1,bar=2``` is equivalent to ```scrapy crawl a_scraper -a foo=1 -a bar=2```.
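To make the mapping concrete, here is a small illustrative sketch (not SpiderKeeper's actual code) of how a comma-separated string like `foo=1,bar=2` could be turned into the `-a` arguments of a `scrapy crawl` command:

```python
def args_to_cmd(spider: str, raw: str) -> list:
    """Turn 'foo=1,bar=2' into a scrapy crawl argument list."""
    cmd = ['scrapy', 'crawl', spider]
    for pair in raw.split(','):
        pair = pair.strip()
        if pair:  # skip empty segments such as trailing commas
            cmd += ['-a', pair]
    return cmd


print(args_to_cmd('a_scraper', 'foo=1,bar=2'))
# → ['scrapy', 'crawl', 'a_scraper', '-a', 'foo=1', '-a', 'bar=2']
```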