
KeyError: 'search'

Open DedDol opened this issue 2 years ago • 2 comments

Sorry to bother you. I previously read https://github.com/dataabc/weibo-search/issues/2 and tried the method there: switching to the Tsinghua mirror partway through failed immediately, and running pip install weibo afterwards still produced an error. I then tried a different approach and swapped in your entire weibo folder, which gave me KeyError: 'search'. Could you tell me how to resolve this?

C:\Users\18681\weibo>scrapy crawl search -s JOBDIR=crawls/search
2023-06-14 04:33:50 [scrapy.utils.log] INFO: Scrapy 2.9.0 started (bot: weibo)
2023-06-14 04:33:50 [scrapy.utils.log] INFO: Versions: lxml 4.9.2.0, libxml2 2.9.12, cssselect 1.2.0, parsel 1.8.1, w3lib 2.1.1, Twisted 22.10.0, Python 3.11.4 (tags/v3.11.4:d2340ef, Jun 7 2023, 05:45:37) [MSC v.1934 64 bit (AMD64)], pyOpenSSL 23.2.0 (OpenSSL 3.1.1 30 May 2023), cryptography 41.0.1, Platform Windows-10-10.0.22000-SP0
Traceback (most recent call last):
  File "O:\python\Lib\site-packages\scrapy\spiderloader.py", line 77, in load
    return self._spiders[spider_name]
           ~~~~~~~~~~~~~^^^^^^^^^^^^^
KeyError: 'search'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "O:\python\Scripts\scrapy.exe\__main__.py", line 7, in <module>
  File "O:\python\Lib\site-packages\scrapy\cmdline.py", line 158, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "O:\python\Lib\site-packages\scrapy\cmdline.py", line 111, in _run_print_help
    func(*a, **kw)
  File "O:\python\Lib\site-packages\scrapy\cmdline.py", line 166, in _run_command
    cmd.run(args, opts)
  File "O:\python\Lib\site-packages\scrapy\commands\crawl.py", line 23, in run
    crawl_defer = self.crawler_process.crawl(spname, **opts.spargs)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "O:\python\Lib\site-packages\scrapy\crawler.py", line 239, in crawl
    crawler = self.create_crawler(crawler_or_spidercls)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "O:\python\Lib\site-packages\scrapy\crawler.py", line 273, in create_crawler
    return self._create_crawler(crawler_or_spidercls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "O:\python\Lib\site-packages\scrapy\crawler.py", line 353, in _create_crawler
    spidercls = self.spider_loader.load(spidercls)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "O:\python\Lib\site-packages\scrapy\spiderloader.py", line 79, in load
    raise KeyError(f"Spider not found: {spider_name}")
KeyError: 'Spider not found: search'

DedDol · Jun 13 '23 19:06

pip is for installing third-party packages, and this program is not published as a pip package, so pip install weibo is wrong. You also can't use the weibo folder on its own. You should first install scrapy and the other dependencies, then download this program and run the command line from the weibo-search directory.
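For reference, a minimal sketch of that setup on Windows (the clone path and the use of the repo's requirements.txt are assumptions; adjust to your environment):

```
:: Sketch only: assumes git and Python are on PATH, and that the repo's
:: requirements.txt lists scrapy and the other dependencies.
git clone https://github.com/dataabc/weibo-search.git
cd weibo-search
pip install -r requirements.txt
:: run from the project root (the directory containing scrapy.cfg)
scrapy crawl search -s JOBDIR=crawls/search
```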

dataabc · Jun 14 '23 17:06

Thanks for the reply. The problem turned out to be that I had not cd'd into the project folder containing search.py before running scrapy crawl search. I've solved it myself, and I'm leaving this note for anyone else with the same issue. Thanks again.
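A quick way to check for this (a sketch, not from the original thread): Scrapy discovers spiders via the project configured in the current directory's scrapy.cfg, so scrapy list run from the project root should print search; run anywhere else, the loader finds no spiders and raises the KeyError above.

```
:: Sketch: verify the spider is visible before crawling.
cd weibo-search    :: project root, where scrapy.cfg lives
scrapy list        :: should print: search
scrapy crawl search -s JOBDIR=crawls/search
```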

DedDol · Jun 14 '23 17:06