How do I fix this error when running?
2023-03-12 18:15:15 [twisted] CRITICAL:
Traceback (most recent call last):
File "F:\python\anaconda\lib\site-packages\twisted\internet\defer.py", line 1697, in _inlineCallbacks
result = context.run(gen.send, result)
File "F:\python\anaconda\lib\site-packages\scrapy\crawler.py", line 122, in crawl
self.engine = self._create_engine()
File "F:\python\anaconda\lib\site-packages\scrapy\crawler.py", line 136, in create_engine
return ExecutionEngine(self, lambda : self.stop())
File "F:\python\anaconda\lib\site-packages\scrapy\core\engine.py", line 78, in init
self.downloader = downloader_cls(crawler)
File "F:\python\anaconda\lib\site-packages\scrapy\core\downloader_init.py", line 85, in init
self.middleware = DownloaderMiddlewareManager.from_crawler(crawler)
File "F:\python\anaconda\lib\site-packages\scrapy\middleware.py", line 68, in from_crawler
return cls.from_settings(crawler.settings, crawler)
File "F:\python\anaconda\lib\site-packages\scrapy\middleware.py", line 43, in from_settings
mwcls = load_object(clspath)
File "F:\python\anaconda\lib\site-packages\scrapy\utils\misc.py", line 60, in load_object
mod = import_module(module)
File "F:\python\anaconda\lib\importlib_init.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "
After running pip install attrs it shows: Requirement already satisfied: attrs in f:\python\anaconda\lib\site-packages (21.2.0). If I need to install it under a specific path, how do I find which directory it was installed to?
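For reference, a minimal sketch (assuming it is run with the same interpreter that runs Scrapy) for checking which Python is active and where attrs actually lives:

```python
# Check which interpreter is running and where the attrs package is installed.
import sys
import importlib.util

print("interpreter:", sys.executable)  # the Python binary currently in use
spec = importlib.util.find_spec("attrs")
print("attrs location:", spec.origin if spec else "not importable from this interpreter")
```

pip show attrs prints the same information in its Location field; if that path belongs to a different environment than the one Scrapy runs in, the module will still be reported as missing at runtime.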
It may be related to the Python version; Python 3 is required.
Mine is Python 3.9. Could the cause be that the dependencies did not install successfully? When I run pip install -r requirements.txt it shows: Requirement already satisfied: Pillow>=8.1.1 in f:\python\anaconda\lib\site-packages (from -r requirements.txt (line 1)) (8.4.0). How do I fix this?
Could you share which versions you use? I reinstalled Anaconda with Python 3.9 and it still doesn't work.
It may be related to Anaconda; installing Python on its own works.
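A common failure mode with Anaconda is that pip installs packages into one environment while the spider runs from another. An illustrative check along these lines (not part of the project):

```python
# Compare the environment this interpreter belongs to with what it can import.
import sys

print("python executable:", sys.executable)  # interpreter actually running
print("environment prefix:", sys.prefix)     # root of the active environment

try:
    import scrapy
    print("scrapy", scrapy.__version__, "loaded from", scrapy.__file__)
except ModuleNotFoundError:
    print("scrapy is not installed in this environment")
```

Installing with python -m pip install -r requirements.txt, using the same python that later runs the scrapy crawl command, keeps the dependencies and the spider in one environment.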
That solved it, thanks! But now I'm getting the error below, and I've seen others run into it too. Has it been solved?
AttributeError: 'NoneType' object has no attribute 'split'
2023-03-14 16:32:31 [scrapy.core.scraper] ERROR: Spider error processing <GET https://s.weibo.com/weibo?q=%E4%BA%A4%E9%80%9A&scope=ori&suball=1&timescope=custom:2022-03-06-23:2022-03-07-0&page=1> (referer: https://s.weibo.com/weibo?q=%E4%BA%A4%E9%80%9A&scope=ori&suball=1&timescope=custom:2022-03-06-0:2022-03-07-0&page=1)
Traceback (most recent call last):
File "e:\python38\lib\site-packages\scrapy\utils\defer.py", line 257, in iter_errback
yield next(it)
File "e:\python38\lib\site-packages\scrapy\utils\python.py", line 312, in next
return next(self.data)
File "e:\python38\lib\site-packages\scrapy\utils\python.py", line 312, in next
return next(self.data)
File "e:\python38\lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync
for r in iterable:
File "e:\python38\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 28, in
Also, an extra file appeared in my crawl directory; it shows up as garbled text when opened as txt. What causes this, and is it normal?

This is probably because Weibo changed its page format. You can refer to the workaround another user posted and modify search.py; that was answered in an earlier issue. The garbled file is normal; it stores the crawl progress, so you can ignore it.
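For context, AttributeError: 'NoneType' object has no attribute 'split' usually means a selector lookup returned None (because the page layout changed) before .split() was called on it. A hypothetical sketch of the defensive pattern; the XPath and function name below are placeholders, not the repository's actual search.py code:

```python
# Hypothetical example: guard a selector result before calling .split().
# The XPath and function name are illustrative placeholders only.
def parse_publish_info(sel):
    raw = sel.xpath('.//div[@class="from"]/a/text()').get()  # None if the layout changed
    if raw is None:
        return []  # skip gracefully instead of raising AttributeError
    return raw.strip().split()
```

Whatever the exact selector, the symptom is the same: the page no longer matches what search.py expects, so the lookup returns None.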
AttributeError: 'NoneType' object has no attribute 'split'
I've also run into the split problem; how do I solve it?
I have modified the search.py file and used a cookie from the old Weibo version, but I still hit this problem.
@litalxh Try rolling back to an older version; everyone's situation is different.
Hi, I'm also getting ModuleNotFoundError: No module named 'attrs'. How did you solve it?
Solved.