
How do I fix this error when running from cmd?

Open HEISHALJH opened this issue 2 years ago • 7 comments

C:\Users\DELL\Downloads\weibo-search-master>scrapy crawl search -s JOBDIR=crawls/search
2022-08-11 23:53:57 [scrapy.core.scraper] ERROR: Spider error processing <GET https://s.weibo.com/weibo?q=%E7%A2%B3%E8%BE%BE%E5%B3%B0&scope=ori&suball=1&timescope=custom:2020-09-04-0:2020-09-05-0&page=1> (referer: https://s.weibo.com/weibo?q=%E7%A2%B3%E8%BE%BE%E5%B3%B0&scope=ori&suball=1&timescope=custom:2020-09-01-0:2022-06-01-0)
Traceback (most recent call last):
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\scrapy\utils\defer.py", line 132, in iter_errback
    yield next(it)
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\scrapy\utils\python.py", line 354, in __next__
    return next(self.data)
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\scrapy\utils\python.py", line 354, in __next__
    return next(self.data)
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\scrapy\core\spidermw.py", line 66, in _evaluate_iterable
    for r in iterable:
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 29, in process_spider_output
    for x in result:
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\scrapy\core\spidermw.py", line 66, in _evaluate_iterable
    for r in iterable:
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 342, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\scrapy\core\spidermw.py", line 66, in _evaluate_iterable
    for r in iterable:
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 40, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\scrapy\core\spidermw.py", line 66, in _evaluate_iterable
    for r in iterable:
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\scrapy\core\spidermw.py", line 66, in _evaluate_iterable
    for r in iterable:
  File "C:\Users\DELL\Downloads\weibo-search-master\weibo\spiders\search.py", line 151, in parse_by_day
    for weibo in self.parse_weibo(response):
  File "C:\Users\DELL\Downloads\weibo-search-master\weibo\spiders\search.py", line 358, in parse_weibo
    ).split('/')[-1].split('?')[0]
AttributeError: 'NoneType' object has no attribute 'split'
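
The failure is inside parse_weibo: an XPath lookup returns None because the element it targets no longer exists in Weibo's search-results markup, and calling .split() on None raises the AttributeError. A minimal, self-contained sketch of the mechanism, not the repository's actual code (the sample HTML and the href value are illustrative):

```python
# Minimal sketch of the failing pattern: when the XPath matches nothing,
# extract_first() returns None, and .split() on None raises the
# AttributeError seen above.
from scrapy import Selector

# Assumption per this thread: the "from" element is now a <div>, not a <p>.
sel = Selector(
    text='<div class="from"><a href="//weibo.com/1/AbCdEf?x=1">time</a></div>')

old = sel.xpath('.//p[@class="from"]/a/@href').extract_first()
print(old)  # None -> old.split('/') would raise AttributeError

new = sel.xpath('.//div[@class="from"]/a/@href').extract_first()
print(new.split('/')[-1].split('?')[0])  # AbCdEf
```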

HEISHALJH avatar Aug 11 '22 16:08 HEISHALJH

I tried replacing every p[@class="from" in search.py with div[@class="from", and the spider runs; a sketch of scripting that replacement follows.
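
If you would rather script that bulk replacement than edit by hand, a rough sketch (back up search.py first; the path is an assumption for a checkout of the repo, and note the caveat later in this thread about the retweet section):

```python
# Rough sketch: bulk-replace the tag name in the spider's XPath strings.
# Caution: this replaces *every* occurrence; a later comment in this
# thread says the last three occurrences (the retweet section) should
# keep p[@class="from".
from pathlib import Path

path = Path('weibo/spiders/search.py')  # adjust to your checkout
src = path.read_text(encoding='utf-8')
path.write_text(src.replace('p[@class="from"', 'div[@class="from"'),
                encoding='utf-8')
```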

Chovyzheng avatar Aug 11 '22 17:08 Chovyzheng

I tried replacing every p[@class="from" in search.py with div[@class="from", and the spider runs.

That works now, thank you so much!

HEISHALJH avatar Aug 12 '22 00:08 HEISHALJH

2022-08-12 11:06:39 [scrapy.core.scraper] ERROR: Spider error processing <GET https://s.weibo.com/weibo?q=%E5%94%90%E5%B1%B1%20%E6%89%93&typeall=1&suball=1&timescope=custom:2022-06-16-0:2022-06-16-1&page=1> (referer: https://s.weibo.com/weibo?q=%E5%94%90%E5%B1%B1%20%E6%89%93&typeall=1&suball=1&timescope=custom:2022-06-16-0:2022-06-17-0&page=1)
Traceback (most recent call last):
  File "d:\实训\豆瓣\venv\lib\site-packages\scrapy\utils\defer.py", line 120, in iter_errback
    yield next(it)
  File "d:\实训\豆瓣\venv\lib\site-packages\scrapy\utils\python.py", line 353, in __next__
    return next(self.data)
  File "d:\实训\豆瓣\venv\lib\site-packages\scrapy\utils\python.py", line 353, in __next__
    return next(self.data)
  File "d:\实训\豆瓣\venv\lib\site-packages\scrapy\core\spidermw.py", line 62, in _evaluate_iterable
    for r in iterable:
  File "d:\实训\豆瓣\venv\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 29, in process_spider_output
    for x in result:
  File "d:\实训\豆瓣\venv\lib\site-packages\scrapy\core\spidermw.py", line 62, in _evaluate_iterable
    for r in iterable:
  File "d:\实训\豆瓣\venv\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 340, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "d:\实训\豆瓣\venv\lib\site-packages\scrapy\core\spidermw.py", line 62, in _evaluate_iterable
    for r in iterable:
  File "d:\实训\豆瓣\venv\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "d:\实训\豆瓣\venv\lib\site-packages\scrapy\core\spidermw.py", line 62, in _evaluate_iterable
    for r in iterable:
  File "d:\实训\豆瓣\venv\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "d:\实训\豆瓣\venv\lib\site-packages\scrapy\core\spidermw.py", line 62, in _evaluate_iterable
    for r in iterable:
  File "C:\Users\尚晨\Desktop\weibo-search-master\weibo\spiders\search.py", line 198, in parse_by_hour
    for weibo in self.parse_weibo(response):
  File "C:\Users\尚晨\Desktop\weibo-search-master\weibo\spiders\search.py", line 466, in parse_weibo
    './/div[@Class="from"]/a/@href').extract_first().split(
AttributeError: 'NoneType' object has no attribute 'split'

Why do I still get this error after replacing every p[@class="from" with div[@class="from"?
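
Two things are worth checking here. First, this traceback prints the selector as './/div[@Class="from"]' with a capital C; XPath attribute names are case-sensitive, so @Class can never match (the capital may have crept in from copy-pasting a comment where GitHub rendered @class as a user mention) — make sure the file says @class. Second, even with the correct selector, individual posts can lack the element; a defensive sketch with illustrative names, not the repository's code, that skips such posts instead of crashing:

```python
# Defensive sketch (illustrative, not the repository's code): return None
# when the "from" link is missing, so the caller can skip that post
# instead of crashing the whole crawl with an AttributeError.
from scrapy import Selector

def weibo_id_from(post):
    # Note: XPath is case-sensitive -- @class, not @Class.
    href = post.xpath('.//div[@class="from"]/a/@href').extract_first()
    if href is None:
        return None  # layout differs for this post; let the caller skip it
    return href.split('/')[-1].split('?')[0]

sample = Selector(text='<div class="card">no "from" element here</div>')
print(weibo_id_from(sample))  # None instead of an AttributeError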

Jocelince avatar Aug 12 '22 03:08 Jocelince

Same question here.

agag2296792149 avatar Aug 12 '22 13:08 agag2296792149

I tried replacing every p[@class="from" in search.py with div[@class="from", and the spider runs.

I did the same, but it still doesn't work for me.

ErikChen0001 avatar Aug 13 '22 09:08 ErikChen0001

Same here, I can't get it to run either.

yiweiyi121 avatar Aug 23 '22 03:08 yiweiyi121

Don't change the last three occurrences of './/p[@class="from"' near the end of search.py, in the retweet part; only the other occurrences should become div.
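
In other words, the outer post's "from" element became a <div>, but the one inside a retweet block apparently remains a <p>, so only the non-retweet selectors should be changed. A self-contained sketch of that mixed markup (the sample HTML, including the wrapper class names, is an assumption for illustration):

```python
# Sketch of the mixed markup this thread implies (assumed HTML): the outer
# post's "from" element is a <div>, while the one inside the retweet
# block is still a <p>, so both selector forms are needed.
from scrapy import Selector

sel = Selector(text='''
<div class="card">
  <div class="from"><a href="//weibo.com/1/Outer?x=1">outer time</a></div>
  <div class="card-comment">
    <p class="from"><a href="//weibo.com/2/Inner?x=1">retweet time</a></p>
  </div>
</div>
''')

outer = sel.xpath('.//div[@class="from"]/a/@href').extract_first()
inner = sel.xpath('.//p[@class="from"]/a/@href').extract_first()
print(outer.split('/')[-1].split('?')[0])  # Outer
print(inner.split('/')[-1].split('?')[0])  # Inner
```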

SkydustZ avatar Sep 06 '22 12:09 SkydustZ