BaiduyunSpider

A Baidu Yun (Baidu netdisk) search engine, including the crawler & the website.

8 BaiduyunSpider issues

File "c:\users\administrator.win-a3unjobi233\appdata\local\programs\python\python38\lib\site-packages\scrapy\crawler.py", line 89, in crawl yield self.engine.open_spider(self.spider, start_requests) redis.exceptions.ConnectionError: Error 10061 connecting to 127.0.0.1:6379. 由于目标计算机积极拒绝,无法连接。. 2021-02-01 10:12:28 [twisted] CRITICAL: Traceback (most recent call last): File "c:\users\administrator.win-a3unjobi233\appdata\local\programs\python\python38\lib\site-packages\redis\connection.py", line 559, in...

I also built a Baidu Yun search site, www.81ad.cn, with no ads. It calls Baidu's internal APIs and already has over 30 million records.

I followed your steps and ran everything... is something missing? Running scrapy crawl baidupan keeps throwing this same error.

success to fetched hot users: 24
Traceback (most recent call last):
  File "spider.py", line 475, in <module>
    spider.seedUsers()
  File "spider.py", line 328, in seedUsers
    self.db.commit()
  File "spider.py", line 101, in commit...
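A failure inside self.db.commit() often means the database connection dropped mid-run; long crawls can outlive MySQL's idle timeout. A sketch of a commit wrapper that reconnects and retries, assuming the db object wraps a pymysql connection (the class and field names here are illustrative, not the repo's actual ones):

    import time
    import pymysql

    class Db:
        """Illustrative wrapper; the real spider.py has its own class."""

        def __init__(self, **kwargs):
            self.conn = pymysql.connect(**kwargs)

        def commit(self, retries=3):
            for attempt in range(retries):
                try:
                    # ping(reconnect=True) re-opens the connection if MySQL
                    # closed it after an idle timeout.
                    self.conn.ping(reconnect=True)
                    self.conn.commit()
                    return
                except pymysql.err.OperationalError:
                    if attempt == retries - 1:
                        raise
                    time.sleep(2 ** attempt)  # brief backoff before retrying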

errno=-55 — what causes this? My crawler keeps getting this error code. Could you share a roughly commented crawler script that I can modify myself? I'd like to avoid some detours; I'm a Python beginner. Thanks!
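Judging by the reports in this thread, errno -55 comes back when Baidu's share-list interface throttles a caller, so slowing down and backing off matters more than any single code change. A minimal annotated fetch loop under that assumption; the URL, parameters, and error semantics below are assumptions modeled on the getShareLists calls quoted in these issues, not documented Baidu API behavior:

    import random
    import time

    import requests

    # Hypothetical endpoint, modeled on the getShareLists calls quoted above.
    SHARE_LIST_URL = "https://pan.baidu.com/pcloud/feed/getsharelists"

    def fetch_share_list(uk, start=0, limit=60, max_retries=5):
        """Fetch one page of a user's shares, backing off when errno is -55."""
        params = {"query_uk": uk, "start": start, "limit": limit}
        headers = {"User-Agent": "Mozilla/5.0", "Referer": "https://pan.baidu.com/"}
        for attempt in range(max_retries):
            resp = requests.get(SHARE_LIST_URL, params=params,
                                headers=headers, timeout=10)
            data = resp.json()
            if data.get("errno") == 0:
                return data
            if data.get("errno") == -55:
                # Assumed to be rate limiting: wait longer before each retry.
                wait = (2 ** attempt) + random.random()
                print("uk:%s got errno -55, sleeping %.1fs" % (uk, wait))
                time.sleep(wait)
                continue
            raise RuntimeError("unexpected errno: %s" % data.get("errno"))
        raise RuntimeError("uk:%s still throttled after %d tries" % (uk, max_retries))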

Already writing crawlers as a college senior — you have my utmost respect~

I reworked the code to go through a proxy IP, but it still errors out:

uk:2518160999 error to fetch files,try again later
getShareLists errno:-55

The code is as follows:

def getHtml(url, ref=None, reget=5):
    try:
        proxies = {'http': '222.194.14.130:808'}
        proxy_support = urllib2.ProxyHandler(proxies)
        opener = urllib2.build_opener(proxy_support, urllib2.HTTPHandler)  # build the opener
        # urllib2.install_opener(opener)
        request = urllib2.Request(url)
        request.add_header('User-Agent',...
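One likely reason the proxy has no effect in the quoted code: the opener is built, but urllib2.install_opener(opener) is commented out, so a later urllib2.urlopen(request) still goes through the default (direct) opener. A minimal fix under that assumption, in the same Python 2 / urllib2 style as the quoted snippet:

    import urllib2

    def getHtml(url, ref=None, timeout=10):
        proxies = {'http': '222.194.14.130:808'}  # proxy address from the quoted code
        opener = urllib2.build_opener(urllib2.ProxyHandler(proxies))
        request = urllib2.Request(url)
        request.add_header('User-Agent', 'Mozilla/5.0')
        if ref:
            request.add_header('Referer', ref)
        # Either install the opener globally with urllib2.install_opener(opener),
        # or open this one request through it explicitly:
        return opener.open(request, timeout=timeout).read()

Even with the proxy wired up correctly, errno -55 appears to be server-side throttling, so a single fixed proxy can still get blocked; rotating proxies and adding delays between requests is the usual mitigation.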