
A Sina Weibo crawler: crawls Sina Weibo data with Python and downloads Weibo images and videos

Results: 248 weibo-crawler issues

Can Weibo posts visible only to friends or to mutual followers be crawled?

![Image](https://github.com/user-attachments/assets/c75c21cd-8d48-478d-ac17-4876f2211420)

![Image](https://github.com/user-attachments/assets/0c60cc64-934c-41fe-871f-ce29fd492bc6) ![Image](https://github.com/user-attachments/assets/3b4d2742-d286-4762-8f1f-911d3677b37d)

I want to download the complete Weibo content of many different users and store all their images and videos together. Could filename collisions occur? What is the logic behind your image naming? For example, in 20250424T_5158930091869759_1.jpg, is the 5158930091869759 you use globally unique? Is it a hash generated by the crawler, or the official ID? Many thanks!
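A minimal sketch of how such a filename could be taken apart, assuming (as the naming suggests, but not confirmed here) that the middle number is the post's official Weibo status id rather than a crawler-generated hash. If it is the official id, it is unique platform-wide, so files from different users would not collide even when stored in one directory:

```python
def parse_media_filename(name: str) -> dict:
    """Split a name of the form <date>T_<weibo_id>_<index>.<ext>.

    Assumption (not verified against the project source): the middle
    number is the post's official status id, unique across Weibo.
    """
    stem, _, ext = name.rpartition(".")
    date_part, weibo_id, index = stem.split("_")
    return {
        "date": date_part.rstrip("T"),  # e.g. "20250424"
        "weibo_id": weibo_id,           # e.g. "5158930091869759"
        "index": int(index),            # position of the image within the post
        "ext": ext,
    }

info = parse_media_filename("20250424T_5158930091869759_1.jpg")
```

The `_1` suffix distinguishes multiple images attached to the same post, so uniqueness would rest entirely on the status id being globally unique.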

    {
        "user_id_list": "user_id_list.txt",
        "only_crawl_original": 0,
        "remove_html_tag": 1,
        "since_date": "2018-01-01",
        "start_page": 1,
        "page_weibo_count": 10,
        "write_mode": ["csv"],
        "original_pic_download": 1,
        "retweet_pic_download": 1,
        "original_video_download": 1,
        "retweet_video_download": 1,
        "original_live_photo_download": 1,
        "retweet_live_photo_download": 1,
        "download_comment": 0,
        "comment_max_download_count": 1000,
        "download_repost": ...
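A minimal sketch (not the project's own loader) of reading a config.json like the one above and sanity-checking a couple of its fields; the field names are taken from the snippet, while the accepted value shapes are assumptions:

```python
import json

def check_config(path: str) -> dict:
    """Load a weibo-crawler-style config.json and check basic field shapes.

    Assumptions: since_date is a "yyyy-mm-dd" string or an integer day
    count, and write_mode is a list such as ["csv"].
    """
    with open(path, encoding="utf-8") as f:
        cfg = json.load(f)
    assert isinstance(cfg.get("since_date"), (str, int)), "since_date missing or wrong type"
    assert isinstance(cfg.get("write_mode"), list), "write_mode must be a list"
    return cfg
```

Running the config through a check like this before starting a long crawl makes typos (for example a misspelled key or a string where a list is expected) fail fast instead of mid-run.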

![Image](https://github.com/user-attachments/assets/1df4a205-bff2-481a-84c1-b8c47171abdf)

From earlier discussions I learned that retrieving the user's location requires adding a cookie. After adding one, all the other fields come through, but the location field shows "其他" (Other). I have already set a location on this account, so it should not be "Other". Any help would be appreciated, many thanks!