bulk-downloader-for-reddit
[FEATURE] Fetch the content from given custom (Reddit) URLs in a TXT file?
I don't see this in the docs and think it may be helpful (the URLs might be scraped from elsewhere).
- [ ] I am requesting a feature.
- [ ] I am running the latest version of BDfR
- [ ] I have read the "Opening an issue" guide
Description
Are you referring to the same functionality as the `--include-id-file` option?
Ah, I missed that. It was only touched on very briefly. So we could basically add hundreds of post IDs to that file and they'll all get processed? Could it take entire Reddit URLs too?
You'll have to be more specific. What kind of Reddit URLs?
Reddit only returns up to 1000 results, but you could use Google, for example, to find additional posts from custom date ranges. Example: Google search `site:www.reddit.com/r/funny` with a custom range from 1/1/2020 to 1/1/2021.
Sample URLs:
https://www.reddit.com/r/funny/comments/kmllz1/for_my_fiancés_birthday_i_made_a_book_of_quotes/
https://www.reddit.com/r/funny/comments/km2kta/shirtception_my_favorite_gift_every_year_from_my/
Or do you just need the submission IDs, like `kmllz1` and `km2kta`?
Both are possible; just put them on separate lines of a simple text file. Why don't you just try it out instead of asking?
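To make the answer above concrete, here is a minimal sketch of what such an ID file could look like, mixing a full post URL and a bare submission ID on separate lines. The filename `ids.txt` and the exact invocation in the comment are assumptions for illustration; check the BDFR docs for the precise command form in your installed version.

```shell
# Hypothetical ids.txt: one entry per line, full URLs and bare IDs can be mixed
cat > ids.txt <<'EOF'
https://www.reddit.com/r/funny/comments/kmllz1/for_my_fiancés_birthday_i_made_a_book_of_quotes/
km2kta
EOF

# Then point BDFR at it with --include-id-file (invocation may differ by version):
# python -m bdfr download ./downloads --include-id-file ids.txt
```

BDFR reads each line of the file and processes it as a submission to download, which is why both URL and ID forms work.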