facebook-scraper
Login unsuccessful - You Can't Use This Feature Right Now
Before I start: @neon-ninja and everyone else helping this community, thanks! You are all awesome for helping us n00bs out.
I'm using a DB in the backend to manage my requests, start_pages, credentials, etc.
After doing multiple scrapes, I'm now faced with this:
I think it's because I'm logging in with credentials instead of using cookies.
Questions:
1 - Can I log in using credentials and store the cookies generated from the login using this library? (Does someone have a code sample?)
2 - Can I load cookies from a variable instead of a file? I want to reduce disk I/O as much as possible.
3 - I've seen this You Can't Use This Feature Right Now message all over my logs at times, and I wonder whether it should trigger an actual exception, so we can sleep and retry the scrape.
1 & 2. Yes. You can extract a RequestsCookieJar like so:

```python
from facebook_scraper import _scraper, set_cookies

# Log in with credentials, then grab the session's cookie jar
_scraper.login(email, password)
cookies = _scraper.session.cookies
# Store & retrieve cookies
set_cookies(cookies)
# Check it worked
print(_scraper.is_logged_in())
```
You can convert a RequestsCookieJar to a dict with a direct cast, dict(cookies), or by calling cookies.get_dict().
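For the cookies-from-a-variable use case in question 2, the jar round-trips through a plain dict at the requests level, so you can serialize it (e.g. as JSON) into your DB and rebuild it later without touching disk. A minimal sketch, using placeholder cookie names rather than real Facebook cookies:

```python
import json
import requests.utils

# Hypothetical cookie values for illustration only
jar = requests.utils.cookiejar_from_dict({"c_user": "12345", "xs": "abc"})

# Flatten to a plain dict, then to JSON for DB storage
as_dict = requests.utils.dict_from_cookiejar(jar)
stored = json.dumps(as_dict)

# Later: rebuild a RequestsCookieJar straight from the stored variable
restored = requests.utils.cookiejar_from_dict(json.loads(stored))
print(restored.get("c_user"))
```

The restored jar could then be passed to set_cookies in place of a cookie file.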
3. That's actually how the scraper used to work, but a user wanted to still be able to extract the partial results that were retrieved before the exception was hit (https://github.com/kevinzg/facebook-scraper/issues/457#issuecomment-912611667, https://github.com/kevinzg/facebook-scraper/commit/bfffecd705c667bf5f1053c16dac2a4a084522b8, https://github.com/kevinzg/facebook-scraper/issues/422#issuecomment-893187790). For this kind of advanced handling, I think you should request a generator and iterate through it yourself. Then you can decide what to do with interim results and how to handle exceptions. utils.safe_consume is just a convenience function.
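The iterate-it-yourself pattern might look like the sketch below. Note that fetch_posts and TemporarilyBanned here are stand-ins for illustration (in practice you would use the library's own generator, e.g. get_posts, and whatever exception it raises when rate-limited):

```python
import time

class TemporarilyBanned(Exception):
    """Stand-in for the library's rate-limit exception."""

def fetch_posts():
    # Stand-in generator: yields two posts, then hits the ban
    yield {"post_id": 1}
    yield {"post_id": 2}
    raise TemporarilyBanned("You Can't Use This Feature Right Now")

results = []
gen = fetch_posts()
while True:
    try:
        results.append(next(gen))  # keep each interim result as it arrives
    except StopIteration:
        break  # generator exhausted normally
    except TemporarilyBanned:
        # Partial results are already saved; sleep and retry, or persist
        # `results` and resume the scrape later.
        # time.sleep(600)
        break
```

Because each item is appended before the exception surfaces, nothing retrieved so far is lost, and the except branch is where you decide between sleeping, retrying, or bailing out.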