JustAnotherArchivist
Yeah, that would certainly be possible and is likely the most reasonable approach. The [`/api/info` endpoint](https://old.reddit.com/dev/api#GET_api_info) accepts 100 IDs at once, I believe, which should help with the rate limits....
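A minimal sketch of how that batching could look with plain `requests` (the function name is made up, and the exact response layout is from memory, so treat it as illustration only):

```python
import requests

def fetch_info(fullnames, user_agent='reddit-id-backfill-sketch/0.1'):
    """Fetch items from /api/info in batches of up to 100 fullnames per request."""
    session = requests.Session()
    session.headers['User-Agent'] = user_agent
    items = []
    for i in range(0, len(fullnames), 100):
        batch = fullnames[i:i + 100]
        r = session.get('https://old.reddit.com/api/info.json',
                        params={'id': ','.join(batch)})
        r.raise_for_status()
        # /api/info returns a Listing; pull the item data out of each child.
        items.extend(child['data'] for child in r.json()['data']['children'])
    return items
```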
I recently thought about this a bit more. The reasonable thing to do here is to use Pushshift for ID retrieval and then collect the data from Reddit, discarding everything...
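Roughly, the ID-retrieval half could look like this, assuming the Pushshift submission search endpoint still works the way it used to (the pagination by `created_utc` and the `t3_` prefixing are part of that assumption):

```python
import requests

def pushshift_submission_fullnames(subreddit):
    """Yield t3_ fullnames for a subreddit from Pushshift, newest first."""
    before = None
    while True:
        params = {'subreddit': subreddit, 'size': 100, 'fields': 'id,created_utc'}
        if before is not None:
            params['before'] = before
        r = requests.get('https://api.pushshift.io/reddit/search/submission/',
                         params=params)
        r.raise_for_status()
        batch = r.json()['data']
        if not batch:
            break
        for entry in batch:
            yield 't3_' + entry['id']
        # Page backwards in time from the oldest item seen so far.
        before = batch[-1]['created_utc']
```

Those fullnames would then be fed to `/api/info` as above, and only what Reddit itself returns would be kept.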
The changes sound good so far, though I haven't reviewed the code thoroughly yet. Some quick comments on things I noticed at a glance:

* I don't mind the number...
The error indicates that you're comparing `Tweet` objects. And yes, that's not supported because it's unclear to me what that would even sensibly mean. Your code snippet (which isn't valid...
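If the goal is to check whether two results refer to the same tweet (e.g. for deduplication), compare the `id` attribute instead of the objects themselves; a rough sketch (the query string is just an example):

```python
import snscrape.modules.twitter as sntwitter

seen_ids = set()
unique_tweets = []
# Deduplicate by tweet ID rather than comparing Tweet objects directly.
for tweet in sntwitter.TwitterSearchScraper('from:example').get_items():
    if tweet.id in seen_ids:
        continue
    seen_ids.add(tweet.id)
    unique_tweets.append(tweet)
```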
That shouldn't matter. But your code above is obviously not complete. For example, there's a call to `self.get_mension`, but neither `self` nor `get_mension` is defined in the snippet... snscrape never...
That would be nice. Unfortunately, LinkedIn *really* hates scrapers and, from the little experimenting I did a while ago, has aggressive rate limiting. But I'll happily accept a PR for...
There is not. The ridiculous rate limits pretty much prevent implementing this. For authentication, see #270.
It's called UsersByRestIds and is part of the GraphQL API. I'll have to look up the query ID (which appears in the URL before the operation name), though.
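For reference, the request shape I have in mind, with a placeholder query ID and the `variables` layout from memory (so treat both as assumptions):

```python
import json
import urllib.parse

# Placeholder only; the real query ID has to be pulled from Twitter's JS bundle.
QUERY_ID = 'XXXXXXXXXXXXXXXXXXXXXX'

def users_by_rest_ids_url(user_ids):
    """Build the GraphQL URL for the UsersByRestIds operation."""
    variables = {'userIds': [str(u) for u in user_ids]}
    return (f'https://twitter.com/i/api/graphql/{QUERY_ID}/UsersByRestIds'
            f'?variables={urllib.parse.quote(json.dumps(variables))}')
```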
So... is anything publicly accessible?
As far as I can tell, everything requires auth. So closing this as non-actionable. If someone discovers that there is a way to bypass the login requirement, we can reopen...