
Getting Blank CSV file

Open urooj471 opened this issue 1 year ago • 2 comments

I am trying to scrape comments from the Facebook page "SacheHaiSatyanarayan" and I am getting a blank CSV file. But when I scrape posts and comments from the "TimesofIndia" page, I get proper output. What could be the issue?

```python
import csv
import time
from facebook_scraper import FacebookScraper

# Set the output file name and number of pages to scrape
output_file = 'output.csv'
num_pages = 15

# Set the waiting time between API calls (in seconds)
wait_time = 5

# Create an instance of the FacebookScraper class
scraper = FacebookScraper()

# Open the output file and write the header row
with open(output_file, 'w', encoding='utf-8', newline='') as file:
    writer = csv.writer(file)
    writer.writerow(['post_id', 'post_text', 'commenter_name', 'commenter_text'])

    # Iterate over the desired number of pages of posts
    for page_num, post in enumerate(scraper.get_posts(account="SacheHaiSatyanarayan", pages=num_pages, options={"comments": True}), start=1):
        # Check if the comments_full key is present and is a list in the post object
        if 'comments_full' in post and isinstance(post['comments_full'], list):
            # Iterate over the comments for the current post and write to the output file
            for comment in post['comments_full']:
                writer.writerow([post['post_id'], post['text'], comment['commenter_name'], comment['comment_text']])

        # Wait before making the next API call to avoid being blocked
        time.sleep(wait_time)

        # Print progress information to the console
        print(f'Processed page {page_num}/{num_pages}')
```

I have given a snapshot of my code above.
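For reference, the blank-file symptom usually means `comments_full` never arrives, so the loop body above never writes a row. A minimal sketch of the CSV-writing logic against a mock post dict (the dict shape is an assumption based on the keys used in the script; no scraping involved) shows that the writer itself is fine and that a missing or `None` `comments_full` silently produces an empty file:

```python
import csv
import io

# Mock of the post shape the script above expects when
# options={"comments": True} succeeds (values are placeholders).
post = {
    "post_id": "123",
    "text": "sample post",
    "comments_full": [
        {"commenter_name": "Alice", "comment_text": "nice"},
        {"commenter_name": "Bob", "comment_text": "agreed"},
    ],
}

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["post_id", "post_text", "commenter_name", "commenter_text"])

# .get(...) or [] guards against comments_full being absent or None,
# which is exactly the case that yields a header-only (blank-looking) CSV.
for comment in post.get("comments_full") or []:
    writer.writerow([post["post_id"], post["text"],
                     comment.get("commenter_name"), comment.get("comment_text")])

print(buf.getvalue())
```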

urooj471 avatar Mar 31 '23 10:03 urooj471

Try passing cookies as per the readme
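A browser-exported cookies file has to be in Netscape format for the `cookies=` path to work. A quick way to sanity-check the export before handing the path to `get_posts` is to load it with the standard library's `MozillaCookieJar` (this sketch writes a tiny placeholder file just to show the expected layout; real values come from your logged-in browser session):

```python
import http.cookiejar
import time

# Write a minimal Netscape-format cookies file with placeholder values.
# Fields: domain, include-subdomains, path, secure, expiry, name, value.
expiry = int(time.time()) + 86400
sample = (
    "# Netscape HTTP Cookie File\n"
    f".facebook.com\tTRUE\t/\tTRUE\t{expiry}\tc_user\t0000000000\n"
    f".facebook.com\tTRUE\t/\tTRUE\t{expiry}\txs\tplaceholder\n"
)
with open("cookies_sample.txt", "w") as f:
    f.write(sample)

# If this load fails, get_posts(..., cookies=path) will not work either.
jar = http.cookiejar.MozillaCookieJar("cookies_sample.txt")
jar.load()
names = {c.name for c in jar}
print(names)  # a logged-in Facebook session needs at least c_user and xs
```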

neon-ninja avatar Mar 31 '23 23:03 neon-ninja

I have passed cookies as per the readme but I am still getting a blank CSV file. Could you please look into it?

```python
import csv
import time
from facebook_scraper import FacebookScraper

# Set the output file name and number of pages to scrape
output_file = 'output.csv'
num_pages = 15

# Set the waiting time between API calls (in seconds)
wait_time = 5

# Create an instance of the FacebookScraper class
scraper = FacebookScraper()

# Open the output file and write the header row
with open(output_file, 'w', encoding='utf-8', newline='') as file:
    writer = csv.writer(file)
    writer.writerow(['post_id', 'post_text', 'commenter_name', 'commenter_text'])

    # Iterate over the desired number of pages of posts
    for page_num, post in enumerate(scraper.get_posts(account="SacheHaiSatyanarayan", pages=num_pages, cookies="C:/Users/urooj/Downloads/cookies.txt", options={"comments": True}), start=1):
        # Check if the comments_full key is present and is a list in the post object
        if 'comments_full' in post and isinstance(post['comments_full'], list):
            # Iterate over the comments for the current post and write to the output file
            for comment in post['comments_full']:
                writer.writerow([post['post_id'], post['text'], comment['commenter_name'], comment['comment_text']])

        # Wait before making the next API call to avoid being blocked
        time.sleep(wait_time)

        # Print progress information to the console
        print(f'Processed page {page_num}/{num_pages}')
```

urooj471 avatar Apr 01 '23 07:04 urooj471
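For diagnosing cases like this, one option is to turn on debug logging for the library before running the script, so the reason the page yields no comments (login wall, rate limit, empty parse) shows up on the console. The logger name `"facebook_scraper"` is assumed here from the package name:

```python
import logging

# Surface the library's internal debug messages on the console.
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("facebook_scraper").setLevel(logging.DEBUG)
```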