facebook-post-scraper
Error: 'charmap' codec can't encode characters
Complete noob here, so excuse my naiveté. I was getting the error `'charmap' codec can't encode characters`, so I changed this:
```python
if args.usage == "WT":
    with open('output.txt', 'w') as file:
        for post in postBigDict:
            file.write(json.dumps(post))  # use json load to recover
elif args.usage == "CSV":
    with open('data.csv', 'w',) as csvfile:
        writer = csv.writer(csvfile)
        #writer.writerow(['Post', 'Link', 'Image', 'Comments', 'Reaction'])
        writer.writerow(['Post', 'Link', 'Image', 'Comments', 'Shares'])
        for post in postBigDict:
            writer.writerow([post['Post'], post['Link'], post['Image'], post['Comments'], post['Shares']])
            #writer.writerow([post['Post'], post['Link'], post['Image'], post['Comments'], post['Reaction']])
else:
    for post in postBigDict:
        print("\n")
```
to this:

```python
if args.usage == "WT":
    with io.open('output.txt', 'w', encoding='utf-8') as file:
        for post in postBigDict:
            file.write(json.dumps(post))  # use json load to recover
elif args.usage == "CSV":
    with io.open('data.csv', 'w', encoding='utf-8') as csvfile:
        writer = csv.writer(csvfile)
        #writer.writerow(['Post', 'Link', 'Image', 'Comments', 'Reaction'])
        writer.writerow(['Post', 'Link', 'Image', 'Comments', 'Shares'])
        for post in postBigDict:
            writer.writerow([post['Post'], post['Link'], post['Image'], post['Comments'], post['Shares']])
            #writer.writerow([post['Post'], post['Link'], post['Image'], post['Comments'], post['Reaction']])
else:
    for post in postBigDict:
        print("\n")
```
It then worked.
+1
Add `import io` at the top.
Also, create a pull request so it gets updated for all users.
Sorry, I am not much of a coder, so I do not know what a pull request is.
In Python 3, the built-in `open()` accepts `encoding='utf-8'` directly, so there is no need to add `io`.
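To illustrate the point above, here is a minimal sketch of the same write logic using only the built-in `open()`. The `posts` sample data is hypothetical (standing in for `postBigDict`); the `newline=''` argument is an extra detail recommended by the `csv` module docs to avoid blank rows on Windows, not something from the original patch:

```python
import csv
import json

# Sample data standing in for postBigDict, with non-ASCII text
# that would trigger the 'charmap' codec error on Windows, where
# the default file encoding is a legacy code page like cp1252.
posts = [{"Post": "café ☕", "Link": "http://example.com"}]

# Built-in open() takes encoding= directly in Python 3; io.open is
# just an alias for it, so no import is needed.
with open("output.txt", "w", encoding="utf-8") as f:
    for post in posts:
        f.write(json.dumps(post) + "\n")  # use json.loads to recover

# For CSV files, the csv docs also recommend newline='' so the
# writer controls line endings itself.
with open("data.csv", "w", encoding="utf-8", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Post", "Link"])
    for post in posts:
        writer.writerow([post["Post"], post["Link"]])
```

Either variant writes the same bytes; dropping `io` just removes an unnecessary import.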