Scraping returns empty lists
Hi joey,
Recently I found that I cannot scrape anything from LinkedIn; most of the pulled information is empty, for both person and company profiles. Any suggestions? The code and output are below. Many thanks!
Output 1:
```
Jaime Gilberto Adrián Zúñiga Espinoza About [] Experience [] Education [] Interest [] Accomplishments [] Contacts []
```
Output 2:
```json
{"name": "Google", "about_us": null, "specialties": null, "website": null, "industry": null, "company_type": "Google", "headquarters": null, "company_size": null, "founded": null, "affiliated_companies": [], "employees": [null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null]}
```
Code1:
```python
import os

from linkedin_scraper import Person, actions
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

chrome_options = Options()
chrome_options.add_argument("--headless")
driver = webdriver.Chrome(
    r"C:\Research_Software\Anaconda\envs\tf\Scripts\chromedriver.exe",
    options=chrome_options,
)

email = os.getenv("XXX my email")
password = os.getenv("XXX password")
actions.login(driver, email, password)  # if email and password aren't given, it'll prompt in the terminal
person = Person("https://www.linkedin.com/in/adrian0350", contacts=[], driver=driver)
print(person)
```
Code2:
```python
import os

from linkedin_scraper import Person, Company, actions
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

driver = webdriver.Chrome(r"C:\Research_Software\Anaconda\envs\tf\Scripts\chromedriver.exe")
email = os.getenv("[email protected]")
password = os.getenv("8792950liyifuc")
actions.login(driver, email, password)  # if email and password aren't given, it'll prompt in the terminal
company = Company(
    "https://ca.linkedin.com/company/google",
    driver=driver,
    get_employees=True,
    close_on_complete=False,
    scrape=False,
)
driver.implicitly_wait(3)
company.scrape(close_on_complete=False)
print(company)
```
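One thing worth checking in Code2: `os.getenv` looks up an environment-*variable name*, but here it is being passed the credential values themselves, so both `email` and `password` come back as `None` and the login silently fails. A minimal sketch of the difference (the variable names `LINKEDIN_EMAIL`/`LINKEDIN_PASSWORD` and the placeholder password are my own examples, not part of the library):

```python
import os

# os.getenv looks up a *variable name* in the process environment.
# Normally these would be exported in your shell, not set in code;
# they are set here only to make the example self-contained.
os.environ["LINKEDIN_EMAIL"] = "[email protected]"
os.environ["LINKEDIN_PASSWORD"] = "example-password"  # never hard-code real secrets

email = os.getenv("LINKEDIN_EMAIL")        # -> "[email protected]"
password = os.getenv("LINKEDIN_PASSWORD")  # -> "example-password"

# Passing the credential itself returns None, because no environment
# variable is named "[email protected]":
broken = os.getenv("[email protected]")
print(email, password, broken)
```

With `None`/`None` handed to `actions.login`, the session never authenticates, which by itself can explain the empty outputs.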
I'm getting this issue too. Any solution?
same here
This code won’t work. LinkedIn changes their HTML so often that it’s practically impossible to scrape it reliably.
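That also matches the symptom in the outputs above: when a page's class names change, selector-based scrapers typically don't crash, they just match nothing and return empty lists or `None`. A stdlib-only illustration of the failure mode (the HTML snippets and class names below are invented for the example, not LinkedIn's actual markup):

```python
import re

# Hypothetical snapshots of a profile page: the markup a scraper was
# written against, and a later revision where the class was renamed.
OLD_HTML = '<section class="pv-about-section">Economist at ...</section>'
NEW_HTML = '<section class="artdeco-card about">Economist at ...</section>'

def scrape_about(html, class_name="pv-about-section"):
    """Return the bodies of all sections with the given class, or an
    empty list when the class no longer appears in the markup."""
    pattern = r'<section class="{}">(.*?)</section>'.format(re.escape(class_name))
    return re.findall(pattern, html)

print(scrape_about(OLD_HTML))  # selector still matches -> ['Economist at ...']
print(scrape_about(NEW_HTML))  # class renamed -> [] with no error raised
```

So an outdated `linkedin_scraper` release quietly producing `About []`, `Experience []`, etc. is exactly what you'd expect after a LinkedIn markup change.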