Clément Denoix

7 comments of Clément Denoix

Hey @motin, indeed, Travis updated the way `xvfb` is enabled. I fixed the config in #50; could you rebase your PR? Thanks!

Hi @Tamplier, thanks for opening this! What you're experiencing looks similar to #21, and you are right about the single instance of the webdriver. Exposing the `driver` in the response...

@guillaumedsde I will need to investigate this further, but it might be related to `COOKIES_ENABLED=False` in your settings. For the middleware to work, you need to keep cookies enabled: https://github.com/clemfromspace/scrapy-cloudflare-middleware/blob/master/scrapy_cloudflare_middleware/middlewares.py#L45
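A minimal sketch of what such a settings fix could look like, assuming the middleware class path (`CloudFlareMiddleware`) and the priority value are illustrative, not confirmed from the repository:

```python
# Hypothetical Scrapy project settings sketch. The Cloudflare middleware
# relies on the clearance cookie being persisted between requests, so
# COOKIES_ENABLED must not be set to False (True is Scrapy's default).
SETTINGS = {
    "COOKIES_ENABLED": True,  # do NOT override this to False
    "DOWNLOADER_MIDDLEWARES": {
        # Class path and priority are assumptions for illustration.
        "scrapy_cloudflare_middleware.middlewares.CloudFlareMiddleware": 560,
    },
}
```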

Hi @mark5280, thanks for opening this! Could you add a `try...except` clause around the pip imports, like this: https://github.com/clemfromspace/scrapy-selenium/blob/develop/setup.py#L4 ? That way, we also handle the previous pip version...
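The suggested guard can be sketched as follows; this is the common fallback-import pattern for `parse_requirements`, which pip moved under `pip._internal` in pip 10 (the exact import paths used in the linked `setup.py` should be checked against that file):

```python
# Fallback import for parse_requirements across pip versions.
try:
    # pip >= 10 moved the requirements parser under the _internal package.
    from pip._internal.req import parse_requirements
except ImportError:
    # Older pip releases exposed it at the top level.
    from pip.req import parse_requirements
```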

Hi there, I was actually looking to propose Algolia as the search engine when I saw this PR... If you allow me to answer the question about the benefits of...

Scrapy has built-in support for robots.txt: https://doc.scrapy.org/en/latest/topics/settings.html?highlight=robot#std:setting-ROBOTSTXT_OBEY and https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#topics-dlmw-robots It should be easy to add the Scrapy setting (`'ROBOTSTXT_OBEY': True`) here: https://github.com/algolia/docsearch-scraper/blob/master/scraper/src/index.py#L52 But it may impact existing configurations, though.
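A sketch of what the change could look like, assuming the settings are passed to Scrapy as a plain dict at the linked location (the surrounding keys shown here are illustrative, not copied from the file):

```python
# Hypothetical settings dict handed to Scrapy's CrawlerProcess.
# Adding ROBOTSTXT_OBEY activates the built-in RobotsTxtMiddleware,
# which downloads each site's robots.txt and filters forbidden requests.
SCRAPY_SETTINGS = {
    "LOG_ENABLED": False,      # illustrative existing entry
    "ROBOTSTXT_OBEY": True,    # new: respect robots.txt rules
}
```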

Yeah, let's wait for the refactor; we can then add a new middleware inspired by Scrapy's built-in one: https://github.com/scrapy/scrapy/blob/master/scrapy/downloadermiddlewares/robotstxt.py#L88
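To make the idea concrete, here is a standalone sketch of the core logic such a middleware would carry, using only the standard library's `urllib.robotparser` rather than Scrapy's actual classes (the class and method names here are hypothetical; Scrapy's real middleware fetches robots.txt asynchronously through its downloader):

```python
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser


class RobotsCheckSketch:
    """Hypothetical, dependency-free sketch of a robots.txt filter,
    mirroring the decision made by Scrapy's RobotsTxtMiddleware:
    keep one parser per host and drop URLs that robots.txt forbids."""

    def __init__(self, user_agent="*"):
        self.user_agent = user_agent
        self._parsers = {}  # netloc -> RobotFileParser

    def _parser_for(self, url):
        netloc = urlparse(url).netloc
        if netloc not in self._parsers:
            self._parsers[netloc] = RobotFileParser()
        return self._parsers[netloc]

    def feed_robots(self, url, robots_txt):
        # In a real middleware the robots.txt body would be fetched from
        # the site; here the caller supplies it directly for testing.
        self._parser_for(url).parse(robots_txt.splitlines())

    def allowed(self, url):
        return self._parser_for(url).can_fetch(self.user_agent, url)
```

Usage: feed the robots.txt body for a host once, then call `allowed()` before scheduling each request for that host.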