
Middleware settings for scrapy-splash with scrapy-cluster: SplashRequest does not work

Open hustshawn opened this issue 7 years ago • 18 comments

In a single-node Scrapy project, the settings below, as your documentation indicates, work well.

# ====== Splash settings ======
SPLASH_URL = 'http://localhost:8050'
DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}

DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'
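
(For context, with these settings the spider yields requests through SplashRequest, roughly as in the scrapy-splash README; the spider and callback names below are just placeholders.)

import scrapy
from scrapy_splash import SplashRequest


class ExampleSpider(scrapy.Spider):
    name = 'splash-example'  # placeholder name

    def start_requests(self):
        # SplashRequest is rewritten by SplashMiddleware into a call
        # to the Splash HTTP API (render.html by default)
        yield SplashRequest('https://www.test.com', self.parse_result,
                            args={'wait': 0.5})

    def parse_result(self, response):
        # response.body is the HTML rendered by Splash
        self.logger.info('Got %d bytes of rendered HTML', len(response.body))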

But if I integrate with scrapy-cluster using the settings below, a request made with SplashRequest may never be sent to Splash, so Splash does not respond. Splash itself works fine when I access it directly with a URL constructed for the render.html endpoint.

SPIDER_MIDDLEWARES = {
    # disable built-in DepthMiddleware, since we do our own
    # depth management per crawl request
    'scrapy.spidermiddlewares.depth.DepthMiddleware': None,
    'crawling.meta_passthrough_middleware.MetaPassthroughMiddleware': 100,
    'crawling.redis_stats_middleware.RedisStatsMiddleware': 105,
    # The original priority 100 conflicts with MetaPassthroughMiddleware, so it was changed to 101
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 101,
}

DOWNLOADER_MIDDLEWARES = {
    # Handle timeout retries with the redis scheduler and logger
    'scrapy.downloadermiddlewares.retry.RetryMiddleware': None,
    'crawling.redis_retry_middleware.RedisRetryMiddleware': 510,
    # exceptions processed in reverse order
    'crawling.log_retry_middleware.LogRetryMiddleware': 520,
    # custom cookies to not persist across crawl requests
    'scrapy.downloadermiddlewares.cookies.CookiesMiddleware': None,
    # 'crawling.custom_cookies.CustomCookiesMiddleware': 700,
    # Scrapy-splash DOWNLOADER_MIDDLEWARES
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

# Scrapy-splash settings
SPLASH_URL = 'scrapy_splash:8050'
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'

Does anyone know what's going wrong with these settings?

hustshawn avatar Jan 24 '17 02:01 hustshawn

I think it could be related to the dupefilter used by crawling.distributed_scheduler.DistributedScheduler - this dupefilter uses the request_fingerprint function, which doesn't work correctly for Splash requests. The default dupefilter doesn't take request.meta values into account, while requests to Splash may differ only in request.meta until they are fixed up by a downloader middleware.
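
(For illustration only: the idea behind a Splash-aware fingerprint is roughly the sketch below, assuming scrapy_splash's splash_request_fingerprint helper. scrapy-cluster's own Redis-backed dupefilter would need the same treatment; the class name here is hypothetical.)

from scrapy.dupefilters import RFPDupeFilter
from scrapy_splash.dupefilter import splash_request_fingerprint


class SplashAwareClusterDupeFilter(RFPDupeFilter):
    # Hypothetical: hashes the Splash arguments stored in request.meta
    # into the fingerprint, so two requests that differ only in their
    # Splash args are no longer treated as duplicates.
    def request_fingerprint(self, request):
        # Falls back to the regular fingerprint for non-Splash requests.
        return splash_request_fingerprint(request)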

kmike avatar Jan 24 '17 07:01 kmike

Facing the same issue.

rksaxena avatar Feb 10 '17 07:02 rksaxena

See also: https://github.com/istresearch/scrapy-cluster/issues/94. I'm not sure how it can be solved in scrapy-splash itself.

kmike avatar Feb 11 '17 11:02 kmike

so the scrapy-splash can't work with scrapy-cluster now?

wenxzhen avatar Mar 27 '17 08:03 wenxzhen

Yes, it can't. Currently one has to fork & fix scrapy-cluster to make them work together. An alternative is to use the Splash HTTP API directly, as shown at https://github.com/scrapy-plugins/scrapy-splash#why-not-use-the-splash-http-api-directly; I'm not completely sure, but it would likely work with scrapy-cluster.
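
(For reference, using the HTTP API directly means sending a plain Scrapy request to a Splash endpoint yourself, roughly like the sketch below; the Splash host, target URL, and wait value are placeholders.)

import json

import scrapy


class DirectSplashSpider(scrapy.Spider):
    name = 'direct-splash'  # placeholder name

    def start_requests(self):
        # POST the target URL to Splash's render.html endpoint;
        # Splash fetches and renders the page, then returns the HTML.
        yield scrapy.Request(
            'http://localhost:8050/render.html',  # adjust to your Splash host
            method='POST',
            body=json.dumps({'url': 'https://www.test.com', 'wait': 5}),
            headers={'Content-Type': 'application/json'},
            callback=self.parse,
        )

    def parse(self, response):
        # response.body is the rendered HTML returned by Splash
        self.logger.info('Rendered %d bytes', len(response.body))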

kmike avatar Mar 27 '17 09:03 kmike

Thanks to @kmike

Do you happen to know where the problem is?

wenxzhen avatar Mar 27 '17 10:03 wenxzhen

@wenxzhen I'm not a scrapy-cluster user myself, but the results of a brief look are in this comment: https://github.com/scrapy-plugins/scrapy-splash/issues/101#issuecomment-274729809

kmike avatar Mar 27 '17 18:03 kmike

Thanks to @kmike. After some investigation, I found that Python doesn't make it easy to serialize and deserialize class instances. Therefore, I turned to another way:

  1. add a downloader middleware to populate some "splash" meta in the original Scrapy request
  2. in the Scrapy core downloader, when the "splash" meta is present, replace the Scrapy request with a new Request with a replaced URL -> calling the Splash HTTP API directly

Now it works

wenxzhen avatar Mar 28 '17 10:03 wenxzhen

@wenxzhen Could you please share some core code with us, or send a PR to this repo?

hustshawn avatar Mar 28 '17 12:03 hustshawn

@hustshawn the basic idea is to not use the scrapy-splash stuff, but to make use of the functionality of scrapy-cluster + scrapy.

The following is mainly a PoC, without optimization.

  1. we need to reuse the feeding capability of scrapy-cluster, so I add an extra "attrs" field to the JSON request:

python kafka_monitor.py feed '{"url": "https://www.test.com", "appid":"testapp", "crawlid":"09876abc", "useragent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36", "attrs": {"splash": "1"}, "spiderid": "test"}'

the "splash": 1 is to tell that the reuqest needs to go to Splash with Http API directly

  2. add a downloader middleware to scrapy-cluster; in its process_request, if we detect the splash attribute, insert the necessary meta:
    # needs e.g.: import json; from six.moves.urllib.parse import urljoin; from scrapy.http import Headers
    def process_request(self, request, spider):
        splash_meta = request.meta[self.splash_meta_name]

        args = splash_meta.setdefault('args', {})
        splash_url = urljoin(self.splash_base_url, self.default_endpoint)
        args.setdefault('splash_url', splash_url)

        # only support the POST API to Splash for now
        args.setdefault('http_method', 'POST')

        # target URL is taken from request.meta['url'] (populated upstream by the cluster code)
        body = json.dumps({"url": request.meta['url'], "wait": 5, "timeout": 10}, sort_keys=True)
        args.setdefault('body', body)

        headers = Headers({'Content-Type': 'application/json'})
        args.setdefault('headers', headers)
  3. when the request arrives at the Scrapy downloader, in HTTP11DownloadHandler's download_request we need to replace the request:
def download_request(self, request, spider):
        """Return a deferred for the HTTP download"""
        agent = ScrapyAgent(contextFactory=self._contextFactory, pool=self._pool,
            maxsize=getattr(spider, 'download_maxsize', self._default_maxsize),
            warnsize=getattr(spider, 'download_warnsize', self._default_warnsize))

        if "splash" in request.meta:
            # we got a Splash forward request now
            splash_args = request.meta['splash']['args']
            new_splash_request = request.replace(
                url = splash_args['splash_url'],
                method = splash_args['http_method'],
                body = splash_args['body'],
                headers = splash_args['headers'],
                priority = request.priority
            )
            return agent.download_request(new_splash_request)
        else:
            return agent.download_request(request)
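
(An untested alternative to patching Scrapy's handler in place: subclass HTTP11DownloadHandler and register the subclass via the DOWNLOAD_HANDLERS setting. The module and class names below are hypothetical, and the constructor signature may differ between Scrapy versions.)

# crawling/splash_download_handler.py (hypothetical module)
from scrapy.core.downloader.handlers.http11 import HTTP11DownloadHandler


class SplashForwardDownloadHandler(HTTP11DownloadHandler):
    """Rewrites requests carrying 'splash' meta into direct Splash HTTP API calls."""

    def download_request(self, request, spider):
        if "splash" in request.meta:
            splash_args = request.meta['splash']['args']
            # request.replace keeps priority and other attributes by default
            request = request.replace(
                url=splash_args['splash_url'],
                method=splash_args['http_method'],
                body=splash_args['body'],
                headers=splash_args['headers'],
            )
        return super(SplashForwardDownloadHandler, self).download_request(request, spider)

# settings.py
DOWNLOAD_HANDLERS = {
    'http': 'crawling.splash_download_handler.SplashForwardDownloadHandler',
    'https': 'crawling.splash_download_handler.SplashForwardDownloadHandler',
}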

wenxzhen avatar Mar 29 '17 02:03 wenxzhen

Got your idea. Thanks a lot. @wenxzhen

hustshawn avatar Mar 29 '17 15:03 hustshawn

Could you please open a PR with this code? Parsing JS is a really useful feature.

Dgadavin avatar Apr 07 '17 12:04 Dgadavin

We need to ask @kmike whether the 'basic' solution is acceptable or not. If yes, we can start the PR work.

wenxzhen avatar Apr 10 '17 03:04 wenxzhen

@wenxzhen did you create a custom download handler to implement your solution, or did you modify HTTP11DownloadHandler directly?

DreadfulDeveloper avatar May 21 '17 16:05 DreadfulDeveloper

I needed to do both, as I also need to bypass the proxy for requests going to Splash.

wenxzhen avatar Jun 20 '17 09:06 wenxzhen

@wenxzhen did you solve it? I also need proxying and Splash.

LazerJesus avatar Sep 29 '18 21:09 LazerJesus

@FinnFrotscher check the code snippets above; I hope they help.

wenxzhen avatar Oct 26 '18 02:10 wenxzhen

It seems like https://github.com/scrapy/scrapy/issues/900 could be a good first step towards fixing this.

Gallaecio avatar May 09 '19 12:05 Gallaecio