
Duplicated URLs are crawled twice

Open Minyar2004 opened this issue 6 years ago • 6 comments

What is the current behavior?

Duplicated URLs are not skipped; the same URL is crawled twice.

If the current behavior is a bug, please provide the steps to reproduce

const HCCrawler = require('./lib/hccrawler');

(async () => {
  const crawler = await HCCrawler.launch({
    // Extract the page title from each crawled page.
    evaluatePage: () => ({
      title: document.title,
    }),
    onSuccess: (result) => {
      console.log(result);
    },
    // Duplicate URLs should be skipped, but they are not.
    skipDuplicates: true,
    jQuery: false,
    maxDepth: 3,
    args: ['--no-sandbox'],
  });

  // The same URL is queued twice; only one request is expected.
  await crawler.queue([{
    url: 'https://www.example.com/',
  }, {
    url: 'https://www.example.com/',
  }]);

  await crawler.onIdle();
  await crawler.close();
})();

What is the expected behavior?

Already-crawled URLs should be skipped, even when they come from the queue.

Please tell us about your environment:

  • Version: latest
  • Platform / OS version: CentOS 7.1
  • Node.js version: v8.4.0

Minyar2004 avatar Jul 31 '18 14:07 Minyar2004

The reason might lie in helper.js:

static generateKey(options) {
  const json = JSON.stringify(pick(options, PICKED_OPTION_FIELDS), Helper.jsonStableReplacer);
  return Helper.hash(json).substring(0, MAX_KEY_LENGTH);
}

Uniqueness is assessed from a hash generated over the result of JSON.stringify(), but that method doesn't guarantee a consistent key order.

I'm looking for opinions. See https://github.com/substack/json-stable-stringify
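To illustrate the concern with a standalone sketch (this is not the crawler's code; the md5 hash and example objects are my own assumptions): plain JSON.stringify() serializes keys in insertion order, so two option objects with identical contents can hash differently, while json-stable-stringify sorts keys first.

const crypto = require('crypto');
const stableStringify = require('json-stable-stringify');

// Two objects with identical contents but different key insertion order.
const a = { url: 'https://www.example.com/', maxDepth: 3 };
const b = { maxDepth: 3, url: 'https://www.example.com/' };

const hash = (s) => crypto.createHash('md5').update(s).digest('hex');

// JSON.stringify preserves insertion order, so the hashes differ:
console.log(hash(JSON.stringify(a)) === hash(JSON.stringify(b))); // false

// json-stable-stringify sorts keys, so the hashes match:
console.log(hash(stableStringify(a)) === hash(stableStringify(b))); // true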

davidebaldini avatar Sep 25 '18 12:09 davidebaldini

Same as #299. @yujiosaka should look into this.

BubuAnabelas avatar Oct 20 '18 18:10 BubuAnabelas

In headless mode, it keeps reporting 302.

SuperFireFoxy avatar Oct 11 '19 11:10 SuperFireFoxy

I found two reasons:

  1. maxConcurrency > 1: the same page is requested by parallel workers.
  2. A redirected page is deduplicated by its source URL, not its target. You can skip these URLs by setting skipRequestedRedirect: true (a minimal sketch follows this list).
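A minimal sketch of the redirect workaround from point 2, assuming the skipRequestedRedirect option is available in your version of the library:

const HCCrawler = require('headless-chrome-crawler');

(async () => {
  const crawler = await HCCrawler.launch({
    skipDuplicates: true,
    // Also deduplicate URLs that appear in redirect chains of earlier requests.
    skipRequestedRedirect: true,
    onSuccess: (result) => console.log(result.options.url),
  });
  await crawler.queue('https://example.com/');
  await crawler.onIdle();
  await crawler.close();
})();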

popstas avatar Mar 05 '20 15:03 popstas

Is anyone considering creating a PR?

kulikalov avatar Oct 17 '20 07:10 kulikalov

Just posting here hoping this helps someone. It is true that the crawler fetches duplicate URLs when maxConcurrency > 1, so here is what I did (see the sketch after this list):

  1. First, create a SQLite database.
  2. In the RequestStarted event, insert the current URL.
  3. In the preRequest function (you can pass this function along with the options object), check whether there is a record for the current URL. If there is, the URL has already been crawled or is still being crawled, so return false; that skips the URL.
  4. In the RequestRetried and RequestFailed events, delete the URL, so the crawler can try it again.
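A minimal sketch of this workaround, assuming better-sqlite3 for storage and the lowercase event names from the library docs; the payload shape of the retried/failed events is an assumption, so check your version's signatures:

const HCCrawler = require('headless-chrome-crawler');
const Database = require('better-sqlite3');

const db = new Database('crawled.db');
db.exec('CREATE TABLE IF NOT EXISTS urls (url TEXT PRIMARY KEY)');

(async () => {
  const crawler = await HCCrawler.launch({
    maxConcurrency: 4,
    // Step 3: skip the request if the URL was already recorded.
    preRequest: (options) =>
      !db.prepare('SELECT 1 FROM urls WHERE url = ?').get(options.url),
    onSuccess: (result) => console.log(result.options.url),
  });

  // Step 2: record each URL as soon as its request starts.
  crawler.on('requeststarted', (options) => {
    db.prepare('INSERT OR IGNORE INTO urls (url) VALUES (?)').run(options.url);
  });

  // Step 4: remove the URL on retry/failure so it can be attempted again.
  // The payload shape below is assumed, not taken from the docs.
  const release = (payload) => {
    const url = (payload && payload.url) || (payload && payload.options && payload.options.url);
    if (url) db.prepare('DELETE FROM urls WHERE url = ?').run(url);
  };
  crawler.on('requestretried', release);
  crawler.on('requestfailed', release);

  await crawler.queue('https://example.com/');
  await crawler.onIdle();
  await crawler.close();
})();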

iamprageeth avatar Jun 19 '22 06:06 iamprageeth