CheerioCrawler not persisting cookies
Which package is this bug report for? If unsure which one to select, leave blank
@crawlee/cheerio (CheerioCrawler)
Issue description
The CheerioCrawler is not persisting cookies at all. The session storage does contain the cookies for request.url, but they are not being set on subsequent requests. Manually trying to set them in the preNavigationHooks does not work either, because session.getCookieString(request.url) returns an empty string.
- Create a new CheerioCrawler with `useSessionPool: true` and `persistCookiesPerSession: true`
- Visit a URL that assigns a cookie in its response
- Visit the same URL again
- The cookie is not set in the request headers
Code sample
import { CheerioCrawler } from 'crawlee';

const crawler = new CheerioCrawler({
    minConcurrency: 1,
    maxConcurrency: 10,
    requestHandlerTimeoutSecs: 30,
    maxRequestRetries: 10,
    useSessionPool: true,
    persistCookiesPerSession: true,
    preNavigationHooks: [
        async ({ request, session }, gotOptions) => {
            gotOptions.useHeaderGenerator = true;
            gotOptions.headerGeneratorOptions = {
                browsers: [{ name: 'firefox', minVersion: 115, maxVersion: 115 }],
                devices: ['desktop'],
                operatingSystems: ['windows'],
                locales: ['en-US', 'en'],
            };
            // Cookies are not present here on the second request
            console.log(session.getCookieString(request.url));
        },
    ],
    requestHandler: async ({ request, session, addRequests }) => {
        // Cookies are present here
        console.log(session.getCookies(request.url));
        // Requeue same URL with different uniqueKey
        await addRequests([{ url: request.url, uniqueKey: new Date().toString() }]);
    },
});

await crawler.run(['http://localhost:8000']);
Package version
v3.11.1
Node.js version
v20.16.0
Operating system
macOS Sonoma
Apify platform
- [ ] Tick me if you encountered this issue on the Apify platform
I have tested this on the next release
No response
Other context
Here's a small Python script to test whether Crawlee is properly setting cookies. It sets a cookie on GET /
#!/usr/bin/env python3
import http.server as SimpleHTTPServer
from http import cookies
import socketserver as SocketServer
import logging

PORT = 8000

class GetHandler(SimpleHTTPServer.SimpleHTTPRequestHandler):
    def do_GET(self):
        logging.error(self.headers)
        self.send_response(200)
        self.send_header("Content-type", "text/html")
        cookie = cookies.SimpleCookie()
        cookie['a_cookie'] = "Cookie_Value"
        self.send_header("Set-Cookie", cookie.output(header='', sep=''))
        self.end_headers()
        self.wfile.write(bytes("TEST", 'utf-8'))

Handler = GetHandler
httpd = SocketServer.TCPServer(("", PORT), Handler)
httpd.serve_forever()
Cookies are persisted per session, your second request is (almost certainly) getting a new session.
> Cookies are persisted per session, your second request is (almost certainly) getting a new session.
How do I make sure the second request is using the same session?
What are you trying to do?
You could set `maxPoolSize: 1`; that way there will be only one session. Otherwise I don't think we have a way to force a session ID on new requests (but we should add one, that's a good point).
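With that suggestion, the relevant options would look roughly like this. A sketch, not a full crawler: the options are shown as a plain object you would spread into the existing `new CheerioCrawler({...})` call, and it assumes Crawlee's documented `sessionPoolOptions.maxPoolSize` setting.

```javascript
// Sketch: limit the session pool to a single session so every
// request reuses it, and cookies saved from one response are
// sent with the next request.
const crawlerOptions = {
    useSessionPool: true,
    persistCookiesPerSession: true,
    // With maxPoolSize: 1 the pool never rotates to a fresh session.
    sessionPoolOptions: { maxPoolSize: 1 },
};
```

Note the trade-off: with a single session, a retired or blocked session stalls all requests, so this fits low-concurrency crawls like the one above.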
> What are you trying to do?
The website I'm trying to scrape has an anti-bot feature where you need to wait in an access queue. The access queue page sends a Refresh header which indicates the number of seconds you need to wait. Afterwards you need to refresh the page to gain access. Once you gain access, you are given an access cookie which must be present in all future requests.
When I detect this, I sleep for the required amount of time and then re-queue the same URL. I can't find a way to refresh a page via Cheerio directly, so I'm having to requeue it with a different unique key. However, this seems difficult to implement with many sessions, since I cannot specify that the request should go through the same session. Maybe there's a better way to handle this use case in Crawlee that I'm not aware of?
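The wait-time detection above could be sketched like this. `parseRefreshSeconds` is a hypothetical helper (not part of Crawlee); it assumes the Refresh header is either a bare number of seconds or the `<seconds>; url=<target>` form.

```javascript
// Hypothetical helper: extract the wait time in seconds from a
// Refresh response header value such as "5" or "5; url=https://example.com".
// Returns null when the header does not start with a number.
function parseRefreshSeconds(headerValue) {
    const seconds = Number.parseInt(headerValue.split(';')[0], 10);
    return Number.isNaN(seconds) ? null : seconds;
}
```

Inside the requestHandler, a non-null result would trigger the sleep-and-requeue path; the open question of pinning the requeued request to the same session remains as discussed above.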
Can you give me the URL of that website?