Nuclei fails to wait for dynamic redirection on slow pages: could not wait load event: {-32000 Execution context was destroyed. }
Description of the problem
I would like to scan a webpage that is sometimes really slow and may perform redirects via JavaScript. I would like to wait until it reaches its final state and then perform some checks there.
The problem occurs when a page takes more than 3 seconds to load and navigates to another page in the meanwhile. It seems the headless `waitload` action can't handle that.
If you know any workaround to this, please let me know!
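One workaround I have considered (untested, and assuming I'm reading the headless action docs correctly — the `sleep` action and its `duration` argument are my reading of the docs, not something verified against this bug) is to sleep past the client-side redirect window before calling `waitload`, so the wait hopefully attaches to the final page's context:

```yaml
headless:
  - steps:
      - action: navigate
        args:
          url: "{{BaseURL}}"
      # hypothetical workaround: wait out the 3-second JS redirect first
      # (duration is in seconds per the headless protocol docs)
      - action: sleep
        args:
          duration: 10
      - action: waitload
```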
Nuclei version:
Nuclei Engine Version: v3.0.4
Current Behavior:
Nuclei stops the analysis with the error: `could get html element: error occurred executing action: could not wait load event: {-32000 Execution context was destroyed. }`
Expected Behavior:
The headless waitload action waits until all redirections are performed and the final page is loaded.
Steps To Reproduce:
Template:
```yaml
id: redirect_bug

info:
  name: Dynamic redirect bug
  severity: info
  author: Levente Kovats

headless:
  - steps:
      - action: navigate
        args:
          url: "{{BaseURL}}"
      - action: waitload
```
Nuclei command:

```
nuclei -disable-update-check -headless -target http://localhost:8000/ -templates redirect_bug.yaml -v -matcher-status -page-timeout 60 -timeout 60
```
output:

```
                     __     _
   ____  __  _______/ /__  (_)
  / __ \/ / / / ___/ / _ \/ /
 / / / / /_/ / /__/ / __/ /
/_/ /_/\__,_/\___/_/\___/_/   v3.0.4

                projectdiscovery.io

[VER] Started metrics server at localhost:9092
[WRN] Template redirect_bug is not signed or tampered
[INF] Current nuclei version: v3.0.4 (outdated)
[INF] Current nuclei-templates version: (outdated)
[INF] New templates added in latest release: 0
[INF] Templates loaded for current scan: 1
[INF] Targets loaded for current scan: 1
[WRN] [redirect_bug] Could not execute request for http://localhost:8000/: could get html element: error occurred executing action: could not wait load event: {-32000 Execution context was destroyed. }
[INF] No results found. Better luck next time!
```
I created a simple Python HTTP server to reproduce this behavior:
```python
import http.server
import socketserver
from urllib.parse import urlparse
import time


class MyHttpRequestHandler(http.server.SimpleHTTPRequestHandler):
    def do_GET(self):
        url = urlparse(self.path)
        response = "Hellooo!"
        if url.path == "/":
            response = """
<!DOCTYPE html>
<head>
    <script>
        // redirect to a different page after 3 seconds
        setTimeout(() => location.href = "https://imdb.com/", 3000);
    </script>
    <script src="/delay"></script><!-- slowly loading resource -->
</head>
<body>
    <h1>Hello-hello!</h1>
</body>
</html>
"""
        elif url.path == "/delay":
            # wait 5 seconds before sending the response back
            time.sleep(5)
        self.send_response(200)  # HTTP status: 200 OK
        self.send_header("Content-type", "text/html")
        # prevent the browser from caching the response:
        self.send_header("Cache-Control", "no-cache, no-store, must-revalidate")
        self.send_header("Pragma", "no-cache")
        self.send_header("Expires", "0")
        self.end_headers()
        try:
            self.wfile.write(bytes(response, "utf8"))
        except (BrokenPipeError, ConnectionAbortedError):
            pass
        return


class ThreadingSimpleServer(socketserver.ThreadingMixIn, http.server.HTTPServer):
    pass


PORT = 8000
my_server = ThreadingSimpleServer(("localhost", PORT), MyHttpRequestHandler)
try:
    print(f"Starting the server on port {PORT}")
    my_server.serve_forever()
except KeyboardInterrupt:
    pass
my_server.server_close()
print("Server stopped.")
```
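To sanity-check that the repro server's timing is what triggers the bug, here is a scaled-down, self-contained harness (my own sketch, not part of the original repro; it uses a 0.5-second delay instead of 5 seconds) confirming that `/` responds immediately while `/delay` blocks, which is exactly the window in which the JS redirect destroys the page's execution context:

```python
import http.server
import socketserver
import threading
import time
import urllib.request

DELAY = 0.5  # scaled down from the 5 seconds used in the repro server


class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/delay":
            time.sleep(DELAY)  # simulate the slowly loading resource
        self.send_response(200)
        self.send_header("Content-type", "text/html")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass


# bind an ephemeral port and serve in the background
server = socketserver.ThreadingTCPServer(("localhost", 0), Handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

t0 = time.monotonic()
urllib.request.urlopen(f"http://localhost:{port}/").read()
fast = time.monotonic() - t0

t0 = time.monotonic()
urllib.request.urlopen(f"http://localhost:{port}/delay").read()
slow = time.monotonic() - t0

print(f"/ took {fast:.2f}s, /delay took {slow:.2f}s")
server.shutdown()
```

With the original timings, the redirect fires at 3 s while the load event is still blocked on the 5 s resource, so `waitload` never sees the first page finish loading.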
Refs:
- https://docs.projectdiscovery.io/templates/protocols/headless#waitload
- https://github.com/projectdiscovery/nuclei/blob/8d7bbdd97883b00e297b0cb5201e0516328eed38/pkg/protocols/headless/engine/page_actions.go#L464-L475
- https://github.com/go-rod/rod/blob/167ecc0d37408d3a9ee8bb21bfc4bc8b4b336a86/lib/js/helper.go#L118-L122
- https://github.com/go-rod/rod/blob/167ecc0d37408d3a9ee8bb21bfc4bc8b4b336a86/page_eval_test.go#L149-L166
@tovask you can increase the `-page-timeout` option value.
```
$ nuclei -h page-timeout
Usage:
  nuclei [flags]

Flags:
   -page-timeout int  seconds to wait for each page in headless mode (default 20)
```
Thanks @ehsandeep, I've already tried that; unfortunately, neither `-page-timeout` nor `-timeout` helps 😕
(I have edited the original description accordingly.)
When I run it with `-show-browser`, instead of the "Execution context was destroyed." message I get the `{-32000 Inspected target navigated or closed }` error. (I have no idea why it's different, but either way the problem is the same: the page is not analyzed.)