[Analyzer] Nuclei
Name
Nuclei
Link
https://docs.nuclei.sh/
Type of scanner
IP/domain/URLS
Why should we use it
MIT-licensed scanner with a strong templating engine and an active community behind it. There are also templates to detect common C2 servers. See the article here
Possible implementation
It could make sense to create another Docker Integration (a new container) for this, where we put all the scanners.
Install the tool: https://docs.nuclei.sh/getting-started/install
Download the Nuclei templates (from the community): https://github.com/projectdiscovery/nuclei-templates, and provide the option to create new ones.
Leverage JSON output option.
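As a rough sketch of how the analyzer might invoke the CLI: the `-u`, `-t`, and `-json-export` flag names come from the nuclei docs but should be verified against the installed release, and the helper name is ours, not part of any existing code.

```python
import shlex


def build_nuclei_command(target, template_dirs=None, json_output="/tmp/nuclei-out.json"):
    """Build the nuclei CLI invocation for a single target.

    Flag names (-u, -t, -json-export) are taken from the nuclei docs and may
    differ across versions -- verify against the installed release.
    """
    cmd = ["nuclei", "-u", target, "-json-export", json_output]
    for template_dir in template_dirs or []:
        # Restrict the scan to the selected template folders.
        cmd += ["-t", template_dir]
    return cmd


print(shlex.join(build_nuclei_command("https://example.com", ["http/exposures"])))
```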
Can I work on this?
Hey, sure, you can!
@mlodic I would like to work on creating a Docker-based analyzer for Nuclei. Please assign this issue to me.
yes, absolutely
I have a couple of questions regarding the Nuclei analyzer:
- Which templates should I focus on for our use case, or should I run all of them for a comprehensive analysis?
- Since the JSON output can be large, any suggestions for filtering or managing it efficiently?
> Which templates should I focus on for our use case, or should I run all of them for a comprehensive analysis?
I think all templates should be downloaded by default periodically. Then, the analyzer should allow customization: the user should be able to choose which category to run (in practice, select the folder)
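Since the user's category choice maps to a folder on disk, the analyzer would need to validate it. A minimal sketch, assuming the templates are checked out at a fixed path inside the container (the path and function name below are hypothetical):

```python
from pathlib import Path

# Hypothetical location where the community templates are checked out in the container.
TEMPLATES_ROOT = Path("/opt/nuclei-templates")


def resolve_category(category, root=TEMPLATES_ROOT):
    """Map a user-chosen category such as 'http/exposures' to a template folder,
    rejecting values that would escape the templates root (path traversal)."""
    root = root.resolve()
    candidate = (root / category).resolve()
    if candidate != root and root not in candidate.parents:
        raise ValueError(f"invalid template category: {category!r}")
    return candidate
```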
> Since the JSON output can be large, any suggestions for filtering or managing it efficiently?
Can you provide some examples about this?
> Can you provide some examples about this?
Here are the first few lines of the output we get when we analyze a URL with the default (all) Nuclei templates. The output contains a list of results from each template, with its template ID:
{ "results": [ { "curl-command": "curl -X 'GET' -d '' -H 'Host: login.microsoftonline.com' -H 'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:97.0) Gecko/20100101 Firefox/97.0' 'https://login.microsoftonline.com:443/amazon.com/v2.0/.well-known/openid-configuration'", "extracted-results": [ "5280104a-472d-4538-9ccf-1e1d0efe8b1b" ], "host": "amazon.com", "info": { "author": [ "v0idc0de" ], "classification": { "cve-id": null, "cvss-metrics": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:N", "cwe-id": [ "cwe-200" ] }, "description": "Microsoft Azure Domain Tenant ID was detected.", "metadata": { "max-request": 1 }, "name": "Microsoft Azure Domain Tenant ID - Detect", "reference": [ "https://azure.microsoft.com" ], "severity": "info", "tags": [ "azure", "microsoft", "cloud", "exposure" ] }, "ip": "2603:1046:2000:90::4", "matched-at": "https://login.microsoftonline.com:443/amazon.com/v2.0/.well-known/openid-configuration", "matcher-status": true, "port": "443", "request": "...", "scheme": "https", "template": "http/exposures/configs/azure-domain-tenant.yaml", "template-id": "azure-domain-tenant", "template-path": "/home/pc/nuclei-templates/http/exposures/configs/azure-domain-tenant.yaml", "template-url": "https://cloud.projectdiscovery.io/public/azure-domain-tenant", "timestamp": "2025-01-04T14:02:31.987050643+05:30", "type": "http", "url": "https://amazon.com" }, { "extracted-results": [ "[tls10 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA]" ], "host": "amazon.com", "info": { "author": [ "pussycat0x" ], "description": "A weak cipher is defined as an encryption/decryption algorithm that uses a key of insufficient length. 
Using an insufficient length for a key in an encryption/decryption algorithm opens up the possibility (or probability) that the encryption scheme could be broken.", "metadata": { "max-request": 4 }, "name": "Weak Cipher Suites Detection", "reference": [ "https://www.acunetix.com/vulnerabilities/web/tls-ssl-weak-cipher-suites/", "http://ciphersuite.info" ], "severity": "low", "tags": [ "ssl", "tls", "misconfig" ] }, "ip": "52.94.236.248", "matched-at": "amazon.com:443", "matcher-name": "tls-1.0", "matcher-status": true, "port": "443", "template": "ssl/weak-cipher-suites.yaml", "template-id": "weak-cipher-suites", "template-path": "/home/pc/nuclei-templates/ssl/weak-cipher-suites.yaml", "template-url": "https://cloud.projectdiscovery.io/public/weak-cipher-suites", "timestamp": "2025-01-04T14:03:21.113155508+05:30", "type": "ssl" }, ....................
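One way the analyzer could shrink such a report is to keep only findings at or above a severity threshold and drop the bulky fields (raw request, curl command). This is a sketch of ours, not existing code; the severity levels are the ones nuclei templates use, and the kept fields are an illustrative choice:

```python
# Severity levels used by nuclei templates, from least to most severe.
SEVERITY_ORDER = ["info", "low", "medium", "high", "critical"]


def summarize_results(report, min_severity="low"):
    """Keep only findings at or above min_severity, and keep just a few
    lightweight fields per finding to reduce the stored report size."""
    threshold = SEVERITY_ORDER.index(min_severity)
    kept = []
    for result in report.get("results", []):
        info = result.get("info", {})
        severity = info.get("severity", "info")
        rank = SEVERITY_ORDER.index(severity) if severity in SEVERITY_ORDER else 0
        if rank < threshold:
            continue
        kept.append({
            "template-id": result.get("template-id"),
            "name": info.get("name"),
            "severity": severity,
            "matched-at": result.get("matched-at"),
        })
    return kept
```

Run against the example above with `min_severity="low"`, this would drop the info-level Azure tenant finding and keep only the weak-cipher one.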
I think one idea could be forcing the user to select a folder (i.e., a group of templates). This way we provide customization of the template selection and manage the long output at the same time, because the output is reduced in proportion to the number of templates executed, which is a lower number. Thoughts?
Instead of requiring users to select a folder up front, we could run a small subset of templates by default. This ensures a smoother user experience while still offering the flexibility to customize and perform more detailed analyses by adding more folders. This way, users can start quickly without sacrificing the ability to dive deeper when needed.
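That default-subset behavior could be a small helper on the analyzer side. A sketch, where the default folder names are placeholders rather than a recommendation:

```python
# Hypothetical default subset of template folders; placeholders, not a recommendation.
DEFAULT_CATEGORIES = ["http/exposures", "ssl"]


def categories_to_scan(user_categories=None):
    """Return the template folders to run: the user's selection when given,
    otherwise the small default subset. Duplicates are dropped, order kept."""
    chosen = list(user_categories) if user_categories else list(DEFAULT_CATEGORIES)
    return list(dict.fromkeys(chosen))
```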
I like that option
@drosetti I have built a Docker image for this analyzer and was trying to write a compose file for it. How should I get my image published for IntelOwl so that the compose file can pull it from there? Could you guide me on the correct procedure here?
I suggest following the steps in the doc: https://intelowlproject.github.io/docs/IntelOwl/contribute/?h=docker+based#integrating-a-docker-based-analyzer.
If you have problems writing the compose.yml file, you can use the other integrations as examples. Docker will download the image from Docker Hub if an image is available there; otherwise it will build it locally. It's correct that it is building locally right now.
Let me know if I've been clear and whether you have other questions.
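For reference, a compose.yml fragment for such an integration might look roughly like this. This is a hypothetical sketch: the service name, image name, and build context are placeholders, and the linked IntelOwl doc is the authoritative reference. Declaring both `image` and `build` gives the local-build fallback described above when no published image is available:

```yaml
# Hypothetical fragment -- names are placeholders; follow the IntelOwl integration doc.
services:
  nuclei_analyzer:
    image: intelowlproject/nuclei_analyzer:latest   # used when available on Docker Hub
    build:
      context: ./integrations/nuclei_analyzer       # built locally otherwise
    restart: unless-stopped
```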
A reminder about this: as maintainers, we need to add and configure an additional image on Docker Hub so that it is built automatically at every release. @drosetti can you please try creating it to see if you have the correct permissions?
available from 6.3.0