HuntingAbuseAPI analyzer
Closes #2778
Description
This PR adds an analyzer that checks, via the Hunting Abuse API, whether the provided observable is present in the false-positive list.
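In outline, the analyzer queries the Hunting Abuse API and reports whether the observable appears in the FP list. Below is a minimal sketch of that flow, assuming a POST endpoint that returns the FP entries under a `data` key; the URL, query payload, and response schema are illustrative assumptions, not the exact code of this PR.

```python
# Illustrative sketch only: the endpoint, payload, and response schema
# are assumptions, not taken verbatim from this PR.
import requests

from api_app.analyzers_manager.classes import ObservableAnalyzer
from api_app.analyzers_manager.exceptions import AnalyzerRunException


class HuntingAbuseAPI(ObservableAnalyzer):
    # plain `url` attribute so health checks can send HEAD requests to it
    url: str = "https://hunting.abuse.ch/api/"  # assumed endpoint

    def run(self):
        try:
            response = requests.post(
                self.url,
                json={"query": "get_fplist"},  # assumed query format
                timeout=30,
            )
            response.raise_for_status()
        except requests.RequestException as exc:
            raise AnalyzerRunException(exc)
        # look up the observable under analysis in the returned FP list
        entries = response.json().get("data") or []
        match = next(
            (e for e in entries if e.get("entry_value") == self.observable_name),
            None,
        )
        return {"fp_status": match is not None, "details": match}
```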
Type of change
Please delete options that are not relevant.
- [ ] Bug fix (non-breaking change which fixes an issue).
- [x] New feature (non-breaking change which adds functionality).
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected).
Checklist
- [x] I have read and understood the rules about how to Contribute to this project
- [x] The pull request is for the branch `develop`
- [x] A new plugin (analyzer, connector, visualizer, playbook, pivot or ingestor) was added or changed, in which case:
  - [x] I strictly followed the documentation "How to create a Plugin"
  - [ ] Usage file was updated. A link to the PR to the docs repo has been added as a comment here.
  - [ ] Advanced-Usage was updated (in case the plugin provides additional optional configuration). A link to the PR to the docs repo has been added as a comment here.
  - [x] I have dumped the configuration from Django Admin using the `dumpplugin` command and added it in the project as a data migration. ("How to share a plugin with the community")
  - [ ] If a File analyzer was added and it supports a mimetype which is not already supported, you added a sample of that type inside the archive `test_files.zip` and you added the default tests for that mimetype in `test_classes.py`.
  - [ ] If you created a new analyzer and it is free (does not require any API key), please add it in the `FREE_TO_USE_ANALYZERS` playbook by following this guide.
  - [ ] Check if it could make sense to add that analyzer/connector to other freely available playbooks.
  - [x] I have provided the resulting raw JSON of a finished analysis and a screenshot of the results.
  - [x] If the plugin interacts with an external service, I have created an attribute called precisely `url` that contains this information. This is required for Health Checks (HEAD HTTP requests).
  - [x] If the plugin requires mocked testing, `_monkeypatch()` was used in its class to apply the necessary decorators (a sketch of this pattern follows the checklist).
  - [x] I have added that raw JSON sample to the `MockUpResponse` of the `_monkeypatch()` method. This serves us to provide a valid sample for testing.
  - [ ] I have created the corresponding `DataModel` for the new analyzer following the documentation
- [x] I have inserted the copyright banner at the start of the file: `# This file is a part of IntelOwl https://github.com/intelowlproject/IntelOwl` / `# See the file 'LICENSE' for copying permission.`
- [ ] Please avoid adding new libraries as requirements whenever it is possible. Use new libraries only if strictly needed to solve the issue you are working on. In case of doubt, ask a maintainer for permission to use a specific library.
- [ ] If external libraries/packages with restrictive licenses were added, they were added in the Legal Notice section.
- [x] Linters (`Black`, `Flake`, `Isort`) gave 0 errors. If you have correctly installed pre-commit, it does these checks and adjustments on your behalf.
- [ ] I have added tests for the feature/bug I solved (see `tests` folder). All the tests (new and old ones) gave 0 errors.
- [ ] If the GUI has been modified:
  - [ ] I have provided a screenshot of the result in the PR.
  - [ ] I have created new frontend tests for the new component or updated existing ones.
- [x] After submitting the PR, if `DeepSource`, `Django Doctors` or other third-party linters triggered any alerts during the CI checks, I solved those alerts.
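For the two mocked-testing items above, IntelOwl analyzers typically wire the mock as sketched below, reusing the raw JSON sample from the report at the bottom of this PR. The patched target (`requests.post`) follows the analyzer sketch in the description and is an assumption, not necessarily the exact code of this change.

```python
# Sketch of the usual IntelOwl mocking pattern for analyzers.
from tests.mock_utils import MockUpResponse, if_mock_connections, patch


class HuntingAbuseAPI(ObservableAnalyzer):  # continuing the earlier sketch
    @classmethod
    def _monkeypatch(cls, patches: list = None):
        patches = [
            if_mock_connections(
                patch(
                    "requests.post",
                    return_value=MockUpResponse(
                        {
                            # assumed "data" key, mirroring the sketch above
                            "data": [
                                {
                                    "platform": "MalwareBazaar",
                                    "entry_type": "md5_hash",
                                    "removed_by": "user",
                                    "time_stamp": "2025-06-22 09:45:33 UTC",
                                    "entry_value": "099137899ece96f311ac5ab554ea6fec",
                                    "removal_notes": None,
                                }
                            ]
                        },
                        200,
                    ),
                )
            )
        ]
        return super()._monkeypatch(patches=patches)
```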
Important Rules
- If you fail to complete the Checklist properly, your PR won't be reviewed by the maintainers.
- Every time you make changes to the PR and you think the work is done, you should explicitly ask for a review by using GitHub's reviewing system detailed here.
Hi @fgibertoni, hope you are doing well. The PR is ready to be reviewed.
Response when an FP is found:
Response when no FP is found:
Hi @spoiicy I had previously explored this issue a bit and just wanted to share a quick suggestion. Would it make sense to use a TTL-based cache for the false positives list? That way, we can store it once and refresh it periodically, instead of fetching the full list on every search. This could help with performance and reduce API load. Totally appreciate your work on this — feel free to ignore if you’ve already considered it or have a better approach!
@fgibertoni As per @AnshSinghal's suggestion, it makes sense to store the results for some amount of time and refresh them after, say, 1 day, so that the server doesn't have to query the API on every call.
I am also aware that we previously used a file-based approach for such tasks, and that @cristinaascari is now working on a solution that moves this to a DB-based approach and refactors the old code.
So how should I proceed with this? Let me know what you think.
Yes, I agree with you and @AnshSinghal. At the moment @cristinaascari is blocked by other work on that task, so it may take longer than expected. I think we can proceed with the "standard" approach and then slightly refactor the code to adapt to her changes.
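To make the caching idea concrete, here is a minimal sketch using Django's cache framework, which IntelOwl already ships with. The cache key, the 1-day TTL, and the `fetch_remote` helper are illustrative assumptions, not code from this PR.

```python
# Hypothetical TTL cache for the FP list, as discussed above.
from django.core.cache import cache

FP_LIST_CACHE_KEY = "hunting_abuse_fp_list"  # illustrative key name
FP_LIST_TTL = 24 * 60 * 60  # refresh roughly once a day, as suggested


def get_fp_list(fetch_remote):
    """Return the cached FP list, re-fetching only after the TTL expires.

    `fetch_remote` is an assumed callable that downloads the full FP
    list from the Hunting Abuse API (no args, returns a list of dicts).
    """
    fp_list = cache.get(FP_LIST_CACHE_KEY)
    if fp_list is None:
        fp_list = fetch_remote()
        cache.set(FP_LIST_CACHE_KEY, fp_list, timeout=FP_LIST_TTL)
    return fp_list
```

This keeps the change small now and should be easy to swap out for the DB-based approach once that refactor lands.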
Data Model Result
```json
{
    "report": {
        "details": {
            "platform": "MalwareBazaar",
            "entry_type": "md5_hash",
            "removed_by": "user",
            "time_stamp": "2025-06-22 09:45:33 UTC",
            "entry_value": "099137899ece96f311ac5ab554ea6fec",
            "removal_notes": null
        },
        "fp_status": true
    },
    "data_model": {
        "id": 387,
        "analyzers_report": [
            519
        ],
        "signatures": [],
        "evaluation": "trusted",
        "reliability": 9,
        "kill_chain_phase": null,
        "external_references": [],
        "related_threats": [],
        "tags": null,
        "malware_family": null,
        "additional_info": {
            "platform": "MalwareBazaar",
            "entry_type": "md5_hash",
            "removed_by": "user",
            "time_stamp": "2025-06-22 09:45:33 UTC",
            "entry_value": "099137899ece96f311ac5ab554ea6fec",
            "removal_notes": null
        },
        "date": "2025-06-28T19:39:09.202500Z",
        "comments": [],
        "file_information": {},
        "stats": {}
    },
    "errors": [],
    "parameters": {}
}
```
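For reference, the mapping from the analyzer report to the `DataModel` fields above could look like the sketch below. The `_update_data_model` hook and the `self.report.report` access pattern follow the IntelOwl DataModel documentation, but treat the whole snippet as an assumption rather than the exact code of this PR.

```python
# Hypothetical sketch: populate the DataModel from the analyzer report.
class HuntingAbuseAPI(ObservableAnalyzer):  # continuing the earlier sketch
    def _update_data_model(self, data_model) -> None:
        super()._update_data_model(data_model)
        report = self.report.report
        if report.get("fp_status"):
            # a hit in the FP list means the observable was retracted as a
            # false positive, hence "trusted" with high reliability
            data_model.evaluation = "trusted"
            data_model.reliability = 9
            data_model.additional_info = report.get("details")
```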