Oversized Alert Context Data in AppSec leads to stuck alerts
What happened?
As reported on Discord, if an appsec alert is triggered with alert context data that is too big (supposedly), it leads to a repeating failure:
time="2024-04-22T16:37:37+02:00" level=info msg="(46b7cd78f2944b33b2b9d6ac5d49725f5AU0UwVyyyRtiGSf) alert : native_rule:400015 by ip 188.120.11.170"
time="2024-04-22T16:37:37+02:00" level=error msg="while pushing to api : failed sending alert to LAPI: API error: machine '46b7cd78f2944b33b2b9d6ac5d49725f5AU0UwVyyyRtiGSf': creating alert meta: ent: validator failed for field \"Meta.value\": value is greater than the required length: unable to insert bulk"
From the user's report, this error keeps repeating (the alert push is retried) until crowdsec is restarted.
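For context, "value is greater than the required length" is the error ent's generated code returns when a string-length validator fails. A minimal schema sketch of such a constraint (the field name matches the error message; the exact limit is an assumption, not necessarily crowdsec's):

```go
// Sketch of an ent schema with the kind of length validator that fires in
// the error above. The 4095 limit is an assumption for illustration.
package schema

import (
	"entgo.io/ent"
	"entgo.io/ent/schema/field"
)

// Meta holds a single key/value pair of alert context.
type Meta struct {
	ent.Schema
}

// Fields of the Meta entity.
func (Meta) Fields() []ent.Field {
	return []ent.Field{
		field.String("key"),
		// Inserting a longer value fails with:
		//   validator failed for field "Meta.value": value is greater than the required length
		field.String("value").MaxLen(4095),
	}
}
```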
What did you expect to happen?
Two things should happen:
- Crowdsec should truncate the alert context to make sure it fits the size constraints
- If the insert in database (or validation) fails, we should "drop" the alert as we do for normal invalid alerts [1] (a sketch of both fixes follows this list)
[1] Because our LAPI protocol ain't smart enough to allow for a better solution for now.
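A minimal sketch of both proposed behaviors, assuming a 4095-byte limit and a hypothetical `send` callback (neither is crowdsec's actual API):

```go
// Sketch only (not crowdsec's actual code): truncate oversized context
// values before pushing, and drop alerts that LAPI rejects as invalid
// instead of retrying them forever.
package main

import (
	"errors"
	"fmt"
	"strings"
)

const maxMetaValueLen = 4095 // assumed limit, matching the validator error above

// errInvalidAlert stands in for a "validation failed" response from LAPI.
var errInvalidAlert = errors.New("alert rejected as invalid")

func truncate(v string) string {
	if len(v) > maxMetaValueLen {
		return v[:maxMetaValueLen] // may split a multi-byte rune; acceptable for a sketch
	}
	return v
}

func pushAlert(meta map[string]string, send func(map[string]string) error) error {
	for k, v := range meta {
		meta[k] = truncate(v) // fix #1: enforce the size constraint client-side
	}
	if err := send(meta); err != nil {
		if errors.Is(err, errInvalidAlert) {
			// fix #2: a validation error will never succeed on retry,
			// so drop the alert rather than spamming the push loop.
			fmt.Println("dropping invalid alert:", err)
			return nil
		}
		return err // transient errors can still be retried by the caller
	}
	return nil
}

func main() {
	// Demo: an oversized context value against a server that always rejects.
	meta := map[string]string{"uri": strings.Repeat("A", 10_000)}
	_ = pushAlert(meta, func(map[string]string) error { return errInvalidAlert })
}
```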
How can we reproduce it (as minimally and precisely as possible)?
Hopefully just by triggering an appsec alert with a very, very long URI.
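A minimal repro sketch, assuming a service protected by the appsec component listens on localhost:8080 and that the oversized URI actually trips a rule that records it in the alert context (both are assumptions about the setup):

```go
// Hypothetical repro: request a URI long enough to blow past any plausible
// meta value limit once it lands in the alert context.
package main

import (
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// ~100 KB path, far beyond the limit suggested by the validator error.
	longPath := "/" + strings.Repeat("A", 100_000)
	resp, err := http.Get("http://localhost:8080" + longPath)
	if err != nil {
		fmt.Println("request error:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status) // expect a 403 if an appsec rule fired
}
```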
Anything else we need to know?
No response
Crowdsec version
1.6.1
OS version
No response
Enabled collections and parsers
No response
Acquisition config
No response
Config show
No response
Prometheus metrics
No response
Related custom configs versions (if applicable) : notification plugins, custom scenarios, parsers etc.
No response
One of our ingress servers has not managed to post an alert since 06:58 this morning. However, the blocks are actually working, so it's just a problem with the local API.
time="2024-04-25T06:58:23+02:00" level=info msg="127.0.0.1 - [Thu, 25 Apr 2024 06:58:23 CEST] \"POST /v1/alerts HTTP/1.1 201 2.951078ms \"crowdsec/v1.6.1-debian-pragmatic-amd64-0746e0c0\" \""
time="2024-04-25T06:58:23+02:00" level=info msg="127.0.0.1 - [Thu, 25 Apr 2024 06:58:23 CEST] \"POST /v1/alerts HTTP/1.1 201 6.616489ms \"crowdsec/v1.6.1-debian-pragmatic-amd64-0746e0c0\" \""
time="2024-04-25T06:58:24+02:00" level=info msg="127.0.0.1 - [Thu, 25 Apr 2024 06:58:24 CEST] \"POST /v1/alerts HTTP/1.1 201 1.847007ms \"crowdsec/v1.6.1-debian-pragmatic-amd64-0746e0c0\" \""
time="2024-04-25T06:58:24+02:00" level=info msg="127.0.0.1 - [Thu, 25 Apr 2024 06:58:24 CEST] \"POST /v1/alerts HTTP/1.1 201 4.061225ms \"crowdsec/v1.6.1-debian-pragmatic-amd64-0746e0c0\" \""
time="2024-04-25T06:58:24+02:00" level=info msg="127.0.0.1 - [Thu, 25 Apr 2024 06:58:24 CEST] \"POST /v1/alerts HTTP/1.1 201 7.08647ms \"crowdsec/v1.6.1-debian-pragmatic-amd64-0746e0c0\" \""
(base) root@r4-u6-ing:~$ date
Thu Apr 25 09:28:24 AM CEST 2024
:~$ cscli alert list
╭────────┬────────────────────┬────────────────────┬─────────┬────┬───────────┬───────────────────────────────╮
│ ID │ value │ reason │ country │ as │ decisions │ created_at │
├────────┼────────────────────┼────────────────────┼─────────┼────┼───────────┼───────────────────────────────┤
│ 357964 │ Ip:51.89.81.168 │ native_rule:410026 │ │ │ │ 2024-04-25 04:58:24 +0000 UTC │