[➕ Feature]: Extraction and Mapping should expose logs
As a user, I would like to be able to see the last X logs for every rule I've created, to understand why, for example, the rule did not match some alert I expected it to match. As a user, I should be able to expand a rule row in the rules table and see the last X logs for that specific rule with some indicative information.
As a user, I should be able to test a mapping/extraction rule by manually running it against an alert that is already present in my feed (the same way workflows can be run manually) -> https://github.com/keephq/keep/issues/1818#issuecomment-2370644955
Has anyone picked up this issue? If not, could you assign it to me, @talboren?
@cu8code not yet, it's up for grabs :)
@talboren could you guild me a bit about which part of the codebase need to change and I should focus on
@talboren could you guide me a bit on which parts of the codebase need to change and what I should focus on?
@cu8code actually I don't have a complete PRD for this. The motivation I had in mind is this: right now, when a user configures a mapping/extraction rule, it's hard for them to know when it succeeded or when it failed, and why (from the perspective of a single alert, for example).
As a user, I push some alert in and expect it to be enriched by mapping/extraction (or both), but it doesn't happen - "now what?"
So the general idea here is to create some way for the user to know what happened. It could be via exposing logs that the user can query for mapping & extraction (enrichments_bl.py is probably the place to get started), or via a "manual run" for a mapping/extraction rule, where the user selects the alert they want to test it against and sees what happens in the process (we have something quite similar in workflow execution).
https://github.com/user-attachments/assets/5700e9d1-424b-48f6-a843-d9903858ead9
Let me know if you have further questions; we can discuss it over Slack.
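A minimal sketch of the log-capture idea described above, assuming a temporary logging handler that buffers whatever the enrichment code logs during one rule run so it can be persisted and queried later. The names here (`capture_rule_logs`, `store_rule_logs`, the logger name) are illustrative, not Keep's actual API:

```python
import logging
from contextlib import contextmanager

logger = logging.getLogger("keep.enrichments")  # assumed logger name

def store_rule_logs(rule_id: str, records: list[str]) -> None:
    # Placeholder: a real implementation would persist these to a DB
    # table keyed by rule_id so the UI can show the last X entries.
    print(f"rule {rule_id}: captured {len(records)} log lines")

class _ListHandler(logging.Handler):
    """Collects formatted log records in memory for one rule execution."""
    def __init__(self):
        super().__init__()
        self.records: list[str] = []

    def emit(self, record: logging.LogRecord) -> None:
        self.records.append(self.format(record))

@contextmanager
def capture_rule_logs(rule_id: str):
    """Attach a temporary handler around a single mapping/extraction run."""
    handler = _ListHandler()
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    try:
        yield handler
    finally:
        logger.removeHandler(handler)
        store_rule_logs(rule_id, handler.records)
```

The enrichment code path (e.g. in enrichments_bl.py) would wrap each rule evaluation in `with capture_rule_logs(rule.id):`, leaving the existing log statements untouched.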
/bounty 100
💎 $100 bounty • Keep (YC W23)
Steps to solve:
- Start working: Comment `/attempt #1818` with your implementation plan
- Submit work: Create a pull request including `/claim #1818` in the PR body to claim the bounty
- Receive payment: 100% of the bounty is received 2-5 days post-reward. Make sure you are eligible for payouts
Thank you for contributing to keephq/keep!
| Attempt | Started (GMT+0) | Solution |
|---|---|---|
| 🔴 @ezhil56x | Oct 10, 2024, 12:22:49 PM | WIP |
| 🔴 @rajesh-jonnalagadda | Oct 10, 2024, 3:18:15 PM | WIP |
| 🟢 @onyedikachi-david | Jan 27, 2025, 7:35:02 PM | WIP |
/attempt #1818
@talboren I would like to give it a try.
| Algora profile | Completed bounties | Tech | Active attempts | Options |
|---|---|---|---|---|
| @rajeshj11 | 5 keephq bounties + 16 bounties from 7 projects | TypeScript, JavaScript, HTML & more | | Cancel attempt |
@talboren, can you please assign this to me?
@talboren Please correct me if I'm wrong, the requirement seems to be a rule audit on alerts.
Not precisely, since that means I'd have to look at a specific alert to understand whether it was enriched, rather than at the rule itself to understand why it didn't work. It's closer to workflow execution logs.
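To make the workflow-execution-logs analogy concrete, here is a hypothetical sketch of a per-rule logs endpoint in a FastAPI-style route; the route path, the `get_last_rule_logs` helper, and the response shape are assumptions, not the actual implementation:

```python
from fastapi import APIRouter, Query

router = APIRouter()

# In-memory stand-in for wherever the captured logs would actually live.
_RULE_LOGS: dict[str, list[dict]] = {}

def get_last_rule_logs(rule_id: str, limit: int) -> list[dict]:
    """Return the most recent `limit` log entries for one rule."""
    return _RULE_LOGS.get(rule_id, [])[-limit:]

@router.get("/rules/{rule_id}/logs")
def read_rule_logs(rule_id: str, limit: int = Query(default=20, ge=1, le=100)):
    # The UI would call this when a rule row in the rules table is expanded.
    return {"rule_id": rule_id, "logs": get_last_rule_logs(rule_id, limit)}
```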
@rajeshj11 any progress on this?
Please assign it to me. I'll start today.
@rajeshj11 assigned. Please pay close attention to code quality on this one! I'm going to be picky :P
@talboren Based on our discussion in Slack, I will be canceling my attempt.
Thank you for attempting, @rajeshj11!
Payout for this bounty will be executed via GitHub Sponsors, not via Algora. Please activate https://github.com/sponsors/accounts for your account to receive the payout. Sorry for the inconvenience!
I am interested in taking this issue. @talboren, if I'm not wrong, we need to expose the logs from https://github.com/keephq/keep/blob/1d1bc7af62080f68a0fcfd56522e7dad01040889/keep/api/bl/enrichments_bl.py#L69-L71
so the last X logs can be displayed in the UI?
@Abiji-2020 that was the idea, so one could understand what happened with a specific event that arrived, and why it was or was not enriched by some rule.
So how are we going to keep track of the last X logs? From enrichments_bl we can take the logs of each run, if I'm not wrong? @talboren, any idea on this?
@Abiji-2020 not sure I understand the question - I guess we'll need to keep some logs per enrichment per event. I'm not 100% sure about the implementation details here; I just know that some users asked to understand which event gets enriched by which rule, and if something doesn't get enriched while they expect it to, how they can know what the problem was.
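One way to read "some logs per enrichment per event" is a dedicated table, sketched here with SQLModel; the model and field names are illustrative assumptions, not Keep's actual schema:

```python
from datetime import datetime
from sqlmodel import Field, SQLModel

class EnrichmentEventLog(SQLModel, table=True):
    """One log line produced while a mapping/extraction rule ran on one event."""
    id: int | None = Field(default=None, primary_key=True)
    tenant_id: str = Field(index=True)
    rule_id: str = Field(index=True)    # which mapping/extraction rule ran
    event_id: str = Field(index=True)   # which alert/event it ran against
    timestamp: datetime = Field(default_factory=datetime.utcnow)
    level: str = "INFO"                 # INFO / WARNING / ERROR
    message: str                        # e.g. "matcher 'service' did not match"
    matched: bool = False               # did the rule end up enriching the event?
```

Querying the last X entries for a rule is then a filter on `rule_id` ordered by `timestamp`, which maps directly onto the expandable rule row described in the issue.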
/attempt #1818
| Algora profile | Completed bounties | Tech | Active attempts | Options |
|---|---|---|---|---|
| @onyedikachi-david | 14 bounties from 7 projects | TypeScript, Python, JavaScript & more | | Cancel attempt |
Closing this issue as it has been idle for too long.