When an indicator score is set to 1000000, the object does not load properly and the system hangs
Description
There is no validation on the indicator/observable score: a value such as 1000000 can be set, even though the score should be between 0 and 100.
This can cause problems. For example, an indicator with a score of 1000000 does not load properly and the system hangs.
Environment
OCTI 6.0.9
Reproducible Steps
Steps to create the smallest reproducible scenario:
- Create an indicator with a score of 1000000 (the score is supposed to be within 0-100)
- Try to open this Indicator
Expected Output
Prevent users from setting a score not between 0 and 100 (on observable and indicator)
What is the expected behavior if a connector creates an indicator with a score outside 0-100? Drop it, or cap the score to the nearest bound (0 or 100)?
To my knowledge, no control is currently implemented. Ideally, a score should never be outside 0-100.
IMO, @aHenryJard :
- If score < 0 -> set to 0
- If score > 100 -> set to 100
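The two capping rules above could be sketched as a small helper (a minimal sketch; `clampScore` and the constant names are illustrative, not actual OpenCTI code):

```typescript
// Illustrative sketch of the proposed capping behavior:
// scores below 0 become 0, scores above 100 become 100.
const SCORE_MIN = 0;
const SCORE_MAX = 100;

const clampScore = (score: number): number => {
  // Math.max lifts values below the floor, Math.min caps values above the ceiling.
  return Math.min(SCORE_MAX, Math.max(SCORE_MIN, score));
};

console.log(clampScore(1000000)); // 100
console.log(clampScore(-5)); // 0
console.log(clampScore(42)); // 42
```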
We can set a @constraint(min: 0, max: 100) on the Input type on the GraphQL side, for creation.
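As a sketch, the SDL could look like this (assuming a constraint directive such as the one provided by graphql-constraint-directive; the input type and field names here are illustrative):

```graphql
directive @constraint(min: Int, max: Int) on INPUT_FIELD_DEFINITION

input IndicatorAddInput {
  name: String!
  # ... other fields
  x_opencti_score: Int @constraint(min: 0, max: 100)
}
```

With this in place, a creation mutation carrying a score outside 0-100 would be rejected at GraphQL validation time, before any resolver runs.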
The frontend can easily be updated so the input has a similar constraint in the UI.
Note this won't prevent bad values on API update, as our fieldPatch mutation is uncontrolled (the payload is Any!).
If we want to address this last case, I think we need to implement attribute constraints at the schema level, plus validation of these constraints in updateEntity / updateRelationship. This is not trivial, as you can imagine, but not very complex.
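A minimal sketch of what schema-level attribute constraints checked on update could look like (all names here are illustrative, not actual OpenCTI internals):

```typescript
// Illustrative sketch: a per-attribute constraint registry that an
// updateEntity / updateRelationship path could consult before persisting
// a fieldPatch payload. None of these names come from the OpenCTI codebase.
type NumericConstraint = { min?: number; max?: number };

const attributeConstraints: Record<string, NumericConstraint> = {
  x_opencti_score: { min: 0, max: 100 },
};

const validatePatch = (patch: Record<string, unknown>): string[] => {
  const errors: string[] = [];
  for (const [key, value] of Object.entries(patch)) {
    const constraint = attributeConstraints[key];
    if (constraint === undefined || typeof value !== 'number') continue;
    if (constraint.min !== undefined && value < constraint.min) {
      errors.push(`${key}: ${value} is below minimum ${constraint.min}`);
    }
    if (constraint.max !== undefined && value > constraint.max) {
      errors.push(`${key}: ${value} is above maximum ${constraint.max}`);
    }
  }
  return errors;
};
```

The update path could then either reject the patch when `validatePatch` returns errors, or clamp the value and log a warning, depending on the decision below.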
What should we do to fix this issue? If we define constraints on the GraphQL API, creation will fail when the score is not between 0 and 100. Do we want to fail the creation, or should we accept it but cap the score to 0 or 100 if it's out of range? On the frontend side, do we want to add constraints in our creation and edition forms?
On the frontend side, do we want to add constraints in our creation and edition forms?
We definitely want that, yes.
Do we want to fail the creation, or should we accept it but cap the score to 0 or 100 if it's out of range?
That's a good question. Is there any legit reason for a connector to send something outside 0-100, @Jipegien @RomainGUIGNARD? A special scale we're not aware of? If not, then I'm for adding constraints at the API level and raising an error on ingestion. The error would be visible in the connector's logs and the errors UI.
My take: cap to 100 to avoid failing repeatedly, and log a warning about it?
Logging a warning will probably go unnoticed. But I reckon failing could lead to repetitive errors that are not very actionable for the user, as the data might come from an external source they have no control over.
Do we prefer to avoid missing data and ingest anyway with the maximum score, or do we consider it "bad data"?
If we prefer to ingest anyway, let's go for your solution, @Jipegien.
To my knowledge, there is no standard way to express this kind of score in cybersecurity. A common approach is to use a range from 0 to 100, and that is the range OpenCTI uses. As we have no way of knowing which range an external provider uses, we cannot apply any scaling. For me, the best option here is to cap to 100 (and to 0 for negative values) so that the user can continue to exploit the other attributes of the Indicator.
OK, understood, so let's go for your solution: cap between 0 and 100, and log a warning.