[backend] Bug correction for setting x_opencti_score on SCOs, using the confidence level from the User/Group
x_opencti_score may not be a direct analogue of "confidence" for an SCO; however, it is required for decay rules, so just in case:
PR #3526 (https://github.com/OpenCTI-Platform/connectors/pull/3526) removes setting the confidence value directly from a connector; the value is now expected to be inherited from the User/Group settings of the user running the connector. However, SCO objects track this "similar" value in the DB under x_opencti_score rather than confidence. As a result, SCOs would no longer be created with a score (i.e. x_opencti_score would be set to None). This PR creates the analogous behavior for x_opencti_score, deriving it from the confidence level of the User/Group creating the record.
middleware.js has been updated to account for this difference.
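For reference, here is a minimal sketch of the intended fallback behavior (not the actual diff; the `effective_confidence_level` shape and helper name are simplifying assumptions):

```javascript
// Sketch only: default x_opencti_score from the creating user's confidence
// level when the source/connector did not provide a score.
// The user/input shapes below are assumptions, not the exact middleware.js internals.
const applyDefaultScore = (user, input) => {
  if (input.x_opencti_score === null || input.x_opencti_score === undefined) {
    const userConfidence = user.effective_confidence_level?.max_confidence;
    return { ...input, x_opencti_score: userConfidence ?? null };
  }
  return input; // keep the score explicitly provided by the source/connector
};
```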
Issues
NOTE: The Artifact create drawer in the frontend at /dashboard/observations/artifacts does not have a score field, so artifacts created via this panel will now inherit the score of the creator. However, the drawer at /dashboard/observations/observables (create and select Artifact) does have a score field, defaulted to 50.
The previous behavior was a score of None when created from /dashboard/observations/artifacts and a score of 50 (or whatever the user defined) when created from /dashboard/observations/observables. I believe the None is itself a bug, since the docs indicate everything should start at 50 as a minimum (https://docs.opencti.io/latest/usage/indicators-lifecycle/#score-decay).
To make the behavior consistent, artifactImport would need to support x_opencti_score, which it currently does not. This is viewed as a separate tech-debt PR should that behavior correction be required.
Checklist
- [X] I consider the submitted work as finished
- [X] I tested the code for its functionality
- [ ] I wrote test cases for the relevant use cases (coverage and e2e)
- [ ] I added/updated the relevant documentation (either on GitHub or on Notion)
- [X] Where necessary I refactored code to improve the overall quality
Further comments
See Issue - https://github.com/OpenCTI-Platform/opencti/issues/10153
@ParamConstructor: Thank you for the contribution, and sorry for the late reply.
We discussed this pull request internally, particularly regarding the mapping of the confidence level associated with a user to the score of an observable or indicator. We are aligned on the approach, which consists of applying the confidence level of the user who created the observable or indicator only when it is not provided by the source or connector.
However, the code proposed in this pull request appears to be too generic and should ideally be restricted to entities such as Indicators and Observables. For instance, we are currently updating the Organization model to include a new 'x_opencti_score' field, and in this context, mapping the confidence level to the score does not seem appropriate.
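For illustration, the fallback could be gated on the entity type, along these lines (the type list and field names are illustrative stand-ins, not necessarily the exact schema utilities in the codebase):

```javascript
// Sketch only: apply the confidence-to-score fallback solely to Observables
// and Indicators, so other entities gaining an x_opencti_score field
// (e.g. Organization) keep their own defaulting logic.
// The type list is a stand-in; the real schema exposes proper type helpers.
const SCORE_FALLBACK_TYPES = new Set(['Indicator', 'StixFile', 'IPv4-Addr', 'Domain-Name', 'Artifact']);

const resolveScore = (user, entityType, input) => {
  if (input.x_opencti_score == null && SCORE_FALLBACK_TYPES.has(entityType)) {
    return user.effective_confidence_level?.max_confidence ?? null;
  }
  return input.x_opencti_score ?? null;
};
```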
@richard-julien: Can you add some recommendations on the PR?
Codecov Report
:white_check_mark: All modified and coverable lines are covered by tests.
:white_check_mark: Project coverage is 65.54%. Comparing base (122e438) to head (2c25622).
:warning: Report is 272 commits behind head on release/current.
Additional details and impacted files:

```diff
@@               Coverage Diff                @@
##           release/current   #10154   +/-   ##
=================================================
  Coverage          65.54%     65.54%
=================================================
  Files                744        744
  Lines              73921      73933    +12
  Branches            8284       8287     +3
=================================================
+ Hits               48449      48461    +12
  Misses             25472      25472
```
Hi @ParamConstructor, after looking at this PR, I don't think it is the correct approach to solve this problem. Using the user's max_confidence level as a default value could work, but it will make it difficult to manage the correct confidence alongside the expected score for the same user. I think this problem should first be addressed in the connectors that send the information. The score was previously linked to confidence, but since it is no longer the same concern, bringing back a default score in the connector configuration makes sense. Do you have the list of the connectors we need to take care of to bring back this option?
Then, to be able to do this without impacting the connectors, we would need more of a pre-processing engine where, following some rules, the platform can adapt data before ingestion.
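For context, bringing the option back on the connector side essentially means the connector once again sends the score explicitly, carried as the x_opencti_score custom property on the observable it pushes in the STIX bundle; the values below are purely illustrative:

```json
{
  "type": "ipv4-addr",
  "spec_version": "2.1",
  "id": "ipv4-addr--5b1d4a9e-6a4e-4f2a-9a6a-2f2b8c9d0e1f",
  "value": "198.51.100.1",
  "x_opencti_score": 50
}
```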
@richard-julien - Yes, adding x_opencti_score back to specific connectors would solve our initial issue, so that would be an acceptable approach from our side.
There is still the separate issue on the platform side - see the frontend note in the PR description. The Artifact create drawer at /dashboard/observations/artifacts does not have a score field, so artifacts created via this panel get a None score rather than the base of 50 suggested in the documentation (https://docs.opencti.io/latest/usage/indicators-lifecycle/#score-decay).
@ParamConstructor, do you have the list of the connectors we need to take care of to bring back this option?