Coleman Kane
Thanks. I like this idea; the current behavior can then remain optional for any connectors that are incompatible with overwriting the external STIX IDs, and which then can be...
@SouadHadjiat fixed
Are you running the connectors using the admin token?
Thanks, when it occurs there don't seem to be any errors registered in the platform - I just see the ESET connector working normally, and the redis usage keeps climbing...
So, I am able to reproduce the issue on **6.0.5** and was able to catch the system in the middle of the runaway memory usage. It looks like the ESET...
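For reference, this is roughly how I was watching the growth - just a quick script against redis, not anything from OpenCTI itself; the stream key and connection details are whatever a default docker-compose deployment would use, so adjust as needed:

```python
# Quick-and-dirty monitor, not OpenCTI code; stream key and host are assumptions.
import time
import redis

r = redis.Redis(host="localhost", port=6379)

while True:
    mem = r.info("memory")["used_memory_human"]   # overall redis memory usage
    length = r.xlen("stream.opencti")             # entries in the event stream
    print(f"used_memory={mem} stream.opencti length={length}")
    time.sleep(30)
```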
These seem to occur after the worker says it is forging a new `stix_core_relationship` - at least in this section, it is trying to create a new `related-to` relationship between an **Identity**...
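For illustration, this is roughly the shape of bundle I believe the worker is handling at that point (built with the `stix2` library; the other endpoint and all the values here are just placeholders, since the real objects come from the ESET feed):

```python
# Placeholder objects only - the real Identity and its target come from the ESET data.
from stix2 import Bundle, Identity, Indicator, Relationship

identity = Identity(name="Example Org", identity_class="organization")
indicator = Indicator(
    name="Example indicator",
    pattern="[ipv4-addr:value = '203.0.113.1']",
    pattern_type="stix",
)
relationship = Relationship(
    relationship_type="related-to",
    source_ref=identity.id,
    target_ref=indicator.id,
)
print(Bundle(objects=[identity, indicator, relationship]).serialize(pretty=True))
```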
I was also able to verify that the memory consumption in redis continues to increase even after I have stopped the `connector-eset` container. Clearing the works using the UI doesn't...
After waiting for the connector to turn "red" once I stopped it, I was able to click the "clear connector" button, which did halt the growth in memory consumption...
I also see the `Message reprocess` message in the logs, ahead of the aforementioned error, suggesting this block of code around line `355` in `worker.py` from the worker code...
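To make it concrete, this is the general pattern I understand that block to implement - this is NOT the actual `worker.py` code, just a simplified sketch of an ack/requeue path, and `ingest` and the queue name are stand-ins:

```python
# Simplified sketch of a reprocess path, not the actual OpenCTI worker code.
import json
import logging
import pika


def ingest(event):
    # stand-in for the real call that pushes the bundle into the platform API
    raise NotImplementedError


def callback(channel, method, properties, body):
    try:
        ingest(json.loads(body))
        channel.basic_ack(delivery_tag=method.delivery_tag)
    except Exception:
        logging.exception("Message reprocess")
        # Re-queueing means the same bundle gets replayed against the API,
        # and every replay appends more events to the redis stream.
        channel.basic_nack(delivery_tag=method.delivery_tag, requeue=True)


connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.basic_consume(queue="push_example", on_message_callback=callback)
channel.start_consuming()
```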
Curious whether these exceptions may be causing OpenCTI to skip the trimming step on the redis database, or something like that.
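In other words, something along these lines (toy example with redis-py; the stream key and cap are made-up values, not the platform's actual settings) - if an exception short-circuits the path that appends with a cap or explicitly trims, the stream just keeps growing:

```python
# Toy illustration of stream trimming; the key name and maxlen are made up.
import redis

r = redis.Redis(host="localhost", port=6379)

# Appending with a maxlen cap keeps the stream bounded...
r.xadd("stream.opencti", {"data": "event"}, maxlen=100000, approximate=True)

# ...as does an explicit trim. If exceptions cause either step to be skipped,
# entries accumulate and redis memory climbs until something trims the stream.
r.xtrim("stream.opencti", maxlen=100000, approximate=True)
```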