scylla-cluster-tests
feature(kafka-localstack): introducing docker-compose base kafka setup
Since we want to be able to run Scylla Kafka connectors against Scylla clusters created by SCT, this introduces the first Kafka backend, intended for local development (with the SCT docker backend):
- include a way to configure the connector as needed (also multiple connectors)
- install it from the hub or by URL
Note: this doesn't yet include any code that can read out of kafka
PR pre-checks (self review)
- [ ] I followed KISS principle and best practices
- [ ] I didn't leave commented-out/debugging code
- [ ] I added the relevant `backport` labels
- [ ] New configuration options are added and documented (in `sdcm/sct_config.py`)
- [ ] I have added tests to cover my changes (Infrastructure only - under the `unit-test/` folder)
- [ ] All new and existing unit tests passed (CI)
- [ ] I have updated the Readme/doc folder accordingly (if needed)
Are there Kafka metrics worth adding to monitoring? If yes, this can be done in a follow-up task.
This one is a local setup of Kafka; I don't think monitoring is needed, at least not yet (we have monitoring data from the sct-runner).
They look nice: https://grafana.com/docs/grafana-cloud/monitor-infrastructure/integrations/integration-reference/integration-kafka/ It could help us when there are issues with the kafka-connector.
JMX never looks nice...
It's too early for this; once we have VMs and a full cluster, we might consider installing those.
Right now I care more about the functional side of things and how this setup integrates with a longevity test. The real missing part is reading from/writing to Kafka for the actual test/verification.
So the longevity code we have basically works,
but it hangs because we don't have code to stop the Kafka reading thread. We might use the idea of a teardown validator to validate and stop the reading thread.
I don't understand why we cannot add this verification to teardown itself. Why is a teardown validator required?
It was just an idea; validators seemed like a natural place for it.
I'm now trying a different approach of adding this logic to the reader thread itself.
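One common way to build that stop logic into the reader thread itself is a `threading.Event` that the polling loop checks on every iteration. This is a minimal sketch of the pattern, not the PR's actual code; `poll_fn` is a hypothetical placeholder standing in for a real Kafka consumer poll call:

```python
import threading
import time


class StoppableKafkaReader(threading.Thread):
    """Reader thread that polls for messages until asked to stop.

    `poll_fn` is a placeholder for a real Kafka consumer call
    (e.g. a consumer's poll method); it should return a message
    or None when nothing is available.
    """

    def __init__(self, poll_fn, poll_interval=0.1):
        super().__init__(daemon=True)
        self._poll_fn = poll_fn
        self._poll_interval = poll_interval
        self._stop_event = threading.Event()
        self.messages = []

    def run(self):
        # Check the stop flag on every loop iteration so teardown
        # can terminate the thread promptly.
        while not self._stop_event.is_set():
            msg = self._poll_fn()
            if msg is not None:
                self.messages.append(msg)
            else:
                time.sleep(self._poll_interval)

    def stop(self, timeout=5.0):
        """Signal the loop to exit and wait for the thread to finish."""
        self._stop_event.set()
        self.join(timeout)
```

With this shape, test teardown only needs to call `reader.stop()` (and can then assert on `reader.messages`), instead of relying on a separate validator to kill the thread.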
The two jobs introduced are passing now.
One small pre-commit issue remains, and then it's good to go.
@Bouncheck
I would recommend you try it again, to get familiar with it.