Julie removes Placement Constraints
Describe the bug The broker has a default placement constraint for new topics. JulieOps respects it when deploying a new "blank" topic, but removes it when run again.
To Reproduce Deploy this descriptor:
context: "context"
source: "src"
projects:
- name: "name"
topics:
- name: "topic2"
and check config with kafka-topics:
kafka-topics --bootstrap-server broker1-participant-0.kafka:9093 --command-config julie.properties --describe --topic context.src.name.topic2
Topic: context.src.name.topic2 TopicId: urMxYNH_SSOGbV7sr1LVog PartitionCount: 1 ReplicationFactor: 3 Configs: compression.type=snappy,min.insync.replicas=2,segment.bytes=1073741824,retention.ms=3600000,confluent.placement.constraints={"version":1,"replicas":[{"count":1,"constraints":{"rack":"rack-1"}},{"count":1,"constraints":{"rack":"rack-2"}},{"count":1,"constraints":{"rack":"rack-3"}}],"observers":[]}
Topic: context.src.name.topic2 Partition: 0 Leader: 1 Replicas: 1,2,3 Isr: 1,2,3 Offline:
Rerun Julie; the log indicates that the config is going to be deleted:
{
"Operation" : "com.purbon.kafka.topology.actions.topics.UpdateTopicConfigAction",
"Topic" : "context.src.name.topic2",
"Action" : "update",
"Changes" : {
"DeletedConfigs" : {
"confluent.placement.constraints" : "{\"version\":1,\"replicas\":[{\"count\":1,\"constraints\":{\"rack\":\"rack-1\"}},{\"count\":1,\"constraints\":{\"rack\":\"rack-2\"}},{\"count\":1,\"constraints\":{\"rack\":\"rack-3\"}}],\"observers\":[]}"
}
}
}
Checking with kafka-topics again confirms that the config was deleted:
Topic: context.src.name.topic2 TopicId: urMxYNH_SSOGbV7sr1LVog PartitionCount: 1 ReplicationFactor: 3 Configs: compression.type=snappy,min.insync.replicas=2,segment.bytes=1073741824,retention.ms=3600000
Topic: context.src.name.topic2 Partition: 0 Leader: 1 Replicas: 1,2,3 Isr: 1,2,3 Offline:
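The behavior above is consistent with a plain set-difference reconciliation: any topic-level config the broker reports that is absent from the descriptor gets scheduled for deletion, regardless of whether an operator or the broker itself set it. A minimal sketch of that logic (hypothetical names, a reconstruction for illustration, not Julie's actual implementation):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ConfigDiff {
    // Configs present as topic-level overrides on the broker but not
    // declared in the descriptor end up in the "deleted" set -- including
    // broker-applied defaults such as confluent.placement.constraints.
    static Set<String> deletedConfigs(Map<String, String> brokerConfigs,
                                      Map<String, String> descriptorConfigs) {
        Set<String> deleted = new HashSet<>(brokerConfigs.keySet());
        deleted.removeAll(descriptorConfigs.keySet());
        return deleted;
    }

    public static void main(String[] args) {
        Map<String, String> broker = new HashMap<>();
        broker.put("confluent.placement.constraints", "{\"version\":1}");
        Map<String, String> descriptor = new HashMap<>(); // "blank" topic
        System.out.println(deletedConfigs(broker, descriptor));
    }
}
```

With a "blank" topic the descriptor side is empty, so every broker-applied override lands in the deletion set, which matches the DeletedConfigs entry in the log above.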
Expected behavior Julie should never delete a config that is set by default on the broker side!
Runtime (please complete the following information):
- Version (julie-ops --version): 4.2.5
Thanks a lot for your report @Fobhep, as always it is very much appreciated. My current way of thinking is to introduce something like https://github.com/kafka-ops/julie/blob/master/src/main/java/com/purbon/kafka/topology/Constants.java#L6 but for configs.
In your case, this config was introduced automatically by either the cluster or an external tool. Which case is yours?
Thanks a lot for your continuous help in the project.
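The idea of a Constants-like list, but for configs, could be sketched as a small allow-list of config keys that Julie must never schedule for deletion, applied as a filter before building the UpdateTopicConfigAction. This is a hypothetical sketch with invented names (ProtectedConfigs, filterDeletions), not Julie's actual code:

```java
import java.util.HashSet;
import java.util.Set;

public class ProtectedConfigs {
    // Hypothetical allow-list, analogous to the internal-topic prefixes in
    // Constants.java: configs that should survive reconciliation even when
    // they are absent from the descriptor, because the broker (or an
    // external tool) set them.
    static final Set<String> NEVER_DELETE =
        Set.of("confluent.placement.constraints");

    // Drop protected keys from the set of configs scheduled for deletion.
    static Set<String> filterDeletions(Set<String> candidates) {
        Set<String> result = new HashSet<>(candidates);
        result.removeAll(NEVER_DELETE);
        return result;
    }

    public static void main(String[] args) {
        Set<String> toDelete = new HashSet<>(
            Set.of("confluent.placement.constraints", "retention.ms"));
        System.out.println(filterDeletions(toDelete)); // prints [retention.ms]
    }
}
```

Such a list could also be user-extensible via a property, so deployments with other broker-managed configs can opt additional keys out of deletion.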
Question: why not manage the placement constraints with JulieOps?
---
context: "o"
projects:
- name: "f"
consumers:
- principal: "User:NewApp2"
topics:
- name: "t"
config:
confluent.placement.constraints: "{\"version\":1,\"replicas\":[{\"count\":1,\"constraints\":{\"rack\":\"rack-1\"}},{\"count\":1,\"constraints\":{\"rack\":\"rack-2\"}}],\"observers\":[]}"
$ docker exec kafka kafka-topics --bootstrap-server kafka:29092 \
    --describe --topic o.f.t
Topic: o.f.t TopicId: dJImanTbSd2sbUjLVDMoVA PartitionCount: 1 ReplicationFactor: 2 Configs: confluent.placement.constraints={"version":1,"replicas":[{"count":1,"constraints":{"rack":"rack-1"}},{"count":1,"constraints":{"rack":"rack-2"}}],"observers":[]}
Topic: o.f.t Partition: 0 Leader: 1 Replicas: 1,2 Isr: 1,2 Offline:
Do you see any operational limitation with this approach? I understand, and have tested, that when the config is declared in the descriptor there is no problem with it being deleted.
What do you think?
Removing the bug label for now until we're clear about the reasons and causes behind the issue.
Related to https://github.com/kafka-ops/julie/issues/241