Isaac Banner
My fork only fixes part of the problem, unfortunately - I managed to keep it from maxing out connections on port 9092 and from hitting the open file limit, but...
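For context, the connection/file-descriptor side of that fix boils down to never leaking a consumer. A minimal sketch of the pattern, assuming the old Kafka 0.8 Scala consumer API; withConsumer is a hypothetical wrapper for illustration, not code from my fork:

```
import kafka.consumer.{Consumer, ConsumerConfig, ConsumerConnector}

// Hypothetical wrapper: guarantee shutdown so each request can't leak a
// broker connection (port 9092) or its underlying file descriptor.
def withConsumer[A](config: ConsumerConfig)(f: ConsumerConnector => A): A = {
  val consumer = Consumer.create(config)
  try f(consumer)
  finally consumer.shutdown()
}
```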
Yeah, we ran into that too. Right now our accepted solution is just "eh, try refreshing it and waiting several seconds."
Updated the delete workflow in my personal fork - the issue no longer occurs. Until the changes get merged into master, the code can be found here: https://github.com/ibanner56/kafka-web-console
For the record, pull request #40 fixes this.
I ran into the second half of this yesterday (see #36). I'll let you know when I figure out what's causing it.
Are you just using this repo without any edits?
It's being created in the 'feed' function in Topic.scala - I just need to figure out what the correct course of action is.
Hm, it looks like the feed function is supposed to be deleting it:

```
val in = Iteratee.foreach[String](println).map { _ =>
  consumer.commitOffsets()
  consumer.shutdown()
  deleteZNode(zkClient, "/consumers/" + consumerGroup)
}
```

But...
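If the lingering nodes turn out to be non-empty parents, a recursive delete would sidestep that. A minimal sketch, assuming the app's zkClient is (or can be wrapped by) an I0Itec ZkClient; deleteConsumerGroup is a hypothetical helper, not the project's actual deleteZNode:

```
import org.I0Itec.zkclient.ZkClient

// Hypothetical helper: delete the group's subtree children-first, so an
// emptied-but-undeleted parent znode can't be left behind.
def deleteConsumerGroup(zk: ZkClient, consumerGroup: String): Boolean =
  zk.deleteRecursive("/consumers/" + consumerGroup)
```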
I added a debug line to check which nodes it's trying to delete, and it looks like it's successfully deleting them most of the time, but sometimes empty nodes end...
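For anyone who wants to reproduce the check, here's roughly the kind of debug probe I mean, again assuming an I0Itec ZkClient; logLeftovers is a made-up name:

```
import org.I0Itec.zkclient.ZkClient
import scala.collection.JavaConverters._

// Hypothetical debug probe: after the delete runs, report any znodes that
// survived under the group path. A non-empty parent is the suspect case.
def logLeftovers(zk: ZkClient, path: String): Unit =
  if (zk.exists(path))
    println(s"$path survived delete; children: " +
      zk.getChildren(path).asScala.mkString(", "))
```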
I'm going to take a look tomorrow to see if I can get something from the Zookeeper logs. I'll post here with updates then. Sorry for the brevity - sent...