cppkafka::Consumer Destructor Can Throw
Love the project! I've been using librdkafka for a while now in C/C++ projects requiring native access to Kafka, but it's always felt fairly unwieldy in the typical case.

I wasn't sure whether this was an intentional design choice, but cppkafka::Consumer::~Consumer calls close (without a try-catch), which calls check_error, which throws on error conditions. Destructors are implicitly marked noexcept starting in C++11 unless explicitly marked otherwise (derived classes inherit this specification), but regardless, destructors that can throw strike me as a bad idea.

My suggestion would be to leave the current close implementation untouched, expose it in the public API (to allow explicit client handling of handle-destruction error conditions if so desired), make the close call in the Consumer destructor conditional, and swallow all exceptions from close in the destructor (I don't know if cppkafka currently has a logging strategy, but this would probably be the place for a WARN log). Admittedly this isn't as nice from an encapsulation perspective, and it will require additional checks throughout the rest of the public interface, but currently a failed call to rd_kafka_consumer_close during destruction results in a call to std::terminate without giving the client a chance to intervene.
Good catch, I didn't notice that one. Sounds like a good idea! There's no logging strategy at this point. At some point I wanted to use rdkafka's logging as it already exists but it's not exposed publicly, which kind of sucks. I guess for now just catching that would work.
Oh yeah, it would be great if it could tap into the existing logging. Personally, I've never been a big fan of libraries logging in the first place, but there are situations like this where it's really the only viable response.
I think cppkafka could actually log through rdkafka via the configured log callback or rd_kafka_log_print (the default) in case there's none. Reopening so I don't forget about this one.
One way to make it possible for a client to detect these kinds of errors would be to expose close() to the public, which can then throw. But yes, the destructor should be assumed to be called while unwinding because of another exception.
Would this work? Not sure about the facility name...I used LOG_USER
Consumer::~Consumer() {
    try {
        // Make sure to destroy the function closures. In case they hold kafka
        // objects, they will need to be destroyed before we destroy the handle.
        assignment_callback_ = nullptr;
        revocation_callback_ = nullptr;
        rebalance_error_callback_ = nullptr;
        close();
    }
    catch (const Exception& ex) {
        auto& callback = get_configuration().get_log_callback();
        if (callback) {
            callback(*this, LOG_ERR, "user", ex.what());
        }
        else {
            rd_kafka_log_print(get_handle(), LOG_ERR, "user", ex.what());
        }
    }
}
lol at "Reopening so I don't forget about this one.".
That looks good. I would probably change "user" to "cppkafka" so it's clear where the error is coming from.
Maybe this should be closed now...