kafka-protocol-rs
Rust Kafka protocol
I have been researching auth mechanism support for Kafka and found the rsasl crate for that. We need to figure out how to integrate these two and...
Error handling is a bit of a mess right now. We emit two different serde errors with very little context, which also don't impl `std::error::Error`, making them hard to...
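A minimal sketch of the direction this issue points at: a single decode error type that carries context and implements `std::error::Error`, so user code can box it or propagate it with `?`. The names (`DecodeError`, its fields) are illustrative, not the library's actual API.

```rs
use std::fmt;

/// Hypothetical error type carrying context about the failed decode.
#[derive(Debug)]
pub struct DecodeError {
    /// Which field was being decoded when the failure occurred.
    pub field: &'static str,
    /// A short description of what went wrong.
    pub reason: String,
}

impl fmt::Display for DecodeError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "failed to decode `{}`: {}", self.field, self.reason)
    }
}

// Implementing the standard trait makes the error usable with
// `Box<dyn Error>`, `anyhow`, `?`, etc.
impl std::error::Error for DecodeError {}

fn main() {
    let err = DecodeError {
        field: "topics",
        reason: "unexpected EOF".to_string(),
    };
    let boxed: Box<dyn std::error::Error> = Box::new(err);
    println!("{boxed}");
}
```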
Possible solution for https://github.com/tychedelia/kafka-protocol-rs/issues/84. I'll have a go at porting my application to it and report back. Edit: https://github.com/shotover/shotover-proxy/pull/1759 The benchmark for decoding produce requests improved by 20%! Fun...
Add `RecordBatchEncoder::records()`, which creates a `RecordIterator` instead of requiring a `Vec` to fill.
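The design difference the request describes can be sketched as follows: instead of decoding everything into a pre-filled `Vec`, expose a lazy iterator so callers decide whether to collect or stream. `Record`, `RecordIterator`, and the pre-split byte chunks here are simplified stand-ins, not the library's real types.

```rs
#[derive(Debug, PartialEq)]
struct Record {
    offset: i64,
    value: Vec<u8>,
}

/// Yields one `Record` at a time from a pre-split buffer, instead of
/// materializing the whole batch up front.
struct RecordIterator<'a> {
    chunks: std::slice::Iter<'a, &'a [u8]>,
    next_offset: i64,
}

impl<'a> Iterator for RecordIterator<'a> {
    type Item = Record;
    fn next(&mut self) -> Option<Record> {
        let chunk = self.chunks.next()?;
        let rec = Record {
            offset: self.next_offset,
            value: chunk.to_vec(),
        };
        self.next_offset += 1;
        Some(rec)
    }
}

fn main() {
    let payloads: [&[u8]; 2] = [b"a", b"b"];
    let it = RecordIterator { chunks: payloads.iter(), next_offset: 0 };
    // The caller chooses: process lazily, or collect as before.
    let recs: Vec<Record> = it.collect();
    assert_eq!(recs.len(), 2);
    assert_eq!(recs[1].offset, 1);
}
```

The benefit of the iterator shape is that consumers who only need a few records, or who forward records elsewhere, never pay for the intermediate `Vec`.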
The current implementation uses a map with topic names as keys:

```rs
pub struct CreateTopicsRequest {
    /// The topics to create.
    ///
    /// Supported API versions: 0-7
    pub topics: indexmap::IndexMap,...
```
This code actually prevents writing user code that works across multiple versions. For example, if I want to support both a new and the latest version of the ListOffsets API, I would just...
Currently, compression/decompression of record batches is embedded in this library. I feel it's really out of scope for this lib and limits its usage. For example, I may want to...
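One way to decouple compression, sketched under the assumption that the encoder could take a caller-supplied codec: a small trait that users implement with whatever compression crate (or none) they prefer. `Compressor`, `NoCompression`, and `encode_batch` are hypothetical names, not the library's API.

```rs
/// Hypothetical hook: callers plug in any codec they like.
trait Compressor {
    fn compress(&self, input: &[u8]) -> Vec<u8>;
}

/// A no-op codec for callers who want raw pass-through.
struct NoCompression;

impl Compressor for NoCompression {
    fn compress(&self, input: &[u8]) -> Vec<u8> {
        input.to_vec()
    }
}

/// A real encoder would also write batch headers, CRCs, etc.; this
/// only shows the injection point for the codec.
fn encode_batch(payload: &[u8], codec: &dyn Compressor) -> Vec<u8> {
    codec.compress(payload)
}

fn main() {
    let out = encode_batch(b"records", &NoCompression);
    assert_eq!(out, b"records".to_vec());
}
```

With this shape, the library itself would carry no compression dependencies; snappy, lz4, zstd, etc. would live entirely in user code.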
Getting this error:

```
snappy: corrupt input (expected valid offset but got offset 20545; dst position: 0)
```

Changing to another compression codec like LZ4 works fine.
This allows for code paths like:

```rs
let error = if auth_succeeded() {
    ResponseError::None
} else {
    ResponseError::InvalidMsg
};
ResponseBuilder::default().with_error_code(error.code());
```
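A self-contained sketch of the pattern above, assuming an error enum with a `None` variant (mapping to wire error code 0) and a `code()` accessor. The variant names and the non-zero code are illustrative; `auth_succeeded` is a stand-in for the real check.

```rs
#[derive(Debug, Clone, Copy, PartialEq)]
enum ResponseError {
    /// "No error" — maps to protocol error code 0.
    None,
    /// An illustrative failure variant.
    InvalidMsg,
}

impl ResponseError {
    /// Maps each variant to its wire-protocol error code.
    fn code(self) -> i16 {
        match self {
            ResponseError::None => 0,
            ResponseError::InvalidMsg => 2, // illustrative code
        }
    }
}

// Stand-in for the real authentication check.
fn auth_succeeded() -> bool {
    false
}

fn main() {
    let error = if auth_succeeded() {
        ResponseError::None
    } else {
        ResponseError::InvalidMsg
    };
    // Both branches produce the same type, so one call site suffices.
    assert_eq!(error.code(), 2);
}
```

The point of the `None` variant is that success and failure flow through a single typed value, rather than user code juggling an `Option<ResponseError>` or a raw `i16`.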
Can anyone explain the meaning of the offset and sequence fields of the `Record` struct, please? I found that when encoding a `Vec`, it only produces successfully if the offset increases...
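The observed behavior matches how Kafka record batches are laid out: each record stores an offset *delta* relative to the batch's base offset, so offsets within a batch must increase monotonically (the `sequence` field similarly relates to the idempotent producer's per-batch base sequence). A minimal sketch of that constraint, with a simplified stand-in `Record` type rather than the library's:

```rs
struct Record {
    offset: i64,
}

/// Returns the per-record offset deltas if the batch is well-formed
/// (strictly increasing offsets), or `None` otherwise — mirroring why
/// encoding fails when offsets don't increase.
fn offset_deltas(records: &[Record]) -> Option<Vec<i32>> {
    let base = records.first()?.offset;
    let mut deltas = Vec::with_capacity(records.len());
    let mut prev = base - 1;
    for r in records {
        if r.offset <= prev {
            return None; // offsets must strictly increase within a batch
        }
        prev = r.offset;
        deltas.push((r.offset - base) as i32);
    }
    Some(deltas)
}

fn main() {
    let ok = [Record { offset: 10 }, Record { offset: 11 }, Record { offset: 12 }];
    assert_eq!(offset_deltas(&ok), Some(vec![0, 1, 2]));

    let bad = [Record { offset: 10 }, Record { offset: 10 }];
    assert_eq!(offset_deltas(&bad), None); // duplicate offset rejected
}
```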