Yunze Xu
> with a Kafka Client that doesn't use our new SerDe?

What deserializer do you use? If you don't use the new SerDes, you must implement your own deserializer...
I think what you mentioned is a compatibility issue. But it looks like there is nothing we need to keep compatible.
> is this change supposed to change KOP in a way that we ALWAYS prepend the "magic + pulsar schema version" header in the Kafka payload received on by the...
We need an internal discussion at StreamNative on whether to drop Avro SerDes support for Kafka clients.
We can also see that Confluent's Avro serializer adds some extra bytes before the bytes serialized from the value object:

| MAGIC_BYTE (1 byte) | Schema ID (4 bytes) | ...
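For reference, a minimal sketch of that framing, following Confluent's documented wire format (magic byte `0x0`, then a big-endian 4-byte schema id, then the Avro-serialized bytes); the class and method names here are illustrative, not Confluent's actual code:

```java
import java.nio.ByteBuffer;

public class WireFormatSketch {
    private static final byte MAGIC_BYTE = 0x0;

    // Prepend | MAGIC_BYTE (1 byte) | schema id (4 bytes) | to the Avro payload.
    public static byte[] frame(int schemaId, byte[] avroBytes) {
        return ByteBuffer.allocate(1 + 4 + avroBytes.length)
                .put(MAGIC_BYTE)
                .putInt(schemaId)   // ByteBuffer writes big-endian by default
                .put(avroBytes)
                .array();
    }
}
```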
> Why would it break compatibility?

If a record was produced by a Pulsar producer with schema version configured,
- Before: `record.headers()` is empty.
- After: `record.headers()` has one header...
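To make the before/after concrete, here is a hypothetical consumer-side check; the header key `"schema.version"` comes from this discussion, everything else is illustrative:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.header.Header;

public class HeaderCheckSketch {
    // Before this change, record.headers() is empty for such records;
    // after it, a "schema.version" header is present. Application code
    // that assumes an empty header set would observe the difference here.
    public static void inspect(ConsumerRecord<byte[], byte[]> record) {
        Header header = record.headers().lastHeader("schema.version");
        if (header != null) {
            System.out.println("schema.version header: " + header.value().length + " bytes");
        }
    }
}
```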
BTW, see https://github.com/confluentinc/schema-registry/blob/master/avro-serializer/src/main/java/io/confluent/kafka/serializers/KafkaAvroDeserializer.java. Confluent's `KafkaAvroDeserializer` also doesn't implement the `deserialize` API with the `Headers` parameter. Instead, the schema id is prepended to the value, as https://github.com/streamnative/kop/issues/1290#issuecomment-1133132666 describes.
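For context, a trimmed sketch of Kafka's `org.apache.kafka.common.serialization.Deserializer` contract: the `Headers`-aware overload is a default method that delegates to the two-argument one, so an implementation that doesn't override it never looks at headers at all:

```java
import org.apache.kafka.common.header.Headers;

// Trimmed sketch of org.apache.kafka.common.serialization.Deserializer.
public interface DeserializerSketch<T> {
    T deserialize(String topic, byte[] data);

    default T deserialize(String topic, Headers headers, byte[] data) {
        return deserialize(topic, data);  // headers are dropped here
    }
}
```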
> The header with the key "schema.version" will respond to the kafka client, is there any problem?

Not a big problem. But some logic on the application side might rely...
After internal discussion at StreamNative, this task might be delayed for a while.
Could you upload the broker logs? BTW, you can attach files instead of pasting the massive logs.