Set buffer memory to control max memory usage when producing to many topics
### Description
Is there any config to set the max memory usage for all topics on the producer? Like the Java Kafka SDK's `props.put("buffer.memory", 33554432); // 32MB`.
### Versions
| Sarama | Kafka | Go |
|---|---|---|
| v1.45.1 | 3.6.2 | 1.23.0 |
### Configuration
```go
cf := sarama.NewConfig()
cf.Producer.Return.Successes = true
cf.Producer.Return.Errors = true
cf.Producer.Compression = sarama.CompressionLZ4
cf.Producer.Flush.Bytes = 1048576              // best-effort flush at 1 MiB buffered
cf.Producer.Flush.Messages = 10000             // best-effort flush at 10k messages buffered
cf.Producer.Flush.Frequency = 10 * time.Second // flush at least every 10s
cf.Producer.Flush.MaxMessages = 12000          // hard cap on messages per broker request
cf.Producer.MaxMessageBytes = 10485760         // 10 MiB max size for a single message
```
### Additional Context
implemented by: https://github.com/IBM/sarama/pull/3088/
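For anyone landing here, a minimal sketch of the knob that PR adds, assuming a Sarama version where `Producer.Retry.MaxBufferBytes` exists (the 32 MiB value is illustrative):

```go
cf := sarama.NewConfig()
// Cap the bytes the async producer may hold in its retry buffer after
// broker errors; illustrative 32 MiB value, field added by PR #3088.
cf.Producer.Retry.MaxBufferBytes = 32 * 1024 * 1024
```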
Thanks for the reply. The `conf.Producer.Retry.MaxBufferBytes` setting only controls messages being retried in `retryHandler()`, so it applies only when producing to a single broker fails (e.g. exceeding MaxRequestSize/MaxMessageBytes/Flush.MaxMessages). What I need is to cap the client's total memory usage no matter how many brokers (e.g. 30 brokers) or topics (e.g. 100 topics) it produces to, so this feature doesn't work in my scenario.
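In the meantime, one application-level workaround is to meter payload bytes in flight yourself. A minimal sketch, assuming `Return.Successes`/`Return.Errors` are enabled as in the configuration above; the `boundedProducer` wrapper and its byte budget are my own illustration, not a Sarama API:

```go
package kafkabudget

import (
	"context"

	"github.com/IBM/sarama"
	"golang.org/x/sync/semaphore"
)

// boundedProducer caps the total payload bytes in flight across all
// topics and brokers. Illustrative wrapper, not part of Sarama.
type boundedProducer struct {
	producer sarama.AsyncProducer
	budget   *semaphore.Weighted // e.g. semaphore.NewWeighted(32 << 20)
}

// Send blocks until the message's payload fits in the remaining byte
// budget, then hands the message to the async producer.
func (b *boundedProducer) Send(ctx context.Context, msg *sarama.ProducerMessage) error {
	size := int64(msg.Value.Length())
	if err := b.budget.Acquire(ctx, size); err != nil {
		return err
	}
	b.producer.Input() <- msg
	return nil
}

// drain returns bytes to the budget once the broker acks or fails a
// message; requires Producer.Return.Successes and .Errors to be true.
func (b *boundedProducer) drain() {
	go func() {
		for msg := range b.producer.Successes() {
			b.budget.Release(int64(msg.Value.Length()))
		}
	}()
	go func() {
		for perr := range b.producer.Errors() {
			b.budget.Release(int64(perr.Msg.Value.Length()))
		}
	}()
}
```

Every acked or failed message returns its bytes to the budget, so `Send` blocks whenever the chosen cap is fully in flight, which roughly approximates the Java client's `buffer.memory` behavior.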
Thank you for taking the time to raise this issue. However, it has not had any activity on it in the past 90 days and will be closed in 30 days if no updates occur. Please check if the main branch has already resolved the issue since it was raised. If you believe the issue is still valid and you would like input from the maintainers then please comment to ask for it to be reviewed.