[Question] Why does EventMesh rewrite RocketMQ's consume service and use a global thread pool to consume all messages?
Search before asking
- [X] I had searched in the issues and found no similar issues.
Question
I'm new to EventMesh. After reading some of its source code, I have a few questions.
EventMesh rewrites RocketMQ's ConsumeMessageConcurrentlyService to implement asynchronous offset commitment, and uses a global thread pool, EventMeshHTTPServer.pushMsgExecutor, to process all HTTP push requests. Besides, EventMesh implements its own retry logic using Java's DelayQueue.
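For context, the DelayQueue-based retry pattern mentioned above can be sketched roughly as follows. All names here (RetryTask, readyAtMillis, etc.) are hypothetical; this is a minimal illustration of how a Java DelayQueue schedules retries, not EventMesh's actual implementation.

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class DelayRetrySketch {

    // A retry task that becomes visible to consumers only after its delay elapses.
    static class RetryTask implements Delayed {
        final String messageId;
        final int attempt;
        final long readyAtMillis;

        RetryTask(String messageId, int attempt, long delayMillis) {
            this.messageId = messageId;
            this.attempt = attempt;
            this.readyAtMillis = System.currentTimeMillis() + delayMillis;
        }

        @Override
        public long getDelay(TimeUnit unit) {
            return unit.convert(readyAtMillis - System.currentTimeMillis(),
                    TimeUnit.MILLISECONDS);
        }

        @Override
        public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.MILLISECONDS),
                    other.getDelay(TimeUnit.MILLISECONDS));
        }
    }

    public static void main(String[] args) throws InterruptedException {
        DelayQueue<RetryTask> retryQueue = new DelayQueue<>();

        // Schedule a retry 100 ms from now, e.g. after a failed HTTP push.
        retryQueue.put(new RetryTask("msg-1", 1, 100));

        // A retry worker blocks on take() until the delay expires, then re-pushes.
        RetryTask task = retryQueue.take(); // blocks ~100 ms
        System.out.println("retrying " + task.messageId + ", attempt " + task.attempt);
    }
}
```

Because take() blocks until the earliest deadline passes, a single worker thread can drain due retries without polling.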
- Why not use the RocketMQ consumer's consume thread pool to process HTTP push requests? The global thread pool may become busy when some HTTP targets are unreachable, which would stall all HTTP push requests.
- Is it beneficial to switch from RocketMQ's original offset commit mechanism to asynchronous offset commit?
- Why not use RocketMQ's consumer retry mechanism?
Welcome to the Apache EventMesh (incubating) community!! We are glad that you are contributing by opening this issue. :D
Please make sure to include all the relevant context. We will get back to you shortly.
If you are interested in contributing to our project, please let us know! You can check out our contributing guide on contributing to EventMesh.
@HScarb Thanks for your attention; you are welcome to join our WeChat group community.
It has been 90 days since the last activity on this issue. Apache EventMesh values the voices of the community. Please don't hesitate to share your latest insights on this matter at any time, as the community is more than willing to engage in discussions regarding the development and optimization directions of this feature.
If you feel that your issue has been resolved, please feel free to close it. Should you have any additional information to share, you are welcome to reopen this issue.