components-contrib
Redis Streams pubsub - connection pool timeout
In Dapr 1.8, customers are reporting issues with Redis Streams pubsub:
2022-08-18T03:34:15.520779106Z time="2022-08-18T03:34:15.520525903Z" level=error msg="redis streams: error reading from stream Redacted: redis: connection pool timeout" app_id=redacted instance=redacted scope=dapr.contrib type=log ver=1.8.3-msft-1
2022-08-18T03:34:16.205565661Z time="2022-08-18T03:34:16.205382359Z" level=error msg="error retrieving pending Redis messages: redis: connection pool timeout" app_id=redacted instance=redacted scope=dapr.contrib type=log ver=1.8.3-msft-1
This happens with both Redis 6 and Redis 7, and did not happen with Dapr 1.7. Increasing poolSize to 20 mitigated the issue (example manifest below).
See: https://github.com/microsoft/azure-container-apps/issues/365
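For reference, poolSize is set in the Redis component's metadata. A minimal sketch of a pubsub.redis manifest with the value the reporter used (component name, Redis host, and secret reference are placeholders, not taken from this report):

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: my-redis-pubsub           # placeholder name
spec:
  type: pubsub.redis
  version: v1
  metadata:
  - name: redisHost
    value: my-redis:6379          # placeholder host
  - name: redisPassword
    secretKeyRef:
      name: redis                 # placeholder secret name
      key: redis-password
  - name: poolSize
    value: "20"                   # raised from the default; mitigated the timeouts in this report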
@Taction is this something you could take a look at please?
Sure, I'll look into this later.
I've reproduced this and found the reason: poolSize should be greater than the number of subscribed topics, because every subscription topic needs a connection and poolSize defines the maximum number of socket connections. I think we can set a bigger default value for poolSize (100, maybe) and expand the poolSize description in the docs. @ItalyPaleAle WDYT?
This issue has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in the next 7 days unless it is tagged (pinned, good first issue, help wanted or triaged/resolved) or other activity occurs. Thank you for your contributions.
This issue has been automatically closed because it has not had activity in the last 37 days. If this issue is still valid, please ping a maintainer and ask them to label it as pinned, good first issue, help wanted or triaged/resolved. Thank you for your contributions.
I'm not quite sure why this was an issue for us: the application failing with this error is subscribed to only 10 topics, and the default poolSize seems to be 20.
So what is the total number of topics? Do we have to count every topic from every application in the cluster, or just the ones this application is listening to?
Anyway, we added the poolSize property on both the pubsub and statestore components by running:
kubectl edit component/<component-name>
and adding the metadata entry shown in the sketch below. Then we had to restart the dapr-placement-server StatefulSet and this application's deployment, and it started working properly.
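For illustration, a hedged sketch of the same edit applied to the state component (name, host, and the exact poolSize value are placeholders; the pubsub component gets an identical poolSize entry):

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore                # placeholder name
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: my-redis:6379          # placeholder host
  - name: redisPassword
    secretKeyRef:
      name: redis                 # placeholder secret name
      key: redis-password
  - name: poolSize
    value: "20"                   # assumed value; keep it above the number of subscribed topics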
It would be great to increase the default poolSize value for these components.