r2dbc-mysql
Pool hangs when trying to get connections just before it's closed on MySQL server timeout
Hello, I saw #82, which was marked as resolved. I'm using r2dbc-pool 0.8.3, but I'm still facing a similar issue.
Version
spring-data-r2dbc 1.1.2.RELEASE
r2dbc-pool 0.8.3.RELEASE
r2dbc-mysql 0.8.2.RELEASE
Case
I saw that if connections are closed on the server side while there are no incoming requests, they are cleaned up normally as expected: a WARN log says Connection has been closed by peer, and a new connection is established on the next incoming request.
However, when a request has already acquired a connection and the server closes that connection before the statement is executed, the logs below are printed and the pool hangs forever.
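For context, the requests that end up hanging go through a handler shaped roughly like the sketch below. The endpoint, table, and controller names are hypothetical; only the call chain (DatabaseClient -> Flux -> Flow.toList) is taken from the frames in the stack trace further down.

import kotlinx.coroutines.flow.toList
import kotlinx.coroutines.reactive.asFlow
import org.springframework.data.r2dbc.core.DatabaseClient
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RestController

// Hypothetical endpoint and table; shown only to illustrate where the hang is observed.
@RestController
class MyController(private val databaseClient: DatabaseClient) {

    @GetMapping("/items")
    suspend fun items(): List<Map<String, Any>> =
        databaseClient.execute("SELECT * FROM item") // acquires a pooled connection
            .fetch()
            .all()      // Flux<Map<String, Any>>
            .asFlow()   // bridge to a kotlinx.coroutines Flow
            .toList()   // suspends forever once the pool is stuck
}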
Configuration
- MySQL
  wait_timeout: 8h
- R2DBC (application.yml)
  spring:
    r2dbc:
      pool:
        initial-size: 10
        max-size: 30
        max-idle-time: 15s
        validation-query: SELECT 1
      properties:
        connect-timeout: 10
        ssl: false
        max-create-connection-time: 3
        max-acquire-time: 5
        max-life-time: 15
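For reference, a rough programmatic equivalent of the configuration above, in case it helps reproduce the problem outside Spring Boot. Host, credentials, and database name are placeholders, and I'm assuming the timeout properties map onto the r2dbc-pool 0.8.x / dev.miku r2dbc-mysql 0.8.x builder options as shown; this is a sketch, not the exact wiring Spring Boot performs.

import dev.miku.r2dbc.mysql.MySqlConnectionConfiguration
import dev.miku.r2dbc.mysql.MySqlConnectionFactory
import io.r2dbc.pool.ConnectionPool
import io.r2dbc.pool.ConnectionPoolConfiguration
import java.time.Duration

fun buildPool(): ConnectionPool {
    // Placeholder host/credentials; only the timeouts mirror the YAML above.
    val connectionFactory = MySqlConnectionFactory.from(
        MySqlConnectionConfiguration.builder()
            .host("localhost")
            .port(3306)
            .user("app")
            .password("secret")
            .database("test")
            .connectTimeout(Duration.ofSeconds(10))     // connect-timeout: 10
            .build()
    )

    val poolConfiguration = ConnectionPoolConfiguration.builder(connectionFactory)
        .initialSize(10)                                // initial-size: 10
        .maxSize(30)                                    // max-size: 30
        .maxIdleTime(Duration.ofSeconds(15))            // max-idle-time: 15s
        .maxCreateConnectionTime(Duration.ofSeconds(3)) // max-create-connection-time: 3
        .maxAcquireTime(Duration.ofSeconds(5))          // max-acquire-time: 5
        .maxLifeTime(Duration.ofSeconds(15))            // max-life-time: 15
        .validationQuery("SELECT 1")                    // validation-query: SELECT 1
        .build()

    return ConnectionPool(poolConfiguration)
}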
Logs
Here are the logs. My request arrived right before the MySQL server closed the connections. As you can see, after the server closed them on wait_timeout, 7 of the 10 connections were closed normally, but an exception occurred while closing the next one (the one that had been acquired by the incoming request). After this, every request that goes through MySQL hangs, and the number of connections stays at 7, whereas it should return to 10 in the normal case.
2020-10-24 05:40:00.833 INFO 1 --- [or-http-epoll-3] c.l.l.my.package.controller.MyController : Request comes in
2020-10-24 05:40:00.837 WARN 1 --- [tor-tcp-epoll-2] d.m.r.mysql.client.MessageDuplexCodec : Connection has been closed by peer
2020-10-24 05:40:00.838 WARN 1 --- [tor-tcp-epoll-2] d.m.r.mysql.client.MessageDuplexCodec : Connection has been closed by peer
2020-10-24 05:40:00.839 WARN 1 --- [tor-tcp-epoll-2] d.m.r.mysql.client.MessageDuplexCodec : Connection has been closed by peer
2020-10-24 05:40:00.840 WARN 1 --- [tor-tcp-epoll-2] d.m.r.mysql.client.MessageDuplexCodec : Connection has been closed by peer
2020-10-24 05:40:00.841 WARN 1 --- [tor-tcp-epoll-2] d.m.r.mysql.client.MessageDuplexCodec : Connection has been closed by peer
2020-10-24 05:40:00.841 WARN 1 --- [tor-tcp-epoll-2] d.m.r.mysql.client.MessageDuplexCodec : Connection has been closed by peer
2020-10-24 05:40:00.841 WARN 1 --- [tor-tcp-epoll-2] d.m.r.mysql.client.MessageDuplexCodec : Connection has been closed by peer
2020-10-24 05:40:00.844 ERROR 1 --- [or-http-epoll-3] reactor.core.publisher.Operators : Operator called default onErrorDropped
dev.miku.r2dbc.mysql.client.MySqlConnectionException: null
at dev.miku.r2dbc.mysql.client.ClientExceptions.wrap(ClientExceptions.java:47)
at dev.miku.r2dbc.mysql.client.ReactorNettyClient.resumeError(ReactorNettyClient.java:214)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:88)
at reactor.core.publisher.FluxConcatMap$ConcatMapImmediate.innerError(FluxConcatMap.java:301)
at reactor.core.publisher.FluxConcatMap$ConcatMapInner.onError(FluxConcatMap.java:863)
at reactor.core.publisher.Operators.error(Operators.java:196)
at reactor.core.publisher.MonoError.subscribe(MonoError.java:52)
at reactor.core.publisher.Mono.subscribe(Mono.java:4213)
at reactor.netty.NettyOutbound.subscribe(NettyOutbound.java:331)
at reactor.core.publisher.FluxConcatMap$ConcatMapImmediate.drain(FluxConcatMap.java:441)
at reactor.core.publisher.FluxConcatMap$ConcatMapImmediate.onNext(FluxConcatMap.java:243)
at reactor.core.publisher.EmitterProcessor.drain(EmitterProcessor.java:426)
at reactor.core.publisher.EmitterProcessor.onNext(EmitterProcessor.java:268)
at dev.miku.r2dbc.mysql.client.ReactorNettyClient.lambda$null$12(ReactorNettyClient.java:205)
at reactor.core.publisher.MonoRunnable.call(MonoRunnable.java:73)
at reactor.core.publisher.MonoRunnable.call(MonoRunnable.java:32)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:132)
at reactor.core.publisher.MonoCreate$DefaultMonoSink.success(MonoCreate.java:156)
at dev.miku.r2dbc.mysql.client.RequestTask.run(RequestTask.java:46)
at dev.miku.r2dbc.mysql.client.RequestQueue.submit(RequestQueue.java:93)
at dev.miku.r2dbc.mysql.client.ReactorNettyClient.lambda$close$13(ReactorNettyClient.java:205)
at reactor.core.publisher.MonoCreate.subscribe(MonoCreate.java:57)
at reactor.core.publisher.Mono.subscribe(Mono.java:4213)
at reactor.core.publisher.MonoIgnoreThen$ThenIgnoreMain.drain(MonoIgnoreThen.java:172)
at reactor.core.publisher.MonoIgnoreThen.subscribe(MonoIgnoreThen.java:56)
at reactor.core.publisher.Mono.subscribe(Mono.java:4213)
at reactor.core.publisher.Mono.subscribeWith(Mono.java:4324)
at reactor.core.publisher.Mono.subscribe(Mono.java:4184)
at reactor.core.publisher.Mono.subscribe(Mono.java:4120)
at reactor.pool.SimplePool.drainLoop(SimplePool.java:283)
at reactor.pool.SimplePool.drain(SimplePool.java:187)
at reactor.pool.SimplePool.doAcquire(SimplePool.java:136)
at reactor.pool.AbstractPool$Borrower.request(AbstractPool.java:363)
at reactor.core.publisher.FluxPeek$PeekSubscriber.request(FluxPeek.java:130)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:103)
at reactor.core.publisher.FluxPeek$PeekSubscriber.onSubscribe(FluxPeek.java:163)
at reactor.pool.SimplePool$QueueBorrowerMono.subscribe(SimplePool.java:381)
at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)
at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
at reactor.core.publisher.FluxRetry$RetrySubscriber.resubscribe(FluxRetry.java:110)
at reactor.core.publisher.MonoRetry.subscribeOrReturn(MonoRetry.java:49)
at reactor.core.publisher.Mono.subscribe(Mono.java:4198)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:97)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onError(MonoFlatMap.java:165)
at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onError(FluxMapFuseable.java:134)
at reactor.core.publisher.FluxHandleFuseable$HandleFuseableSubscriber.onNext(FluxHandleFuseable.java:185)
at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2344)
at reactor.core.publisher.FluxHandleFuseable$HandleFuseableSubscriber.request(FluxHandleFuseable.java:243)
at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.request(FluxMapFuseable.java:162)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:103)
at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onSubscribe(FluxMapFuseable.java:90)
at reactor.core.publisher.FluxHandleFuseable$HandleFuseableSubscriber.onSubscribe(FluxHandleFuseable.java:148)
at reactor.core.publisher.MonoCurrentContext.subscribe(MonoCurrentContext.java:35)
at reactor.core.publisher.Mono.subscribe(Mono.java:4213)
at reactor.core.publisher.FluxUsingWhen.subscribe(FluxUsingWhen.java:103)
at reactor.core.publisher.Flux.subscribe(Flux.java:8325)
at kotlinx.coroutines.reactive.PublisherAsFlow.collectImpl(ReactiveFlow.kt:83)
at kotlinx.coroutines.reactive.PublisherAsFlow.collect(ReactiveFlow.kt:68)
at kotlinx.coroutines.flow.FlowKt__CollectionKt.toCollection(Collection.kt:32)
at kotlinx.coroutines.flow.FlowKt.toCollection(Unknown Source)
at kotlinx.coroutines.flow.FlowKt__CollectionKt.toList(Collection.kt:15)
at kotlinx.coroutines.flow.FlowKt.toList(Unknown Source)
at kotlinx.coroutines.flow.FlowKt__CollectionKt.toList$default(Collection.kt:15)
at kotlinx.coroutines.flow.FlowKt.toList$default(Unknown Source)
	......
Reproduce step
For the test, I changed MySQL's wait_timeout to 15s, and max-idle-time and max-life-time in the R2DBC config to 10s.
I set a breakpoint on the line below, in the ExecuteSpecSupport class.
After the connections are established, I wait at the breakpoint until 15s have passed, so that the connection expires on the server side.
When I resume after that, I can reproduce the issue.
return new DefaultSqlResult<>(DefaultDatabaseClient.this, // <<<< Here
sql, //
resultFunction, //
it -> sumRowsUpdated(resultFunction, it), //
mappingFunction);
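As a rough alternative to the debugger-based reproduction, something like the sketch below should create the same timing: acquire a connection from the pool, sleep past the lowered wait_timeout so the server closes the socket, then try to use the connection. It reuses the hypothetical buildPool() helper from the configuration sketch above and assumes wait_timeout has been lowered to 15s; I haven't verified this exact snippet against the issue.

import kotlinx.coroutines.delay
import kotlinx.coroutines.reactive.awaitFirst
import kotlinx.coroutines.reactive.awaitFirstOrNull
import kotlinx.coroutines.runBlocking

fun main() = runBlocking<Unit> {
    val pool = buildPool()                      // hypothetical helper from the configuration sketch
    val connection = pool.create().awaitFirst() // acquire before the server timeout hits

    delay(20_000)                               // sleep past wait_timeout (15s) so MySQL closes the socket

    try {
        // The statement runs on a connection the server has already closed,
        // which is the same situation the breakpoint above holds the request in.
        connection.createStatement("SELECT 1").execute().awaitFirst()
    } finally {
        connection.close().awaitFirstOrNull()
        pool.dispose()
    }
}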
Hello, is there any estimate of when this issue will be fixed? Thanks.
I've just posted a question that looks related to this issue and to #82 (link).
Here is the answer: https://stackoverflow.com/a/75023023/12553450
I have the same issue. Is there any update on this?