MySQL Connection Pool Deadlock with 100% CPU Usage in Multi-threaded Environment
SQLx MySQL connection pool experiences a complete deadlock when multiple tokio::spawn tasks attempt to acquire connections simultaneously. The first task succeeds, but all subsequent tasks hang indefinitely at pool.acquire().await, CPU usage climbs above 100% (188% in our case), and the application freezes.
Description
Symptoms
- First tokio::spawn task successfully acquires connection and executes query
- All subsequent tasks hang at pool.acquire().await
- CPU usage spikes to 100%+ (in our case 188%)
- tokio::time::timeout wrappers become ineffective (see the sketch after this list)
- Application becomes completely unresponsive
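For context, the timeout symptom above refers to wrapping the acquire call in tokio::time::timeout. A minimal sketch of that pattern (illustrative only, not part of the repro below; the helper name is made up) looks like this:

use sqlx::pool::PoolConnection;
use sqlx::{MySql, Pool};
use tokio::time::{timeout, Duration};

// Wrap acquire() in a timeout so a stuck pool should surface as an
// error instead of hanging the task forever. In the reported scenario
// even this future reportedly never resolves.
async fn acquire_with_timeout(pool: &Pool<MySql>) -> anyhow::Result<PoolConnection<MySql>> {
    let conn = timeout(Duration::from_secs(5), pool.acquire()).await??;
    Ok(conn)
}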
Test Code
use dotenv::dotenv;
use sqlx::mysql::MySqlPoolOptions;
use sqlx::{MySql, MySqlPool, Pool};
use std::sync::Arc;
use tokio::time::Duration;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    dotenv().ok();
    let database_url = "mysql://root:123456@localhost:3306/remote_rental";

    let pool = MySqlPoolOptions::new()
        .max_connections(5)
        .min_connections(1)
        .acquire_timeout(Duration::from_secs(3))
        .idle_timeout(Duration::from_secs(30))
        .max_lifetime(Duration::from_secs(1800))
        .connect(&database_url)
        .await?;

    let app_state = Arc::new(AppState::new(pool).await);

    for i in 0..5 {
        let state = app_state.clone();
        let handle = tokio::spawn(async move {
            let tid = std::thread::current().id();
            println!("Thread[{:?}] Start", tid);

            let mut conn = state.db.acquire().await.unwrap();
            println!("Thread[{:?}] Acquire Connection", tid);

            let result: (i32,) = sqlx::query_as("SELECT 1 as test")
                .fetch_one(&mut *conn)
                .await
                .unwrap();
            println!("Thread[{:?}] result : {:?}", tid, result.0);

            drop(conn);
            println!("Thread[{:?}] End", tid);
        });
    }

    println!("Press enter to exit.");
    tokio::time::sleep(Duration::from_secs(60)).await;
    Ok(())
}

pub struct AppState {
    pub db: Pool<MySql>,
}

impl AppState {
    pub async fn new(db: Pool<MySql>) -> Self {
        Self { db }
    }
}
Observed Logs:
2025-07-29T05:34:00.124141Z INFO: Database pool initialized - max: 5, min: 1
2025-07-29T05:34:00.124198Z INFO: Thread [ThreadId(10)] starting
2025-07-29T05:34:00.124213Z INFO: Thread [ThreadId(9)] starting
2025-07-29T05:34:00.124209Z INFO: Thread [ThreadId(11)] starting
2025-07-29T05:34:00.124232Z INFO: Thread [ThreadId(4)] starting
2025-07-29T05:34:00.124243Z INFO: Thread [ThreadId(5)] starting
2025-07-29T05:34:00.130084Z INFO: Thread [ThreadId(10)] acquired connection
// ALL OTHER THREADS HANG HERE - NO MORE LOGS
// CPU usage spikes to 188%
[package]
name = "test_sqlx"
version = "0.1.0"
edition = "2024"

[dependencies]
tokio = { version = "1.47.0", features = ["full"] }
sqlx = { version = "0.8.6", features = ["mysql", "runtime-tokio", "tls-rustls", "macros", "chrono", "uuid"] }
dotenv = "0.15.0"
anyhow = "1.0.98"
SQLx version
0.8.6
Enabled SQLx features
"mysql", "runtime-tokio", "tls-rustls", "macros", "chrono", "uuid"
Database server and version
mysql 8.0.31
Operating system
macos 15.5
Rust version
rustc 1.88.0
@wangxiaore888 this could use more information:
- The logs you give don't match the `println!()` statements in the code. If you have modified the code from the original, please post the logs from running this actual code snippet.
- Please enable SQLx logging (see the setup sketch after this list):
  - Install a `tracing` subscriber or `env_logger`.
  - Set `RUST_LOG=sqlx=debug` in the environment.
  - Re-run the code.
- You do not specify how long you waited to be sure that it was a deadlock.
- If you can, it would be very helpful to capture a backtrace of the hung thread(s) in a debugger.
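For the logging step, a minimal setup sketch (assuming `tracing-subscriber` with its `env-filter` feature is added to the project) would be:

use tracing_subscriber::EnvFilter;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Honors RUST_LOG, e.g. run with `RUST_LOG=sqlx=debug cargo run`,
    // so SQLx pool/connection events show up in the output.
    tracing_subscriber::fmt()
        .with_env_filter(EnvFilter::from_default_env())
        .init();

    // ...build the pool and spawn the tasks exactly as in the snippet above...
    Ok(())
}

For the backtrace, one option on macOS is to attach a debugger to the hung process with `lldb -p <pid>` and run `thread backtrace all`.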
This is probably, just like #3645, caused by this bug in rustls 0.23.30: this issue was opened just after that rustls release (0.23.30 was released on 27 Jul 2025).
@wangxiaore888 could you try to reproduce the issue with a different rustls version? I can't reproduce this with rustls 0.23.33.
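If it helps with that test, one way to pull in a newer rustls without editing Cargo.toml (assuming the lockfile currently resolves rustls to 0.23.30) is to bump just that crate in the lockfile and re-run the repro:

# Pin the transitive rustls dependency to a patched release, then rebuild.
cargo update -p rustls --precise 0.23.33
cargo run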