MySqlConnector
ConnectTimeout global or per-server
Hi,
The documentation says:
FailOver: Each new connection tries to connect to the first host; subsequent hosts are used only if connecting to the first one fails.
I tried this example:
using System;
using System.Threading;
using System.Threading.Tasks;
using MySqlConnector;

static async Task Main(string[] args)
{
    Console.WriteLine("Hello World!");
    const string sql2 = "server=sql2,sql3;user id=USER_ID;password=PASSWORD;database=DB_NAME;LoadBalance=FailOver;ConnectionTimeout=2;SslMode=None;ConnectionLifeTime=300;ApplicationName=TEST;Pooling=False";
    const string sql3 = "server=sql3,sql2;user id=USER_ID;password=PASSWORD;database=DB_NAME;LoadBalance=FailOver;ConnectionTimeout=2;SslMode=None;ConnectionLifeTime=300;ApplicationName=TEST;Pooling=False";
    while (true)
    {
        try
        {
            Console.WriteLine("Connection to SQL2");
            using (MySqlConnection db = new MySqlConnection(sql2))
            {
                await db.OpenAsync();
                db.Close();
            }
            Console.WriteLine("Connection to SQL3");
            using (MySqlConnection db = new MySqlConnection(sql3))
            {
                await db.OpenAsync();
                db.Close();
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine("EXCEPTION: " + ex.Message);
        }
        Thread.Sleep(500);
    }
}
The result: I always get a timeout on open:
Hello World!
Connection to SQL2
Connection to SQL3
EXCEPTION: Connect Timeout expired.
Connection to SQL2
Connection to SQL3
EXCEPTION: Connect Timeout expired.
Connection to SQL2
Connection to SQL3
EXCEPTION: Connect Timeout expired.
Connection to SQL2
Connection to SQL3
EXCEPTION: Connect Timeout expired.
SQL2 is up and SQL3 is down. When I use the second connection string (sql3,sql2), the open throws a timeout exception without failing over to sql2.
Is this a bug, or could you explain the failover behaviour?
Regards
The ConnectTimeout is not per-server, but applies to the entire connection process. Failover will happen if the ConnectTimeout doesn't elapse when attempting to connect to the first server. (This might be the case if DNS resolution fails or if the connection is refused.)
It seems possible that MySqlConnector might need to change its algorithm to use ConnectTimeout per-server, or divide the total timeout by the number of servers.
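To make the described behaviour concrete, here is a minimal hypothetical sketch (not MySqlConnector's actual code; `OpenWithFailOverAsync` and `tryConnect` are invented names) of a single timeout budget shared across all hosts. A host that fails fast lets the loop move on; a host that hangs until the shared budget elapses leaves no time to try the next one:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

public static class SharedTimeoutSketch
{
    // Tries each host in order under ONE shared timeout budget.
    public static async Task<string> OpenWithFailOverAsync(
        IReadOnlyList<string> hosts,
        Func<string, CancellationToken, Task> tryConnect,
        TimeSpan connectTimeout)
    {
        // A single CancellationTokenSource covers the entire connection process.
        using var cts = new CancellationTokenSource(connectTimeout);
        foreach (var host in hosts)
        {
            try
            {
                await tryConnect(host, cts.Token);
                return host; // connected successfully
            }
            catch (OperationCanceledException)
            {
                // The shared budget elapsed: no time left for the next host.
                throw new TimeoutException("Connect Timeout expired.");
            }
            catch (Exception)
            {
                // Fast failure (connection refused, DNS error):
                // fall through and try the next host while budget remains.
            }
        }
        throw new TimeoutException("Unable to connect to any host.");
    }
}
```

In this sketch, a first host that is offline but hangs (packets silently dropped) consumes the whole budget, which matches the reported behaviour; a first host that refuses the connection immediately lets the second host be tried.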
I would naturally expect the connection timeout to be per server.
How would it work if you had, say, 20 servers listed (extreme example)? If your timeout is the default 15 seconds and the timeout is per server, then the cumulative connection timeout would be 5 minutes. Would that really make sense? The other possible solution would give each server a 750 ms timeout (15 seconds / 20 servers). That would make sure you have a connection within 15 seconds, but it might not give enough time for shorter connection timeouts (e.g. 1 second / 20 = 50 ms per server). Then you might get connection errors because there isn't enough time to negotiate the connection on higher-latency links.
My preference would probably be the second option, but with supporting documentation. If you have a very low latency network, you probably don't want to wait multiple seconds for a timeout, as that could affect application performance. Back to the original example in this issue of two servers with a timeout of 2 seconds: with a per-server timeout, you are looking at a minimum of 2 seconds for each new connection to sql2 when using the sql3 connection string (it tries to connect to sql3 for 2 seconds, then tries sql2). If the connection timeout were divided by the number of servers, the minimum wait before connecting to sql2 would be only 1 second.
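The arithmetic behind the two options above can be sketched as follows (a hypothetical illustration of the trade-off, not MySqlConnector behaviour; the helper names are invented):

```csharp
using System;

public static class TimeoutMath
{
    // Option 1: per-server timeout — worst-case total grows with the host count.
    public static double WorstCaseTotalSeconds(double perServerSeconds, int hostCount)
        => perServerSeconds * hostCount;

    // Option 2: divide the total ConnectTimeout evenly across all hosts.
    public static double PerServerBudgetSeconds(double totalSeconds, int hostCount)
        => totalSeconds / hostCount;

    public static void Main()
    {
        // 20 hosts at the default 15 s each: worst case is 300 s (5 minutes).
        Console.WriteLine(WorstCaseTotalSeconds(15, 20));
        // 15 s divided across 20 hosts: 0.75 s per attempt.
        Console.WriteLine(PerServerBudgetSeconds(15, 20));
        // A 1 s total timeout across 20 hosts: only 0.05 s (50 ms) each,
        // likely too short to negotiate a connection on a high-latency link.
        Console.WriteLine(PerServerBudgetSeconds(1, 20));
    }
}
```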
Found this issue after doing a similar test. Using the Load Balance=FailOver option doesn't cause the connection to fail over when a Connect Timeout occurs if the host is offline.
Am I missing the point of the Fail Over feature in the connection string?