
The dangers of `pool_max_idle_time` defaulting to `:infinity`

Open PragTob opened this issue 9 months ago • 8 comments

👋

As usual, thanks a ton for finch - it's been serving us well!

Over the weekend we ran into an issue where our system ran out of processes, and if my analysis is correct that was due to too many HTTP pools that were never terminated. This morning, in ~4 hours, our related finch configuration made it to ~30k children. I think this is because one of the endpoints we're adding seems to have a subdomain as part of its domain (dqX1PIrNfb81743399336.bla.woot.com), and these subdomains seem to change frequently. So if, for each of these (it of course depends on how often we get them), we create `count` many new HTTP pools (which for us was ~40, which may be a bit much, but see #279) and never terminate them, we eventually run out of processes.

This brings me to a question: is `:infinity` a good default for `pool_max_idle_time`?

And honestly, I don't know. It's clearly somewhat dangerous (if I'm understanding it right), but for most applications it should not be - it stands to reason that you'd only ever talk to a fixed set of known origins, and so you'd never notice the explosion we've seen.

We only ran into these problems because:

  • we're acting as a fancy proxy with unknown destinations, so we'll hit more origins than the average application
  • we have that one destination (ok, multiple actually) that decides to make random strings part of its subdomain

So, in summary - I guess it's a safe default for 99.99% of users and the 0.01% should know better (that's me! 😁). But I think documenting the impact of this default better is worth it, and I'm happy to do so. I looked at it this morning and thought "maybe that's it... nah, if it was then it wouldn't be the default, I should trust the default", and it took me some time to circle back around.
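
For context, opting out of the default is just a pool option; a minimal sketch of what I mean, assuming a normal supervision tree and with placeholder names (MyApp.Finch is made up, pool_max_idle_time is the real option):

    children = [
      {Finch,
       name: MyApp.Finch,
       pools: %{
         default: [
           # idle pools get shut down after an hour instead of living forever
           pool_max_idle_time: :timer.hours(1)
         ]
       }}
    ]

    Supervisor.start_link(children, strategy: :one_for_one)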

What do you think? :)

PragTob avatar Mar 31 '25 12:03 PragTob

Also, thanks!

PragTob avatar Mar 31 '25 12:03 PragTob

Hey @PragTob ! Thanks for the issue.

I guess :infinity is the default because setting the idle timeout introduces a very small amount of overhead in NimblePool.

With Req taking off, meaning Finch is more often used as part of a full-featured, user-friendly HTTP client, I would argue that `:infinity` is probably no longer a safe default for the majority of users.

I'm definitely open to discussing a new default value, and users who need to minimize overhead can manually set it to nil.

What would you suggest as a reasonable default?

If you ran out of processes in just 4 hours, then I think you would probably need an idle timeout that is much lower than most users would need.

sneako avatar Mar 31 '25 13:03 sneako

PR setting a different default in Req would be appreciated!

wojtekmach avatar Mar 31 '25 13:03 wojtekmach

👋 Oh no, we didn't run out of processes in 4 hours. The supervisor just had 30k children within that time, and we were at ~41k processes total IIRC (the maximum being ~1 million). This was during a low-traffic time. It took us about 22 hours to run out of processes, it seems - but only after I had set the pool count to 70 (on :default) 😅 And that's with between 100k-200k requests/hour processed by that particular finch configuration. That said, I think it depends more on how many of the "weird" requests/origins we're getting than on anything else. Before this, the application ran for around 2 months without a single restart and all was fine (we got some new traffic in the middle of last week).

The question about the default is a great one. I set mine to 1 hour for now - the traffic we get is repeating in a pre-determined pattern (mostly).

I guess it's tough to find a great default for everyone... 12 hours? 24 hours? My one hour? I honestly don't know, but I think it's ok to go long, as most people should rarely, if ever, need this...

We hit the system limit early in the weekend (Saturday morning, actually) and I only saw it this Monday morning (no on-call so far 🤞). Much to my surprise, the system remained mostly responsive (we of course produced a bunch of 500s, but a bigger chunk of requests still made it through); part of the reason is also that we have 2 types of traffic and each has its own finch configuration.

PragTob avatar Mar 31 '25 14:03 PragTob

👋

Something doesn't quite add up yet. We're still seeing unbounded growth in the number of pools, i.e. after ~3 days (I had decreased the pool count) we were at 440k processes again, 435k of which were linked to that specific supervisor.

So either:

  1. I configured it wrong
  2. there is a bug somewhere
  3. the connections are weird / they are constantly in use, so that it is "correct" to inflate like this (I highly doubt all of them are kept in constant use every hour though; our traffic shifts way too much for that)
  4. ??? magic ???

The config right now looks roughly like this:

    [
      name: Parallax.PopHTTPPool,
      pools: %{
        default: [
          size: 150,
          count: 40,
          pool_max_idle_time: :timer.hours(1),
          conn_opts: [skip_target_validation: true]
        ]
      }
    ]

I'll see if I can reproduce and debug locally, if not I'll need to do it on the remote (right now at ~150k processes).
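
For the remote check I'll probably just run something like this in a remote shell - as far as I can tell Finch registers every pool process in a Registry named after the Finch instance, so the registration count should roughly track the number of live pools (the module name below is just our config):

    finch_name = Parallax.PopHTTPPool

    # pool processes registered in Finch's registry
    Registry.count(finch_name) |> IO.inspect(label: "registered pools")

    # total BEAM processes, to compare against the system limit
    length(Process.list()) |> IO.inspect(label: "total processes")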

I'll let you know what I find.

PragTob avatar Apr 09 '25 09:04 PragTob

I'm guessing I'm doing something wrong and I'll read the docs again after lunch, but this is my current script in which the pools are seemingly not shut down, although I'd expect them to be (they were definitely idle for 15 seconds with a max idle time of one second):

finch_name = Tobi.Tries
_finch = Finch.start_link(name: finch_name, pools: %{default: [count: 3, pool_max_idle_time: to_timeout(second: 1), start_pool_metrics?: true]})


urls = ["http://www.google.com", "https://www.screenversemedia.com/", "http://hex.pm"]

urls
|> Enum.map(fn url ->
  Task.async(fn -> Req.get!(url, finch: finch_name) end)
end)
|> Task.await_many()

urls
|> Enum.map(& (Finch.get_pool_status(finch_name, &1)))
|> dbg()

# should kill the connections/pools
Process.sleep(to_timeout(second: 15))

urls
|> Enum.map(& (Finch.get_pool_status(finch_name, &1)))
|> dbg()
Output:
urls #=> ["http://www.google.com", "https://www.screenversemedia.com/", "http://hex.pm"]
|> Enum.map(&Finch.get_pool_status(finch_name, &1)) #=> [
  ok: [
    %Finch.HTTP1.PoolMetrics{
      pool_index: 1,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    },
    %Finch.HTTP1.PoolMetrics{
      pool_index: 2,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    },
    %Finch.HTTP1.PoolMetrics{
      pool_index: 3,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    }
  ],
  ok: [
    %Finch.HTTP1.PoolMetrics{
      pool_index: 1,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    },
    %Finch.HTTP1.PoolMetrics{
      pool_index: 2,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    },
    %Finch.HTTP1.PoolMetrics{
      pool_index: 3,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    }
  ],
  ok: [
    %Finch.HTTP1.PoolMetrics{
      pool_index: 1,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    },
    %Finch.HTTP1.PoolMetrics{
      pool_index: 2,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    },
    %Finch.HTTP1.PoolMetrics{
      pool_index: 3,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    }
  ]
]

[finch.exs:22: (file)]
urls #=> ["http://www.google.com", "https://www.screenversemedia.com/", "http://hex.pm"]
|> Enum.map(&Finch.get_pool_status(finch_name, &1)) #=> [
  ok: [
    %Finch.HTTP1.PoolMetrics{
      pool_index: 1,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    },
    %Finch.HTTP1.PoolMetrics{
      pool_index: 2,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    },
    %Finch.HTTP1.PoolMetrics{
      pool_index: 3,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    }
  ],
  ok: [
    %Finch.HTTP1.PoolMetrics{
      pool_index: 1,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    },
    %Finch.HTTP1.PoolMetrics{
      pool_index: 2,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    },
    %Finch.HTTP1.PoolMetrics{
      pool_index: 3,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    }
  ],
  ok: [
    %Finch.HTTP1.PoolMetrics{
      pool_index: 1,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    },
    %Finch.HTTP1.PoolMetrics{
      pool_index: 2,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    },
    %Finch.HTTP1.PoolMetrics{
      pool_index: 3,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    }
  ]
]


dep versions:

* req 0.5.10 (Hex package) (mix)
* finch 0.19.0 (Hex package) (mix)
* nimble_pool 1.1.0 (Hex package) (mix)

PragTob avatar Apr 09 '25 09:04 PragTob

Ok, I adjusted the script a bit to get the actual pools from the registry, in case the pool metrics weren't accurate:

finch_name = Tobi.Tries
_finch = Finch.start_link(name: finch_name, pools: %{default: [count: 3, pool_max_idle_time: to_timeout(second: 1), start_pool_metrics?: true]})


urls = ["http://www.google.com", "https://www.screenversemedia.com/", "http://hex.pm"]

urls
|> Enum.map(fn url ->
  Task.async(fn -> Req.get!(url, finch: finch_name) end)
end)
|> Task.await_many()

urls
|> Enum.map(& (Finch.get_pool_status(finch_name, &1)))
|> dbg()

urls
|> Enum.map(&Finch.Request.parse_url/1)
|> Enum.map(fn {s, h, p, _, _} -> {{s, h, p}, Registry.lookup(finch_name, {s, h, p}) |> length()} end)
|> dbg()


# should kill the connections/pools
Process.sleep(to_timeout(second: 5))

urls
|> Enum.map(& (Finch.get_pool_status(finch_name, &1)))
|> dbg()


urls
|> Enum.map(&Finch.Request.parse_url/1)
|> Enum.map(fn {s, h, p, _, _} -> {{s, h, p}, Registry.lookup(finch_name, {s, h, p}) |> length()} end)
|> dbg()

I have a bunch of debug statements all over, but the most interesting bits are:

  • the handle_ping callback is only called 4 times (once per origin, plus once more for the extra https hex.pm pool that the redirect created) - I'd have expected 9 times (once for each pool: 3 origins with 3 pools each)
  • looking up the pools I see that for each origin exactly 2 pools remain (put another way, exactly 1 pool was removed)

I'm not sure about the interaction here. Since we have multiple pools for the "same" origin, maybe finch should implement terminate_pool/2 to terminate all the other pools of that origin as well? As it stands, I'd expect them to be terminated one after the other, leaving you with fewer pools than the configured count. But even that doesn't seem to happen - the shutdowns stop somewhere, and I don't know why yet.
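
To make that idea a bit more concrete, roughly something like this - purely illustrative, terminate_origin/2 is a made-up name and not Finch API, and I'm ignoring supervision/restart semantics:

    defmodule TerminateOriginSketch do
      # Illustrative only: stop *all* pools registered for one origin at once,
      # instead of letting them idle out one by one.
      def terminate_origin(finch_name, {_scheme, _host, _port} = shp) do
        finch_name
        |> Registry.lookup(shp)
        |> Enum.each(fn {pid, _pool_mod} ->
          # the pools are NimblePool (GenServer-based) processes, so a plain
          # stop is enough for the sketch
          GenServer.stop(pid, :normal)
        end)
      end
    end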

I can also take the script and put it into a separate repo, right now it's in the repo of my application.

Full debug logs (probably not useful to anyone):
tobi@qiqi:~/screenverse/multiverse/parallax(main)$ mix run finch.exs 
[(finch 0.19.0) lib/finch/http1/pool.ex:37: Finch.HTTP1.Pool.start_link/1]
pool_idle_timeout(pool_max_idle_time) #=> 1000

[(finch 0.19.0) lib/finch/http1/pool.ex:37: Finch.HTTP1.Pool.start_link/1]
pool_idle_timeout(pool_max_idle_time) #=> 1000

[(finch 0.19.0) lib/finch/http1/pool.ex:37: Finch.HTTP1.Pool.start_link/1]
pool_idle_timeout(pool_max_idle_time) #=> 1000

[(finch 0.19.0) lib/finch/http1/pool.ex:37: Finch.HTTP1.Pool.start_link/1]
pool_idle_timeout(pool_max_idle_time) #=> 1000

[(finch 0.19.0) lib/finch/http1/pool.ex:37: Finch.HTTP1.Pool.start_link/1]
pool_idle_timeout(pool_max_idle_time) #=> 1000

[(finch 0.19.0) lib/finch/http1/pool.ex:37: Finch.HTTP1.Pool.start_link/1]
pool_idle_timeout(pool_max_idle_time) #=> 1000

[(finch 0.19.0) lib/finch/http1/pool.ex:37: Finch.HTTP1.Pool.start_link/1]
pool_idle_timeout(pool_max_idle_time) #=> 1000

[(finch 0.19.0) lib/finch/http1/pool.ex:37: Finch.HTTP1.Pool.start_link/1]
pool_idle_timeout(pool_max_idle_time) #=> 1000

[(finch 0.19.0) lib/finch/http1/pool.ex:37: Finch.HTTP1.Pool.start_link/1]
pool_idle_timeout(pool_max_idle_time) #=> 1000

{"message":"redirecting to https://hex.pm/","time":"2025-04-09T12:39:12.122Z","metadata":{},"severity":"debug"}
[(finch 0.19.0) lib/finch/http1/pool.ex:37: Finch.HTTP1.Pool.start_link/1]
pool_idle_timeout(pool_max_idle_time) #=> 1000

[(finch 0.19.0) lib/finch/http1/pool.ex:37: Finch.HTTP1.Pool.start_link/1]
pool_idle_timeout(pool_max_idle_time) #=> 1000

[(finch 0.19.0) lib/finch/http1/pool.ex:37: Finch.HTTP1.Pool.start_link/1]
pool_idle_timeout(pool_max_idle_time) #=> 1000

[(finch 0.19.0) lib/finch/pool_manager.ex:59: Finch.PoolManager.lookup_pool/2]
pools #=> [
  {#PID<0.872.0>, Finch.HTTP1.Pool},
  {#PID<0.873.0>, Finch.HTTP1.Pool},
  {#PID<0.874.0>, Finch.HTTP1.Pool}
]

[(finch 0.19.0) lib/finch/pool_manager.ex:59: Finch.PoolManager.lookup_pool/2]
pools #=> [
  {#PID<0.869.0>, Finch.HTTP1.Pool},
  {#PID<0.870.0>, Finch.HTTP1.Pool},
  {#PID<0.871.0>, Finch.HTTP1.Pool}
]

[(finch 0.19.0) lib/finch/pool_manager.ex:59: Finch.PoolManager.lookup_pool/2]
pools #=> [
  {#PID<0.866.0>, Finch.HTTP1.Pool},
  {#PID<0.867.0>, Finch.HTTP1.Pool},
  {#PID<0.868.0>, Finch.HTTP1.Pool}
]

[finch.exs:15: (file)]
urls #=> ["http://www.google.com", "https://www.screenversemedia.com/", "http://hex.pm"]
|> Enum.map(&Finch.get_pool_status(finch_name, &1)) #=> [
  ok: [
    %Finch.HTTP1.PoolMetrics{
      pool_index: 1,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    },
    %Finch.HTTP1.PoolMetrics{
      pool_index: 2,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    },
    %Finch.HTTP1.PoolMetrics{
      pool_index: 3,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    }
  ],
  ok: [
    %Finch.HTTP1.PoolMetrics{
      pool_index: 1,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    },
    %Finch.HTTP1.PoolMetrics{
      pool_index: 2,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    },
    %Finch.HTTP1.PoolMetrics{
      pool_index: 3,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    }
  ],
  ok: [
    %Finch.HTTP1.PoolMetrics{
      pool_index: 1,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    },
    %Finch.HTTP1.PoolMetrics{
      pool_index: 2,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    },
    %Finch.HTTP1.PoolMetrics{
      pool_index: 3,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    }
  ]
]

[finch.exs:20: (file)]
urls #=> ["http://www.google.com", "https://www.screenversemedia.com/", "http://hex.pm"]
|> Enum.map(&Finch.Request.parse_url/1) #=> [
  {:http, "www.google.com", 80, "/", nil},
  {:https, "www.screenversemedia.com", 443, "/", nil},
  {:http, "hex.pm", 80, "/", nil}
]
|> Enum.map(fn {s, h, p, _, _} ->
  {{s, h, p}, Registry.lookup(finch_name, {s, h, p}) |> length()}
end) #=> [
  {{:http, "www.google.com", 80}, 3},
  {{:https, "www.screenversemedia.com", 443}, 3},
  {{:http, "hex.pm", 80}, 3}
]

[(finch 0.19.0) lib/finch/http1/pool.ex:252: Finch.HTTP1.Pool.handle_ping/2]
"handle ping!" #=> "handle ping!"

[(finch 0.19.0) lib/finch/http1/pool.ex:252: Finch.HTTP1.Pool.handle_ping/2]
"handle ping!" #=> "handle ping!"

[(finch 0.19.0) lib/finch/http1/pool.ex:252: Finch.HTTP1.Pool.handle_ping/2]
"handle ping!" #=> "handle ping!"

[(finch 0.19.0) lib/finch/http1/pool.ex:253: Finch.HTTP1.Pool.handle_ping/2]
pool_state #=> {Tobi.Tries, {:http, "hex.pm", 80}, 1, #Reference<0.2700987968.3880910854.7741>,
 %{
   count: 3,
   size: 50,
   mod: Finch.HTTP1.Pool,
   conn_opts: [
     transport_opts: [keepalive: true, nodelay: true, timeout: 5000],
     protocols: [:http1],
     ssl_key_log_file_device: nil
   ],
   pool_max_idle_time: 1000,
   conn_max_idle_time: :infinity,
   start_pool_metrics?: true
 }}

[(finch 0.19.0) lib/finch/http1/pool.ex:253: Finch.HTTP1.Pool.handle_ping/2]
pool_state #=> {Tobi.Tries, {:https, "www.screenversemedia.com", 443}, 1,
 #Reference<0.2700987968.3880910854.7769>,
 %{
   count: 3,
   size: 50,
   mod: Finch.HTTP1.Pool,
   conn_opts: [
     protocols: [:http1],
     transport_opts: [keepalive: true, nodelay: true, timeout: 5000],
     ssl_key_log_file_device: nil
   ],
   pool_max_idle_time: 1000,
   conn_max_idle_time: :infinity,
   start_pool_metrics?: true
 }}

[(nimble_pool 1.1.0) lib/nimble_pool.ex:722: NimblePool.handle_info/2]
"shutting down?" #=> "shutting down?"

[(finch 0.19.0) lib/finch/http1/pool.ex:253: Finch.HTTP1.Pool.handle_ping/2]
pool_state #=> {Tobi.Tries, {:http, "www.google.com", 80}, 1,
 #Reference<0.2700987968.3880910852.15239>,
 %{
   count: 3,
   size: 50,
   mod: Finch.HTTP1.Pool,
   conn_opts: [
     transport_opts: [keepalive: true, nodelay: true, timeout: 5000],
     protocols: [:http1],
     ssl_key_log_file_device: nil
   ],
   pool_max_idle_time: 1000,
   conn_max_idle_time: :infinity,
   start_pool_metrics?: true
 }}

[(nimble_pool 1.1.0) lib/nimble_pool.ex:722: NimblePool.handle_info/2]
"shutting down?" #=> "shutting down?"

[(nimble_pool 1.1.0) lib/nimble_pool.ex:734: NimblePool.terminate/2]
"trying to terminate" #=> "trying to terminate"

[(nimble_pool 1.1.0) lib/nimble_pool.ex:722: NimblePool.handle_info/2]
"shutting down?" #=> "shutting down?"

[(nimble_pool 1.1.0) lib/nimble_pool.ex:734: NimblePool.terminate/2]
"trying to terminate" #=> "trying to terminate"

[(nimble_pool 1.1.0) lib/nimble_pool.ex:737: NimblePool.terminate/2]
"terminating workers" #=> "terminating workers"

[(nimble_pool 1.1.0) lib/nimble_pool.ex:734: NimblePool.terminate/2]
"trying to terminate" #=> "trying to terminate"

[(nimble_pool 1.1.0) lib/nimble_pool.ex:737: NimblePool.terminate/2]
"terminating workers" #=> "terminating workers"

[(nimble_pool 1.1.0) lib/nimble_pool.ex:737: NimblePool.terminate/2]
"terminating workers" #=> "terminating workers"

[(finch 0.19.0) lib/finch/http1/pool.ex:271: Finch.HTTP1.Pool.terminate_worker/3]
conn #=> %{
  parent: #PID<0.872.0>,
  port: 80,
  scheme: :http,
  opts: [
    transport_opts: [keepalive: true, nodelay: true, timeout: 5000],
    protocols: [:http1],
    ssl_key_log_file_device: nil
  ],
  host: "www.google.com",
  mint: %Mint.HTTP1{
    host: "www.google.com",
    port: 80,
    request: nil,
    streaming_request: nil,
    socket: #Port<0.105>,
    transport: Mint.Core.Transport.TCP,
    mode: :active,
    scheme_as_string: "http",
    case_sensitive_headers: false,
    skip_target_validation: false,
    requests: {[], []},
    state: :open,
    buffer: "",
    proxy_headers: [],
    private: %{},
    log: false
  },
  max_idle_time: :infinity,
  last_checkin: -576460751199905182
}

[(finch 0.19.0) lib/finch/http1/pool.ex:271: Finch.HTTP1.Pool.terminate_worker/3]
conn #=> %{
  parent: #PID<0.869.0>,
  port: 443,
  scheme: :https,
  opts: [
    protocols: [:http1],
    transport_opts: [keepalive: true, nodelay: true, timeout: 5000],
    ssl_key_log_file_device: nil
  ],
  host: "www.screenversemedia.com",
  mint: %Mint.HTTP1{
    host: "www.screenversemedia.com",
    port: 443,
    request: nil,
    streaming_request: nil,
    socket: {:sslsocket, {:gen_tcp, #Port<0.106>, :tls_connection, :undefined},
     [#PID<0.880.0>, #PID<0.879.0>]},
    transport: Mint.Core.Transport.SSL,
    mode: :active,
    scheme_as_string: "https",
    case_sensitive_headers: false,
    skip_target_validation: false,
    requests: {[], []},
    state: :open,
    buffer: "",
    proxy_headers: [],
    private: %{},
    log: false
  },
  max_idle_time: :infinity,
  last_checkin: -576460750981232788
}

[(finch 0.19.0) lib/finch/http1/pool.ex:271: Finch.HTTP1.Pool.terminate_worker/3]
conn #=> %{
  parent: #PID<0.866.0>,
  port: 80,
  scheme: :http,
  opts: [
    transport_opts: [keepalive: true, nodelay: true, timeout: 5000],
    protocols: [:http1],
    ssl_key_log_file_device: nil
  ],
  host: "hex.pm",
  mint: %Mint.HTTP1{
    host: "hex.pm",
    port: 80,
    request: nil,
    streaming_request: nil,
    socket: #Port<0.104>,
    transport: Mint.Core.Transport.TCP,
    mode: :active,
    scheme_as_string: "http",
    case_sensitive_headers: false,
    skip_target_validation: false,
    requests: {[], []},
    state: :open,
    buffer: "",
    proxy_headers: [],
    private: %{},
    log: false
  },
  max_idle_time: :infinity,
  last_checkin: -576460751153946714
}

[(nimble_pool 1.1.0) lib/nimble_pool.ex:746: NimblePool.terminate/2]
"terminate finished" #=> "terminate finished"

[(nimble_pool 1.1.0) lib/nimble_pool.ex:746: NimblePool.terminate/2]
"terminate finished" #=> "terminate finished"

[(nimble_pool 1.1.0) lib/nimble_pool.ex:746: NimblePool.terminate/2]
"terminate finished" #=> "terminate finished"

[(finch 0.19.0) lib/finch/http1/pool.ex:252: Finch.HTTP1.Pool.handle_ping/2]
"handle ping!" #=> "handle ping!"

[(finch 0.19.0) lib/finch/http1/pool.ex:253: Finch.HTTP1.Pool.handle_ping/2]
pool_state #=> {Tobi.Tries, {:https, "hex.pm", 443}, 1,
 #Reference<0.2700987968.3880910854.7890>,
 %{
   count: 3,
   size: 50,
   mod: Finch.HTTP1.Pool,
   conn_opts: [
     protocols: [:http1],
     transport_opts: [keepalive: true, nodelay: true, timeout: 5000],
     ssl_key_log_file_device: nil
   ],
   pool_max_idle_time: 1000,
   conn_max_idle_time: :infinity,
   start_pool_metrics?: true
 }}

[(nimble_pool 1.1.0) lib/nimble_pool.ex:722: NimblePool.handle_info/2]
"shutting down?" #=> "shutting down?"

[(nimble_pool 1.1.0) lib/nimble_pool.ex:734: NimblePool.terminate/2]
"trying to terminate" #=> "trying to terminate"

[(nimble_pool 1.1.0) lib/nimble_pool.ex:737: NimblePool.terminate/2]
"terminating workers" #=> "terminating workers"

[(finch 0.19.0) lib/finch/http1/pool.ex:271: Finch.HTTP1.Pool.terminate_worker/3]
conn #=> %{
  parent: #PID<0.881.0>,
  port: 443,
  scheme: :https,
  opts: [
    protocols: [:http1],
    transport_opts: [keepalive: true, nodelay: true, timeout: 5000],
    ssl_key_log_file_device: nil
  ],
  host: "hex.pm",
  mint: %Mint.HTTP1{
    host: "hex.pm",
    port: 443,
    request: nil,
    streaming_request: nil,
    socket: {:sslsocket, {:gen_tcp, #Port<0.107>, :tls_connection, :undefined},
     [#PID<0.887.0>, #PID<0.886.0>]},
    transport: Mint.Core.Transport.SSL,
    mode: :active,
    scheme_as_string: "https",
    case_sensitive_headers: false,
    skip_target_validation: false,
    requests: {[], []},
    state: :open,
    buffer: "",
    proxy_headers: [],
    private: %{},
    log: false
  },
  max_idle_time: :infinity,
  last_checkin: -576460750929310039
}

[(nimble_pool 1.1.0) lib/nimble_pool.ex:746: NimblePool.terminate/2]
"terminate finished" #=> "terminate finished"

[(finch 0.19.0) lib/finch/pool_manager.ex:59: Finch.PoolManager.lookup_pool/2]
pools #=> [{#PID<0.873.0>, Finch.HTTP1.Pool}, {#PID<0.874.0>, Finch.HTTP1.Pool}]

[(finch 0.19.0) lib/finch/pool_manager.ex:59: Finch.PoolManager.lookup_pool/2]
pools #=> [{#PID<0.870.0>, Finch.HTTP1.Pool}, {#PID<0.871.0>, Finch.HTTP1.Pool}]

[(finch 0.19.0) lib/finch/pool_manager.ex:59: Finch.PoolManager.lookup_pool/2]
pools #=> [{#PID<0.867.0>, Finch.HTTP1.Pool}, {#PID<0.868.0>, Finch.HTTP1.Pool}]

[finch.exs:28: (file)]
urls #=> ["http://www.google.com", "https://www.screenversemedia.com/", "http://hex.pm"]
|> Enum.map(&Finch.get_pool_status(finch_name, &1)) #=> [
  ok: [
    %Finch.HTTP1.PoolMetrics{
      pool_index: 1,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    },
    %Finch.HTTP1.PoolMetrics{
      pool_index: 2,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    },
    %Finch.HTTP1.PoolMetrics{
      pool_index: 3,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    }
  ],
  ok: [
    %Finch.HTTP1.PoolMetrics{
      pool_index: 1,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    },
    %Finch.HTTP1.PoolMetrics{
      pool_index: 2,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    },
    %Finch.HTTP1.PoolMetrics{
      pool_index: 3,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    }
  ],
  ok: [
    %Finch.HTTP1.PoolMetrics{
      pool_index: 1,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    },
    %Finch.HTTP1.PoolMetrics{
      pool_index: 2,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    },
    %Finch.HTTP1.PoolMetrics{
      pool_index: 3,
      pool_size: 50,
      available_connections: 50,
      in_use_connections: 0
    }
  ]
]

[finch.exs:34: (file)]
urls #=> ["http://www.google.com", "https://www.screenversemedia.com/", "http://hex.pm"]
|> Enum.map(&Finch.Request.parse_url/1) #=> [
  {:http, "www.google.com", 80, "/", nil},
  {:https, "www.screenversemedia.com", 443, "/", nil},
  {:http, "hex.pm", 80, "/", nil}
]
|> Enum.map(fn {s, h, p, _, _} ->
  {{s, h, p}, Registry.lookup(finch_name, {s, h, p}) |> length()}
end) #=> [
  {{:http, "www.google.com", 80}, 2},
  {{:https, "www.screenversemedia.com", 443}, 2},
  {{:http, "hex.pm", 80}, 2}
]

[(nimble_pool 1.1.0) lib/nimble_pool.ex:734: NimblePool.terminate/2]
"trying to terminate" #=> "trying to terminate"

[(nimble_pool 1.1.0) lib/nimble_pool.ex:734: NimblePool.terminate/2]
"trying to terminate" #=> "trying to terminate"

[(nimble_pool 1.1.0) lib/nimble_pool.ex:734: NimblePool.terminate/2]
"trying to terminate" #=> "trying to terminate"

[(nimble_pool 1.1.0) lib/nimble_pool.ex:734: NimblePool.terminate/2]
"trying to terminate" #=> "trying to terminate"

[(nimble_pool 1.1.0) lib/nimble_pool.ex:734: NimblePool.terminate/2]
"trying to terminate" #=> "trying to terminate"

[(nimble_pool 1.1.0) lib/nimble_pool.ex:734: NimblePool.terminate/2]
"trying to terminate" #=> "trying to terminate"

[(nimble_pool 1.1.0) lib/nimble_pool.ex:734: NimblePool.terminate/2]
"trying to terminate" #=> "trying to terminate"

[(nimble_pool 1.1.0) lib/nimble_pool.ex:734: NimblePool.terminate/2]
"trying to terminate" #=> "trying to terminate"

PragTob avatar Apr 09 '25 12:04 PragTob

Opened up #311 to discuss the pools not shutting down, so that this issue can stay focused on providing a default for pool_max_idle_time. However, now that I've spent some time thinking about it, I see some dangers in shutting down pools by default, detailed in the second part of "Solutions" on #311. In short: if we shut down pool processes one by one, we may be left with far fewer pools than the configured pool count - without any of them being restarted. It may be better to shut down pool groups in full or not at all - I'm interested in that discussion, but it's probably better had over on #311.


PragTob avatar Apr 14 '25 10:04 PragTob