
Option to automatically remove dead FPM pools

Open kukulich opened this issue 8 years ago • 9 comments

We create new FPM pool for every version of our application.

Our dashboard is currently full of dead FPM pools: [screenshot: fpmpools]

It would be superb if dead FPM pools could be removed automatically after some period with no data. Removing dead pools one by one is very tedious.

kukulich avatar Aug 12 '17 13:08 kukulich

Hi, @kukulich! We automatically remove all objects (including FPM pools) whose last payload was sent more than a week ago. Yes, a week is a long period, so we will think about providing an option to tune this period in the UI.

If you could choose the "non-active time" from 1w, 2d, 1d, or 1h, would that solve your issue?

dedm avatar Aug 12 '17 16:08 dedm

That would be better; however, it would be nice to have even shorter periods, e.g. 5m or 1m.

kukulich avatar Aug 12 '17 17:08 kukulich

Just wanted to throw it out there, but this looks like a snapshot of PHPFPM masters, not pools. While I think offering tunable time periods for auto-removal is a good idea, I think we should also look at why we are "discovering" so many master instances repeatedly.

gshulegaard avatar Aug 12 '17 18:08 gshulegaard

Our account is [email protected] if it helps you.

kukulich avatar Aug 12 '17 19:08 kukulich

Hi @kukulich!

I dug a bit deeper into our inventory for your account and I think I have figured out why we are creating so many PHPFPM master objects for you.

We identify different FPM objects on the same host by their config path and ps output (which also contains the config path). It appears that each of these FPM objects has a different config file name. What I am guessing happens here is:

  1. You update the config file (including changing its name).
  2. You restart FPM.
  3. The agent finds this FPM master but calculates a different local ID hash from the new config path (see the sketch below).
  4. The backend creates a new PHPFPM object, since it cannot match the locally reported object to an existing one.
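
To illustrate step 3, here is a minimal sketch of the effect. The function name, the exact fields, and the hash are assumptions for illustration, not the agent's real implementation:

```python
import hashlib

def local_id(hostname: str, obj_type: str, config_path: str) -> str:
    # Hypothetical: hash the identifying fields into a stable object ID.
    # The real agent may hash different fields, but the effect is the
    # same: a new config path produces a new ID.
    key = "|".join([hostname, obj_type, config_path])
    return hashlib.sha256(key.encode("utf-8")).hexdigest()

# Same host, same FPM master, but the config file was renamed:
old_id = local_id("web01", "phpfpm", "/etc/php/fpm-v41.conf")
new_id = local_id("web01", "phpfpm", "/etc/php/fpm-v42.conf")
assert old_id != new_id  # the backend sees a brand-new object
```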

Unfortunately, we don't have a good way of distinguishing co-resident FPM processes except by configuration path.

As a workaround, you might try either of the following (see the sketch after this list):

  1. Copying the current config from an archive directory to a fixed, active config location.
  2. Pointing a symbolic link with a fixed path at the current config in the archive directory, and starting FPM with the symlink's path.
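
A hypothetical deploy-script fragment showing both options (you would pick one); the paths are illustrative, and the point is only that FPM is always started with the same config path, so the agent's local ID stays stable:

```python
import os
import shutil

# Illustrative paths -- adjust to your layout.
ARCHIVE_CONF = "/etc/php/fpm-archive/app-v42.conf"  # versioned config
ACTIVE_CONF = "/etc/php/fpm-active.conf"            # path FPM is started with

# Option 1: copy the versioned config into the stable location.
shutil.copy2(ARCHIVE_CONF, ACTIVE_CONF)

# Option 2: swap a stable symlink to point at the versioned config.
tmp_link = ACTIVE_CONF + ".tmp"
os.symlink(ARCHIVE_CONF, tmp_link)
os.replace(tmp_link, ACTIVE_CONF)  # atomic rename avoids a broken-link window
```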

As a side note, you have revealed a visual bug that appears when a host has a lot of child objects; we are working on a UI redesign to accommodate large numbers of child objects.

Thanks!

gshulegaard avatar Aug 22 '17 16:08 gshulegaard

@gshulegaard

I think it's OK that we have so many FPM masters :)

Our deploy looks like this (a sketch follows the list):

  1. Create a new FPM pool configuration
  2. Run the new FPM pool
  3. Change the nginx configuration to use the new FPM pool
  4. Reload the nginx configuration
  5. Stop the unused FPM pool
  6. Remove the configuration of the stopped FPM pool
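
For concreteness, a hypothetical script fragment for that flow; the version tag, the config/pid paths, and the way the nginx upstream gets updated are all assumptions:

```python
import os
import signal
import subprocess

VERSION = "v42"                              # illustrative release tag
NEW_CONF = f"/etc/php/fpm-{VERSION}.conf"    # 1. new pool configuration
OLD_PID_FILE = "/var/run/php-fpm-v41.pid"    # previous release's master

# 2. Start a dedicated FPM master for the new release.
subprocess.run(["php-fpm", "--fpm-config", NEW_CONF], check=True)

# 3-4. Point the nginx upstream at the new pool's socket (elided) and reload.
subprocess.run(["nginx", "-s", "reload"], check=True)

# 5. Gracefully stop the old master (php-fpm treats QUIT as graceful stop)...
with open(OLD_PID_FILE) as f:
    os.kill(int(f.read().strip()), signal.SIGQUIT)

# 6. ...and remove its now-unused config.
os.remove("/etc/php/fpm-v41.conf")
```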

We have two problems:

  1. Dead FPM pools don't disappear automatically.
  2. All FPM pools have the name "PHP-FPM", so we are not able to tell which pools are currently running.

kukulich avatar Aug 24 '17 09:08 kukulich

Hi @kukulich,

Thanks for the use case! Having a large number of FPM master processes is definitely a challenge when it comes to proper identification.

As mentioned above, we do remove non-reporting objects after a week; this applies to all inventory objects (FPM masters as well as pools). I believe we will work on allowing users to configure this "dead" period per account.

Just to reiterate, you listed two problems:

  1. Dead FPM pools don't disappear automatically.
  2. All FPM pools have the name "PHP-FPM", so you are not able to tell which pools are currently running.

These are not pools, but rather FPM master processes. Because you change the config name, the agent believes it has found a "new" FPM master and creates a new FPM master object.

Now, if you are only running a single pool per master, you might consider doing graceful reloads with FPM instead:

https://stackoverflow.com/questions/16890855/can-we-reload-one-of-the-php-fpm-pool-without-disturbing-others
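
For reference, a graceful reload amounts to sending SIGUSR2 to the FPM master, which reloads the configuration and gracefully restarts the workers. A minimal sketch; the pid-file path below is an assumption, set by the pid directive in your FPM config:

```python
import os
import signal

# Path is illustrative; use whatever your FPM config's pid directive sets.
with open("/var/run/php-fpm.pid") as f:
    master_pid = int(f.read().strip())

# php-fpm reloads its config and gracefully restarts workers on USR2.
os.kill(master_pid, signal.SIGUSR2)
```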

But regardless, we will investigate adding configurable "dead" periods and update here when we know more.

Thanks!

gshulegaard avatar Aug 28 '17 19:08 gshulegaard

Unfortunately, it's not possible for us to do a graceful FPM reload. We prefill the opcache before we change the nginx configuration, and a graceful restart would empty the opcache.

kukulich avatar Aug 30 '17 19:08 kukulich

Interesting! Well, I will update here when we have more news about configurable removal bounds, which will hopefully suffice for your use case.

Thanks again for the report.

gshulegaard avatar Aug 30 '17 20:08 gshulegaard