
PHP process not terminated after timeout

Open bashkarev opened this issue 5 years ago • 21 comments

The PHP process is not terminated after the timeout.

conf.json

{
  "listeners": {
    "*:9000": {
      "application": "index"
    }
  },
  "applications": {
    "index": {
      "type": "php",
      "user": "www-data",
      "group": "www-data",
      "processes": {
        "max": 10,
        "spare": 1
      },
      "root": "/var/www/html/public",
      "index": "index.php"
    }
  },
  "settings": {
    "http": {
      "idle_timeout": 10,
      "send_timeout": 10
    }
  }
}

index.php

<?php

$stderr = fopen('php://stderr', 'wb');
$i = 0;
while (true) {
    fwrite($stderr, $i . '> connection_status: ' . connection_status() . PHP_EOL);
    sleep(1);
    ++$i;
}

bashkarev avatar Mar 11 '20 23:03 bashkarev

Hello,

  1. According to the Settings documentation, idle_timeout and send_timeout do not limit the request processing time. Use the timeout option in the application's limits group to limit the client wait time (see Request Limits); a sketch is shown after this list.
  2. Currently, Unit only tries to stop applications gracefully and does not kill a hung application with a signal. Try to avoid infinite loops for now.
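
For illustration, adding a limits group to the application from the configuration above could look like this (a minimal sketch; the 10-second timeout just mirrors the values used above):

"applications": {
  "index": {
    "type": "php",
    "user": "www-data",
    "group": "www-data",
    "root": "/var/www/html/public",
    "index": "index.php",
    "limits": {
      "timeout": 10
    }
  }
}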

mar0x avatar Mar 12 '20 13:03 mar0x

Are you planning to add signal controls for:

  • frozen process
  • client disconnection

bashkarev avatar Mar 12 '20 13:03 bashkarev

Yes

mar0x avatar Mar 12 '20 13:03 mar0x

Are you planning to add signal controls for:

  • frozen process
  • client disconnection

Hopefully soon. Thank you!

teodorescuserban avatar Oct 20 '22 07:10 teodorescuserban

@mar0x, I tried to find out what exactly "Currently Unit tries only to stop applications gracefully" means, so that I can change the application to make sure it actually stops. An explanation of this would be great to have until you are able to implement the kill.

Thanks!

teodorescuserban avatar Oct 26 '22 10:10 teodorescuserban

@teodorescuserban Hi, I'll work on a more detailed explanation of what exactly happens there.

ghost avatar Nov 15 '22 12:11 ghost

@teodorescuserban Hi, I'll work on a more detailed explanation of what exactly happens there.

@artemkonev any news on this? Thank you!

teodorescuserban avatar Feb 28 '23 06:02 teodorescuserban

Hi @teodorescuserban , I'm about to post an update in a week or two.

ghost avatar Mar 01 '23 15:03 ghost

@teodorescuserban hi! It seems the answer is, unfortunately, 'not much.' I added what few details there are here:

https://unit.nginx.org/configuration/#request-limits

ghost avatar May 08 '23 15:05 ghost

@artemkonev I am sorry to hear there is no way to notify the worker that Unit is cancelling the request. I guess the worker will have to manage its own subprocesses and make sure it doesn't go over the Unit timeout threshold. Hopefully in the future Unit will send some signal to the worker, and in that case it will be up to the worker to react to it and kill whatever children it has. Thank you for looking into this.

teodorescuserban avatar May 12 '23 07:05 teodorescuserban

Hi, is there any standard solution to this problem?

I understand that it seems obvious that you should just "try to avoid infinite loops for now", but imagine that the problem is not "infinite loops" but infinitely hanging SQL queries.

For example, here is our configuration:

"admin": {
    "type": "php",
    "root": "/var/www/app/backend/web/",
    "script": "index.php",
    "processes": 8,
    "limits": {
        "requests": 10000,
        "timeout": 30
    }
}

In our case:

  • in the admin panel we had a poorly optimized SQL query that executes for 20-30 minutes
  • per our configuration, nginx-unit returns a 503 status code to the client after 30 seconds
  • but the PHP process keeps executing, and after 8 such requests our "admin" application becomes unresponsive because it has exceeded nginx-unit's processes limit

It would be perfect to somehow be able to force-kill such hanging processes.

malsatin avatar May 31 '24 12:05 malsatin

Set max_execution_time in your php.ini.
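
A minimal sketch (the 30-second value is only an illustration):

max_execution_time = 30

The same limit can also be set at runtime with set_time_limit(30) or ini_set('max_execution_time', '30').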

jeffdafoe avatar May 31 '24 12:05 jeffdafoe

@jeffdafoe We have tried it; it didn't help - the process still executes after the duration specified in max_execution_time.

Did it work in your case?

malsatin avatar May 31 '24 12:05 malsatin

@malsatin

From here

Note: The set_time_limit() function and the configuration directive max_execution_time only affect the execution time of the script itself. Any time spent on activity that happens outside the execution of the script such as system calls using system(), stream operations, database queries, etc. is not included when determining the maximum time that the script has been running. This is not true on Windows where the measured time is real.

So it looks like it won't abort your database query...
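
A quick way to see this behaviour is a small CLI sketch (assuming Linux; the values are arbitrary):

<?php

set_time_limit(2); // limit the script's own execution time to 2 seconds
$start = time();
sleep(5);          // time spent in sleep(), system calls or DB queries is not counted on Linux
echo 'still alive after ', time() - $start, ' seconds', PHP_EOL;

On Linux this prints "still alive after 5 seconds"; on Windows, where real time is measured, the script would be aborted instead.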

If you're using MySQL (I guess this also applies to MariaDB, and other DBs may have something similar), then it seems you can do something like:

SELECT 
/*+ MAX_EXECUTION_TIME(30000) */
*
FROM table;

The timeout value is in milliseconds.
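
From PHP the hint can simply be embedded in the query string; a sketch assuming a PDO connection (host, credentials and table name are placeholders):

<?php

$pdo = new PDO('mysql:host=127.0.0.1;dbname=app', 'user', 'password');

// Ask MySQL to abort this SELECT server-side after 30 000 ms.
$stmt = $pdo->query('SELECT /*+ MAX_EXECUTION_TIME(30000) */ * FROM table1');
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

Note that the hint (and the corresponding max_execution_time system variable in MySQL) only applies to SELECT statements.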

ac000 avatar May 31 '24 15:05 ac000

Thanks, @ac000, great point!

We will check out that feature.

malsatin avatar May 31 '24 23:05 malsatin

The solution for now, as far as I was able to help implement on our side, is for the backend script (in our case a Python API) to have its own timeout and, if it is reached, to do the cleanup itself (kill child processes, abort database connections, etc.). This is not an easy task and depends on what you have in the backend. Unfortunately, because we want a reasonably generous timeout (for the heavier parts of the API), we have to wait that long to do the cleanup for every connection from the client to nginx unit that was closed early. This seems to make the backend script vulnerable to DoS when someone makes a lot of requests and closes them immediately: while nginx cuts the connection to nginx unit off immediately, the backend script just keeps running thousands of database operations. Hopefully this feature is implemented at some point. Until then... ban per IP/subnet/ASN :)

teodorescuserban avatar Jun 03 '24 09:06 teodorescuserban

I guess what I am trying to say is that it would be great if nginx unit would send some signal (e.g. HUP or USR2) to the backend script and let it figure out what needs to be done to stop. That should make any script-side cleanup solution work in a timely and precise way.
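
To illustrate the idea, a purely hypothetical sketch of how a worker could react if Unit ever sent, say, SIGUSR2 on timeout (Unit does not send such a signal today, and the pcntl extension this relies on may not be available in Unit's embedded PHP SAPI):

<?php

// Placeholder for application-specific cleanup (abort queries, kill children, ...).
function cleanup(): void
{
    // ...
}

// Deliver signals as soon as they arrive instead of at tick points.
pcntl_async_signals(true);

pcntl_signal(SIGUSR2, function (int $signo): void {
    cleanup();
    exit(1);
});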

teodorescuserban avatar Jun 03 '24 10:06 teodorescuserban

@jeffdafoe We have tried it; it didn't help - the process still executes after the duration specified in max_execution_time.

Did it work in your case?

It stops the script but the query continues to execute on the database.

jeffdafoe avatar Jun 03 '24 12:06 jeffdafoe

@jeffdafoe We have tried it; it didn't help - the process still executes after the duration specified in max_execution_time. Did it work in your case?

It stops the script but the query continues to execute on the database.

You need to make the script abort whatever queries it is running before shutting down, and you should be okay.
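
One way to do that with mysqli, as a rough sketch (connection details are placeholders, the 30-second wait mirrors the Unit timeout above, and MYSQLI_ASYNC requires the mysqlnd driver):

<?php

$db = new mysqli('127.0.0.1', 'user', 'password', 'app');

// Stand-in for the slow admin-panel query.
$slowSql = 'SELECT SLEEP(600)';

// Send the query asynchronously so the script keeps control of the timeout.
$db->query($slowSql, MYSQLI_ASYNC);

$read = $error = $reject = [$db];
if (mysqli_poll($read, $error, $reject, 30) < 1) {
    // No result within 30 seconds: kill the query from a second connection.
    $killer = new mysqli('127.0.0.1', 'user', 'password', 'app');
    $killer->query('KILL QUERY ' . (int) $db->thread_id);
}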

teodorescuserban avatar Jun 04 '24 05:06 teodorescuserban

In my experiment, this ini parameter didn't stop the script; it kept executing until the query was aborted in the DBMS.

malsatin avatar Jun 04 '24 20:06 malsatin

It depends on whether your PHP is compiled with or without threads support, apparently. See https://github.com/php/php-src/issues/14769.

jeffdafoe avatar Jul 03 '24 14:07 jeffdafoe