
a performance issue for epoll idle

Open lihuiba opened this issue 8 years ago • 35 comments

I came across a performance issue in epoll mode when there were thousands of concurrent connections. Profiling showed that _st_epoll_dispatch() consumed a lot of CPU.

After reviewing the function, I think I've found the reason: there's a loop that enumerates ALL threads in the I/O queue.

for (q = _ST_IOQ.next; q != &_ST_IOQ; q = q->next) {

As I'm using a one-thread-per-connection model, I believe this loop effectively degrades epoll mode to select mode.
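Expanded a bit, the dispatch loop looks roughly like the following (paraphrased from event.c, not verbatim; map_events() is a hypothetical stand-in for the epoll-to-poll event translation). With 10,000 idle connections it runs 10,000 iterations per dispatch, even when only one fd is ready:

for (q = _ST_IOQ.next; q != &_ST_IOQ; q = q->next) {
    _st_pollq_t *pq = _ST_POLLQUEUE_PTR(q);
    int notify = 0;
    struct pollfd *pds;

    /* Re-check every pollfd of every waiting thread, even though
     * epoll_wait() has already reported exactly which fds are ready. */
    for (pds = pq->pds; pds < pq->pds + pq->npds; pds++) {
        pds->revents = 0;
        if (_ST_EPOLL_REVENTS(pds->fd)) {
            pds->revents = map_events(pds);  /* hypothetical helper */
            notify = 1;
        }
    }

    if (notify) {
        ST_REMOVE_LINK(&pq->links);    /* take pq off the I/O queue */
        pq->on_ioq = 0;
        pq->thread->state = _ST_ST_RUNNABLE;
        _ST_ADD_RUNQ(pq->thread);      /* wake the waiting thread */
    }
}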

lihuiba avatar Apr 23 '17 06:04 lihuiba

I found the code at https://github.com/ossrs/state-threads/blob/d04dcb534fbce34739eb1e52adacc4f0bcd192c2/event.c#L1281

When there are epoll events (nfd > 0), it iterates over _ST_IOQ, a circular linked list. _ST_IOQ is the waiting queue; fds are pushed onto it by st_poll() in https://github.com/ossrs/state-threads/blob/d04dcb534fbce34739eb1e52adacc4f0bcd192c2/sched.c#L81
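For reference, the record that st_poll() links onto _ST_IOQ looks roughly like this (paraphrased from ST's internal headers):

typedef struct _st_pollq {
    _st_clist_t    links;   /* node on the circular _ST_IOQ list */
    _st_thread_t  *thread;  /* the thread blocked in st_poll() */
    struct pollfd *pds;     /* the fds this thread is waiting on */
    int            npds;    /* number of entries in pds */
    int            on_ioq;  /* nonzero while queued */
} _st_pollq_t;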

I don't know why it needs to iterate over all the blocked fds. How can we fix this?

winlinvip avatar Apr 25 '17 03:04 winlinvip

It seems the loop finds the thread that is waiting on the fd, and switches to that thread.

winlinvip avatar Apr 25 '17 04:04 winlinvip

Yes it does, but I don't think it's necessary.

lihuiba avatar Apr 25 '17 07:04 lihuiba

I believe we can store thread info in _st_epoll_data[], and resume the threads without iterating the IOQ.
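Something like this, perhaps (a hypothetical sketch of the idea, assuming at most one waiter per fd; the pq field and the loop are invented for illustration, while the ref-count fields exist in event.c):

typedef struct _epoll_fd_data {
    int rd_ref_cnt;
    int wr_ref_cnt;
    int ex_ref_cnt;
    int revents;
    _st_pollq_t *pq;   /* NEW: the queue entry waiting on this fd */
} _epoll_fd_data_t;

/* Dispatch would then touch only the nfd ready events reported by
 * epoll_wait(), instead of walking the whole IOQ: */
for (i = 0; i < nfd; i++) {
    int osfd = _st_epoll_data->evtlist[i].data.fd;
    _st_pollq_t *pq = _st_epoll_data->fd_data[osfd].pq;
    if (pq && pq->on_ioq) {
        /* set revents on pq->pds, unlink pq, wake pq->thread */
    }
}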

lihuiba avatar Apr 25 '17 07:04 lihuiba

I do think there aren't many users now, so nobody cares.

xiaosuo avatar Jun 02 '17 13:06 xiaosuo

I'm a heavy user of ST. Please make sure there is a problem, and I'll arrange a patch.

lihuiba avatar Jun 03 '17 02:06 lihuiba

Yes, it is an issue. But I'm afraid your approach doesn't work: there may be more than one thread waiting on a fd, and there isn't any reference back to the monitoring fds. I think you'd better extend _epoll_fd_data and record the _st_pollq in it when inserting the entry into the IOQ.

BTW: maybe EPOLLONESHOT and EPOLLET could be used to get even better performance.
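Roughly this shape (a hypothetical sketch; the names are invented, and the actual patch may differ):

/* Record every waiter per fd, so several threads polling the
 * same descriptor are all found without scanning the IOQ. */
typedef struct _fd_waiter {
    _st_pollq_t *pq;           /* one waiting thread's queue entry */
    struct _fd_waiter *next;   /* next waiter on the same fd */
} _fd_waiter_t;

typedef struct _epoll_fd_data {
    int rd_ref_cnt, wr_ref_cnt, ex_ref_cnt;
    int revents;
    _fd_waiter_t *waiters;     /* NEW: all threads blocked on this fd */
} _epoll_fd_data_t;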

xiaosuo avatar Jun 03 '17 04:06 xiaosuo

For a performance issue, please write a test example and run some benchmarks; we SHOULD NEVER guess about it.

winlinvip avatar Jun 03 '17 09:06 winlinvip

FYI: I just cooked a patch for this issue, and will submit a PR after testing.

@winlinvip Are there any unit test cases?

xiaosuo avatar Jun 03 '17 10:06 xiaosuo

See #5 @lihuiba @winlinvip

xiaosuo avatar Jun 03 '17 16:06 xiaosuo

@xiaosuo Thanks for your work and the PR, that's great. Can you also write example code that illustrates the problem? For instance, a program that uses 90% CPU, or a performance report that pinpoints the bottleneck; then, with your PR applied, the program uses less CPU or the report shows the bottleneck is gone.

Please write a benchmark to prove the PR really fixes the problem.

winlinvip avatar Jun 04 '17 03:06 winlinvip

@lihuiba reported this issue. I think he will test the PR with his workload.

I tested the server in the example directory, and ab reported slight improvements on both QPS and latency.

xiaosuo avatar Jun 04 '17 03:06 xiaosuo

@winlinvip @xiaosuo I found the issue on my production server, where there were lots of idle connections. The server consumed several times more CPU than expected. I believe the situation can be easily reproduced, and I'll try the patch in a test environment before rolling it out to production.

lihuiba avatar Jun 04 '17 06:06 lihuiba

@lihuiba Please test it, thanks~ If you can also write a simple test example, even better; one is necessary for any performance issue.

winlinvip avatar Jun 04 '17 07:06 winlinvip

@lihuiba what is the result?

xiaosuo avatar Jun 08 '17 13:06 xiaosuo

@xiaosuo I'm on something else these days; I think I'll do the tests next week.

lihuiba avatar Jun 09 '17 05:06 lihuiba

@lihuiba 👍 #5

winlinvip avatar Jun 10 '17 00:06 winlinvip

The patched ST is much slower than before, due to the added functions, as the figure below shows:

[benchmark screenshot]

lihuiba avatar Jun 14 '17 07:06 lihuiba

there were 10,000 idle connections (from a custom tool) in the test, and only 1 active connection from ab.

lihuiba avatar Jun 14 '17 07:06 lihuiba

@lihuiba Are many coroutines monitoring a single FD?

Is a single coroutine monitoring lots of FDs?

xiaosuo avatar Jun 14 '17 08:06 xiaosuo

@xiaosuo No. One coroutine is responsible for a single connection.

lihuiba avatar Jun 15 '17 03:06 lihuiba

@lihuiba Would you provide the test code?

xiaosuo avatar Jun 15 '17 03:06 xiaosuo

1. diff of examples/server.c (drop the request timeout so connections can sit idle in st_read() indefinitely):

@@ -942,7 +942,7 @@ void handle_session(long srv_socket_index, st_netfd_t cli_nfd)
   struct in_addr *from = st_netfd_getspecific(cli_nfd);
 
-  if (st_read(cli_nfd, buf, sizeof(buf), SEC2USEC(REQUEST_TIMEOUT)) < 0) {
+  if (st_read(cli_nfd, buf, sizeof(buf), ST_UTIME_NO_TIMEOUT) < 0) {
     err_sys_report(errfd, "WARN: can't read request from %s: st_read",
 		   inet_ntoa(*from));
     return;

2. idle connections:

#include <sys/types.h>
#include <sys/socket.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <netdb.h>
#include <sys/time.h>
#include <string.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdarg.h>

int Socket(const char *host, int clientPort)
{
    int sock;
    unsigned long inaddr;
    struct sockaddr_in ad;
    struct hostent *hp;
    
    memset(&ad, 0, sizeof(ad));
    ad.sin_family = AF_INET;

    inaddr = inet_addr(host);
    if (inaddr != INADDR_NONE)
        memcpy(&ad.sin_addr, &inaddr, sizeof(inaddr));
    else
    {
        hp = gethostbyname(host);
        if (hp == NULL)
            return -1;
        memcpy(&ad.sin_addr, hp->h_addr, hp->h_length);
    }
    ad.sin_port = htons(clientPort);
    
    sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0)
        return sock;
    if (connect(sock, (struct sockaddr *)&ad, sizeof(ad)) < 0)
        return -1;
    return sock;
}

int main(int argc, char **argv)
{
    if (argc < 5) {
        fprintf(stderr, "usage: %s host port connections idle_seconds\n", argv[0]);
        exit(1);
    }
    int clientPort = atoi(argv[2]);
    int numberOfConnection = atoi(argv[3]);
    int idleTime = atoi(argv[4]);

    /* Open the connections and deliberately leave them idle;
     * the fds stay open until the process exits. */
    for (int i = 0; i < numberOfConnection; ++i) {
        int s = Socket(argv[1], clientPort);
        if (s < 0)
            printf("error connecting to %d\n", clientPort);
    }
    sleep(idleTime);
    exit(0);
}

3. ab:

ab -n 1000  -c 1  localhost:8080

lihuiba avatar Jun 15 '17 06:06 lihuiba

That's strange. I ran your code and got the opposite result.

./server -t 1 -p 1 -b 127.0.0.1:8888 -l . -i

Could you run your code again?

xiaosuo avatar Jun 15 '17 08:06 xiaosuo

I found it quite slow to create a large number of connections to examples/server.c, about 1 second per connection, so I wrote a minimal server myself:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <time.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <netdb.h>
#include <fcntl.h>
#include <signal.h>
#include <pwd.h>
#include "st.h"

void* handle_connection(void* s_)
{
  st_netfd_t s = s_;

  /* Read the request; blocks this coroutine only, with no timeout. */
  char buf[512];
  st_read(s, buf, sizeof(buf), ST_UTIME_NO_TIMEOUT);

  static char resp[] =
    "HTTP/1.0 200 OK\r\n"
    "Content-type: text/html\r\n"
    "Connection: close\r\n"
    "\r\n"
    "<H2>It worked!</H2>\n";
  st_write(s, resp, sizeof(resp) - 1, ST_UTIME_NO_TIMEOUT);

  st_netfd_close(s);
  return NULL;
}

int main()
{
  st_set_eventsys(ST_EVENTSYS_ALT);
  st_init();
  st_randomize_stacks(1);

  int sock = socket(PF_INET, SOCK_STREAM, 0);

  struct sockaddr_in serv_addr;
  memset(&serv_addr, 0, sizeof(serv_addr));
  serv_addr.sin_family = AF_INET;
  serv_addr.sin_addr.s_addr = INADDR_ANY;
  serv_addr.sin_port = htons(8086);
  if (bind(sock, (struct sockaddr *)&serv_addr, sizeof(serv_addr)) < 0 ||
      listen(sock, 16) < 0) {
    perror("bind/listen");
    return 1;
  }

  st_netfd_t server = st_netfd_open_socket(sock);

  while(1)
  {
    st_netfd_t c = st_accept(server, NULL, 0, ST_UTIME_NO_TIMEOUT);
    st_thread_create(handle_connection, c, 0, 0);
  }
}

I created 10,000 idle connections in the test, and ran ab as:

ab -n 10000  -c 1  http://localhost:8086/

And I got a very promising result:

181161 root      20   0  765968  56244    408 R  84.2  0.1   0:21.25 server
Time taken for tests:   4.679 seconds

and

184781 root      20   0  766140  56460    408 S  45.7  0.1   0:02.50 server-patched
Time taken for tests:   0.561 seconds

The result shows that the server with the patched ST was 8.3 times (4.679/0.561) faster than before, and consumed much less CPU.

Great job!

lihuiba avatar Jun 15 '17 11:06 lihuiba

perf top of the patched:

[perf top screenshot]

lihuiba avatar Jun 15 '17 11:06 lihuiba

perf top of the original one:

[perf top screenshot]

lihuiba avatar Jun 15 '17 11:06 lihuiba

Cheers!

xiaosuo avatar Jun 15 '17 11:06 xiaosuo

👍 So the patch works well when there are lots of idle connections, right? Could you please test more usage scenarios, such as:

  1. A large number of connections, like 10k or 30k, with only a small number of them active. This is like your example, just with more connections.
  2. All connections active, for example 10k of them. Please also test the timeout and cond features, e.g. some connections sleeping while others are busy (see the sketch below).
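For the sleeping-while-busy mix in item 2, a driver skeleton could look like this (a minimal hypothetical sketch using only documented ST calls; the 10%/1 ms numbers are arbitrary, and a real test should do actual I/O per connection):

#include <stdio.h>
#include "st.h"

static st_cond_t wakeup;

/* Each coroutine stands in for one connection: 10% of them sleep
 * on a cond with a 1-second timeout (idle keep-alives), while the
 * rest wake every millisecond (busy sessions). */
void *mixed_session(void *arg)
{
    long id = (long)arg;
    for (;;) {
        if (id % 10 == 0)
            st_cond_timedwait(wakeup, 1000000);  /* sleeper */
        else
            st_usleep(1000);                     /* busy */
    }
    return NULL;
}

int main(void)
{
    st_set_eventsys(ST_EVENTSYS_ALT);  /* use epoll */
    st_init();
    wakeup = st_cond_new();

    for (long i = 0; i < 10000; i++)
        st_thread_create(mixed_session, (void *)i, 0, 0);

    st_thread_exit(NULL);  /* primordial thread exits; coroutines keep running */
    return 0;
}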

Maybe you should let ST allocate memory for coroutine stacks on the heap, for a large number of coroutines:

make EXTRA_CFLAGS="-DMALLOC_STACK" linux-debug

winlinvip avatar Jun 28 '17 03:06 winlinvip

👍 @xiaosuo @lihuiba It's awesome, really great job!

winlinvip avatar Jun 28 '17 03:06 winlinvip