
Why is my RESTful API performance so slow?

Beritra opened this issue 6 years ago • 15 comments

Am I doing something wrong?

Here is my request:

GET /v1/AQ/accounts?offset=0&limit=50 HTTP/1.1
Host: 172.16.0.151:8081
Content-Type: application/json
Accept: application/json
Authorization: Bearer eyJhbGciOiJSUzI1NiJ9.eyJpc3MiOiJodHRwczovL3d3dy5lY2xpcHNlLm9yZy9rYXB1YSIsImlhdCI6MTUyMTc3MjI1OSwiZXhwIjoxNTIxNzc0MDU5LCJzdWIiOiJBUSIsInNJZCI6IkFRIn0.oPwse_iPKttHOjCdb9Wm-dt_Du-ceLg-7i6W5NS-GgaXF4uK0zTOwEmYefsZfJZZ2iYWmQlc8wU3sWIF0EPCdx71kwKB2-Yz3p0xQlxtM8CKPpSzvV_pRbVBvrhuYS9MRdy6WXGn3VEU4aRU-yX-euAuLjfTpOxwvw_bWBKleffRXtIN3UFSQSrCt0PiQasUwI8W-eSNhvvV5XGngJqIXqwrYLVWoIILTz5c7PQ1f40uHyG_9xJfipYPnIZ76IFYBNGi4mGqNnhQw7qw-BjOXIM0wPPqodFqDxFux5DHRKMPBHC5FVhDRLY-t92DrJz-hYLNARIv1uVbbcuqVxRXx0LQ4bF4ha2LmXTdQc4U1vjTdajhlppOLiq1IN3DpLCPvp2v4jBs-WBD9mABnU9bhWb5RiWi13OwpEovLK7l06GLo5-AnivjcDk2Yd49Xug23vKgnTzvjr9Xj1eqplV1tNMx_r7SL_H7mo33qcjgpdudFWukmXrtyPdQffgkkrwRr52JObshqN7KBci_V9wXIBj4QZc-_EuOANnXsWjjXEBslCoArBBoMSKlQLCeedrXQ0Y-KSPEK68mWH31wvzsdo1d9T4ysndYGgpXmG54-44TuW6jL_dbAnY-iFmWcX3QYxnD_fKf5vXa50CcG4ExT5R0_sG-W-V8ZRr_WAYmndI
Cache-Control: no-cache
Postman-Token: f023ed89-ed24-4030-9d10-6a91a293a8be

and I get the response after 16 seconds. I tried several times and it's always 15 or 16 seconds. Here's the response:

{ "type": "accountListResult", "limitExceeded": false, "size": 0, "items": [] }

Beritra avatar Mar 23 '18 02:03 Beritra

Hi @Beritra,

I don't see any issue in this call, so I'd say you're doing it correctly. How about sharing some details on your account setup? How many child accounts does this scope have? Some details on your hardware and deployment could also help.

lorthirk avatar Mar 23 '18 08:03 lorthirk

@lorthirk Thanks for helping. I just wanted to try the API, so I chose simple functions like accounts or devices. In the Kapua Console web page I can see many devices and accounts, and they appear very quickly after I click a button. But when I use the RESTful API through Postman or a curl command, it always takes a long time to get a response. My Kapua runs in Docker, and my CPU is an i7 7700 with 16 GB RAM, so I don't think hardware performance is the cause of the slowness. My account role is admin with all permissions granted, and there are no child accounts. To exclude the impact of the network, I tried the curl command inside the kapua-api container, and it was still as slow as usual. It's very confusing.

Beritra avatar Mar 26 '18 10:03 Beritra

Hi @Beritra ,

just tested with a local Docker deployment.

Here are my results:

  • POST http://localhost:8081/v1/authentication/apikey: 465ms, 1.66KB
  • GET http://localhost:8081/v1/_/accounts: 189ms, 589B
  • GET http://localhost:8081/v1/_/users: 128ms, 1.21KB

Only the very first call to http://localhost:8081/v1/authentication/apikey took longer (around 3 seconds), but that is normal since it needs to initialize a few things.

My machine has:

  • 3.1 GHz Intel Core i7
  • 16 GB 1867 MHz DDR3

BTW I don't think your machine is the cause.

The only computational-heavy operation is the login, which uses BCrypt to verify the user password (or API Key).
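BCrypt's cost comes from its configurable work factor. BCrypt itself isn't in the Python standard library, so here is a minimal, purely illustrative sketch using PBKDF2, a key-derivation function with the same tunable-cost idea, to show why a single credential verification is deliberately expensive. The password, salt, and iteration counts are made up for the demo.

```python
import hashlib
import time

def hash_password(password: str, salt: bytes, iterations: int) -> bytes:
    # A deliberately slow key derivation; the iteration count plays the
    # same role as BCrypt's work factor.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = b"fixed-demo-salt"  # a real system would use os.urandom(16) per user
t0 = time.perf_counter()
h_cheap = hash_password("kapua-password", salt, 1_000)
t1 = time.perf_counter()
h_costly = hash_password("kapua-password", salt, 200_000)
t2 = time.perf_counter()

# Verification recomputes the hash, so every login pays the full cost.
assert h_cheap == hash_password("kapua-password", salt, 1_000)
print(f"1k iterations: {t1 - t0:.4f}s, 200k iterations: {t2 - t1:.4f}s")
```

The point is only that the login endpoint is expected to be slower than a plain query; it does not explain a 16-second listing call.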

How many resources did you give to Docker? Here is my config:

  • CPUs: 3
  • Memory: 6 GBs
  • Swap: 1 GBs

Can you please check those parameters?

Regards,

Alberto

Coduz avatar Mar 29 '18 08:03 Coduz

@Coduz My resources were:

  • CPUs: 8
  • Total Memory: 15.57 GBs
  • Swap: 2 GBs

I tried again today and it became slower.

  • POST http://172.16.0.151:8081/v1/authentication/apikey: 200 OK, 379ms, 1.7KB
  • GET http://172.16.0.151:8081/v1/AQ/devices?offset=0&limit=50: 200 OK, 62140ms, 18.91KB
  • GET http://172.16.0.151:8081/v1/AQ/devices?offset=0&limit=50 without Authorization: 401 Unauthorized, 25ms, 177B
  • GET http://172.16.0.151:8081/v1/AQ/devices?offset=0&limit=50 with a shorter, wrong Authorization: 401 Unauthorized, 56443ms, 177B

Beritra avatar Apr 02 '18 03:04 Beritra

@Coduz I am in China; due to the Great Firewall, many network resources are very slow to access, or can't be accessed at all. So I tried again on a U.S. server, and it responded very quickly.

I'd like to know whether something in the background loads network resources, which could explain the very slow speed.

Beritra avatar Apr 02 '18 09:04 Beritra

If you are using Docker deployment, there is only the processing made by the Jetty Server to handle the incoming request and the processing of our application which uses the SQL database running in the kapua-sql container.

Coduz avatar Apr 03 '18 07:04 Coduz

Closing for inactivity. Feel free to reopen the issue if you need more help.

Regards,

Alberto

Coduz avatar Apr 09 '18 12:04 Coduz

I've found the same issue. Running some performance analysis, it turned out that the cause of the delay was a token search in the ATHT_ACCESS_TOKEN table of the H2 database. It affected all requests except the login ones. Once I purged the invalid tokens from the table, it started working properly.

At the beginning the table had 50,000 rows and each request took around 1500 ms. Once I removed most of the rows, requests started taking around 140 ms. Login requests remained the exception, lasting around 500 ms both before and after the purge.

This was discussed on Gitter while the tests were being run.

To improve the performance of this check, I've thought of adding the conditions invalidated_on is not null and expiration_date < now to the WHERE clause and creating indexes on those fields, so that most of the tokens are filtered out of the SQL query before the full-string search on access_token starts.
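A minimal sqlite3 sketch of that pre-filtering idea. The table and column names mirror the ones mentioned above, but the schema is simplified and hypothetical, and the predicates here are written from the lookup side: they keep only the still-valid tokens, discarding invalidated or expired rows before the long-string comparison runs.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE atht_access_token (
        id INTEGER PRIMARY KEY,
        access_token TEXT,
        invalidated_on TEXT,      -- NULL while the token is still valid
        expiration_date REAL      -- epoch seconds, simplified
    )""")
now = time.time()
# Mostly dead tokens, as in the reported 50,000-row table.
rows = [(i, f"jwt-{i}", "2020-01-01", now - 60) for i in range(9_999)]
rows.append((9_999, "jwt-live", None, now + 3600))
conn.executemany("INSERT INTO atht_access_token VALUES (?,?,?,?)", rows)
# Indexes on the cheap validity columns, as proposed above.
conn.execute("CREATE INDEX idx_invalidated ON atht_access_token(invalidated_on)")
conn.execute("CREATE INDEX idx_expiration ON atht_access_token(expiration_date)")

# The indexed predicates run first, so the equality check on the long
# access_token string only touches the handful of still-valid rows.
cur = conn.execute("""
    SELECT id FROM atht_access_token
    WHERE invalidated_on IS NULL
      AND expiration_date > ?
      AND access_token = ?""", (now, "jwt-live"))
row = cur.fetchone()
print(row)  # -> (9999,)
```

With the two indexes in place, the planner can narrow 10,000 rows down to one candidate before ever comparing full token strings.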

pintify avatar Jan 29 '20 16:01 pintify

When performing a REST API call we always do a login under the hood, passing the JWT from the Authorization: Bearer HTTP header to the AuthenticationService. This is needed because REST APIs are stateless and we have to recreate the session every time.

By only providing the JWT, we have no choice but to do a full text search on the token_id column to find the correct Access Token entry in the DB, then check its expiration time against the current time to see whether the session has already expired. As the table grows, that search takes more and more time because the string is quite long. Creating an index on that column would be sub-optimal given the characteristics of the JWT string (quite long, with a first part that is the same for all Access Tokens).
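The shared-prefix problem is easy to see with a couple of stdlib-built JWTs. These are unsigned and purely illustrative (the signature segment is a dummy placeholder):

```python
import base64
import json

def b64url(data: bytes) -> str:
    # Base64url without padding, as used by JWT segments.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def fake_jwt(claims: dict) -> str:
    # The header is identical for every token issued with the same algorithm.
    header = b64url(json.dumps({"alg": "RS256"}).encode())
    payload = b64url(json.dumps(claims).encode())
    return f"{header}.{payload}.dummy-signature"  # no real signing in this sketch

t1 = fake_jwt({"sub": "AQ", "iat": 1521772259})
t2 = fake_jwt({"sub": "BR", "iat": 1521772300})

# Both tokens share the same leading segment, so an index over the whole
# string spends its discriminating power on a long common prefix.
assert t1.split(".")[0] == t2.split(".")[0]
print(t1.split(".")[0])
```

Every token issued by the same server starts with the same encoded header, which is exactly why a plain index on the full string helps less than it would for well-distributed keys.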

In order to improve performances we could take several paths:

  • We could bake the id value of the Access Token into the JWT payload. This way, at login, we just decode the JWT and fetch the correct entry by looking it up by ID.
  • We could add another field containing a substring of the JWT, to narrow down the entries before doing the actual full text search. Hopefully only a few rows would match the substring, and the full text search on the remaining ones should be quick.
  • The same approach could be pursued by filtering first for non-expired, non-invalidated tokens and then for the token we want. This would probably be a coarser filter than the substring one, but we would avoid the new field.
  • We could add a housekeeping task that runs every X minutes and purges the expired or invalidated tokens (we should check that the refresh token is expired as well before purging the entry).
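The second option, an extra indexed column holding a substring of the JWT, could be sketched as below. The token_tail column name and the 16-character suffix length are assumptions for the example:

```python
import sqlite3

TAIL = 16  # length of the indexed suffix; an assumption for this sketch

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE atht_access_token (
        id INTEGER PRIMARY KEY,
        access_token TEXT,
        token_tail TEXT          -- hypothetical extra column: last chars of the JWT
    )""")
conn.execute("CREATE INDEX idx_tail ON atht_access_token(token_tail)")

def insert(conn, pk, jwt):
    conn.execute("INSERT INTO atht_access_token VALUES (?,?,?)",
                 (pk, jwt, jwt[-TAIL:]))

def find(conn, jwt):
    # Indexed lookup on the short suffix first, then the exact comparison
    # on the long full token for the few candidates that remain.
    cur = conn.execute(
        "SELECT id FROM atht_access_token WHERE token_tail = ? AND access_token = ?",
        (jwt[-TAIL:], jwt))
    return cur.fetchone()

for i in range(1000):
    insert(conn, i, f"eyJhbGciOiJSUzI1NiJ9.payload-{i}.signature-{i:08d}")
token = "eyJhbGciOiJSUzI1NiJ9.payload-42.signature-00000042"
row = find(conn, token)
print(row)  # -> (42,)
```

Using the tail rather than the head matters here: the signature end of a JWT varies per token, whereas the head is the shared header discussed above.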

Honestly I think the first path would be the better approach. Comments and feedback are welcome!

lorthirk avatar Jan 29 '20 17:01 lorthirk

I was thinking again about this, since I went to implement the first option mentioned in the comment above.

Unfortunately, my reasoning fell short immediately: when we create the JWT we don't have an ID yet, of course, since the row hasn't been written, so we cannot put it in the JWT. My proposal is to add another field, solely for indexing purposes, containing a random UUID generated when the JWT is created.
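A sketch of that proposal, assuming the UUID travels inside the JWT payload (the registered jti claim from RFC 7519 is a natural place for it) and is duplicated into an indexed column. The token_identifier column name is made up, and the JWT here is unsigned and illustrative:

```python
import base64
import json
import sqlite3
import uuid

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE atht_access_token (
        id INTEGER PRIMARY KEY,
        token_identifier TEXT,   -- hypothetical indexed lookup column
        access_token TEXT
    )""")
conn.execute("CREATE UNIQUE INDEX idx_token_id ON atht_access_token(token_identifier)")

def issue_token(conn, pk, subject):
    # The UUID is generated while building the JWT, before the row exists,
    # so it can live both inside the payload and in the indexed column.
    token_id = str(uuid.uuid4())
    payload = {"sub": subject, "jti": token_id}  # claim name is an assumption
    jwt = f'{b64url(b"header")}.{b64url(json.dumps(payload).encode())}.sig'
    conn.execute("INSERT INTO atht_access_token VALUES (?,?,?)", (pk, token_id, jwt))
    return jwt

def lookup(conn, jwt):
    # Decode the payload (padding restored), pull out the UUID, then do an
    # indexed equality lookup instead of scanning the long token strings.
    seg = jwt.split(".")[1]
    payload = json.loads(base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4)))
    cur = conn.execute("SELECT id FROM atht_access_token WHERE token_identifier = ?",
                       (payload["jti"],))
    return cur.fetchone()

jwt = issue_token(conn, 7, "AQ")
result = lookup(conn, jwt)
print(result)  # -> (7,)
```

Because the UUID is random and short, the unique index gives a direct hit without any long-string comparison at all.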

lorthirk avatar Feb 21 '20 13:02 lorthirk

Hi, we're experiencing repeated issues on this matter as use of the platform increases. Is there any progress?

pintify avatar May 28 '20 09:05 pintify

Hi all,

we are just about to release Kapua 1.2.4, which contains a fix for this issue. Once you've upgraded, could you please check whether the issue is solved?

Thanks,

Alberto

Coduz avatar Jul 15 '20 07:07 Coduz

I'll let you know as soon as we update the version, although we'll have to wait some time to verify that performance doesn't degrade over time.

pintify avatar Jul 15 '20 07:07 pintify

No problem, we will keep this issue open for that time!

Coduz avatar Jul 15 '20 08:07 Coduz

The new version has been running for one month and the performance of the REST API is still adequate. The issue seems to be solved. Thanks for the dedication!

pintify avatar Aug 24 '20 07:08 pintify