
Large results crash the browser

Esya opened this issue 7 years ago · 6 comments

For our use case, we often have to run large queries and export the results to CSV. We would love to do that through pgweb, but currently, when running a query that returns 10k+ rows, the browser slows down heavily, and with 100k+ rows the tab just crashes.

This seems to be because there is no pagination on the results, and pgweb just adds all 100k rows to the DOM (or at least tries to), when in our case we mostly just want to export them.

Do you have any solution for this? Thanks

Edit: One workaround for now might be to simply truncate the displayed results to the first 1,000 rows.

Esya avatar Mar 23 '17 12:03 Esya

You can try hitting the "Export" button before running the actual query.

sosedoff avatar Mar 23 '17 16:03 sosedoff

@sosedoff I did not even think of that; I assumed we had to execute the query first.

My Go skills are limited, so the solution I implemented just truncates the result array before sending it back to the browser. Maybe some form of pagination, or truncating while telling the user that the results were truncated, would be a better option?
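Here is roughly what I did. It's only a sketch; the struct and field names (`Result`, `Rows`, `Truncated`) are made up for illustration and don't match pgweb's actual code:

```go
// Rough sketch of the workaround: cap the rows sent to the browser.
// All names here are illustrative; pgweb's real result struct may differ.
package main

import "fmt"

const maxDisplayRows = 1000 // arbitrary cap; tune as needed

type Result struct {
	Columns   []string        `json:"columns"`
	Rows      [][]interface{} `json:"rows"`
	Truncated bool            `json:"truncated"` // lets the UI tell the user rows were dropped
}

// truncateRows keeps at most maxDisplayRows rows before the result is
// serialized to JSON and returned to the frontend.
func truncateRows(res *Result) {
	if len(res.Rows) > maxDisplayRows {
		res.Rows = res.Rows[:maxDisplayRows]
		res.Truncated = true
	}
}

func main() {
	res := &Result{Rows: make([][]interface{}, 100000)}
	truncateRows(res)
	fmt.Println(len(res.Rows), res.Truncated) // 1000 true
}
```

The `Truncated` flag is what would let the frontend show a "results were truncated" notice instead of silently dropping rows.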

Esya avatar Mar 23 '17 16:03 Esya

Yeah, you can type the query and hit export; it will not load anything in the browser. Regarding the pagination you mentioned, it's true, there's a need for something like that, but I don't have any solutions since I use psql for big data dumps.


sosedoff avatar Mar 23 '17 17:03 sosedoff

Most of the results I've seen are paginated by default. Does your query have its own LIMIT or something on it?

That number of results is going to produce a massive result set. Unless we move pagination into a front-end piece in certain contexts, I don't know of any other way to solve this. The problem with doing pagination in the front-end, though, is that you then have both actual DB pagination and front-end-level pagination, which is difficult to handle well from a user-interface perspective.

I think we could improve this by returning the count of rows, and on any call that would display more than, say, 15k rows (we can play around to find a number most systems can handle), show an alert telling the user that rendering can be slow and that, if their goal is to export, they can export directly without rendering.
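Something along these lines on the backend, purely to illustrate the idea; the names and the 15k threshold are hypothetical, not anything pgweb actually has:

```go
// Illustrative sketch of the threshold idea; names and threshold are invented,
// not pgweb's actual API.
package main

import "fmt"

const largeResultThreshold = 15000 // arbitrary; pick what most browsers handle

type QueryResponse struct {
	RowCount int             `json:"row_count"`
	Warning  string          `json:"warning,omitempty"`
	Rows     [][]interface{} `json:"rows"`
}

// annotateLargeResult adds a warning the frontend can surface before it
// attempts to render a huge table.
func annotateLargeResult(resp *QueryResponse) {
	resp.RowCount = len(resp.Rows)
	if resp.RowCount > largeResultThreshold {
		resp.Warning = fmt.Sprintf(
			"Query returned %d rows; rendering may be slow. If you just need the data, export it directly instead.",
			resp.RowCount)
	}
}

func main() {
	resp := &QueryResponse{Rows: make([][]interface{}, 20000)}
	annotateLargeResult(resp)
	fmt.Println(resp.Warning)
}
```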

Garbee avatar Mar 23 '17 19:03 Garbee

Quick update on this issue:

  • I've been working on getting https://clusterize.js.org/ integrated into pgweb. It offers a much better experience with large datasets.
  • Limiting the result dataset will likely result in faster page loads. We could add a limit to any custom query passed by a user, and maybe load more results into the table dynamically. Still experimenting; see the sketch below for the general idea.
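For the second point, the general idea would be to wrap the user's query in a subselect so a limit can be applied without parsing the statement. This is only a sketch, not the actual implementation; the helper name and default limit are made up:

```go
// Rough sketch of limiting an arbitrary user query by wrapping it in a
// subselect. Not pgweb's actual code, just the idea.
package main

import (
	"fmt"
	"strings"
)

const defaultLimit = 1000 // hypothetical default

// withLimit wraps a user-supplied SELECT in a subquery so a LIMIT can be
// applied. Non-SELECT statements are returned unchanged, since wrapping
// them would produce invalid SQL.
func withLimit(query string, limit int) string {
	trimmed := strings.TrimSuffix(strings.TrimSpace(query), ";")
	if !strings.HasPrefix(strings.ToUpper(trimmed), "SELECT") {
		return query
	}
	return fmt.Sprintf("SELECT * FROM (%s) AS pgweb_limited LIMIT %d", trimmed, limit)
}

func main() {
	fmt.Println(withLimit("SELECT * FROM events ORDER BY id;", defaultLimit))
	// SELECT * FROM (SELECT * FROM events ORDER BY id) AS pgweb_limited LIMIT 1000
}
```

Loading more results dynamically could then just re-run the wrapped query with a higher limit or an offset.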

sosedoff avatar May 11 '17 03:05 sosedoff

@sosedoff Did you get anywhere with Clusterize.js? I'd be keen to have a crack at implementing it.

tavva avatar Sep 24 '20 13:09 tavva

It has been a while and I lost all my work on clusterize.js, so no, I have not gotten anywhere with that. At this point I'm not even considering adding it to the project, simply because the whole frontend code needs to be refactored/rewritten, and that opens the door to all sorts of projects (React? etc.).

sosedoff avatar Dec 07 '22 03:12 sosedoff