504 Nginx error on page refresh, any ideas?
Hi, I'm running version 0.9 using Docker and every once in a while I get a 504 Nginx error when I reload TubeSync in the browser, and then I have to restart the Docker container. My theory is that there's something going on with the Docker engine, but I'm still learning how Docker works. What do you think it could be? Thanks for such a great tool, and I do understand it's in preview.
Hi, what's in your container logs? Anything causing errors like that should show up as a relatively verbose entry in the logs. You can check with:
docker logs tubesync
The TubeSync container bundles nginx, which reverse proxies to the Python application. A 504 error from nginx (assuming you're not running anything else in front of TubeSync) would imply that the nginx front end web server timed out talking to the TubeSync back end app.
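Roughly, the bundled reverse proxy setup looks like this (simplified, just to show where the timeout comes from; the real config has more in it):

server {
    listen 4848;
    location / {
        # Proxy all requests to the Python application on the loopback interface
        proxy_pass http://127.0.0.1:8080;
        # nginx waits this long for a response from the backend before
        # returning 504 Gateway Time-out (60 seconds is the nginx default)
        proxy_read_timeout 60s;
    }
}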
I'm getting the following error from the logs.
18:19:08 [error] 447#447: *24 upstream prematurely closed connection while reading response header from upstream, client: 172.17.0.1, server: _, request: "GET /tasks HTTP/1.1", upstream: "http://127.0.0.1:8080/tasks", host: "localhost:4848", referrer: "http://localhost:4848/sources"
I'm not doing anything fancy; I'm running TubeSync as-is.
Interesting. Have you added some very large channels or a large number of sources? Is the server you're running TubeSync on a bit slow? That error would imply that the sheer number of tasks being listed from the database is taking over 60 seconds to load, resulting in the timeout.
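If you want a quick idea of how many tasks are queued, something like this should work from the host (assuming the container is named tubesync as in the command above, the app lives under /app, and the task queue is the django-background-task Task model; adjust if your setup differs):

docker exec -it tubesync python3 /app/manage.py shell -c \
  "from background_task.models import Task; print(Task.objects.count())"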
Just wanted to pitch in and say I've encountered the same on the latest version.
Cheers. Is this just on /tasks or other URLs as well?
Basically, whenever the backend is working heavily, the web interface is unreachable for me on any page. I'm assuming it has to do with DB reads while the database is locked. During some issues I've had, I also assumed that the web interface accessing the DB interrupted other tasks, so I've stopped trying and just monitor via docker logs until it goes quiet, then go back to the interface.
Sounds like a reasonable explanation. The DB should only really lock for writes and reads should in theory be fine, but I'm leaning towards this becoming "use Postgres" territory if you get serious slowdowns. I'll make a task to paginate or otherwise summarise the tasks page though, as I didn't expect people to be able to queue up >10k tasks, and at the moment it lists them all on one page, which is a bit mad.
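A minimal sketch of how paginating that view could look with Django's built-in Paginator (the model, field and template names here are placeholders, just to show the shape of the fix):

from django.core.paginator import Paginator
from django.shortcuts import render
from background_task.models import Task  # placeholder import for whichever model the page lists

def tasks_view(request):
    # Stable ordering so page numbers stay consistent between requests
    tasks = Task.objects.order_by('run_at')
    # Show 100 tasks per page rather than all >10k on a single page
    paginator = Paginator(tasks, 100)
    page = paginator.get_page(request.GET.get('page'))
    return render(request, 'tasks.html', {'page': page})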
There are obvious speed improvements to be made, as the way signals are used to update everything in a recursive loop is pretty slow (it worked fine for my initial use case of 6 channels and 1000 videos...). Currently, saving a source with 1000 media items results in up to 1001 UPDATE queries on clicking the save button, so that could be drastically refactored, as it would indeed lock the SQLite database for a pretty long time.
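As a rough idea of what that refactor could look like (a sketch only, with hypothetical model and field names rather than the actual code), the per-item saves could be collected and flushed with bulk_update, turning ~1001 individual UPDATE statements into a handful of batched queries so the SQLite write lock is held much more briefly:

from django.db import transaction
from sync.models import Media  # model and field names below are hypothetical

def update_media_for_source(source):
    # Load every media item for the source once, instead of saving them one by one
    media_items = list(Media.objects.filter(source=source))
    for item in media_items:
        # Recalculate whatever the per-item signal handler currently writes row by row
        item.skip = not source.download_media  # hypothetical fields
    with transaction.atomic():
        # A few batched UPDATE statements instead of ~1001 individual ones
        Media.objects.bulk_update(media_items, ['skip'], batch_size=500)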
Most of this should be covered as refactoring work and other improvements are rolled out; most of it is already on the current 1.0 and 1.1 roadmaps.
I'll track the tasks page issue specifically in #80; the general speed improvements are spread over the various refactoring issues.