Performance on Raspberry Pi
Tandoor Version
1.3.1-raspi-raspi
Setup
Docker / Docker-Compose
Reverse Proxy
No reverse proxy
Other
No response
Bug description
Currently running Tandoor on a Raspberry Pi 3B+. As soon as we added 5-6 recipes we noticed how slow the app was.
I did a bit of digging and adding Gunicorn workers seems to help a little. I added 4 workers and 4 threads to the boot file in the container and it handles multiple requests far better; the slowness seems to be caused mainly by the multiple API requests on each page, which the single-threaded Gunicorn blocks on. This knocks at least 1 second off a typical 7-9 second page load. Not a lot, but it definitely makes page loads feel snappier, as one specific API call accounts for most of the remaining load time.
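For context, Gunicorn's effective concurrency is roughly workers × threads. A minimal sketch of the same tuning expressed as a gunicorn.conf.py (for illustration only; the container's boot file invokes gunicorn directly, and the bind address and WSGI module name below are assumptions):

```python
# gunicorn.conf.py -- sketch only; the actual boot file passes these on the
# gunicorn command line. Bind address and WSGI module name are assumptions.
bind = "0.0.0.0:8080"
workers = 4    # separate processes, each able to serve a request in parallel
threads = 4    # threads per worker; helps when requests spend time waiting on the database
wsgi_app = "recipes.wsgi"  # hypothetical module path
```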
Looking at the API call that takes the longest, it is the '/api/food' endpoint.
I added django-silk to the project, and this API call hits the database over 200 times with only 55 food items, spending 2 seconds in the database alone; most requests take around 8-9 seconds in total.
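For anyone who wants to reproduce the profiling, this is roughly how django-silk is wired in (a sketch of the standard setup, not the exact changes I made):

```python
# settings.py -- standard django-silk setup (sketch)
INSTALLED_APPS += ["silk"]
MIDDLEWARE += ["silk.middleware.SilkyMiddleware"]

# urls.py -- exposes the profiling UI at /silk/
from django.urls import include, path
urlpatterns += [path("silk/", include("silk.urls", namespace="silk"))]
```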
At a quick glance, it seems File "/opt/recipes/cookbook/serializer.py", line 526, in get_substitute_onhand might be the culprit.
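To illustrate the pattern I think is happening (the model and field names below are guesses, not Tandoor's actual code): a SerializerMethodField that touches the database per object turns a 55-item food list into hundreds of queries, and the usual fix is to prefetch the related rows once.

```python
from rest_framework import serializers

# Hypothetical sketch of the N+1 shape; field names (substitute, onhand_users)
# are illustrative, not Tandoor's real schema.
class FoodSerializer(serializers.Serializer):
    substitute_onhand = serializers.SerializerMethodField()

    def get_substitute_onhand(self, food):
        # Runs one or more queries for EVERY food in the list response,
        # which is how 55 foods can produce 200+ database hits.
        return food.substitute.filter(onhand_users__isnull=False).exists()

# Usual remedy: prefetch once in the viewset queryset, e.g.
#   Food.objects.prefetch_related("substitute", "substitute__onhand_users")
# and answer get_substitute_onhand from the prefetched objects in memory.
```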
I've not spent too long on this; I might do a bit more digging this week and see if I can help improve it.
One last thing I haven't looked into: the TTFB on the initial request also seems to be over a second, generally closer to two seconds.
I understand the project has generally prioritized features over performance. I'm pretty new to Tandoor and to Python sites, coming more from PHP. Hopefully I can help find something, unless anyone can shed more light on it.
Thanks.
Relevant logs
No response
Thanks for the report. We will need to do a performance update, but I have not yet been able to work on it. I will leave this issue open as one of the places to start when working on performance.
Just thinking about the Gunicorn workers/threads issue: we could add two environment variables to the .env, possibly NO_OF_WORKERS and another for threads, each defaulting to 1.
This way, when firing up a Docker instance, they can be passed to the gunicorn command in boot.sh for a reasonable performance boost; see the sketch below.
How does that sound?
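A rough sketch of what reading those variables could look like, using a Python gunicorn config for illustration (boot.sh could equally read them as shell variables; NO_OF_THREADS is just a placeholder name, only NO_OF_WORKERS was proposed above):

```python
# gunicorn.conf.py -- sketch of env-driven tuning; defaults of 1 keep today's
# single-worker, single-thread behaviour when the variables are not set.
import os

workers = int(os.getenv("NO_OF_WORKERS", "1"))
threads = int(os.getenv("NO_OF_THREADS", "1"))  # placeholder variable name
```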
Awesome, sometimes you just get blind when maintaining something. I have added an .env variable for the next release!