Hosting on Heroku
PythonAnywhere had very bad performance in the region I'm living in (South Asia), so I had to deploy this on Heroku for better performance. API request times on PythonAnywhere ranged from 2-4 seconds, while Heroku gives me sub-500ms response times. The problem is that Heroku doesn't support a persistent filesystem on the free tier, so I had to patch the code to work with Postgres, which is available on the Heroku free tier.
My fork with the Postgres settings is here: https://github.com/zubair-temi/hit-counter
Should I open a pull request to this repo to merge these changes? Maybe you can keep it in a separate branch. What do you think?
I'd also update the README with Heroku + Postgres deployment instructions if you're willing to merge the changes.
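For context, the change is roughly along these lines (a minimal sketch with illustrative table and function names, not the exact code in my fork): read Heroku's DATABASE_URL and write the counts to Postgres instead of a local SQLite file.

```python
# Minimal sketch (illustrative names): count hits in Postgres via Heroku's
# DATABASE_URL instead of a file on the ephemeral free-tier filesystem.
import os

import psycopg2

def get_connection():
    # Heroku's Postgres add-on exposes the connection string as DATABASE_URL.
    return psycopg2.connect(os.environ["DATABASE_URL"])

def increment(url):
    """Bump the hit count for a URL, creating the row if it doesn't exist."""
    with get_connection() as conn, conn.cursor() as cur:
        cur.execute(
            "INSERT INTO counts (url, count) VALUES (%s, 1) "
            "ON CONFLICT (url) DO UPDATE SET count = counts.count + 1",
            (url,),
        )
```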
@zubair-temi the problem is that Python by itself is very slow, try uwsgi + nginx: https://github.com/brentvollebregt/hit-counter/pull/13
@fluential Well, then I'd better not use this at all. I'm getting reasonable API response times (on average sub-500ms) on Heroku, which works for me for now. If I needed more performance, I'd port it to Go.
```
stats: {
  totalElapsed: 3918.5201469659805,
  main: {
    meter: {
      mean: 19.664531856285347,
      count: 77,
      currentRate: 29.67619221666306,
      '1MinuteRate': 0,
      '5MinuteRate': 0,
      '15MinuteRate': 0
    },
    histogram: {
      min: 288.20108902454376,
      max: 1496.9496248960495,
      sum: 37145.473836660385,
      variance: 149351.9639121301,
      mean: 482.40875112545956,
      stddev: 386.4608180813808,
      count: 77,
      median: 346.6487469673157,
      p75: 373.0026630163193,
      p95: 1488.0008405685426,
      p99: 1496.9496248960495,
      p999: 1496.9496248960495
    }
  }
}
```
@zubair-temi Most likely you are right. In terms of Python, for even better performance consider using ASGI frameworks instead of WSGI. Out of the box, try running this image: https://github.com/tiangolo/meinheld-gunicorn-flask-docker; this is about as fast as WSGI Flask can go.
@zubair-temi If you are crazy about performance you should go for Rust instead :)
@fluential "Go for rust". Pun intended? 😄
In terms of the speed, I know PythonAnywhere is slow, but for my original use case I didn't mind it. Python is another factor in the speed, but I don't think it's anything comparable to the delay from PythonAnywhere.
As I mentioned in #13, maybe I could use peewee to fix the database issues we are facing, which would then allow us to use MySQL, PostgreSQL, CockroachDB and the default SQLite.
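Roughly what I have in mind (just a sketch with illustrative model and field names, not final code): peewee's db_url helper picks the backend from a connection string, so the same model could run against SQLite locally and Postgres or MySQL elsewhere.

```python
# Rough sketch of a peewee-backed counter model (illustrative names only).
import os

from peewee import Model, CharField, IntegerField
from playhouse.db_url import connect

# e.g. "sqlite:///hits.db" locally, or Heroku's DATABASE_URL for Postgres.
database = connect(os.environ.get("DATABASE_URL", "sqlite:///hits.db"))

class Hit(Model):
    url = CharField(unique=True)
    count = IntegerField(default=0)

    class Meta:
        database = database

def increment(url):
    # Create the row if needed, then bump its count atomically.
    with database.atomic():
        Hit.get_or_create(url=url)
        Hit.update(count=Hit.count + 1).where(Hit.url == url).execute()
```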
@zubair-temi you could try running under https://github.com/tiangolo/meinheld-gunicorn-flask-docker and give us some numbers? ;)
@brentvollebregt The default Python development server and a proper WSGI server aren't even comparable. So if you need anything reliable in Python, please stop running it via the default development server :) More reading: https://flask.palletsprojects.com/en/1.0.x/deploying/uwsgi/
PS: In the original Docker image I used for testing I did use the native Python server, but in this MR you will see that the uWSGI image is now used correctly to expose the app via nginx + uWSGI.
@zubair-temi @brentvollebregt This https://github.com/brentvollebregt/hit-counter/blob/2897edb100e094641d66b60acdd2d151c53e45f2/server.py#L52 is another problem: as I mentioned in #13, you would want to flush data periodically instead of inserting every time a request hits the server. The simplest thing you could implement is flushing every X requests.
I suggest you run some benchmarks with the current code using:
- native Python (what you're running currently)
- nginx + uwsgi (use #13)
- meinheld (use https://github.com/tiangolo/meinheld-gunicorn-flask-docker)
Then make the change to flush every 100000 requests and compare; you might get blown away.
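If it helps, even a crude script like this (illustrative only; adjust the URL and request count to whatever deployment you're testing) is enough to compare the setups side by side:

```python
# Crude latency benchmark (illustrative): fire N requests at an endpoint
# and print basic latency statistics in milliseconds.
import statistics
import time

import requests

URL = "http://localhost:8080/count?url=example.com"  # adjust to the deployment under test
N = 500

latencies = []
for _ in range(N):
    start = time.perf_counter()
    requests.get(URL, timeout=10)
    latencies.append((time.perf_counter() - start) * 1000)

latencies.sort()
print(f"mean   {statistics.mean(latencies):.1f} ms")
print(f"median {statistics.median(latencies):.1f} ms")
print(f"p95    {latencies[int(len(latencies) * 0.95)]:.1f} ms")
print(f"max    {max(latencies):.1f} ms")
```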
> https://github.com/brentvollebregt/hit-counter/blob/2897edb100e094641d66b60acdd2d151c53e45f2/server.py#L52 is another problem: as I mentioned in #13, you would want to flush data periodically instead of inserting every time a request hits the server. The simplest thing you could implement is flushing every X requests.
Yep, I understand this is an issue also. A simple option would be to store deltas in an internal dictionary and flush every x requests as you said.
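Roughly what I'm picturing (just a sketch, untested; flush_to_db here stands in for whatever the real database write ends up being). The obvious trade-off is that anything still buffered is lost if the process dies.

```python
# Sketch of batching hits in memory and flushing them periodically
# (illustrative only; flush_to_db stands in for the real database write).
import threading
from collections import defaultdict

FLUSH_EVERY = 100  # number of hits to buffer before writing to the database

_pending = defaultdict(int)   # url -> hits accumulated since the last flush
_pending_total = 0
_lock = threading.Lock()

def flush_to_db(deltas):
    # Placeholder: apply each delta to the persistent counter storage.
    for url, delta in deltas.items():
        print(f"would add {delta} to {url}")

def record_hit(url):
    global _pending_total
    deltas = None
    with _lock:
        _pending[url] += 1
        _pending_total += 1
        if _pending_total >= FLUSH_EVERY:
            deltas = dict(_pending)
            _pending.clear()
            _pending_total = 0
    if deltas:
        # Write outside the lock so requests aren't blocked on the database.
        flush_to_db(deltas)
```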
> @brentvollebregt The default Python development server and a proper WSGI server aren't even comparable.
I was trying to say that the speed decrease from Python is nothing compared to what I am currently losing on PythonAnywhere; I can see:
> API request times on PythonAnywhere ranged from 2-4 seconds