Any logs?
You can put the egg files here before deploying the Scrapyd app on Heroku: https://github.com/my8100/scrapyd-cluster-on-heroku-scrapyd-app/tree/master/eggs Scrapyd config: https://github.com/my8100/scrapyd-cluster-on-heroku-scrapyd-app/blob/b117252bd8bcabd73b01967ea7da46036e9ce3d1/scrapyd.conf#L3
What’s the value of DATABASE_URL? Did you install scrapydweb via pip or from git?
It’s better to configure the timezone of your server instead, then restart ScrapydWeb.
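As an illustrative sketch of that suggestion (assuming a systemd-based Linux host; the zone name below is just an example, not from this thread):

```shell
# Assumption: a systemd-based Linux host. Replace Asia/Shanghai with your zone.
sudo timedatectl set-timezone Asia/Shanghai
# Verify the new setting:
timedatectl | grep "Time zone"
# Then restart ScrapydWeb so it picks up the new timezone.
```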
Related issue in the tzlocal repo: https://github.com/regebro/tzlocal/issues/73 What are the OS and timezone of your host?
Thanks for your suggestion! Until this feature is supported, you can edit the SQLite db file directly, or set the DATABASE_URL option to use another backend such as MySQL. https://github.com/my8100/scrapydweb/blob/fbb0b42f94a0d52308ba27e97d1d04f1662c52aa/scrapydweb/default_settings.py#L348-L358
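For example, a minimal settings sketch switching to MySQL (the host and credentials below are placeholders, not values from this thread):

```python
# In your ScrapydWeb settings file.
# Placeholder credentials; the URL scheme follows the examples shown in
# scrapydweb/default_settings.py ('mysql://...', 'postgres://...', 'sqlite:///...').
DATABASE_URL = 'mysql://username:password@127.0.0.1:3306'
```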
They are saved in `scrapydweb/data/stats`. You can also set `BACKUP_STATS_JSON_FILE` to False: https://github.com/my8100/scrapydweb/blob/fbb0b42f94a0d52308ba27e97d1d04f1662c52aa/scrapydweb/default_settings.py#L107-L112
No stats files will be saved when `BACKUP_STATS_JSON_FILE = False`.
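A one-line sketch of that setting in the ScrapydWeb settings file:

```python
# Disable backing up Scrapyd job stats as JSON files under scrapydweb/data/stats.
BACKUP_STATS_JSON_FILE = False
```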
1. Click the history button on the timer tasks page, then post the related log.
2. Run scrapydweb without gunicorn & nginx and try again.
Could you debug with the following steps first?
1. Run scrapydweb without gunicorn & nginx and try again.
2. Run scrapydweb with gunicorn and try again.
3. Run scrapydweb with nginx and...