nitter Docker container doesn't respond to user profile requests after running for a while
I'm hitting an issue similar to #518.
After running for a while, the nitter Docker container stops responding to user profile requests (/<username>), while other routes such as /<username>/status/<status_id>, /, and /search still respond.
Requests to /<username> always time out.
nitter version: 2023.01.09-d38b63f
redis version: 6.2.8
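The symptom can be checked from the host with curl against the compose port mapping (13500, matching the compose file below; /jack is just an arbitrary profile page):

```shell
# Profile pages hang until the timeout fires...
curl -m 10 -o /dev/null -sw '%{http_code}\n' http://127.0.0.1:13500/jack
# ...while other routes still answer:
curl -m 10 -o /dev/null -sw '%{http_code}\n' 'http://127.0.0.1:13500/search?q=test'
```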
docker-compose.yml
```yaml
version: "3"

services:
  redis:
    image: redis:6-alpine
    restart: unless-stopped
    volumes:
      - redis-data:/var/lib/redis

  nitter:
    image: zedeus/nitter:latest
    restart: unless-stopped
    depends_on:
      - redis
    ports:
      - "13500:8080"
    volumes:
      - ./nitter.conf:/src/nitter.conf

volumes:
  redis-data:
```
nitter.conf
```ini
[Server]
address = "0.0.0.0"
port = 8080
https = true  # disable to enable cookies when not using https
staticDir = "./public"
title = "nitter"
hostname = "nitter.bgme.bid"

[Cache]
listMinutes = 3000  # how long to cache list info (not the tweets, so keep it high)
rssMinutes = 10     # how long to cache rss queries
redisHost = "redis"
redisPort = 6379
redisConnections = 20  # connection pool size
redisMaxConnections = 30
# max, new connections are opened when none are available, but if the pool size
# goes above this, they're closed when released. don't worry about this unless
# you receive tons of requests per second

[Config]
hmacKey = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # Regenerate e.g. `head -c 25 </dev/urandom | sha256sum`
base64Media = true  # use base64 encoding for proxied media urls
tokenCount = 10
# minimum amount of usable tokens. tokens are used to authorize API requests,
# but they expire after ~1 hour, and have a limit of 187 requests.
# the limit gets reset every 15 minutes, and the pool is filled up so there's
# always at least $tokenCount usable tokens. again, only increase this if
# you receive major bursts all the time

# Change default preferences here, see src/prefs_impl.nim for a complete list
[Preferences]
theme = "twitter_dark"
replaceTwitter = "nitter.bgme.bid"  # Use the same "hostname" as above
replaceYouTube = ""
replaceInstagram = ""
proxyVideos = true
hlsPlayback = true
infiniteScroll = true
```
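As a side note, the `hmacKey` comment in the config suggests regenerating the key; a minimal sketch of doing that in one line (the key only needs to be unguessable, and `sha256sum` conveniently yields 64 hex characters):

```shell
# Generate a random hmacKey as the config comment suggests,
# keeping only the hex digest (sha256sum also prints a filename column).
key=$(head -c 25 /dev/urandom | sha256sum | cut -d ' ' -f1)
echo "hmacKey = \"$key\""
```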
docker compose logs
```
redis_1   | 1:C 10 Jan 2023 20:35:58.243 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
nitter_1  | Starting Nitter at https://nitter.bgme.bid
redis_1   | 1:C 10 Jan 2023 20:35:58.243 # Redis version=6.2.8, bits=64, commit=00000000, modified=0, pid=1, just started
nitter_1  | Connected to Redis at redis:6379
nitter_1  | Starting Nitter at https://nitter.bgme.bid
nitter_1  | Connected to Redis at redis:6379
nitter_1  | InternalError: https://api.twitter.com/2/timeline/profile/1447804806974689280.json?include_profile_interstitial_type=0&include_blocking=0&include_blocked_by=0&include_followed_by=0&include_want_retweets=0&include_mute_edge=0&include_can_dm=0&include_can_media_tag=1&skip_status=1&cards_platform=Web-12&include_cards=1&include_composer_source=false&include_reply_count=1&tweet_mode=extended&include_entities=true&include_user_entities=true&include_ext_media_color=false&send_error_codes=true&simple_quoted_tweet=true&include_quote_count=true&userId=1447804806974689280&include_tweet_replies=false&ext=mediaStats&include_ext_alt_text=true&include_ext_media_availability=true&count=20&cursor=HBaCwNHd%2FfidqSwAAA%3D%3D%3Fcursor%3DHBaEgNS9puLg1isAAA%3D%3D
nitter_1  | InternalError: https://api.twitter.com/2/timeline/profile/1551539565663440896.json?include_profile_interstitial_type=0&include_blocking=0&include_blocked_by=0&include_followed_by=0&include_want_retweets=0&include_mute_edge=0&include_can_dm=0&include_can_media_tag=1&skip_status=1&cards_platform=Web-12&include_cards=1&include_composer_source=false&include_reply_count=1&tweet_mode=extended&include_entities=true&include_user_entities=true&include_ext_media_color=false&send_error_codes=true&simple_quoted_tweet=true&include_quote_count=true&userId=1551539565663440896&include_tweet_replies=false&ext=mediaStats&include_ext_alt_text=true&include_ext_media_availability=true&count=20&cursor=HBaCwNL9mOOQ0iwAAA%3D%3D%3Fcursor%3DHBaEwNG1lbX9zCwAAA%3D%3D
nitter_1  | InternalError: https://api.twitter.com/2/timeline/profile/1551539565663440896.json?include_profile_interstitial_type=0&include_blocking=0&include_blocked_by=0&include_followed_by=0&include_want_retweets=0&include_mute_edge=0&include_can_dm=0&include_can_media_tag=1&skip_status=1&cards_platform=Web-12&include_cards=1&include_composer_source=false&include_reply_count=1&tweet_mode=extended&include_entities=true&include_user_entities=true&include_ext_media_color=false&send_error_codes=true&simple_quoted_tweet=true&include_quote_count=true&userId=1551539565663440896&include_tweet_replies=false&ext=mediaStats&include_ext_alt_text=true&include_ext_media_availability=true&count=20&cursor=HBaCwNL9mOOQ0iwAAA%3D%3D%3Fcursor%3DHBaEwNG1lbX9zCwAAA%3D%3D
redis_1   | 1:C 10 Jan 2023 20:35:58.243 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1   | 1:M 10 Jan 2023 20:35:58.244 * monotonic clock: POSIX clock_gettime
redis_1   | 1:M 10 Jan 2023 20:35:58.244 * Running mode=standalone, port=6379.
redis_1   | 1:M 10 Jan 2023 20:35:58.244 # Server initialized
redis_1   | 1:M 10 Jan 2023 20:35:58.244 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis_1   | 1:M 10 Jan 2023 20:35:58.245 * Ready to accept connections
```
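The Redis overcommit warning in the log is unrelated to the profile timeouts, but it can be silenced on the Docker host exactly as the message suggests (requires root on the host, not inside the container):

```shell
# Apply immediately on the Docker host:
sysctl vm.overcommit_memory=1
# Persist across reboots:
echo 'vm.overcommit_memory = 1' >> /etc/sysctl.conf
```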
I 'solved' this on my private instance by changing the healthcheck and running an autoheal container (https://github.com/willfarrell/docker-autoheal) to watch it:
```yaml
nitter:
  # image: zedeus/nitter:latest
  build: https://github.com/AlyoshaVasilieva/nitter.git#upd
  container_name: nitter
  ports:
    - "127.0.0.1:8487:8080"
  volumes:
    - ./nitter.conf:/src/nitter.conf:ro
  depends_on:
    - nitter-redis
  restart: unless-stopped
  healthcheck:
    test: curl -o /dev/null --connect-timeout 5 -m 15 http://127.0.0.1:8080/jack || exit 1
    interval: 1800s
    timeout: 5s
    retries: 2
  labels:
    autoheal: true
```
Notes:
- I switched from wget to curl because the wget healthcheck always seemed to report unhealthy.
- This uses my 'fork', which is Ubuntu-based and not automatically updated; I don't know whether this healthcheck works with the normal Docker image. (The switch to Ubuntu was an earlier, failed attempt to fix this same issue, in case it was caused by Alpine.)
- The interval is very long to avoid accidentally hitting some rate limit.
- It seems able to detect unhealthiness and restart the container, but no guarantees.
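For completeness, the autoheal side of this setup is just the stock container watching for the `autoheal` label (the label name below is docker-autoheal's documented default; it needs the Docker socket to restart containers):

```shell
docker run -d --name autoheal --restart=always \
  -e AUTOHEAL_CONTAINER_LABEL=autoheal \
  -v /var/run/docker.sock:/var/run/docker.sock \
  willfarrell/autoheal
```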
Experiencing the same issue on docker
Having the same issue
Update: my fix was issuing an SSL cert via https://letsencrypt.org and then enabling https in nitter.conf.
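For reference, one common way to obtain such a cert is certbot in standalone mode (a sketch only; the exact invocation depends on your reverse-proxy setup, and the domain is just the one from the config above):

```shell
# Obtain a Let's Encrypt certificate for the nitter hostname.
# Stop anything listening on port 80 first when using --standalone.
certbot certonly --standalone -d nitter.bgme.bid
```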
Too many different issues/non-issues are discussed here; please reopen if there is a recurring issue, with more information about how to reproduce it.