500 - Internal Server Error and all entries show "Movie Not Found"
Description
I've been running this in Docker on Ubuntu for more than a year, with Watchtower automatically updating containers. A few days ago I connected and found this issue occurring.
I've cleaned my /app/config and recreated the container; same issue.
Side note: I've recently installed Jellyseerr as well, with the same config, and it works fine.
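For reference, the recreate steps were roughly along these lines (a sketch; the --profile media flag and ${DOCKERDIR} path come from the compose file under Additional Context, and moving the old config aside instead of deleting it is just a precaution):

docker compose --profile media stop overseerr
docker compose --profile media rm -f overseerr
mv "${DOCKERDIR}/overseerr" "${DOCKERDIR}/overseerr.bak"   # keep the old config as a backup
docker compose --profile media up -d overseerr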
Version
1.34.0
Steps to Reproduce
- In the Discover screen, all movies show:
Movie Not Found
For example:
Movie Not Found (TMDB ID 746036)
- In the Movies or Series screens, it shows:
500 - Internal Server Error
Screenshots
Logs
1. The logs show errors connecting to TMDB (see the connectivity checks after this list):
2025-05-24T11:51:19.367Z [debug][API]: Something went wrong retrieving movie {"errorMessage":"[TMDB] Failed to fetch movie details: connect ENETUNREACH 2600:9000:2024:8200:c:174a:c400:93a1:443","movieId":"929590"}
2025-05-24T11:51:23.495Z [debug][API]: Something went wrong retrieving popular series {"errorMessage":"[TMDB] Failed to fetch discover TV: connect ENETUNREACH 2600:9000:2024:8200:c:174a:c400:93a1:443"}
2025-05-24T11:51:50.951Z [debug][API]: Something went wrong retrieving popular series {"errorMessage":"[TMDB] Failed to fetch discover TV: connect ENETUNREACH 2600:9000:2024:4c00:c:174a:c400:93a1:443"}
2025-05-24T11:51:55.559Z [debug][API]: Something went wrong retrieving regions {"errorMessage":"[TMDB] Failed to fetch countries: connect ENETUNREACH 2600:9000:2024:4c00:c:174a:c400:93a1:443"}
2025-05-24T11:51:55.560Z [debug][API]: Something went wrong retrieving languages {"errorMessage":"[TMDB] Failed to fetch langauges: connect ENETUNREACH 2600:9000:2024:4c00:c:174a:c400:93a1:443"}
2. More errors in the logs, this time from the Plex.TV Metadata API:
2025-05-24T11:51:11.112Z [error][Plex.TV Metadata API]: Failed to retrieve watchlist items {"errorMessage":"connect ENETUNREACH 2606:4700:4400::ac40:97cd:443"}
3. Occasionally the logs show success (local services such as Plex and Sonarr are still reachable):
2025-05-24T11:55:00.019Z [info][Plex Scan]: Scan starting {"sessionId":"cc7e9a6f-277d-47bf-96af-0db9009210c3"}
2025-05-24T11:55:00.034Z [debug][Download Tracker]: Found 4 item(s) in progress on Sonarr server: sonarr-hassvm
2025-05-24T11:55:00.043Z [info][Plex Scan]: Beginning to process recently added for library: Movies {"lastScan":1748087400071}
2025-05-24T11:55:00.055Z [info][Plex Scan]: Beginning to process recently added for library: TV Shows {"lastScan":1748087400086}
2025-05-24T11:55:00.075Z [info][Plex Scan]: Recently Added Scan Complete
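ENETUNREACH against IPv6 addresses suggests the container resolves AAAA records for TMDB and plex.tv but has no working IPv6 route out, while IPv4 traffic (Plex, Sonarr) still works. A couple of checks to confirm that (a sketch; it assumes the traefik_proxy network name from the compose file below, and curlimages/curl is just any throwaway image with curl in it):

# What IPv6 settings did Docker actually apply to the network?
docker network inspect traefik_proxy --format '{{.EnableIPv6}} {{json .IPAM.Config}}'

# Can a container on the same network reach TMDB over IPv6 vs. IPv4?
docker run --rm --network traefik_proxy curlimages/curl -6 -sv -o /dev/null https://api.themoviedb.org/3
docker run --rm --network traefik_proxy curlimages/curl -4 -sv -o /dev/null https://api.themoviedb.org/3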
Platform
desktop
Device
NUC
Operating System
Ubuntu 22.04
Browser
Chrome
Additional Context
docker-compose:
overseerr:
  image: sctx/overseerr:latest
  container_name: overseerr
  hostname: overseerr.${DOMAINNAME}
  <<: *common-env-restart
  volumes:
    - ${DOCKERDIR}/overseerr:/app/config:Z
    #- ${DATADIR}/media:/data/media:Z
  ports:
    - 5055:5055
  profiles:
    - media
  labels:
    - "traefik.enable=true"
    ## HTTP Routers
    - "traefik.http.routers.overseerr-rtr.entrypoints=websecure"
    - "traefik.http.routers.overseerr-rtr.rule=Host(`overseerr.${DOMAINNAME}`)"
    - "traefik.http.routers.overseerr-rtr.tls=true"
    ## Middlewares
    - "traefik.http.routers.overseerr-rtr.middlewares=chain-oauth@file"
    ## HTTP Services
    - "traefik.http.routers.overseerr-rtr.service=overseerr-svc"
    - "traefik.http.services.overseerr-svc.loadbalancer.server.port=5055"
  networks:
    traefik_proxy:
      ipv4_address: 172.19.0.22
  healthcheck:
    test: nc -z localhost 5055
    <<: *healthchecks

networks:
  traefik_proxy:
    driver: bridge
    driver_opts:
      com.docker.network.enable_ipv6: "true"
      com.docker.network.bridge.name: br-d-traefik
    ipam:
      driver: default
      config:
        - subnet: 172.19.0.0/16
          gateway: 172.19.0.1
        - subnet: "2001:3984:3989::/64"
          gateway: "2001:3984:3989::1"
Code of Conduct
- [x] I agree to follow Overseerr's Code of Conduct
OK, I think I found the problem; the ENETUNREACH 2606:4700:4400::ac40:97cd:443 errors were the hint.
I commented out the IPv6 definitions in docker-compose and recreated the stack:
networks:
  traefik_proxy:
    driver: bridge
    driver_opts:
      #com.docker.network.enable_ipv6: "true"
      com.docker.network.bridge.name: br-d-traefik
    ipam:
      driver: default
      config:
        - subnet: 172.19.0.0/16
          gateway: 172.19.0.1
        #- subnet: "2001:3984:3989::/64"
        #  gateway: "2001:3984:3989::1"
Now all is working well!
I still need to fix the dual-stack configuration properly, though.
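For anyone who wants to keep IPv6 instead of disabling it, a sketch of what a working dual-stack setup could look like (assumptions: a reasonably recent Docker Engine; the ULA prefix below is an example, not something from my network): enable IPv6 NAT in the daemon config and give the compose network a ULA subnet instead of the unrouted 2001:3984:3989::/64.

/etc/docker/daemon.json (on older Engine releases ip6tables was still experimental, hence the extra flag):
{
  "experimental": true,
  "ip6tables": true
}

Compose network, with an example ULA subnet:
    ipam:
      driver: default
      config:
        - subnet: 172.19.0.0/16
          gateway: 172.19.0.1
        - subnet: "fd19:3984:3989::/64"
          gateway: "fd19:3984:3989::1"

Changing IPAM means the network has to be recreated, so something like:
sudo systemctl restart docker
docker compose --profile media down
docker compose --profile media up -d

With ip6tables enabled, Docker masquerades outbound IPv6 from the ULA subnet the same way it masquerades 172.19.0.0/16, so the container can actually reach the TMDB and plex.tv IPv6 endpoints.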
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Are there any plans to fix it?
Same issue here; disabling IPv6 fixed it.
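If editing the network definition isn't an option, a lighter per-service workaround with the same effect for a Node app like Overseerr (a sketch; I haven't verified it against this image, and it assumes the entrypoint passes NODE_OPTIONS through to the node process) is to make Node prefer IPv4 addresses when resolving hostnames:

  overseerr:
    # ...rest of the service definition unchanged...
    environment:
      - NODE_OPTIONS=--dns-result-order=ipv4first

Note that if the *common-env-restart anchor already defines environment:, the entries need to be merged by hand, since a locally defined environment: key replaces the merged one.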