Bazarr failures
There seem to be some issues with the bazarr exporter. Here is a log file that has been reduced, but the full thing is about 10x this long. It seems that the API request it's attempting is way too big.
https://pastebin.com/
docker config:
bazarr-exporter:
  image: ghcr.io/onedr0p/exportarr:latest
  container_name: bazarr-exporter
  command: ["bazarr"]
  environment:
    PORT: 9719
    URL: "https://bazarr.REDACTED.cc"
    APIKEY: "REDACTED"
  ports:
    - "9719:9719"
  restart: unless-stopped
  networks:
    - saltbox
  labels:
    com.github.saltbox.saltbox_managed: true
prom config:
- job_name: "bazarr-exporter"
  static_configs:
    - targets: ["bazarr-exporter:9719"]
Pinging @phyzical, also the link you wanted to share for the logs is not correct.
Apologies, it seems the log I tried to post on pastebin was too long and I didn't realize haha
https://pastebin.com/PTTsfmrm
That paste also doesn't work.
This page is no longer available. It has either expired, been removed by its creator, or removed by one of the Pastebin staff.
Oh my gosh lol, apologies. Seems pastebin does not like me. Here's the full file directly: https://transfer.sh/ENm5Un9KOV/log.txt Please tell me this works hahah
Looking through this log file, it looks like all the seriesid entries are unique in each URL -- this looks like there's ~9k series registered in this instance of Bazarr. If that's the case, it's possible this particular call just won't work as-is -- we'll probably have to break the call up into smaller calls which we stitch back together (manually paginate).
I'm putting together a fix for this now, but I'll likely need someone to test it for me, as I don't currently have a bazarr deployment, and while I could spin one up, I won't have the volume of serieses needed to really test the fix.
I can test, let me know what I need to do
this looks like there's ~9k series registered in this instance of Bazarr.
That's quite the sonarr instance. I'm pretty sure there's going to be a point where we need to implement querying the database directly to gather stats. It would be much faster. Maybe that can happen when postgres has more adoption because doing that with sqlite is quite janky (sharing the volume with exportarr).
It would be better to keep it API-based, if only these apps implemented and exposed a stats endpoint in JSON for us to use.
@kaizensh - there is a test image in the PR (ghcr.io/rtrox/exportarr:episodes-concurrency), can you try this image out and see if it fixes your problem?
@onedr0p totally agree, though I'd like to stick with the API as long as we can, since we lose the interface guarantees as soon as we start accessing the DB directly.
2023-10-29T00:58:45.832Z INFO Starting exportarr {"app_name": "exportarr", "version": "development", "buildTime": "", "revision": ""}
series-batch-size must be greater than zero.
Gah, it's not picking up the defaults for some reason. Give me a few minutes.
@kaizensh OK, I resolved the config issue, and updated the test container -- you may need to pin the sha256 hash to bypass your local cached container:
ghcr.io/rtrox/exportarr:episodes-concurrency@sha256:d295cbe4022762b507b2656894a15a5ced1f3ea0661b4780e34849becb20c1c8
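For reference, a minimal sketch of how the pinned test image would slot into the compose file from earlier -- only the image line changes, everything else stays as in the original config:

bazarr-exporter:
  # pinned by digest so Docker bypasses the locally cached :episodes-concurrency tag
  image: ghcr.io/rtrox/exportarr:episodes-concurrency@sha256:d295cbe4022762b507b2656894a15a5ced1f3ea0661b4780e34849becb20c1c8
  container_name: bazarr-exporter
  command: ["bazarr"]
  # remaining keys (environment, ports, restart, networks, labels) unchanged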
This seems to have solved the errors. I'm not seeing the dashboard in Grafana for some reason though, but I assume that is expected?
@kaizensh - I'm not sure what you mean? The docker container does nothing with grafana -- you'll need to create a dashboard, or import one of the dashboards we have in the examples/dashboards directory.
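In case it's useful, a minimal sketch of a Grafana dashboard provisioning file that auto-loads the JSON dashboards from a mounted directory -- the file name and mount path here are assumptions, only the provisioning format itself is standard Grafana:

# grafana/provisioning/dashboards/exportarr.yaml (file name and paths are assumptions)
apiVersion: 1
providers:
  - name: exportarr
    folder: Exportarr
    type: file
    options:
      # mount the repo's examples/dashboards directory to this path in the Grafana container
      path: /var/lib/grafana/dashboards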
Yes, I have Prometheus set up with Grafana working perfectly for dashboard2, and all the exportarrs appear to be working. I have 3 Bazarr instances, but only 2 of them show up -- the "bad" instance we're dealing with now seems to be the one not showing. No clue if it's related or not.
But I understand these aren't related; I just figured I'd share in case there are other issues, maybe with the dashboard. Your solution seems to have fixed the problem this issue was opened for, though.
While I'm here, @onedr0p, does exportarr intend to stop making requests at some point if the arr clients are down? I noticed my logs slamming bazarr when I shut it down for a bit. Or is this not something that's handled at all?
Exportarr doesn’t control the rate of polling, Prometheus does. Prometheus is the initiator of the monitoring requests, and the rate at which exportarr is polled (and as a result the rate exportarr polls bazarr) is controlled by your Prometheus scrape config or ServiceMonitor.
You can control the timeout there too, which is something you might want to check on that monster sonarr instance. I imagine even paginated those requests may still take a while to complete.
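For example, a hedged sketch of how the scrape job from the prom config above could be polled less often and given a longer timeout -- the interval and timeout values here are purely illustrative:

- job_name: "bazarr-exporter"
  scrape_interval: 5m   # how often Prometheus scrapes exportarr (and, in turn, how often exportarr polls bazarr)
  scrape_timeout: 2m    # must not exceed scrape_interval
  static_configs:
    - targets: ["bazarr-exporter:9719"]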
And here I was thinking my instance would be a good test bed at 100k+ episodes 😆