
Recordings playback after moved from first tier

SherifMEldeeb opened this issue 6 months ago

Greetings @roflcoopter,

Description:

Everything works fine at boot. However, once the first tier's limits are triggered, segments should move to the next tier, and I would like clarification on the following:

  1. It is not clear whether the web UI is able to retrieve playback from both tiers, or only from the first one.
  2. Currently the timeline shows recordings from all tiers, but when I try to play them back I get HTTP Error 404 NotFound. I am not sure what the correct behavior is here.

Expected Behavior:

  • Either the timeline shows only recordings from the first /tmp tier,
  • or playback works fine for all recording tiers.

Actual Behavior:

  • Playback of segments from the first /tmp tier works fine at first.
  • After a while, playback of old segments resolves to HTTP Error 404 NotFound.


with the following log:

2025-06-14 06:37:03.213 [WARNING ] [tornado.access] - 404 GET /files/tmp/segments/garage/1749869772.m4s (172.19.0.1) 1053.63ms
2025-06-14 06:37:04.304 [WARNING ] [tornado.access] - 404 GET /files/tmp/segments/garage/1749869777.m4s (172.19.0.1) 1063.04ms
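The 404s show the UI requesting files under the first tier's path (/files/tmp/segments/...) even after the segments have moved on. For a file to stay playable, the lookup would have to fall through to the later tiers. A hedged sketch of such a multi-tier lookup (hypothetical names and layout, not Viseron's actual routing):

```python
from pathlib import Path
from typing import Optional

# Hypothetical helper, not Viseron's actual API: resolve a segment by
# checking every configured tier in order instead of only the first one.
def resolve_segment(tiers, camera: str, filename: str) -> Optional[Path]:
    """Return the path in the first tier that actually holds the segment."""
    for tier in tiers:
        candidate = Path(tier) / "segments" / camera / filename
        if candidate.is_file():
            return candidate
    return None  # not found in any tier -> the 404 seen in the logs
```

If only the first tier were consulted, every segment that had migrated would produce exactly the 404s above.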

Current Setup:

Docker image Viseron - dev - f9f593b

Current Config file:

storage:
  recorder:
    tiers:
      - path: /tmp/
        move_on_shutdown: true
        continuous:
          max_size:
            mb: 1024
      - path: /recordings
        continuous:
          max_age:
            days: 14
          max_size:
            gb: 100
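For reference, the mechanics this config sets up (the oldest segments migrate to the next tier once max_size is exceeded) can be sketched roughly as below. This is an illustration of the general technique only, not Viseron's actual implementation:

```python
import shutil
from pathlib import Path

# Hedged sketch of size-based tier eviction: move the oldest files from the
# source tier to the destination tier until the source is under its limit.
def move_oldest_over_limit(src: Path, dst: Path, max_bytes: int) -> list:
    files = [f for f in sorted(src.rglob("*"), key=lambda p: p.stat().st_mtime)
             if f.is_file()]
    total = sum(f.stat().st_size for f in files)
    moved = []
    for f in files:  # oldest first
        if total <= max_bytes:
            break
        target = dst / f.relative_to(src)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(f), str(target))
        total -= target.stat().st_size
        moved.append(target)
    return moved
```

The question in this issue is what happens after such a move: the files exist in the next tier, but the UI must know to fetch them from there.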

ffmpeg:
  camera:
    garage:
      name: Garage
      host: !secret garage_ip
      port: !secret garage_port
      path: /media/video1
      username: !secret garage_username
      password: !secret garage_password
      stream_format: rtsp
      codec: h264
      fps: 25
nvr:
  garage:

Docker Compose:

name: Viseron
services:
  viseron:
    image: roflcoopter/viseron:dev
    container_name: viseron
    restart: always
    shm_size: "1024mb"
    volumes:
      - ${SNAPSHOTS_LOCATION}:/snapshots
      - ${THUMBNAILS_LOCATION}:/thumbnails
      - ${EVENTCLIPS_LOCATION}:/event_clips
      - ${RECORDINGS_LOCATION}:/recordings
      - ${CONFIG_LOCATION}:/config
      - /etc/localtime:/etc/localtime:ro
    ports:
      - 8888:8888
    environment:
      - PUID=1000
      - PGID=1000
    tmpfs:
      - /tmp

Thank you so much for your efforts 🌹.

SherifMEldeeb avatar Jun 14 '25 03:06 SherifMEldeeb

@roflcoopter Any update on this?

SherifMEldeeb avatar Jun 18 '25 06:06 SherifMEldeeb

Recordings are playable from all tiers, so something else is going on here. Are any of your tiers on a network mount by any chance?

roflcoopter avatar Jun 19 '25 12:06 roflcoopter

Recordings are playable from all tiers, so something else is going on here. Are any of your tiers on a network mount by any chance?

Not at all; actually both tiers are on the same HDD, just to demonstrate the mechanism. @roflcoopter

SherifMEldeeb avatar Jun 21 '25 21:06 SherifMEldeeb

I'm seeing similar behavior: once a segment gets moved from the tier_0 ramdisk to the tier_1 NAS, it's no longer viewable from the timeline.

The files do successfully move to the NAS and can be read from within the Viseron Docker context, but the thick light-blue region on the activity line moves up after the migration.

bitspill avatar Jun 30 '25 04:06 bitspill

Could you enable debug logging for storage, let it run until such a move occurs, and then attach the logs here?

It would also be helpful if you could try the latest dev release to see if the issue persists there:

logger:
  default_level: info
  logs:
    viseron.components.storage: debug
    viseron.helpers.subprocess_worker: debug

roflcoopter avatar Jun 30 '25 12:06 roflcoopter

Could you enable debug logging for storage, let it run until such a move occurs, and then attach the logs here?

It would also be helpful if you could try the latest dev release to see if the issue persists there:

logger:
  default_level: info
  logs:
    viseron.components.storage: debug
    viseron.helpers.subprocess_worker: debug

It looks like the tier handler is failing to initialize. I get this log shortly after startup, and then no migrations take place after exceeding the threshold. (Set to 100 MB for testing; I let it run all the way up to 500 MB.)

running with image roflcoopter/viseron:dev@sha256:9f33a9e82b3d76545f72bd1066b35111e76f0d29e7425849287cf2929c4ebd1f

[ERROR   ] [viseron.components.storage.storage_subprocess.subprocess] - Error processing command: 'dict' object has no attribute 'cmd'
[ERROR   ] [viseron.components.storage.storage_subprocess.subprocess] - Traceback (most recent call last):
[ERROR   ] [viseron.components.storage.storage_subprocess.subprocess] -   File "/src/viseron/components/storage/storage_subprocess.py", line 200, in worker_task
[ERROR   ] [viseron.components.storage.storage_subprocess.subprocess] -     worker.work_input(job)
[ERROR   ] [viseron.components.storage.storage_subprocess.subprocess] -   File "/src/viseron/components/storage/check_tier.py", line 197, in work_input
[ERROR   ] [viseron.components.storage.storage_subprocess.subprocess] -     if item.cmd == "check_tier":
[ERROR   ] [viseron.components.storage.storage_subprocess.subprocess] - AttributeError: 'dict' object has no attribute 'cmd'
[ERROR   ] [viseron.components.storage.storage_subprocess.subprocess] - During handling of the above exception, another exception occurred:
[ERROR   ] [viseron.components.storage.storage_subprocess.subprocess] - Traceback (most recent call last):
[ERROR   ] [viseron.components.storage.storage_subprocess.subprocess] -   File "/src/viseron/components/storage/check_tier.py", line 208, in work_input
[ERROR   ] [viseron.components.storage.storage_subprocess.subprocess] -     = str(e)
[ERROR   ] [viseron.components.storage.storage_subprocess.subprocess] - AttributeError: 'dict' object has no attribute 'error'
[ERROR   ] [viseron.components.storage.storage_subprocess.subprocess] - Error in worker thread: 'dict' object has no attribute 'error'
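The traceback boils down to a plain dict reaching a worker that expects an object with a .cmd attribute (and, in the error handler, an .error attribute). A minimal reproduction of that mismatch, using illustrative names rather than Viseron's actual classes:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TierCommand:
    # Illustrative stand-in for the dataclass items the workers expect.
    cmd: str
    error: Optional[str] = None

def work_input(item) -> str:
    # Accessing .cmd on a raw dict raises exactly the AttributeError above;
    # the same happens again when the error handler sets .error on it.
    if item.cmd == "check_tier":
        return "ok"
    return "unknown command"
```

Passing `TierCommand(cmd="check_tier")` works; passing `{"cmd": "check_tier"}` raises `AttributeError: 'dict' object has no attribute 'cmd'`, which is why a stale image still queueing dicts would produce this log.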

bitspill avatar Jul 01 '25 07:07 bitspill

Odd, I just pulled dev and it is working fine. If you check the footer in the GUI, or the logs at restart, what version of Viseron is displayed?

Try to pull dev again and see if that helps

roflcoopter avatar Jul 01 '25 07:07 roflcoopter

Footer states Viseron - dev - ce9c781

Startup logs up to the stack trace above: https://gist.github.com/bitspill/96288426916ece2381d87bcf3baab382

Testing config

ffmpeg:
  camera:
    camera_1:
      name: camera_1
      host: 192.168.0.102
      port: 554
      path: /ch01/0
      width: 2592
      height: 1520
      fps: 30
      username: admin
      password: admin
      substream:
        port: 554
        width: 1280
        height: 720
        fps: 30
        path: /ch01/1


darknet:
  object_detector:
    cameras:
      camera_1:
        fps: 1
        labels:
          - label: person
            confidence: 0.7
          - label: car
            require_motion: true
          - label: truck
            require_motion: true

mog2:
  motion_detector:
    cameras:
      camera_1:
        fps: 1
        trigger_event_recording: true


logger:
  default_level: info
  logs:
    viseron.components.storage: debug
    viseron.helpers.subprocess_worker: debug

nvr:
  camera_1:

storage:
  snapshots: # per camera
    tiers:
      - path: /mnt/ssd
        max_size:
          mb: 100
      - path: /mnt/nas
        min_age:
          days: 30
        max_size:
          gb: 100
  recorder:
    tiers:
      - path: /mnt/ramdisk
        move_on_shutdown: true
        continuous:
          max_size:
            mb: 100
        events:
          max_size:
            mb: 100
      - path: /mnt/nas
        continuous:
          max_size: # per camera
            gb: 2048 # 8tb
        events:
          min_age: # If max_size is hit, keep at least 30 days
            days: 30
          max_size: # per camera
            gb: 1024 # 4tb

bitspill avatar Jul 01 '25 08:07 bitspill

Did you pull dev again and make sure to recreate the container? I just did and it still works fine.

roflcoopter avatar Jul 01 '25 09:07 roflcoopter

Did you pull dev again and make sure to recreate the container? I just did and it still works fine.

Yeah, I'm running via a Portainer-managed Docker stack and select "Re-pull image and redeploy" for each test.

I ran it once more; the ENV has VISERON_GIT_COMMIT as ce9c7817c735297f7978278ffbf39f9777333a91, matching my web UI footer value, and I'm still seeing the same error in check_tier.py.

Perhaps it's because I'm launching with a fresh install environment; I delete everything except config.yaml.

bitspill avatar Jul 02 '25 05:07 bitspill

I have no explanation for why you are having this issue. The items sent to the workers are no longer a dict, but a dataclass. Everything is typed as well, so there is no way a dict could end up in the queue without linting issues.

I run the latest dev, which has had changes since ce9c7817c735297f7978278ffbf39f9777333a91, and it's working perfectly.

roflcoopter avatar Jul 13 '25 10:07 roflcoopter

Alrighty, Monday I'll nuke the whole LXC, do a fully fresh Docker install, and then a fresh Viseron on top. Something's gotta be cached weird.

bitspill avatar Jul 13 '25 11:07 bitspill

You are using Docker and LXC? If so, have you tried with Docker only?

john- avatar Jul 13 '25 11:07 john-

@roflcoopter I feel like Docker is the common ground here. Did you confirm the test on the Docker image? It might reveal something.

SherifMEldeeb avatar Jul 14 '25 20:07 SherifMEldeeb

I found the issue that is causing the tier check exception. Looking for a fix asap.

https://github.com/roflcoopter/viseron/issues/1068#issuecomment-3148709811

roflcoopter avatar Aug 03 '25 21:08 roflcoopter

I found the issue that is causing the tier check exception. Looking for a fix asap.

#1068 (comment)

Oh awesome, hopefully it's not too complicated to fix.

Sorry, I was too busy and never made it back to further testing on "Monday" like I said 3 weeks ago.

bitspill avatar Aug 03 '25 21:08 bitspill

No worries, I know it can be hard to find time sometimes!

v3.2.1 is released; builds will take some time. You can follow along here: https://dev.azure.com/jespernilsson93/Viseron%20Pipelines/_build/results?buildId=840&view=results

roflcoopter avatar Aug 03 '25 22:08 roflcoopter

@bitspill, @SherifMEldeeb can you confirm whether the original issue still persists in dev when you get the time? (Files not playable after being moved from the first tier.)

roflcoopter avatar Aug 03 '25 22:08 roflcoopter

@roflcoopter Thank you for your efforts. I have been very busy since I opened this issue. But I will try my best to test it this weekend.

SherifMEldeeb avatar Aug 04 '25 22:08 SherifMEldeeb

I have just tested the latest build 1641c14 and I confirm the software is working as expected. Thank you so much for supporting this; I can finally begin my home project. @roflcoopter

SherifMEldeeb avatar Aug 15 '25 15:08 SherifMEldeeb

Hi all, I have run into a similar issue, but related to migration from plain segments to tier1.

  1. Set up a bare-minimum config with no storage section. Confirmed that the live stream worked.
  2. After a few minutes, set up storage with tiers. At this point, the timeline works too.
  3. After a few more minutes, the timeline no longer works. The log gives a ton of 404 messages about files/segments/front_doorbell/init.mp4.

At this point, there is no init.mp4 at this path, but there is an init.mp4 under tier1/...

Later, init.mp4 appeared under /segments/ too, but I still kept getting 404s.

Relevant parts from my config:

storage:
  recorder:
    tiers:
      - path: /tier1
        continuous:
          max_size:
            gb: 10
          max_age:
            days: 1
        events:
          max_size:
            gb: 1
          max_age:
            days: 1
      - path: /tier2
        continuous:
          max_size:
            gb: 100
          max_age:
            days: 30
        events:
          max_size:
            gb: 100
          max_age:
            days: 365
  snapshots:
    tiers:
      - path: /tier1
        max_size:
          gb: 100
        max_age:
          days: 30

And from docker compose:

    volumes:
      - ${VOLUMES}/viseron/segments:/segments
      - ${VOLUMES}/viseron/snapshots:/snapshots
      - ${VOLUMES}/viseron/thumbnails:/thumbnails
      - ${VOLUMES}/viseron/event_clips:/event_clips
      - ${VOLUMES}/viseron/config:/config
      - ${VOLUMES}/viseron/tier1:/tier1
      - ${VOLUMES}/viseron/tier2:/tier2

Version: 3.2.3 - f9ebd37

I then installed viseron from scratch while keeping the config, and everything works fine so far.

Given all that, my wild guess would be that something is off when migrating from a no-tiers setup to one with tiers.
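If that guess is right, the database may still record segment paths under the old default location (/segments/...) while the files now live under the new tier roots. A hypothetical consistency check for that scenario (illustrative paths and names, not Viseron's actual code):

```python
from pathlib import Path
from typing import Optional

# Hypothetical: if the path the database claims no longer exists, try the
# same relative layout under each new tier root before giving up with a 404,
# e.g. /segments/front_doorbell/init.mp4 -> <tier1>/segments/front_doorbell/init.mp4
def find_migrated(db_path: str, tier_roots) -> Optional[str]:
    p = Path(db_path)
    if p.is_file():
        return db_path
    for root in tier_roots:
        candidate = Path(root) / p.relative_to("/")
        if candidate.is_file():
            return str(candidate)
    return None  # genuinely gone -> the observed 404
```

This is only meant to illustrate the stale-path failure mode; the actual fix belongs in how the storage component rewrites paths when tiers change.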

matvey00z avatar Sep 01 '25 20:09 matvey00z

Thank you for the report and your thorough investigation! That is indeed a scenario I have not tested very well (changing tier paths).

I will break your comment out into a new issue and track it properly there.

roflcoopter avatar Sep 02 '25 13:09 roflcoopter