[Support]: Failed to load delegate from libedgetpu.so.1.0
Describe the problem you are having
Running the 0.11.0 addon beta, the container fails to boot.
Version
0.11.0-beta2
Frigate config file
mqtt:
  host: core-mosquitto
  topic_prefix: frigate
  user: xxx
  password: xxx
birdseye:
  enabled: false
objects:
  track:
    - person
    # - bicycle
    - car
    - truck
    - cat
    - dog
  # filters:
  #   person:
  #     min_area: 10000
  #     max_area: 1000000
  #     threshold: 0.92
detectors:
  coral:
    type: edgetpu
    device: usb
ffmpeg:
  hwaccel_args: []
  input_args:
    - -avoid_negative_ts
    - make_zero
    - -fflags
    - nobuffer+genpts+discardcorrupt
    - -flags
    - low_delay
    - -strict
    - experimental
    - -analyzeduration
    - 1000M
    - -probesize
    - 1000M
    - -rw_timeout
    - "5000000"
cameras:
  kitchen:
    ffmpeg:
      inputs:
        - path: rtmp://192.168.1.213/bcs/channel0_main.bcs?channel=0&stream=0&user=xxx&password=xxx
          roles:
            - record
            - rtmp
        - path: rtmp://192.168.1.252/bcs/channel0_sub.bcs?channel=0&stream=0&user=xxx&password=xxx
          roles:
            - detect
    record:
      enabled: True
      retain:
        days: 0
      events:
        pre_capture: 5
        post_capture: 5
        retain:
          default: 14
    snapshots:
      enabled: True
      timestamp: True
      bounding_box: True
  driveway:
    ffmpeg:
      inputs:
        - path: rtmp://192.168.1.248/bcs/channel0_main.bcs?channel=0&stream=0&user=xxx&password=xxx
          roles:
            - record
            - rtmp
        - path: rtmp://192.168.1.248/bcs/channel0_sub.bcs?channel=0&stream=0&user=xxx&password=xxx
          roles:
            - detect
    zones:
      the_driveway_shadow:
        coordinates: 636,120,779,116,873,116,984,220,428,236,530,132
      the_driveway:
        coordinates: 1280,720,1280,618,1231,534,1147,419,1068,304,984,220,428,236,267,426,176,559,82,720
      front_left_yard:
        coordinates: 0,218,94,196,210,172,385,140,528,126,538,145,497,172,345,331,285,404,171,574,103,720,0,720
      front_right_yard:
        coordinates: 1145,325,1240,316,1193,316,1280,324,1280,153,1187,134,1066,121,950,116,870,117,942,181,1097,350
      street:
        coordinates: 204,167,355,144,526,126,701,118,863,115,1027,119,1177,131,1280,156,1280,0,0,0,0,220
    snapshots:
      enabled: True
      timestamp: True
      bounding_box: True
    record:
      enabled: True
      retain:
        days: 14
      events:
        pre_capture: 5
        post_capture: 5
  backyard:
    ffmpeg:
      inputs:
        - path: rtmp://192.168.1.142/bcs/channel0_main.bcs?channel=0&stream=0&user=xxx&password=xxx
          roles:
            - record
            - rtmp
        - path: rtmp://192.168.1.142/bcs/channel0_sub.bcs?channel=0&stream=0&user=xxx&password=xxx
          roles:
            - detect
    snapshots:
      enabled: True
      timestamp: True
      bounding_box: True
    record:
      enabled: True
      retain:
        days: 14
      events:
        pre_capture: 5
        post_capture: 5
    zones:
      patio:
        coordinates: 0,720,1050,720,1075,677,1099,642,1116,594,1117,562,1093,523,1057,472,1003,409,968,382,935,354,860,296,832,248,785,220,663,203,497,263,457,0,0,0
      back_right_yard:
        coordinates: 1280,720,1042,720,1072,672,1102,630,1122,595,1114,558,1095,525,1053,471,1003,413,930,349,855,293,832,242,778,217,668,198,502,256,448,0,1280,0
  foyer:
    ffmpeg:
      inputs:
        - path: rtmp://192.168.1.215/bcs/channel0_main.bcs?channel=0&stream=0&user=xxx&password=xxx
          roles:
            - record
            - rtmp
        - path: rtmp://192.168.1.215/bcs/channel0_sub.bcs?channel=0&stream=0&user=xxx&password=xxx
          roles:
            - detect
    record:
      enabled: True
      retain:
        days: 0
      events:
        pre_capture: 5
        post_capture: 5
        retain:
          default: 14
    snapshots:
      enabled: True
      timestamp: True
      bounding_box: True
  garage:
    ffmpeg:
      inputs:
        - path: rtmp://192.168.1.112/bcs/channel0_main.bcs?channel=0&stream=0&user=xxx&password=xxx
          roles:
            - record
            - rtmp
        - path: rtmp://192.168.1.112/bcs/channel0_sub.bcs?channel=0&stream=0&user=xxx&password=xxx
          roles:
            - detect
    record:
      enabled: True
      retain:
        days: 0
      events:
        pre_capture: 5
        post_capture: 5
        retain:
          default: 14
    snapshots:
      enabled: True
      timestamp: True
      bounding_box: True
  floating:
    ffmpeg:
      inputs:
        - path: rtmp://192.168.1.213/bcs/channel0_main.bcs?channel=0&stream=0&user=xxx&password=xxx
          roles:
            - record
            - rtmp
        - path: rtmp://192.168.1.213/bcs/channel0_sub.bcs?channel=0&stream=0&user=xxx&password=xxx
          roles:
            - detect
    record:
      enabled: True
      retain:
        days: 0
      events:
        pre_capture: 5
        post_capture: 5
        retain:
          default: 14
    snapshots:
      enabled: True
      timestamp: True
      bounding_box: True
Relevant log output
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] done.
[services.d] starting services
[services.d] done.
[2022-05-26 18:37:42] frigate.app INFO : Starting Frigate (0.11.0-d2c3cdc)
[2022-05-26 18:37:42] frigate.app INFO : Creating directory: /tmp/cache
Starting migrations
[2022-05-26 18:37:42] peewee_migrate INFO : Starting migrations
There is nothing to migrate
[2022-05-26 18:37:42] peewee_migrate INFO : There is nothing to migrate
[2022-05-26 18:37:42] detector.coral INFO : Starting detection process: 224
[2022-05-26 18:37:42] frigate.app INFO : Output process started: 226
[2022-05-26 18:37:42] ws4py INFO : Using epoll
[2022-05-26 18:37:42] frigate.edgetpu INFO : Attempting to load TPU as usb
[2022-05-26 18:37:42] frigate.app INFO : Camera processor started for kitchen: 232
Process detector:coral:
[2022-05-26 18:38:08] frigate.edgetpu ERROR : No EdgeTPU was detected. If you do not have a Coral device yet, you must configure CPU detectors.
[2022-05-26 18:37:42] frigate.app INFO : Camera processor started for driveway: 235
[2022-05-26 18:37:42] frigate.app INFO : Camera processor started for backyard: 236
[2022-05-26 18:37:43] frigate.app INFO : Camera processor started for foyer: 238
[2022-05-26 18:37:43] frigate.app INFO : Camera processor started for garage: 240
[2022-05-26 18:37:43] frigate.app INFO : Camera processor started for floating: 241
[2022-05-26 18:37:43] frigate.app INFO : Capture process started for kitchen: 243
[2022-05-26 18:37:43] frigate.app INFO : Capture process started for driveway: 246
[2022-05-26 18:37:43] frigate.app INFO : Capture process started for backyard: 251
[2022-05-26 18:37:43] frigate.app INFO : Capture process started for foyer: 258
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/tflite_runtime/interpreter.py", line 160, in load_delegate
    delegate = Delegate(library, options)
  File "/usr/lib/python3/dist-packages/tflite_runtime/interpreter.py", line 119, in __init__
    raise ValueError(capture.message)
ValueError
[2022-05-26 18:37:43] frigate.app INFO : Capture process started for garage: 266
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/lib/python3.9/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
[2022-05-26 18:37:43] frigate.app INFO : Capture process started for floating: 274
  File "/opt/frigate/frigate/edgetpu.py", line 135, in run_detector
    object_detector = LocalObjectDetector(
  File "/opt/frigate/frigate/edgetpu.py", line 43, in __init__
    edge_tpu_delegate = load_delegate("libedgetpu.so.1.0", device_config)
  File "/usr/lib/python3/dist-packages/tflite_runtime/interpreter.py", line 162, in load_delegate
    raise ValueError('Failed to load delegate from {}\n{}'.format(
ValueError: Failed to load delegate from libedgetpu.so.1.0
[2022-05-26 18:37:43] ws4py INFO : Using epoll
[2022-05-26 18:38:13] frigate.watchdog INFO : Detection appears to have stopped. Exiting frigate...
[2022-05-26 18:38:13] frigate.app INFO : Stopping...
[2022-05-26 18:38:13] ws4py INFO : Closing all websockets with [1001] 'Server is shutting down'
[2022-05-26 18:38:13] frigate.watchdog INFO : Exiting watchdog...
[2022-05-26 18:38:13] frigate.record INFO : Exiting recording cleanup...
[2022-05-26 18:38:13] frigate.events INFO : Exiting event cleanup...
[2022-05-26 18:38:13] frigate.object_processing INFO : Exiting object processor...
[2022-05-26 18:38:13] frigate.stats INFO : Exiting watchdog...
[2022-05-26 18:38:13] frigate.events INFO : Exiting event processor...
[2022-05-26 18:38:14] frigate.record INFO : Exiting recording maintenance...
[2022-05-26 18:38:14] peewee.sqliteq INFO : writer received shutdown request, exiting.
[2022-05-26 18:38:14] root INFO : Waiting for detection process to exit gracefully...
[cmd] python3 exited 0
/usr/lib/python3.9/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 25 leaked shared_memory objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] waiting for services.
[s6-finish] sending all processes the TERM signal.
FFprobe output from your camera
N/A, cameras worked in beta1
Frigate stats
It crashes before I can access this.
Operating system
Debian
Install method
HassOS Addon
Coral version
USB
Network connection
Wired
Camera make and model
Reolink
Any other information that may be helpful
camera name -> Reolink camera model
garage -> RLC-410-5mp
floating -> C2-Pro
foyer -> RLC-410-5mp
kitchen -> RLC-422
driveway -> RLC-423
backyard -> RLC-423
So, I rolled back to the Full Access addon and it works perfectly. The beta2 is definitely the culprit.
Are you using the full access version of the beta?
Yes, sorry, should have specified.
What architecture is the host?
Operating System Version: 5.10.0-11-amd64
CPU Architecture: x86_64
I just looked at the logs and noticed it's no longer seeing the Coral USB. That's most likely the root cause, as I don't have a CPU configuration as a backup.
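(For reference, Frigate's documented CPU detector makes a workable fallback while the TPU issue is sorted out; it is much slower than a Coral, but it keeps the container booting. A minimal sketch:)

detectors:
  cpu1:
    type: cpu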
Well, I spoke too soon. Going back to the current non-beta Full Access addon worked for roughly 3 hours before consuming all RAM and crashing. I'm running it again to see if it recurs. Mind you, 0.11.0-beta1 ran for seemingly months without this occurring.
Receiving this error on startup; however, the container continues to run. Will monitor.
Exception in thread event_processor:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/peewee.py", line 3129, in execute_sql
    cursor.execute(sql, params or ())
sqlite3.IntegrityError: NOT NULL constraint failed: event.retain_indefinitely
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner
    self.run()
  File "/opt/frigate/frigate/events.py", line 67, in run
    Event.replace(
  File "/usr/local/lib/python3.8/dist-packages/peewee.py", line 1898, in inner
    return method(self, database, *args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/peewee.py", line 1969, in execute
    return self._execute(database)
  File "/usr/local/lib/python3.8/dist-packages/peewee.py", line 2730, in _execute
    return super(Insert, self)._execute(database)
  File "/usr/local/lib/python3.8/dist-packages/peewee.py", line 2466, in _execute
    return self.handle_result(database, cursor)
  File "/usr/local/lib/python3.8/dist-packages/peewee.py", line 2739, in handle_result
    return database.last_insert_id(cursor, self._query_type)
  File "/usr/local/lib/python3.8/dist-packages/peewee.py", line 3218, in last_insert_id
    return cursor.lastrowid
  File "/usr/local/lib/python3.8/dist-packages/playhouse/sqliteq.py", line 88, in lastrowid
    self._wait()
  File "/usr/local/lib/python3.8/dist-packages/playhouse/sqliteq.py", line 63, in _wait
    raise self._exc
  File "/usr/local/lib/python3.8/dist-packages/playhouse/sqliteq.py", line 178, in execute
    cursor = self.database._execute(obj.sql, obj.params, obj.commit)
  File "/usr/local/lib/python3.8/dist-packages/peewee.py", line 3136, in execute_sql
    self.commit()
  File "/usr/local/lib/python3.8/dist-packages/peewee.py", line 2902, in __exit__
    reraise(new_type, new_type(exc_value, *exc_args), traceback)
  File "/usr/local/lib/python3.8/dist-packages/peewee.py", line 185, in reraise
    raise value.with_traceback(tb)
  File "/usr/local/lib/python3.8/dist-packages/peewee.py", line 3129, in execute_sql
    cursor.execute(sql, params or ())
peewee.IntegrityError: NOT NULL constraint failed: event.retain_indefinitely
This is because you upgraded and your database was migrated for the new version. You will either need to restore an old database from a backup you took or delete your database and let it get recreated.
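(A sketch of an alternative, assuming the addon's /config mount is writable: point Frigate at a fresh database file via the database config option and let it be recreated, leaving the old file around for inspection. The path below is illustrative.)

database:
  path: /config/frigate_new.db  # illustrative; any writable location works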
@blakeblackshear would this lead to the memory leak that I'm seeing?
After 16 hours, deleting the database does seem to have removed the memory leak. For now, my production system is up and running. It is troubling that there seem to be a few similar issues related to the new beta not finding the Coral USB. If any other data is needed, I can get it from the current release, as the beta does not work. Please let me know what other information would be needed.
Seems odd that it would have an issue with HassOS; my USB Coral is running fine.
I haven't seen any changes to the Coral dependencies in the docker overhaul, so I have no idea why. Blake will have a better understanding of what's going on here.
I'm not running HassOS, I'm running Debian bullseye. Updating from beta1 to beta2 caused the issue with the addon. Nothing was updated OS-wise.
Interesting, why is your install type listed as HassOS addon then? Have you tried the normal docker container to see if that works?
I'm running a supervised version of Home Assistant. I can manage the OS if needed. I'm running this way so I can mount network drives that HA can access. It's one of the supported installation methods.
I have not tried the normal docker install, because that would cause Home Assistant to mark the installation as unsupported, which may cause other issues.
To be clear, I meant specifically running Frigate outside Home Assistant; I don't see how that would affect Home Assistant at all.
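For reference, a minimal docker-compose sketch for running it standalone (assuming the stable-amd64 image mentioned below and illustrative host paths; adjust to your setup) might look like:

version: "3.9"
services:
  frigate:
    container_name: frigate
    image: blakeblackshear/frigate:stable-amd64
    privileged: true                        # simplest way to expose the USB Coral
    devices:
      - /dev/bus/usb:/dev/bus/usb           # pass the USB Coral through to the container
    volumes:
      - ./config.yml:/config/config.yml:ro  # illustrative paths
      - ./media:/media/frigate
    ports:
      - "5000:5000"
    shm_size: "256mb"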
I don't have any other hardware lying around that I could do that with. I have Windows PCs, but I'd have to add those into the equation; they can run docker, and I can move the Coral USB, but that really won't help the situation, and I'm unwilling to wipe this PC to install Debian.
I can't install it on the machine running Debian, as adding containers to it would cause Home Assistant to become 'unsupported'. So I'm in a position where I cannot help outside of running the addon.
Interesting, I didn't realize running containers outside the supervised install would flag it as an unsupported install type.
Yeah, it's a hot-button issue with people using that installation method. Basically, the supervisor handles the entire docker network, and anyone who meddles in it is bad in the supervisor's eyes. If they ever give users the ability to mount network drives in the HA ecosystem, I'll gladly move to HassOS, as I can still access the OS through some external addons.
I have a similar issue running Frigate as a Docker container on Unraid. This is what I see in the logs:
[2022-05-31 21:54:07] frigate.app INFO : Camera processor started for garden: 253
[2022-05-31 21:54:07] frigate.app INFO : Capture process started for front: 259
[2022-05-31 21:54:07] frigate.app INFO : Capture process started for parking: 266
[2022-05-31 21:54:07] frigate.app INFO : Capture process started for patio: 269
[2022-05-31 21:54:07] frigate.app INFO : Capture process started for garden: 274
[2022-05-31 21:54:07] ws4py INFO : Using epoll
[2022-05-31 21:54:38] frigate.watchdog INFO : Detection appears to be stuck. Restarting detection process...
[2022-05-31 21:54:38] root INFO : Waiting for detection process to exit gracefully...
[2022-05-31 21:55:08] root INFO : Detection process didnt exit. Force killing...
[2022-05-31 21:55:20] detector.coral_pci INFO : Starting detection process: 575
[2022-05-31 21:55:20] frigate.edgetpu INFO : Attempting to load TPU as pci
Process detector:coral_pci:
[2022-05-31 21:55:33] frigate.edgetpu ERROR : No EdgeTPU was detected. If you do not have a Coral device yet, you must configure CPU detectors.
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/tflite_runtime/interpreter.py", line 160, in load_delegate
    delegate = Delegate(library, options)
  File "/usr/lib/python3/dist-packages/tflite_runtime/interpreter.py", line 119, in __init__
    raise ValueError(capture.message)
ValueError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/opt/frigate/frigate/edgetpu.py", line 136, in run_detector
    object_detector = LocalObjectDetector(
  File "/opt/frigate/frigate/edgetpu.py", line 44, in __init__
    edge_tpu_delegate = load_delegate("libedgetpu.so.1.0", device_config)
  File "/usr/lib/python3/dist-packages/tflite_runtime/interpreter.py", line 162, in load_delegate
    raise ValueError('Failed to load delegate from {}\n{}'.format(
ValueError: Failed to load delegate from libedgetpu.so.1.0
[2022-05-31 21:55:40] frigate.watchdog INFO : Detection appears to have stopped. Exiting frigate...
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] waiting for services.
[2022-05-31 21:55:40] frigate.video ERROR : patio: Unable to read frames from ffmpeg process.
[2022-05-31 21:55:40] frigate.video ERROR : patio: ffmpeg process is not running. exiting capture thread...
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.
Running the latest version of blakeblackshear/frigate:stable-amd64 with a Coral Mini PCIe.
Is there any way to fix this?
@narayanvs Have you downloaded the Coral drivers from the Unraid community store?
Yes, I have it running @NickM-27.
My Unraid server can see the Coral device, but Frigate doesn't?
[1ac1:089a] 04:00.0 System peripheral: Global Unichip Corp. Coral Edge TPU
The Coral driver shows the following:
Coral TPU 1:
  Status: SHUTDOWN
  Temperature: SHUTDOWN
  Operating Frequency: SHUTDOWN
  Driver Version: 1.1
  Framework Version: 1.1.2
  Set Temperature Limits:
    Interrupt Temperature: 99.80 °C (Status: DISABLED)
    Shutdown Temperature: 104.80 °C (Status: ENABLED)
    Throttle Temperatures: 84.80 °C - 89.80 °C - 94.80 °C
There are some errors in the Unraid logs as well:
May 31 21:54:07 Fusion kernel: x86/PAT: frigate.detecto:1244 map pfn RAM range req uncached-minus for [mem 0x2a60b8000-0x2a60bbfff], got write-back
May 31 21:54:30 Fusion flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup update
May 31 21:55:20 Fusion kernel: apex 0000:04:00.0: RAM did not enable within timeout (12000 ms)
May 31 21:55:33 Fusion kernel: apex 0000:04:00.0: RAM did not enable within timeout (12000 ms)
May 31 21:55:33 Fusion kernel: apex 0000:04:00.0: Error in device open cb: -110
It was working for a couple of days; this issue started today.
Screenshots attached showing what the server sees and what the Coral driver sees.
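(For a PCIe Coral in docker, one thing worth double-checking, a sketch assuming docker-compose and the default apex device node, is that the device is actually mapped into the container:)

devices:
  - /dev/apex_0:/dev/apex_0  # PCIe Coral device node; confirm it exists on the host first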
I would recommend creating your own issue at this point, as this is unrelated to the original issue, which has to do with a USB Coral not working on the latest beta.
OK, will do. Thanks.
When using the full access addon, did you disable protection mode? I believe you need to.
Just wanted to add that I'm seeing a similar issue on an RPi4 running Debian 11 bullseye with Frigate beta2, with a USB Coral on a separate powered USB hub. My docker container frequently fails (usually multiple times), then eventually it will start up, and once it's started it seems stable indefinitely. It hasn't caused me too much of a problem yet, since docker just restarts Frigate and eventually it comes up successfully, so I haven't looked into it too much yet.
Process detector:coral:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/tflite_runtime/interpreter.py", line 160, in load_delegate
    delegate = Delegate(library, options)
  File "/usr/lib/python3/dist-packages/tflite_runtime/interpreter.py", line 119, in __init__
    raise ValueError(capture.message)
ValueError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/lib/python3.9/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/opt/frigate/frigate/edgetpu.py", line 135, in run_detector
    object_detector = LocalObjectDetector(
  File "/opt/frigate/frigate/edgetpu.py", line 43, in __init__
    edge_tpu_delegate = load_delegate("libedgetpu.so.1.0", device_config)
  File "/usr/lib/python3/dist-packages/tflite_runtime/interpreter.py", line 162, in load_delegate
    raise ValueError('Failed to load delegate from {}\n{}'.format(
ValueError: Failed to load delegate from libedgetpu.so.1.0
[2022-06-03 11:39:51] frigate.watchdog INFO : Detection appears to have stopped. Exiting frigate...
[cont-finish.d] executing container finish scripts...
Just to update this thread: after running stably for multiple days on my RPi4/Coral, I restarted today to upgrade to beta4. The container immediately stopped with the same error as in the above post. Docker immediately restarted the container, and it came back up and appears to be working after a few minutes. I'm a little unsure how to troubleshoot this. I suppose as long as it only happens on a restart and docker restarts it automatically, it doesn't matter a ton. I've never seen/noticed this happen in my setup except on a restart.
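(Since the failure only seems to happen at startup and docker's restart handles it, a docker-compose sketch of that restart policy, assuming a service named frigate, would be:)

services:
  frigate:
    restart: unless-stopped  # docker keeps retrying the container until it starts cleanly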