deleter: inconsistency after AGNOS11 update (NVMe related)
Describe the bug
I suspect the deleter is no longer working consistently after the AGNOS 11 update. I haven't tested it myself to confirm, but I've already come across four people who had to delete realdata/* manually (because of a no-entry "out of storage" event)
Provide a route where the issue occurs
dongleIDs: 0b54d0594d924cd9, 075b133b6181e058, a0e5673b89222047
075b133b6181e058/000000cc--7d1fff3f36/42
openpilot version
13760968bbdec44c2bf234fde79f8ef0e5bbe254
Additional info
Not confirmed
Got any routes or dongles?
only some dongles, added to the description
Got this error on this route:
https://connect.comma.ai/fe18f736cb0d7813/0000040d--3a202cd3b9
It actually increased by 1% even after I couldn't engage anymore. Logs/data are uploading.
I pretty much only run master-ci so this was surprising.
got one segment where it happened: 075b133b6181e058/000000cc--7d1fff3f36/42
Interestingly, when I connected to my cell phone's hotspot to SSH in and run the deleter test script, the deleter worked the moment the connection was established, without me running any command, and the out-of-storage event cleared. When I get home I'll upload the route.
My routes don't appear in connect, so I can't really help there, but I got it with dongle a954cbd0682c58bd
Yup, mine's doing the same too. Hoping the new build of AGNOS in master will address it...
My dongle ID is likely useless as I don't have a data connection anymore with my C3.
dc30ef2ad294a732
The new AGNOS doesn't address this. Just looking into this now.
So far it seems like it's all devices with NVMe drives, so it's likely a race condition in (or outright failure of) mounting it. Have you guys gotten the "failed to mount" alert at all?
Thanks for confirming! I personally have not seen an error like that
I have not seen that error.
I have not seen that error either.
Grabbed one more: a0e5673b89222047/0000003d--2ac5e373e1/8, or possibly a0e5673b89222047/0000003c--0133a69a87/3, according to the user's description
Not sure if this is related but my C3X wasn't uploading anything. After deleting realdata it started uploading again.
Just started looking into this:
075b133b6181e058/000000cc--7d1fff3f36/42
- eventually recovers towards the end of the drive, almost like the deleter was stuck
- no rlogs
- bootlog for this route is missing, not on the NVMe?
fe18f736cb0d7813/0000040d--3a202cd3b9
- doesn't recover like the other route did
- thanks for uploading all the logs @nelsonjchen!
- has the bootlog, and NVMe is mounted for that bootlog
I missed it when I did the AGNOS flash in the sound issue test
got some routes:
075b133b6181e058/0000003e--67e218c522/1
075b133b6181e058/0000003d--2e512616b9/3 (I think 0000003f has something interesting too)
075b133b6181e058/0000003a--3043e775b6/25
Posting links to relevant code here:
The code that disables at 7% free:
https://github.com/commaai/openpilot/blob/75bf7568938da87233ae2f496e0baa1b23df199a/selfdrive/selfdrived/selfdrived.py#L189-L190
The code that should have started deleting stuff when things reach 10% free:
https://github.com/commaai/openpilot/blob/master/system/loggerd/deleter.py
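For reference, both checks are conceptually just free-space thresholds. Here's a rough sketch (my simplification, not the actual openpilot code; the 7%/10% numbers come from the links above, the path and helper names are made up for illustration):
import shutil

# hypothetical illustration of the two thresholds linked above
DISABLE_PERCENT = 7    # selfdrived raises the out-of-storage no-entry event below this
DELETE_PERCENT = 10    # deleter should start removing old segments below this

def available_percent(path: str = "/data/media/0/realdata") -> float:
  # free space on the volume holding the log directory, as a percentage
  usage = shutil.disk_usage(path)
  return 100.0 * usage.free / usage.total

if __name__ == "__main__":
  free = available_percent()
  print(f"free: {free:.1f}%")
  if free < DELETE_PERCENT:
    print("deleter should be removing old segments")
  if free < DISABLE_PERCENT:
    print("selfdrived would block engagement with the out-of-storage event")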
I've made a somewhat pinned branch of master-ci that makes the deleter log like hell (with rotation) to a spot in /data, with extra logging to catch verbosely if the deleter dies to something that's not an OSError. I intend to "daily" it:
https://github.com/nelsonjchen/openpilot/tree/33993_master-ci
(I'll rebase it onto a newer master-ci whenever I feel like it.)
It'll be logging onto the SOM's storage, but that's good since it'll be separate from the NVMe drive.
That, combined with fallocate -l 400G /data/media/dummy.dat to keep the storage "full-ish" so the deleter stays in the persistently-deleting state on my 1TB C3 SSD, should help trace the issue when it breaks again. I had to put this weight back on since I had cleared some of the SSD to get it working again ASAP.
Anyway, just mentioning my methodology if anyone else wants to follow along.
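For anyone who'd rather script the fill step than run fallocate by hand, here's a rough Python equivalent (just a sketch; the dummy.dat path and 400G size are from my comment above, everything else is illustrative):
import os

# reserve space on the NVMe so the deleter stays in its "out of space" branch,
# mirroring `fallocate -l 400G /data/media/dummy.dat`
DUMMY_PATH = "/data/media/dummy.dat"
DUMMY_SIZE = 400 * 1024**3  # 400 GiB

def fill_disk(path: str = DUMMY_PATH, size: int = DUMMY_SIZE) -> None:
  fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
  try:
    # unlike seeking and writing a single byte (which makes a sparse file),
    # posix_fallocate actually allocates the blocks, so free space really drops
    os.posix_fallocate(fd, 0, size)
  finally:
    os.close(fd)

if __name__ == "__main__":
  fill_disk()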
Looked into it a bit more, and it's not actually the NVMe. Seems to be related to internet connectivity like @AlexandreSato mentioned. Curious what you find @nelsonjchen.
Ooh, OK. I've added a bit more logging around sus stuff with "cloud" in the name like cloudlog then.
I admit I haven't driven much yet, but so far, no hits on the issue. Maybe I need to try a full shutdown/power-up away from Wi-Fi/internet.
@AlexandreSato, anybody who can reliably hit this? Maybe they can run the hacky logging patch?
I think the best way would be to use replay, recreating logs in a loop. Lately, I've been connecting to my phone's hotspot to work around the issue.
Unable to reproduce it so far. I guess my habits just aren't compatible. Hey, those who can reproduce it consistently, please cherry-pick my commit and share some logs. 🤞
I don't care if you're on a clone or a fork, have uploads turned off, or whatever. Just help.
Should be very easy to reproduce, happens to me 100% of the time if there is no network connection. I'll try the branch.
Thanks @korhojoa . Let me know if you need any changes or modifications to the branch as well.
41f4ae6d3b79a9b1 is my dongle ID.
I see this issue consistently. I have added a menu item that shows my remaining storage in the developer menu. I observed that the storage went to 38 and stayed there for the next couple of hours until I stopped again, then it stopped at 21, then at 10. All of the routes that I drove appeared to have been uploaded, but they were not being deleted from the C3. I do not have comma prime and take lots of long drives.
When I play with the deleter script to make it start deleting some stuff, it seems to delete only one route/segment and then stop? There were LOTS of routes here.
Since that image, I pushed an update with a syntax error and for some reason could not SSH back in. Some service not started? IP changed? I'm not sure; I had to reset. I am confident that I will see this again. I'll monitor it closely and improve the logging as I go.
I've started a thread in the Discord: https://discord.com/channels/469524606043160576/1322679694889324616
it seems to delete only one route/segment and then stop? There were LOTS of routes here.
Once the available space reaches 10% or 5GB, it will delete 1 or 2 segments to free up space, but it won't delete more than that so that we keep as many logs as possible. It will only delete more once new segments are created (so while driving it will delete one every minute).
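In pseudocode, the loop behaves roughly like this (a simplified sketch of the behavior described above, not the real deleter.py; listdir_oldest_first stands in for however the real code orders and filters segments):
import os
import shutil
import threading

MIN_PERCENT = 10
MIN_BYTES = 5 * 1024**3              # 5 GB floor mentioned above
LOG_ROOT = "/data/media/0/realdata"  # assumed log directory

def out_of_space(path: str = LOG_ROOT) -> bool:
  usage = shutil.disk_usage(path)
  return usage.free < MIN_BYTES or 100.0 * usage.free / usage.total < MIN_PERCENT

def listdir_oldest_first(path: str = LOG_ROOT) -> list[str]:
  # stand-in: oldest segment directories first
  entries = [os.path.join(path, d) for d in os.listdir(path)]
  return sorted(entries, key=os.path.getmtime)

def deleter_loop(exit_event: threading.Event) -> None:
  while not exit_event.is_set():
    if out_of_space():
      for segment in listdir_oldest_first():
        try:
          shutil.rmtree(segment)
          break                      # only one segment is removed per pass
        except OSError:
          continue                   # e.g. segment is still being written
      exit_event.wait(.1)            # re-check almost immediately while low on space
    else:
      exit_event.wait(30)            # plenty of space: sleep longer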
log.zip
Here's log.txt from /data/33993_log/.
It would be nice to have more visibility into how full the storage is getting, so you aren't blind right up until it's suddenly too full and stops working a little while later.
It's happening due to a time warp, which can be seen in @korhojoa's logs. The clock is going back to 2015; the trigger is unknown, but it isn't supposed to matter. threading.Event.wait() is supposed to use a monotonic clock and therefore be robust to that, but it isn't. @incognitojam found python/cpython#91394, which may describe the issue.
To reproduce on device, apply this patch:
diff --git a/system/loggerd/deleter.py b/system/loggerd/deleter.py
index 2a2bd5fe1..44e796a62 100755
--- a/system/loggerd/deleter.py
+++ b/system/loggerd/deleter.py
@@ -69,7 +69,8 @@ def deleter_thread(exit_event: threading.Event):
cloudlog.exception(f"issue deleting {delete_path}")
exit_event.wait(.1)
else:
- exit_event.wait(30)
+ cloudlog.info("Sleeping")
+ exit_event.wait(5)
Then launch deleter in one terminal:
comma@comma-2a02b9c8:/data/openpilot$ LOGPRINT=debug system/loggerd/deleter.py
Sleeping
Sleeping
Sleeping
Then tamper with the system time in another terminal:
sudo systemctl stop systemd-timesyncd
sudo date --set "1 Jan 2015 12:00:00"
date
You will stop seeing "Sleeping" output from deleter, but it will resume when the clock is fixed:
sudo systemctl start systemd-timesyncd
This explains why the deleter recovers when we get internet connectivity: timesyncd can then fix the time.
The correct fix is not yet clear to me. There are a number of hacks we could apply, but ideally if Python is supposed to be using a monotonic clock for Event.wait(), we should find out why it isn't and fix that.
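As an illustration of the kind of hack available (my sketch, not a proposed fix): poll the event with short relative time.sleep() calls and measure the timeout against time.monotonic(), so a realtime clock jump can't stall the wait. On Linux, relative sleeps are measured on the monotonic clock; the cost is that a set() is only noticed with up to one poll interval of delay.
import threading
import time

def warp_tolerant_wait(exit_event: threading.Event, timeout: float, poll: float = 0.5) -> bool:
  # hacky stand-in for exit_event.wait(timeout): the deadline is computed on the
  # monotonic clock and the event is polled with short relative sleeps, so a
  # realtime clock warp can't stretch the wait; returns True if the event is set
  deadline = time.monotonic() + timeout
  while not exit_event.is_set():
    remaining = deadline - time.monotonic()
    if remaining <= 0:
      return False
    time.sleep(min(poll, remaining))
  return True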