deleter: inconsistency after AGNOS11 update (NVMe related)

Open AlexandreSato opened this issue 1 year ago • 27 comments

Describe the bug

I suspect the deleter is no longer working consistently after the AGNOS 11 update. I haven't tested it myself to confirm, but I've already seen four people who had to delete realdata/* manually (because of a noEntry "out of storage" event).

Provide a route where the issue occurs

dongleIDs: 0b54d0594d924cd9, 075b133b6181e058, a0e5673b89222047

075b133b6181e058/000000cc--7d1fff3f36/42

openpilot version

13760968bbdec44c2bf234fde79f8ef0e5bbe254

Additional info

Not confirmed


AlexandreSato avatar Nov 12 '24 00:11 AlexandreSato

Got any routes or dongles?

adeebshihadeh avatar Nov 12 '24 00:11 adeebshihadeh

Got any routes or dongles?

Only some dongle IDs; added them to the description.

AlexandreSato avatar Nov 12 '24 23:11 AlexandreSato

Got this error on this route:

https://connect.comma.ai/fe18f736cb0d7813/0000040d--3a202cd3b9

It actually increased by 1% after I couldn't engage anymore either. Logs/data are uploading.

I pretty much only run master-ci so this was surprising.

nelsonjchen avatar Nov 13 '24 04:11 nelsonjchen

Got one segment where it happened: 075b133b6181e058/000000cc--7d1fff3f36/42

Interestingly, when I connected to my cell phone's hotspot to SSH in and run the deleter test script, the deleter worked the moment the connection was established, without me running any command, and the out of storage event cleared. When I get home I'll upload the route.

AlexandreSato avatar Nov 13 '24 22:11 AlexandreSato

My routes don't appear in connect, so I can't really help there, but I got it with dongle a954cbd0682c58bd.

korhojoa avatar Nov 17 '24 11:11 korhojoa

Yup, mine's doing the same. Hoping the new build of AGNOS in master will address it...

My dongle ID is likely useless since my C3 no longer has a data connection: dc30ef2ad294a732

michaelhonan avatar Nov 18 '24 04:11 michaelhonan

The new AGNOS doesn't address this. Just looking into this now.

So far it seems like it's all devices with NVMe drives, so it's likely a race condition in (or a failure of) mounting it. Have you guys gotten the "failed to mount" alert at all?

adeebshihadeh avatar Nov 18 '24 04:11 adeebshihadeh

Thanks for confirming! I personally have not seen an error like that

michaelhonan avatar Nov 18 '24 04:11 michaelhonan

I have not seen that error.

nelsonjchen avatar Nov 18 '24 04:11 nelsonjchen

I haven't seen that error either.

AlexandreSato avatar Nov 18 '24 10:11 AlexandreSato

Grabbed one more: a0e5673b89222047/0000003d--2ac5e373e1/8, or possibly a0e5673b89222047/0000003c--0133a69a87/3, according to the user's description.

AlexandreSato avatar Nov 18 '24 12:11 AlexandreSato

Not sure if this is related but my C3X wasn't uploading anything. After deleting realdata it started uploading again.

pbassut avatar Nov 23 '24 22:11 pbassut

Just started looking into this:

075b133b6181e058/000000cc--7d1fff3f36/42

  • eventually recovers towards the end of the drive, almost like the deleter was stuck
  • no rlogs
  • bootlog for this route is missing, not on the NVMe?

fe18f736cb0d7813/0000040d--3a202cd3b9

  • doesn't recover like the other route did
  • thanks for uploading all the logs @nelsonjchen!
  • has the bootlog, and NVMe is mounted for that bootlog

adeebshihadeh avatar Nov 25 '24 23:11 adeebshihadeh

I missed it when I flashed AGNOS during the sound issue test.

AlexandreSato avatar Nov 25 '24 23:11 AlexandreSato

Got some routes:

  • 075b133b6181e058/0000003e--67e218c522/1
  • 075b133b6181e058/0000003d--2e512616b9/3
  • 075b133b6181e058/0000003a--3043e775b6/25

I think 0000003f has something interesting too.

AlexandreSato avatar Nov 29 '24 22:11 AlexandreSato

Posting links to relevant code here:

The code that disables engagement at 7% free:

https://github.com/commaai/openpilot/blob/75bf7568938da87233ae2f496e0baa1b23df199a/selfdrive/selfdrived/selfdrived.py#L189-L190

The code that should have started deleting stuff when things reach 10% free:

https://github.com/commaai/openpilot/blob/master/system/loggerd/deleter.py
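
In rough pseudo-Python, those two checks amount to something like the sketch below. This is a simplified paraphrase with placeholder names and container types, not the actual openpilot code:

# Simplified paraphrase of the two linked checks; names are placeholders.

def check_out_of_space(free_percent: float, events: set) -> None:
    # selfdrived: below ~7% free, raise the outOfSpace event, which produces
    # the noEntry "out of storage" alert and prevents engagement.
    if free_percent < 7:
        events.add("outOfSpace")

def deleter_should_run(free_percent: float, free_bytes: int) -> bool:
    # deleter: below ~10% free (or below a minimum number of free bytes), start
    # deleting the oldest unpreserved segments until back above the threshold.
    return free_percent < 10 or free_bytes < 5 * 2**30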

nelsonjchen avatar Dec 04 '24 07:12 nelsonjchen

I've made a roughly pinned branch of master-ci that makes the deleter log heavily (with rotation) to somewhere in /data, plus extra logging to catch the deleter dying to anything that isn't an OSError, and I intend to daily it:

https://github.com/nelsonjchen/openpilot/tree/33993_master-ci

(I'll rebase it onto a newer master-ci whenever I feel like it.)

It'll be logging onto the SOM's storage, but that's good since it'll be separate from the NVMe drive.

That, combined with fallocate -l 400G /data/media/dummy.dat to make the storage "full-ish" so the deleter stays in the persistently deleting state on my 1TB C3 SSD, should help trace the issue when it breaks again. I had to put this weight back on since I had cleared some of the SSD to get it working again ASAP.

Anyway, just mentioning my methodology if anyone else wants to follow along.
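
Roughly, the extra logging is just a rotating file handler on the SOM's storage wrapped around the existing deleter loop. A minimal sketch of the idea (illustrative only; the paths, sizes, and import layout are assumptions, not literally what the branch does):

# Illustrative sketch: rotating-file debug logging wrapped around the deleter loop.
import logging
import os
import threading
from logging.handlers import RotatingFileHandler

# deleter_thread() is the existing loop in system/loggerd/deleter.py; the import
# path assumes the current repo layout.
from openpilot.system.loggerd.deleter import deleter_thread

LOG_DIR = "/data/33993_log"   # on the SOM's storage, separate from the NVMe
os.makedirs(LOG_DIR, exist_ok=True)

logger = logging.getLogger("deleter_debug")
logger.setLevel(logging.DEBUG)
handler = RotatingFileHandler(os.path.join(LOG_DIR, "log.txt"),
                              maxBytes=5 * 1024 * 1024, backupCount=5)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)

if __name__ == "__main__":
    exit_event = threading.Event()
    logger.info("deleter starting")
    try:
        deleter_thread(exit_event)
    except Exception:
        # OSErrors are already handled inside the loop, so anything landing here
        # is the "dies to something that's not an OSError" case.
        logger.exception("deleter died")
        raise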

nelsonjchen avatar Dec 04 '24 08:12 nelsonjchen

Looked into it a bit more, and it's not actually the NVMe. Seems to be related to internet connectivity like @AlexandreSato mentioned. Curious what you find @nelsonjchen.

adeebshihadeh avatar Dec 05 '24 05:12 adeebshihadeh

Ooh, OK. I've added a bit more logging around sus stuff with "cloud" in the name, like cloudlog, then.

nelsonjchen avatar Dec 05 '24 17:12 nelsonjchen

I can't claim I've driven that much, but so far, no hits on the issue. Maybe I need to try a full shutdown/power-up away from wifi/internet.

nelsonjchen avatar Dec 09 '24 05:12 nelsonjchen

@AlexandreSato, anybody who can reliably hit this? Maybe they can run the hacky logging patch?

nelsonjchen avatar Dec 09 '24 05:12 nelsonjchen

@AlexandreSato, anybody who can reliably hit this? Maybe they can run the hacky logging patch?

I think the best way would be to use replay, recreating logs in a loop. Lately, I've been connecting to my phone's hotspot to work around the issue.

AlexandreSato avatar Dec 09 '24 19:12 AlexandreSato

Unable to reproduce it so far. I guess my habits just aren't compatible. Hey, those of you who can reproduce it consistently, please cherry-pick my commit and share some logs. 🤞

I don't care if you're on a clone, a fork, have uploads ignored, or whatever. Just help.

nelsonjchen avatar Dec 15 '24 20:12 nelsonjchen

Should be very easy to reproduce, happens to me 100% of the time if there is no network connection. I'll try the branch.

korhojoa avatar Dec 16 '24 00:12 korhojoa

Thanks @korhojoa. Let me know if you need any changes or modifications to the branch as well.

nelsonjchen avatar Dec 17 '24 16:12 nelsonjchen

41f4ae6d3b79a9b1 is my dongle ID.

I see this issue consistently. I have added a menu item to the developer menu that shows my remaining storage. I observed that the storage went to 38 and stayed there for the next couple of hours until I stopped again; then it stopped at 21, then at 10. All of the routes that I drove appeared to be uploaded, but they were not being deleted from the C3. I do not have comma prime and take lots of long drives.

When I play with the deleter script to make it start deleting some stuff, it seems to try to delete only one route/segment and then stops? There were LOTS of routes here. (screenshot attached)

Since that screenshot, I sent an update with a syntax error and for some reason could not SSH back in: some service not started? IP changed? I'm not sure; I had to reset. I am confident that I will see this again. I'll monitor it closely and improve the logging as I go.

inquisitev avatar Dec 30 '24 14:12 inquisitev

I've started a thread in the Discord: https://discord.com/channels/469524606043160576/1322679694889324616

inquisitev avatar Dec 30 '24 15:12 inquisitev

it seems to try to delete only one route/segment and then stops? There were LOTS of routes here.

Once the available space reaches 10% or 5GB, it will delete 1 or 2 segments to free up space, but it won't delete more than that so that we keep as many logs as possible. It will only delete more once new segments are created (so while driving it will delete one every minute).
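
In rough pseudo-Python, that behaviour looks something like this (a simplified, self-contained sketch rather than the real deleter.py; the two callables and the log-root path are placeholders):

# Simplified sketch of the behaviour described above; not the real deleter.py.
import shutil
import time

MIN_PERCENT = 10        # start deleting below 10% free...
MIN_BYTES = 5 * 2**30   # ...or below 5 GB free

def free_space(path: str = "/data/media/0/realdata") -> tuple[float, int]:
    usage = shutil.disk_usage(path)
    return 100.0 * usage.free / usage.total, usage.free

def deleter_loop(list_segments_oldest_first, delete_segment) -> None:
    while True:
        percent, free_bytes = free_space()
        if percent < MIN_PERCENT or free_bytes < MIN_BYTES:
            # Delete only the single oldest unpreserved segment, then re-check,
            # so normally just 1-2 segments are removed to get back over the line.
            segments = list_segments_oldest_first()
            if segments:
                delete_segment(segments[0])
            time.sleep(0.1)
        else:
            # Above the threshold there is nothing to do. While driving, a new
            # segment lands every minute, so at the boundary this ends up
            # deleting roughly one segment per minute.
            time.sleep(30)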

incognitojam avatar Jan 09 '25 20:01 incognitojam

log.zip: here's log.txt from /data/33993_log/. It would be nice to have more information on how full the storage is getting, so drivers wouldn't be blind until it's suddenly too full and stops a little while later.

korhojoa avatar Jan 12 '25 08:01 korhojoa

It's happening due to a time warp, which can be seen in @korhojoa's logs. The clock is jumping back to 2015; the trigger is unknown, but it isn't supposed to matter: threading.Event.wait() is supposed to use a monotonic clock and therefore be robust to that, but it isn't. @incognitojam found python/cpython#91394, which may describe the issue.

To reproduce on device, apply this patch:

diff --git a/system/loggerd/deleter.py b/system/loggerd/deleter.py
index 2a2bd5fe1..44e796a62 100755
--- a/system/loggerd/deleter.py
+++ b/system/loggerd/deleter.py
@@ -69,7 +69,8 @@ def deleter_thread(exit_event: threading.Event):
           cloudlog.exception(f"issue deleting {delete_path}")
       exit_event.wait(.1)
     else:
-      exit_event.wait(30)
+      cloudlog.info("Sleeping")
+      exit_event.wait(5)

Then launch deleter in one terminal:

comma@comma-2a02b9c8:/data/openpilot$ LOGPRINT=debug system/loggerd/deleter.py  
Sleeping                               
Sleeping                               
Sleeping       

Then tamper with the system time in another terminal:

sudo systemctl stop systemd-timesyncd
sudo date --set "1 Jan 2015 12:00:00"
date

You will stop seeing "Sleeping" output from deleter, but it will resume when the clock is fixed:

sudo systemctl start systemd-timesyncd

This explains why the deleter recovers when we get internet connectivity: timesyncd can then fix the time.

The correct fix is not yet clear to me. There are a number of hacks we could apply, but ideally if Python is supposed to be using a monotonic clock for Event.wait(), we should find out why it isn't and fix that.
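
For illustration only, one of those hacks would be to stop trusting a single long Event.wait() and instead poll in short slices while tracking the deadline with time.monotonic(). A minimal sketch, not a proposed patch:

# Sketch of one possible workaround: slice a long wait into short Event.wait()
# calls and keep the deadline on time.monotonic(). This narrows, but may not
# completely remove, the window in which a realtime clock jump can stall a
# single underlying wait.
import threading
import time

def monotonic_wait(event: threading.Event, timeout: float, slice_s: float = 1.0) -> bool:
    deadline = time.monotonic() + timeout
    while not event.is_set():
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            return False
        event.wait(min(slice_s, remaining))
    return True

In deleter_thread() this would replace exit_event.wait(30) with monotonic_wait(exit_event, 30), but as noted above, the cleaner fix is to understand why Event.wait() isn't already monotonic here.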

jyoung8607 avatar Jan 12 '25 12:01 jyoung8607