
Audit log grows indefinitely (other suggested solutions didn't work for me)

Open lfama opened this issue 1 year ago • 9 comments

The auditlog_logentry.ibd grows forever (as already reported in #9153 ).

However, I tried both of the suggested solutions: setting the DD_AUDITLOG_FLUSH_RETENTION_PERIOD environment variable had no effect, and running docker exec -it django-defectdojo-uwsgi-1 ./manage.py auditlogflush --yes --before-date 2023-01-01 gets stuck.

My DefectDojo instance runs in a Docker deployment, version 2.36.4 (release mode).

How can we delete old audit logs? Also, if we don't care about audit logs at all, can we just periodically delete the entire auditlog_logentry.ibd file as a workaround (assuming that won't break anything)?

Any help is appreciated, thanks!

lfama avatar Sep 02 '24 09:09 lfama

This might be a good time to consider migrating to Postgres, as I believe .ibd is a MySQL (InnoDB) table file and we've deprecated MySQL.

There's an in-depth discussion of migration at https://github.com/DefectDojo/django-DefectDojo/discussions/9480

If you do migrate to Postgres, you can run the auditlogflush command and then do a Postgres VACUUM to reclaim the space.
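For reference, that sequence might look something like the following in a docker-compose deployment. This is a sketch, not a verbatim recipe: the container names, database name, and user are assumptions based on a default postgres profile, so adjust them to your setup.

```shell
# Flush audit log entries older than the given date (command from this thread)
docker exec -it django-defectdojo-uwsgi-1 \
  ./manage.py auditlogflush --yes --before-date 2024-01-01

# Reclaim the disk space. A plain VACUUM only marks space as reusable inside
# the table file; VACUUM FULL rewrites the table and returns space to the OS,
# but it takes an exclusive lock, so run it in a maintenance window.
docker exec -it django-defectdojo-postgres-1 \
  psql -U defectdojo -d defectdojo -c "VACUUM FULL auditlog_logentry;"
```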

mtesauro avatar Sep 02 '24 20:09 mtesauro

Hi @mtesauro, yes, we are actually going to install a new DD instance with the postgres-redis profile. However, I'd prefer not to export the audit logs to the new instance, as we don't need them.

I tried simply moving auditlog_logentry.ibd aside and creating a new empty file with the same name and permissions, but that breaks things. I'm wondering whether it's safe to manually delete rows from this table to reduce its size.

lfama avatar Sep 03 '24 08:09 lfama

I know in the past I've just dropped all those rows in the DB, with something like DELETE FROM auditlog_logentry; to clean up a cluttered install. It's been a long time since I've done that, so don't just copy/paste what I remember, but it's something like that. Take a SQL dump of the DB, or at least that table, before you try anything, just to be safe.

There's also a useful setting you might not be aware of: DD_AUDITLOG_FLUSH_RETENTION_PERIOD=(int, -1), see https://github.com/DefectDojo/django-DefectDojo/blob/d522bbe1b6edfec6612a29c005f85d3b8bbb2161/dojo/settings/settings.dist.py#L245

You can set that in your local_settings.py and it will survive upgrades.
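A minimal local_settings.py override might look like the sketch below. The internal setting name is an assumption derived from the DD_-prefixed environment variable name, so verify it against settings.dist.py before relying on it.

```python
# local_settings.py -- hypothetical sketch, not verified against the codebase.
# The retention period is in months; -1 (the default) disables flushing.
AUDITLOG_FLUSH_RETENTION_PERIOD = 6
```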

mtesauro avatar Sep 03 '24 16:09 mtesauro

I already tried (around a week ago) setting DD_AUDITLOG_FLUSH_RETENTION_PERIOD=2 (which should mean 2 months) in my docker-compose.yml, but it had no effect, even after stopping and restarting all the containers. I set the variable for all the relevant services (uwsgi, celeryworker and celerybeat), but nothing happened.

lfama avatar Sep 04 '24 12:09 lfama

Fair.

I've only used it after cleaning out that table, so it only had to keep an already-emptied auditlog_logentry from growing again; I haven't tried/seen whether it can clean up an already full instance.

mtesauro avatar Sep 04 '24 15:09 mtesauro

You should be able to skip the auditlog_logentry table during the migration to PostgreSQL using pgloader. The log table is by far the biggest table to migrate, and if you don't need it, skipping it will speed up the migration by a lot.

https://github.com/dimitri/pgloader/blob/master/docs/ref/pgsql.rst#excluding-table-names-matching
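A pgloader command file using that clause might look like this sketch. The connection strings are placeholders for your own hosts and credentials; the table name is the one from this thread.

```
LOAD DATABASE
     FROM mysql://defectdojo:password@mysql-host/defectdojo
     INTO postgresql://defectdojo:password@postgres-host/defectdojo

EXCLUDING TABLE NAMES MATCHING 'auditlog_logentry';
```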

DarkR0ast avatar Sep 11 '24 15:09 DarkR0ast

I already tried (around a week ago) setting DD_AUDITLOG_FLUSH_RETENTION_PERIOD=2 (which should mean 2 months) in my docker-compose.yml, but it had no effect, even after stopping and restarting all the containers. I set the variable for all the relevant services (uwsgi, celeryworker and celerybeat), but nothing happened.

What's in your logs? The flush_auditlog task should be triggered and visible in the logs.

valentijnscholten avatar Sep 21 '25 18:09 valentijnscholten

What's in your logs? The flush_auditlog task should be triggered and visible in the logs.

I'm facing something similar. I defined it in our docker-compose, but I'm not sure it has any effect; the table is still 75 GB.

And the script hung too.

de4Ru avatar Sep 25 '25 15:09 de4Ru

The linked PR will add some improvements to delete the entries in batches, plus a management command to remove entries manually. For those who want a solution NOW, you can run the following SQL query:

DELETE FROM auditlog_logentry
WHERE id IN (
  SELECT id
  FROM auditlog_logentry
  WHERE timestamp < (NOW() - INTERVAL '6 months')
  ORDER BY id
  LIMIT 10000
);

Adjust the LIMIT according to how brave you are, and the 6-month interval according to your needs.
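Since the LIMIT caps a single statement, clearing a large backlog means re-running the query until no rows match. Here is a hedged illustration of that batched-delete pattern in Python, using SQLite in memory as a stand-in (not DefectDojo's actual database layer); committing between batches keeps each transaction, and its locks, short-lived.

```python
import sqlite3

def delete_in_batches(conn, cutoff, batch_size=10000):
    """Delete audit rows older than cutoff, batch_size rows at a time,
    committing between batches so locks stay short-lived."""
    total = 0
    while True:
        cur = conn.execute(
            "DELETE FROM auditlog_logentry WHERE id IN ("
            "  SELECT id FROM auditlog_logentry"
            "  WHERE timestamp < ? ORDER BY id LIMIT ?)",
            (cutoff, batch_size),
        )
        conn.commit()
        if cur.rowcount == 0:
            break
        total += cur.rowcount
    return total

# Demo against an in-memory table with the same shape as auditlog_logentry
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE auditlog_logentry (id INTEGER PRIMARY KEY, timestamp TEXT)"
)
conn.executemany(
    "INSERT INTO auditlog_logentry (timestamp) VALUES (?)",
    [("2023-01-01",)] * 25 + [("2025-01-01",)] * 5,
)
deleted = delete_in_batches(conn, "2024-01-01", batch_size=10)
print(deleted)    # 25 old rows removed across three batches
remaining = conn.execute(
    "SELECT COUNT(*) FROM auditlog_logentry"
).fetchone()[0]
print(remaining)  # 5 recent rows kept
```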

valentijnscholten avatar Sep 27 '25 18:09 valentijnscholten