
DNS dying if pihole-FTL.db gets too large

Open abraxas86 opened this issue 8 months ago • 10 comments

Versions

Core version is v6.1.4 (Latest: v6.1.4)
Web version is v6.2.1 (Latest: v6.2.1)
FTL version is v6.2.3 (Latest: v6.2.3)

Platform

  • OS and version: Raspbian GNU/Linux 12 (bookworm)
  • Platform: Raspberry Pi 3 Model B Plus Rev 1.3

Expected behavior

Some way to manage flooding of the pihole-FTL database, or a way to manage applications or websites that may be flooding requests.

Actual behavior / bug

I have a program running on one of the computers on my network that is perhaps very unoptimized in its design. It can absolutely pummel Twitch's eventsub API, which crushes my FTL database. This in turn pegs the CPU on my Raspberry Pi and takes down my entire network's DNS lookups (and internet access) until the Pi-hole calms down. I think this opens a door to abuse: if you write something that relies on advertisements and you don't get the result back, you could potentially flood the DB and take down the user's entire internet connection.

My pihole-FTL database was at 1.6 GB. After I removed all the eventsub rows from the table and vacuumed it, the DB was down to 234 MB.

I'm not sure what the best route would be to fix this. If we had the ability to not log trusted domains, that could itself be abused: ad requests could be sent through a previously trusted domain, making them harder to track down. Maybe we could offload such traffic to smaller tables in another database? Something like a quarantine that isn't queried as often as the main FTL tables, and that won't peg the CPU at 100% if something floods it.

As a workaround, I'll have to run a script every few hours to purge the rows from the table and vacuum the database.
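Before purging, it can help to confirm which domains dominate the database. A sketch of a diagnostic query, assuming the same schema linkage the cleanup script below relies on (`query_storage.domain` is an integer key into `domain_by_id.id`); the path assumes a default bare-metal install:

```shell
DB="/etc/pihole/pihole-FTL.db"

# List the 10 domains with the most stored queries, busiest first.
sqlite3 "$DB" "
  SELECT d.domain, COUNT(*) AS hits
  FROM query_storage q
  JOIN domain_by_id d ON q.domain = d.id
  GROUP BY d.domain
  ORDER BY hits DESC
  LIMIT 10;"
```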

Steps to reproduce the behavior:

  1. Install something like Chatterino2, making sure Eventsub is enabled
  2. Add around 40 different chat panels and tabs
  3. Let it run for a day or two
  4. Watch your pihole throttle and take down your DNS lookups while it parses the FTL database

Debug Token

  • URL: https://tricorder.pi-hole.net/dqUNKy2T/

abraxas86 avatar Aug 16 '25 23:08 abraxas86

My pihole-FTL.db was 2.1 GB... After restarting the Docker container it was up for ~2-3 minutes, then it was unhealthy/down.

After deleting the pihole-FTL.db it seems to work.

https://github.com/pi-hole/pi-hole/issues/6395

BornToBeRoot avatar Aug 26 '25 11:08 BornToBeRoot

Just wanted to update: I've got a cronjob running this script every hour, and since then my Pi-hole has been running smoothly. It's not a great solution - it kind of skews the reporting - but my network hasn't died since I put it in place.

Here's how I have it set up:

$ ls /opt/pihole/scripts/pihole-FTL-cleaner
clean-ftl.sh  remove-domains.txt

$ cat /opt/pihole/scripts/pihole-FTL-cleaner/remove-domains.txt 
eventsub.wss.twitch.tv
sockets.betterttv.net

Here's the script:

#!/bin/bash
# clean-ftl.sh
# Removes specified domains from Pi-hole FTL DB, keeps backups for 24h
# Designed to be run as root

set -euo pipefail

# Full paths so cron doesn't mess up some relative path reference as root...
SQLITE3="/usr/bin/sqlite3"
SYSTEMCTL="/bin/systemctl"
PIHOLE="/usr/local/bin/pihole"  # adjust if different on your system
FIND="/usr/bin/find"
CP="/bin/cp"

# Ensure the script is run as root
if [[ "$EUID" -ne 0 ]]; then
    echo "[!] This script must be run as root."
    exit 1
fi

# Relevant paths:
DB="/etc/pihole/pihole-FTL.db"
BACKUP_DIR="/etc/pihole/backups"
DOMAIN_LIST="/opt/pihole/scripts/pihole-FTL-cleaner/remove-domains.txt"  # one domain per line
TIMESTAMP=$(date +'%Y%m%d-%H%M%S')
BACKUP_FILE="$BACKUP_DIR/pihole-FTL-$TIMESTAMP.db"
LOGFILE="/var/log/pihole-ftl-cleaner.log"

# Send output to log file
exec >> "$LOGFILE" 2>&1

echo "[+] Starting Pi-hole FTL cleanup at $(date)"

# Make backup directory if it doesn't exist
mkdir -p "$BACKUP_DIR"

# Check domain list exists
if [[ ! -f "$DOMAIN_LIST" ]]; then
    echo "[!] Domain list file not found: $DOMAIN_LIST"
    exit 1
fi

# Backup database in case I screwed something up
echo "[+] Backing up FTL database to $BACKUP_FILE"
$CP "$DB" "$BACKUP_FILE"

# Remove entries for each domain in the txt file
while IFS= read -r domain; do
    # Skip empty lines and comments
    [[ -z "$domain" || "$domain" =~ ^# ]] && continue

    # Escape single quotes for SQL
    safe_domain=${domain//\'/\'\'}

    echo "[+] Removing queries for $domain"
    # Delete via a subquery so we never splice a possibly-empty or
    # multi-row $id lookup into the SQL; changes() reports rows deleted
    deleted=$($SQLITE3 "$DB" "DELETE FROM query_storage WHERE domain IN (SELECT id FROM domain_by_id WHERE domain = '$safe_domain'); SELECT changes();")
    if [[ "$deleted" -eq 0 ]]; then
        echo "[-] Domain not found in database: $domain"
    fi
done < "$DOMAIN_LIST"

# Vacuum database to reduce size
echo "[+] Vacuuming database"
$SQLITE3 "$DB" "VACUUM;"

# Restart FTL so its in-memory counters are rebuilt from the cleaned database.
# Without a restart the dashboard numbers stayed wrong (maybe they would
# eventually refresh on their own, but I haven't left it long enough to check).
echo "[+] Restarting Pi-hole FTL"
$SYSTEMCTL restart pihole-FTL

# Flush Pi-hole logs
echo "[+] Flushing Pi-hole logs"
$PIHOLE -f

# Remove backups older than 24 hours 
# (I can probably safely remove this, I've been running the script for a while - so far so good)
echo "[+] Cleaning up old backups (older than 24 hours)"
$FIND "$BACKUP_DIR" -type f -name "pihole-FTL-*.db" -mtime +0 -exec rm -f {} \;

echo "[+] Pi-hole FTL cleanup completed at $(date)"
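For reference, the hourly schedule can be a single root cron entry; the path below is assumed from the listing above, and the file location (`/etc/cron.d/`) is just one common choice:

```
# /etc/cron.d/pihole-ftl-cleaner - run the cleanup at the top of every hour as root
0 * * * * root /opt/pihole/scripts/pihole-FTL-cleaner/clean-ftl.sh
```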

abraxas86 avatar Aug 30 '25 07:08 abraxas86

This issue is stale because it has been open 30 days with no activity. Please comment or update this issue or it will be closed in 5 days.

github-actions[bot] avatar Sep 29 '25 08:09 github-actions[bot]

I don't think this has been addressed yet. Still have the script set to run every hour.

abraxas86 avatar Sep 29 '25 12:09 abraxas86

Had a similar situation right now. Stopped FTL, renamed the files, and started it again. Now everything works fine again.

-rw-r-----   1 pihole pihole  88K Oct  7 15:07 pihole-FTL.db
-rw-r-----   1 pihole pihole 5.3G Oct  7 15:01 pihole-FTL.db_
-rw-r-----   1 pihole pihole  32K Oct  7 15:08 pihole-FTL.db-shm
-rw-r-----   1 pihole pihole 7.8M Oct  7 15:04 pihole-FTL.db-shm_
-rw-r-----   1 pihole pihole  73K Oct  7 15:08 pihole-FTL.db-wal
-rw-r-----   1 pihole pihole 4.0G Oct  7 15:04 pihole-FTL.db-wal_

Is there a certain cleanup job not running/working correctly? Maybe a VACUUM; is needed?
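A multi-GB `-wal` file next to a small `.db` suggests WAL checkpoints aren't completing. A sketch of forcing one manually, using standard SQLite pragmas (nothing Pi-hole-specific); paths assume a bare-metal install, and FTL is stopped first so nothing holds the database open:

```shell
# Stop FTL, checkpoint the WAL back into the main database file
# (TRUNCATE also shrinks the -wal file to zero), then restart.
sudo systemctl stop pihole-FTL
sudo sqlite3 /etc/pihole/pihole-FTL.db "PRAGMA wal_checkpoint(TRUNCATE);"
sudo sqlite3 /etc/pihole/pihole-FTL.db "VACUUM;"   # optional: reclaim free pages too
sudo systemctl start pihole-FTL
```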

LordSimal avatar Oct 07 '25 13:10 LordSimal

Thanks for the post and also just noticed this, my pihole-FTL.db and the WAL are taking up to ~650 mb combined after running for a few months, since upgrading to v6. Flushing logs doesn't seem to help.

I don't recall having this issue on v5 or earlier, is this some oversight on the new version?

-rw-r----- 1 hy-r2 hy-r2 334M Oct 11 13:28 pihole-FTL.db
-rw-r----- 1 hy-r2 hy-r2 640K Oct 11 13:28 pihole-FTL.db-shm
-rw-r----- 1 hy-r2 hy-r2 313M Oct 11 13:28 pihole-FTL.db-wal
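If the size comes from long retention rather than a flood, the retention window can be shortened. In v6 this lives in `/etc/pihole/pihole.toml`; the key name below is taken from the FTL config docs, so verify it against your local file:

```toml
[database]
  # Days of query history to keep in pihole-FTL.db; lower it to cap the size
  maxDBdays = 30
```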

hs-ye avatar Oct 11 '25 02:10 hs-ye

I need to restart the pihole-FTL service to get the live list of blocked URLs back. In my case (Debian x86), pihole-FTL.db lasts just one day; the next day I need to restart the service, and then pihole-FTL.db is automatically deleted and recreated (all previous logs are gone).

Restarting is the only way to get the blocked list updating live again. I tried uninstalling and reinstalling, but it still fails. This has happened since the latest v6 update; on v5, and when upgrading from v5 to v6, it was working fine.

PS: sorry if my English is not too good.

ctuais avatar Oct 16 '25 19:10 ctuais

This issue is stale because it has been open 30 days with no activity. Please comment or update this issue or it will be closed in 5 days.

github-actions[bot] avatar Nov 16 '25 08:11 github-actions[bot]

This is still a possible vector to take down a Pihole server.

abraxas86 avatar Nov 16 '25 09:11 abraxas86

https://docs.pi-hole.net/ftldns/configfile/?h=rate#dnsratelimit

Rate limiting should help from the flooding perspective, i.e. with the default configuration nothing should be able to hammer FTL so hard that the database suddenly fills up and takes it down.

Have you reported the insane number of lookups to the developers of the application you are using? FTL crashing is a symptom of that application's behaviour. The cause is what should be fixed.

That said, I know we are currently doing some optimisation work on the database side of things, so maybe the cause won't matter if it no longer causes the symptom (but the cause should still be fixed...).
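For anyone landing here from search: the linked setting lives in `pihole.toml` on v6. The values below are my understanding of the documented defaults (1000 queries per client per 60-second window), so check the linked docs and your own install before relying on them:

```toml
[dns.rateLimit]
  # Maximum number of queries a single client may send per interval;
  # clients exceeding this are rate-limited instead of filling the database
  count = 1000
  # Window length in seconds
  interval = 60
```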

PromoFaux avatar Nov 19 '25 18:11 PromoFaux