Dawarich crashes Raspi while importing a huge number of GPX files
OS & Hardware: Raspi 5B, 8 GB RAM; cat /etc/os-release shows:
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
VERSION="12 (bookworm)"
VERSION_CODENAME=bookworm
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
Version 0.24.1
Describe the bug
I installed Dawarich from scratch using Portainer. I can provide the YML if required. I logged in using the demo user and renamed it. I created a directory for the new user in the watched folder and populated that folder with ~2150 GPX tracks. The import takes some time; I can follow the progress on the stats page. After more than 2,000,000 geopoints are imported, the Raspi becomes unresponsive. Neither Dawarich nor Portainer nor any other service, such as SSH (PuTTY), is reachable.
In another attempt I split the import: first I imported 856 GPX files, and after that completed successfully, I deleted the files from the watched folder and added the remaining 1295 files. Again, after some time, the Raspi became completely unresponsive. It appears that Dawarich does not cope with importing a huge number of geopoints via the watched folder.
Please note that I do not use any live update, such as the OwnTracks app. Geopoints are only added by means of import via the watched directory.
To Reproduce See above
Expected behavior All GPX files are imported successfully, no matter the overall number of geopoints imported.
Screenshots n/a
Logs
From a service that frequently stores the Raspi's uptime in a database and graphs it, I can see that the Raspi stopped working properly around 18:20 Berlin time on February 24th, 2025. In the _dawarich_db_pg_17_logs I see the following:
2025-02-24 17:16:38.307 UTC [68] LOG: checkpoint starting: time
2025-02-24 17:16:47.609 UTC [68] LOG: checkpoint complete: wrote 96 buffers (0.6%); 0 WAL file(s) added, 0 removed, 0 recycled; write=9.233 s, sync=0.029 s, total=9.303 s; sync files=45, longest=0.008 s, average=0.001 s; distance=4902 kB, estimate=22256 kB; lsn=2/7A9BE8D8, redo lsn=2/7A9A9338
2025-02-24 17:21:46.987 UTC [72] WARNING: autovacuum worker took too long to start; canceled
2025-02-24 17:25:02.266 UTC [68] LOG: checkpoint starting: time
2025-02-24 18:22:05.636 UTC [96443] FATAL: sorry, too many clients already
2025-02-24 18:25:25.689 UTC [68] LOG: checkpoint complete: wrote 35 buffers (0.2%); 0 WAL file(s) added, 0 removed, 0 recycled; write=219.659 s, sync=55.831 s, total=3779.167 s; sync files=25, longest=4.615 s, average=1.765 s; distance=175 kB, estimate=20048 kB; lsn=2/7A9D52D0, redo lsn=2/7A9D5220
PostgreSQL Database directory appears to contain a database; Skipping initialization
2025-02-24 18:27:17.392 UTC [1] LOG: starting PostgreSQL 17.2 on aarch64-unknown-linux-musl, compiled by gcc (Alpine 13.2.1_git20240309) 13.2.1 20240309, 64-bit
2025-02-24 18:27:17.392 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2025-02-24 18:27:17.392 UTC [1] LOG: listening on IPv6 address "::", port 5432
2025-02-24 18:27:17.566 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2025-02-24 18:27:17.702 UTC [29] LOG: database system was interrupted; last known up at 2025-02-24 18:24:20 UTC
2025-02-24 18:27:17.932 UTC [36] FATAL: the database system is starting up
2025-02-24 18:27:18.153 UTC [37] FATAL: the database system is starting up
2025-02-24 18:27:18.154 UTC [38] FATAL: the database system is starting up
2025-02-24 18:27:20.160 UTC [39] FATAL: the database system is starting up
2025-02-24 18:27:20.161 UTC [40] FATAL: the database system is starting up
2025-02-24 18:27:22.167 UTC [41] FATAL: the database system is starting up
2025-02-24 18:27:22.171 UTC [42] FATAL: the database system is starting up
2025-02-24 18:27:22.992 UTC [49] FATAL: the database system is starting up
2025-02-24 18:27:24.174 UTC [50] FATAL: the database system is starting up
2025-02-24 18:27:24.176 UTC [51] FATAL: the database system is starting up
2025-02-24 18:27:26.180 UTC [52] FATAL: the database system is starting up
2025-02-24 18:27:26.183 UTC [53] FATAL: the database system is starting up
2025-02-24 18:27:27.715 UTC [29] LOG: syncing data directory (fsync), elapsed time: 10.00 s, current path: ./base/5/19349
2025-02-24 18:27:28.045 UTC [60] FATAL: the database system is starting up
2025-02-24 18:27:28.184 UTC [61] FATAL: the database system is starting up
2025-02-24 18:27:28.189 UTC [62] FATAL: the database system is starting up
2025-02-24 18:27:29.488 UTC [29] LOG: database system was not properly shut down; automatic recovery in progress
2025-02-24 18:27:29.503 UTC [29] LOG: redo starts at 2/7A9D5220
2025-02-24 18:27:29.555 UTC [29] LOG: invalid record length at 2/7A9FCCB8: expected at least 24, got 0
2025-02-24 18:27:29.555 UTC [29] LOG: redo done at 2/7A9FCC78 system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.05 s
2025-02-24 18:27:29.601 UTC [27] LOG: checkpoint starting: end-of-recovery immediate wait
2025-02-24 18:27:29.832 UTC [27] LOG: checkpoint complete: wrote 46 buffers (0.3%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.103 s, sync=0.092 s, total=0.245 s; sync files=37, longest=0.019 s, average=0.003 s; distance=158 kB, estimate=158 kB; lsn=2/7A9FCCB8, redo lsn=2/7A9FCCB8
2025-02-24 18:27:29.845 UTC [1] LOG: database system is ready to accept connections
Mind, these are the last entries in the log; I power-cycled the Raspi around 21:05 Berlin time, so those logs cover the last few minutes of Dawarich being alive.
Here's the _dawarich_sidekiq_logs:
D, [2025-02-24T20:09:08.213571 #40] DEBUG -- : User Load (73.1ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 LIMIT $2 [["id", 1], ["LIMIT", 1]]
D, [2025-02-24T20:09:08.216972 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:5:in 'Stats::CalculateMonth#initialize'
D, [2025-02-24T20:09:08.283418 #40] DEBUG -- : User Load (99.1ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 LIMIT $2 [["id", 1], ["LIMIT", 1]]
D, [2025-02-24T20:09:08.299875 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:5:in 'Stats::CalculateMonth#initialize'
D, [2025-02-24T20:09:08.298856 #40] DEBUG -- : User Load (124.3ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 LIMIT $2 [["id", 1], ["LIMIT", 1]]
D, [2025-02-24T20:09:08.316354 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:5:in 'Stats::CalculateMonth#initialize'
D, [2025-02-24T20:09:08.294033 #40] DEBUG -- : User Load (114.2ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 LIMIT $2 [["id", 1], ["LIMIT", 1]]
D, [2025-02-24T20:09:08.323327 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:5:in 'Stats::CalculateMonth#initialize'
D, [2025-02-24T20:09:08.309192 #40] DEBUG -- : User Load (134.5ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 LIMIT $2 [["id", 1], ["LIMIT", 1]]
D, [2025-02-24T20:09:08.498146 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:5:in 'Stats::CalculateMonth#initialize'
D, [2025-02-24T20:09:08.309647 #40] DEBUG -- : User Load (134.8ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 LIMIT $2 [["id", 1], ["LIMIT", 1]]
D, [2025-02-24T20:09:08.524133 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:5:in 'Stats::CalculateMonth#initialize'
D, [2025-02-24T20:09:08.321432 #40] DEBUG -- : User Load (145.9ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 LIMIT $2 [["id", 1], ["LIMIT", 1]]
D, [2025-02-24T20:09:08.564273 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:5:in 'Stats::CalculateMonth#initialize'
D, [2025-02-24T20:09:08.309460 #40] DEBUG -- : User Load (135.2ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 LIMIT $2 [["id", 1], ["LIMIT", 1]]
D, [2025-02-24T20:09:08.573221 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:5:in 'Stats::CalculateMonth#initialize'
D, [2025-02-24T20:09:08.302723 #40] DEBUG -- : User Load (128.8ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 LIMIT $2 [["id", 1], ["LIMIT", 1]]
D, [2025-02-24T20:09:08.579951 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:5:in 'Stats::CalculateMonth#initialize'
D, [2025-02-24T20:09:08.300646 #40] DEBUG -- : User Load (129.2ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 LIMIT $2 [["id", 1], ["LIMIT", 1]]
D, [2025-02-24T20:09:08.581759 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:5:in 'Stats::CalculateMonth#initialize'
D, [2025-02-24T20:09:09.263431 #40] DEBUG -- : Point Exists? (182.6ms) SELECT 1 AS one FROM "points" WHERE "points"."user_id" = $1 AND "points"."timestamp" BETWEEN $2 AND $3 LIMIT $4 [["user_id", 1], ["timestamp", 1730419200], ["timestamp", 1732924800], ["LIMIT", 1]]
D, [2025-02-24T20:09:09.265531 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:11:in 'Stats::CalculateMonth#call'
D, [2025-02-24T20:09:09.268874 #40] DEBUG -- : Point Exists? (146.4ms) SELECT 1 AS one FROM "points" WHERE "points"."user_id" = $1 AND "points"."timestamp" BETWEEN $2 AND $3 LIMIT $4 [["user_id", 1], ["timestamp", 1722470400], ["timestamp", 1725062400], ["LIMIT", 1]]
D, [2025-02-24T20:09:09.270524 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:11:in 'Stats::CalculateMonth#call'
D, [2025-02-24T20:09:09.269229 #40] DEBUG -- : Point Exists? (55.7ms) SELECT 1 AS one FROM "points" WHERE "points"."user_id" = $1 AND "points"."timestamp" BETWEEN $2 AND $3 LIMIT $4 [["user_id", 1], ["timestamp", 1727740800], ["timestamp", 1730332800], ["LIMIT", 1]]
D, [2025-02-24T20:09:09.280466 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:11:in 'Stats::CalculateMonth#call'
D, [2025-02-24T20:09:09.269493 #40] DEBUG -- : Point Exists? (261.0ms) SELECT 1 AS one FROM "points" WHERE "points"."user_id" = $1 AND "points"."timestamp" BETWEEN $2 AND $3 LIMIT $4 [["user_id", 1], ["timestamp", 1719792000], ["timestamp", 1722384000], ["LIMIT", 1]]
D, [2025-02-24T20:09:09.283729 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:11:in 'Stats::CalculateMonth#call'
D, [2025-02-24T20:09:09.269655 #40] DEBUG -- : Point Exists? (259.6ms) SELECT 1 AS one FROM "points" WHERE "points"."user_id" = $1 AND "points"."timestamp" BETWEEN $2 AND $3 LIMIT $4 [["user_id", 1], ["timestamp", 1727740800], ["timestamp", 1730332800], ["LIMIT", 1]]
D, [2025-02-24T20:09:09.298944 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:11:in 'Stats::CalculateMonth#call'
D, [2025-02-24T20:09:09.270722 #40] DEBUG -- : Point Exists? (77.9ms) SELECT 1 AS one FROM "points" WHERE "points"."user_id" = $1 AND "points"."timestamp" BETWEEN $2 AND $3 LIMIT $4 [["user_id", 1], ["timestamp", 1719792000], ["timestamp", 1722384000], ["LIMIT", 1]]
D, [2025-02-24T20:09:09.303605 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:11:in 'Stats::CalculateMonth#call'
D, [2025-02-24T20:09:09.269378 #40] DEBUG -- : Point Exists? (180.3ms) SELECT 1 AS one FROM "points" WHERE "points"."user_id" = $1 AND "points"."timestamp" BETWEEN $2 AND $3 LIMIT $4 [["user_id", 1], ["timestamp", 1719792000], ["timestamp", 1722384000], ["LIMIT", 1]]
D, [2025-02-24T20:09:09.304794 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:11:in 'Stats::CalculateMonth#call'
D, [2025-02-24T20:09:09.269746 #40] DEBUG -- : Point Exists? (186.3ms) SELECT 1 AS one FROM "points" WHERE "points"."user_id" = $1 AND "points"."timestamp" BETWEEN $2 AND $3 LIMIT $4 [["user_id", 1], ["timestamp", 1722470400], ["timestamp", 1725062400], ["LIMIT", 1]]
D, [2025-02-24T20:09:09.306168 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:11:in 'Stats::CalculateMonth#call'
D, [2025-02-24T20:09:09.269047 #40] DEBUG -- : Point Exists? (177.1ms) SELECT 1 AS one FROM "points" WHERE "points"."user_id" = $1 AND "points"."timestamp" BETWEEN $2 AND $3 LIMIT $4 [["user_id", 1], ["timestamp", 1727740800], ["timestamp", 1730332800], ["LIMIT", 1]]
D, [2025-02-24T20:09:09.306900 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:11:in 'Stats::CalculateMonth#call'
D, [2025-02-24T20:09:09.302449 #40] DEBUG -- : TRANSACTION (2.0ms) BEGIN
D, [2025-02-24T20:09:09.307966 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:30:in 'block in Stats::CalculateMonth#update_month_stats'
D, [2025-02-24T20:09:09.269583 #40] DEBUG -- : Point Exists? (277.2ms) SELECT 1 AS one FROM "points" WHERE "points"."user_id" = $1 AND "points"."timestamp" BETWEEN $2 AND $3 LIMIT $4 [["user_id", 1], ["timestamp", 1727740800], ["timestamp", 1730332800], ["LIMIT", 1]]
D, [2025-02-24T20:09:09.310832 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:11:in 'Stats::CalculateMonth#call'
D, [2025-02-24T20:09:09.397977 #40] DEBUG -- : TRANSACTION (17.3ms) BEGIN
D, [2025-02-24T20:09:09.399213 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:30:in 'block in Stats::CalculateMonth#update_month_stats'
D, [2025-02-24T20:09:09.400193 #40] DEBUG -- : TRANSACTION (15.2ms) BEGIN
D, [2025-02-24T20:09:09.401180 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:30:in 'block in Stats::CalculateMonth#update_month_stats'
D, [2025-02-24T20:09:09.401626 #40] DEBUG -- : TRANSACTION (14.0ms) BEGIN
D, [2025-02-24T20:09:09.402448 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:30:in 'block in Stats::CalculateMonth#update_month_stats'
D, [2025-02-24T20:09:09.401270 #40] DEBUG -- : TRANSACTION (14.2ms) BEGIN
D, [2025-02-24T20:09:09.403553 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:30:in 'block in Stats::CalculateMonth#update_month_stats'
D, [2025-02-24T20:09:09.401708 #40] DEBUG -- : TRANSACTION (12.0ms) BEGIN
D, [2025-02-24T20:09:09.404537 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:30:in 'block in Stats::CalculateMonth#update_month_stats'
D, [2025-02-24T20:09:09.402582 #40] DEBUG -- : TRANSACTION (5.0ms) BEGIN
D, [2025-02-24T20:09:09.407090 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:30:in 'block in Stats::CalculateMonth#update_month_stats'
D, [2025-02-24T20:09:09.400291 #40] DEBUG -- : TRANSACTION (14.0ms) BEGIN
D, [2025-02-24T20:09:09.408785 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:30:in 'block in Stats::CalculateMonth#update_month_stats'
D, [2025-02-24T20:09:09.409294 #40] DEBUG -- : Stat Load (25.7ms) SELECT "stats".* FROM "stats" WHERE "stats"."year" = $1 AND "stats"."month" = $2 AND "stats"."user_id" = $3 LIMIT $4 [["year", 2024], ["month", 7], ["user_id", 1], ["LIMIT", 1]]
D, [2025-02-24T20:09:09.409986 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:30:in 'block in Stats::CalculateMonth#update_month_stats'
D, [2025-02-24T20:09:09.408990 #40] DEBUG -- : Stat Load (28.5ms) SELECT "stats".* FROM "stats" WHERE "stats"."year" = $1 AND "stats"."month" = $2 AND "stats"."user_id" = $3 LIMIT $4 [["year", 2024], ["month", 8], ["user_id", 1], ["LIMIT", 1]]
D, [2025-02-24T20:09:09.476216 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:30:in 'block in Stats::CalculateMonth#update_month_stats'
D, [2025-02-24T20:09:09.405425 #40] DEBUG -- : Stat Load (30.1ms) SELECT "stats".* FROM "stats" WHERE "stats"."year" = $1 AND "stats"."month" = $2 AND "stats"."user_id" = $3 LIMIT $4 [["year", 2024], ["month", 11], ["user_id", 1], ["LIMIT", 1]]
D, [2025-02-24T20:09:09.490198 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:30:in 'block in Stats::CalculateMonth#update_month_stats'
D, [2025-02-24T20:09:09.475208 #40] DEBUG -- : Stat Load (87.4ms) SELECT "stats".* FROM "stats" WHERE "stats"."year" = $1 AND "stats"."month" = $2 AND "stats"."user_id" = $3 LIMIT $4 [["year", 2024], ["month", 8], ["user_id", 1], ["LIMIT", 1]]
D, [2025-02-24T20:09:09.504166 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:30:in 'block in Stats::CalculateMonth#update_month_stats'
D, [2025-02-24T20:09:09.489485 #40] DEBUG -- : Stat Load (102.3ms) SELECT "stats".* FROM "stats" WHERE "stats"."year" = $1 AND "stats"."month" = $2 AND "stats"."user_id" = $3 LIMIT $4 [["year", 2024], ["month", 7], ["user_id", 1], ["LIMIT", 1]]
D, [2025-02-24T20:09:09.509962 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:30:in 'block in Stats::CalculateMonth#update_month_stats'
D, [2025-02-24T20:09:09.503551 #40] DEBUG -- : Stat Load (113.8ms) SELECT "stats".* FROM "stats" WHERE "stats"."year" = $1 AND "stats"."month" = $2 AND "stats"."user_id" = $3 LIMIT $4 [["year", 2024], ["month", 10], ["user_id", 1], ["LIMIT", 1]]
D, [2025-02-24T20:09:09.513288 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:30:in 'block in Stats::CalculateMonth#update_month_stats'
D, [2025-02-24T20:09:09.509372 #40] DEBUG -- : Stat Load (111.9ms) SELECT "stats".* FROM "stats" WHERE "stats"."year" = $1 AND "stats"."month" = $2 AND "stats"."user_id" = $3 LIMIT $4 [["year", 2024], ["month", 10], ["user_id", 1], ["LIMIT", 1]]
D, [2025-02-24T20:09:09.514389 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:30:in 'block in Stats::CalculateMonth#update_month_stats'
D, [2025-02-24T20:09:09.510579 #40] DEBUG -- : Stat Load (124.3ms) SELECT "stats".* FROM "stats" WHERE "stats"."year" = $1 AND "stats"."month" = $2 AND "stats"."user_id" = $3 LIMIT $4 [["year", 2024], ["month", 7], ["user_id", 1], ["LIMIT", 1]]
D, [2025-02-24T20:09:09.515243 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:30:in 'block in Stats::CalculateMonth#update_month_stats'
D, [2025-02-24T20:09:09.399408 #40] DEBUG -- : TRANSACTION (17.9ms) BEGIN
D, [2025-02-24T20:09:09.516406 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:30:in 'block in Stats::CalculateMonth#update_month_stats'
D, [2025-02-24T20:09:09.400335 #40] DEBUG -- : TRANSACTION (17.6ms) BEGIN
D, [2025-02-24T20:09:09.517199 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:30:in 'block in Stats::CalculateMonth#update_month_stats'
D, [2025-02-24T20:09:09.567978 #40] DEBUG -- : Stat Load (185.1ms) SELECT "stats".* FROM "stats" WHERE "stats"."year" = $1 AND "stats"."month" = $2 AND "stats"."user_id" = $3 LIMIT $4 [["year", 2024], ["month", 10], ["user_id", 1], ["LIMIT", 1]]
D, [2025-02-24T20:09:09.568587 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:30:in 'block in Stats::CalculateMonth#update_month_stats'
D, [2025-02-24T20:09:09.570701 #40] DEBUG -- : Stat Load (189.1ms) SELECT "stats".* FROM "stats" WHERE "stats"."year" = $1 AND "stats"."month" = $2 AND "stats"."user_id" = $3 LIMIT $4 [["year", 2024], ["month", 10], ["user_id", 1], ["LIMIT", 1]]
D, [2025-02-24T20:09:09.571469 #40] DEBUG -- : ↳ app/services/stats/calculate_month.rb:30:in 'block in Stats::CalculateMonth#update_month_stats'
D, [2025-02-24T20:09:09.602226 #40] DEBUG -- : CACHE User Load (2.3ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 LIMIT $2 [["id", 1], ["LIMIT", 1]]
D, [2025-02-24T20:09:09.607734 #40] DEBUG -- : ↳ app/models/stat.rb:25:in 'Stat#points'
D, [2025-02-24T20:09:09.613372 #40] DEBUG -- : CACHE User Load (0.0ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 LIMIT $2 [["id", 1], ["LIMIT", 1]]
D, [2025-02-24T20:09:09.668422 #40] DEBUG -- : ↳ app/models/stat.rb:25:in 'Stat#points'
D, [2025-02-24T20:09:09.677833 #40] DEBUG -- : CACHE User Load (0.0ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 LIMIT $2 [["id", 1], ["LIMIT", 1]]
D, [2025-02-24T20:09:09.704388 #40] DEBUG -- : ↳ app/models/stat.rb:25:in 'Stat#points'
D, [2025-02-24T20:09:09.607306 #40] DEBUG -- : CACHE User Load (0.0ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 LIMIT $2 [["id", 1], ["LIMIT", 1]]
D, [2025-02-24T20:09:09.782107 #40] DEBUG -- : ↳ app/models/stat.rb:25:in 'Stat#points'
D, [2025-02-24T20:09:09.675803 #40] DEBUG -- : CACHE User Load (0.0ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 LIMIT $2 [["id", 1], ["LIMIT", 1]]
D, [2025-02-24T20:09:09.800990 #40] DEBUG -- : ↳ app/models/stat.rb:25:in 'Stat#points'
D, [2025-02-24T20:09:09.606697 #40] DEBUG -- : CACHE User Load (1.0ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 LIMIT $2 [["id", 1], ["LIMIT", 1]]
D, [2025-02-24T20:09:09.809131 #40] DEBUG -- : ↳ app/models/stat.rb:25:in 'Stat#points'
D, [2025-02-24T20:09:09.610057 #40] DEBUG -- : CACHE User Load (0.0ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 LIMIT $2 [["id", 1], ["LIMIT", 1]]
D, [2025-02-24T20:09:09.865139 #40] DEBUG -- : ↳ app/models/stat.rb:25:in 'Stat#points'
D, [2025-02-24T20:09:09.609417 #40] DEBUG -- : CACHE User Load (0.0ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 LIMIT $2 [["id", 1], ["LIMIT", 1]]
D, [2025-02-24T20:09:09.875677 #40] DEBUG -- : ↳ app/models/stat.rb:25:in 'Stat#points'
D, [2025-02-24T20:09:09.664271 #40] DEBUG -- : CACHE User Load (47.9ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 LIMIT $2 [["id", 1], ["LIMIT", 1]]
D, [2025-02-24T20:09:09.881164 #40] DEBUG -- : ↳ app/models/stat.rb:25:in 'Stat#points'
D, [2025-02-24T20:09:09.703882 #40] DEBUG -- : CACHE User Load (0.0ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 LIMIT $2 [["id", 1], ["LIMIT", 1]]
D, [2025-02-24T20:09:09.894022 #40] DEBUG -- : ↳ app/models/stat.rb:25:in 'Stat#points'
To me it looks like Sidekiq continued working, or at least it continued writing to the log. Mind, this is the complete log provided by Portainer; the time around the death of the DB container is not covered.
Here is the last part of the _dawarich_redis_logs:
75315:C 24 Feb 2025 17:14:12.945 * DB saved on disk
75315:C 24 Feb 2025 17:14:12.952 * Fork CoW for RDB: current 3 MB, peak 3 MB, average 2 MB
1:M 24 Feb 2025 17:14:12.970 * Background saving terminated with success
1:M 24 Feb 2025 18:26:13.954 * 100 changes in 300 seconds. Saving...
1:M 24 Feb 2025 18:26:14.363 * Background saving started by pid 75424
75424:C 24 Feb 2025 18:26:20.953 * DB saved on disk
75424:C 24 Feb 2025 18:26:20.969 * Fork CoW for RDB: current 3 MB, peak 3 MB, average 2 MB
1:M 24 Feb 2025 18:26:21.066 * Background saving terminated with success
1:C 24 Feb 2025 18:27:13.169 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 24 Feb 2025 18:27:13.169 # Redis version=7.0.15, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 24 Feb 2025 18:27:13.169 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 24 Feb 2025 18:27:13.170 * monotonic clock: POSIX clock_gettime
1:M 24 Feb 2025 18:27:13.203 * Running mode=standalone, port=6379.
1:M 24 Feb 2025 18:27:13.203 # Server initialized
1:M 24 Feb 2025 18:27:13.203 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:M 24 Feb 2025 18:27:13.224 * Loading RDB produced by version 7.0.15
1:M 24 Feb 2025 18:27:13.224 * RDB age 59 seconds
1:M 24 Feb 2025 18:27:13.224 * RDB memory usage when created 1289.16 Mb
1:M 24 Feb 2025 18:27:26.819 * Done loading RDB, keys loaded: 1255, keys expired: 1662.
1:M 24 Feb 2025 18:27:26.819 * DB loaded from disk: 13.607 seconds
1:M 24 Feb 2025 18:27:26.819 * Ready to accept connections
1:M 24 Feb 2025 20:08:46.805 * 1 changes in 3600 seconds. Saving...
1:M 24 Feb 2025 20:08:46.811 * Background saving started by pid 33
33:C 24 Feb 2025 20:08:53.725 * DB saved on disk
33:C 24 Feb 2025 20:08:53.727 * Fork CoW for RDB: current 2 MB, peak 2 MB, average 1 MB
1:M 24 Feb 2025 20:08:53.737 * Background saving terminated with success
And, finally, here's the last part of the _dawarich_app_logs:
D, [2025-02-24T17:16:42.977714 #168] DEBUG -- : ↳ app/controllers/api_controller.rb:16:in 'ApiController#current_api_user'
I, [2025-02-24T17:16:42.978299 #168] INFO -- : {"method":"GET","path":"/api/v1/health","format":"*/*","controller":"Api::V1::HealthController","action":"index","status":200,"allocations":879,"duration":2.84,"view":0.13,"db":0.42}
D, [2025-02-24T17:16:53.221205 #168] DEBUG -- : User Load (0.3ms) SELECT "users".* FROM "users" WHERE "users"."api_key" IS NULL LIMIT $1 [["LIMIT", 1]]
D, [2025-02-24T17:16:53.221796 #168] DEBUG -- : ↳ app/controllers/api_controller.rb:16:in 'ApiController#current_api_user'
I, [2025-02-24T17:16:53.222076 #168] INFO -- : {"method":"GET","path":"/api/v1/health","format":"*/*","controller":"Api::V1::HealthController","action":"index","status":200,"allocations":879,"duration":3.1,"view":0.06,"db":0.25}
D, [2025-02-24T17:17:04.191770 #162] DEBUG -- : User Load (0.4ms) SELECT "users".* FROM "users" WHERE "users"."api_key" IS NULL LIMIT $1 [["LIMIT", 1]]
D, [2025-02-24T17:17:04.192922 #162] DEBUG -- : ↳ app/controllers/api_controller.rb:16:in 'ApiController#current_api_user'
I, [2025-02-24T17:17:04.193294 #162] INFO -- : {"method":"GET","path":"/api/v1/health","format":"*/*","controller":"Api::V1::HealthController","action":"index","status":200,"allocations":879,"duration":51.23,"view":0.08,"db":0.36}
[143] ! Terminating timed out worker (Worker 1 failed to check in within 3600 seconds): 168
[143] - Worker 1 (PID: 113388) booted in 0.08s, phase: 0
D, [2025-02-24T18:26:29.116227 #162] DEBUG -- : User Load (1073.4ms) SELECT "users".* FROM "users" WHERE "users"."api_key" IS NULL LIMIT $1 [["LIMIT", 1]]
D, [2025-02-24T18:26:29.116936 #162] DEBUG -- : ↳ app/controllers/api_controller.rb:16:in 'ApiController#current_api_user'
I, [2025-02-24T18:26:29.117379 #162] INFO -- : {"method":"GET","path":"/api/v1/health","format":"*/*","controller":"Api::V1::HealthController","action":"index","status":200,"allocations":4491,"duration":1261.65,"view":0.08,"db":1546.74}
D, [2025-02-24T18:26:39.340177 #162] DEBUG -- : User Load (0.7ms) SELECT "users".* FROM "users" WHERE "users"."api_key" IS NULL LIMIT $1 [["LIMIT", 1]]
D, [2025-02-24T18:26:39.341135 #162] DEBUG -- : ↳ app/controllers/api_controller.rb:16:in 'ApiController#current_api_user'
I, [2025-02-24T18:26:39.342531 #162] INFO -- : {"method":"GET","path":"/api/v1/health","format":"*/*","controller":"Api::V1::HealthController","action":"index","status":200,"allocations":879,"duration":3.94,"view":0.1,"db":0.71}
D, [2025-02-24T18:26:49.737371 #162] DEBUG -- : User Load (1.4ms) SELECT "users".* FROM "users" WHERE "users"."api_key" IS NULL LIMIT $1 [["LIMIT", 1]]
D, [2025-02-24T18:26:49.738561 #162] DEBUG -- : ↳ app/controllers/api_controller.rb:16:in 'ApiController#current_api_user'
I, [2025-02-24T18:26:49.739022 #162] INFO -- : {"method":"GET","path":"/api/v1/health","format":"*/*","controller":"Api::V1::HealthController","action":"index","status":200,"allocations":879,"duration":8.59,"view":0.08,"db":1.38}
⚠️ Starting Rails environment: development ⚠️
⏳ Waiting for database to be ready...
psql: error: connection to server at "dawarich_db_pg_17" (172.19.0.5), port 5432 failed: Connection refused
Is the server running on that host and accepting TCP/IP connections?
Postgres is unavailable - retrying...
psql: error: connection to server at "dawarich_db_pg_17" (172.19.0.5), port 5432 failed: Connection refused
Is the server running on that host and accepting TCP/IP connections?
Postgres is unavailable - retrying...
psql: error: connection to server at "dawarich_db_pg_17" (172.19.0.5), port 5432 failed: Connection refused
Is the server running on that host and accepting TCP/IP connections?
Postgres is unavailable - retrying...
psql: error: connection to server at "dawarich_db_pg_17" (172.19.0.5), port 5432 failed: FATAL: the database system is starting up
Postgres is unavailable - retrying...
psql: error: connection to server at "dawarich_db_pg_17" (172.19.0.5), port 5432 failed: FATAL: the database system is starting up
Postgres is unavailable - retrying...
psql: error: connection to server at "dawarich_db_pg_17" (172.19.0.5), port 5432 failed: FATAL: the database system is starting up
Postgres is unavailable - retrying...
psql: error: connection to server at "dawarich_db_pg_17" (172.19.0.5), port 5432 failed: FATAL: the database system is starting up
Postgres is unavailable - retrying...
psql: error: connection to server at "dawarich_db_pg_17" (172.19.0.5), port 5432 failed: FATAL: the database system is starting up
Postgres is unavailable - retrying...
psql: error: connection to server at "dawarich_db_pg_17" (172.19.0.5), port 5432 failed: FATAL: the database system is starting up
Postgres is unavailable - retrying...
✅ PostgreSQL is ready!
PostgreSQL is ready. Running database migrations...
[dotenv] Set DATABASE_PORT
[dotenv] Loaded .env.development
D, [2025-02-24T20:09:02.775859 #54] DEBUG -- : (0.9ms) SELECT pg_try_advisory_lock(1212213197400985920)
D, [2025-02-24T20:09:02.800324 #54] DEBUG -- : ActiveRecord::SchemaMigration Load (10.6ms) SELECT "schema_migrations"."version" FROM "schema_migrations" ORDER BY "schema_migrations"."version" ASC
D, [2025-02-24T20:09:02.877597 #54] DEBUG -- : ActiveRecord::InternalMetadata Load (8.3ms) SELECT * FROM "ar_internal_metadata" WHERE "ar_internal_metadata"."key" = $1 ORDER BY "ar_internal_metadata"."key" ASC LIMIT 1 [[nil, "environment"]]
D, [2025-02-24T20:09:02.883727 #54] DEBUG -- : (1.0ms) SELECT pg_advisory_unlock(1212213197400985920)
D, [2025-02-24T20:09:02.892062 #54] DEBUG -- : ActiveRecord::SchemaMigration Load (1.4ms) SELECT "schema_migrations"."version" FROM "schema_migrations" ORDER BY "schema_migrations"."version" ASC
Running DATA migrations...
[dotenv] Set DATABASE_PORT
[dotenv] Loaded .env.development
D, [2025-02-24T20:09:11.188313 #94] DEBUG -- : (1.6ms) SELECT pg_try_advisory_lock(1212213197400985920)
D, [2025-02-24T20:09:11.243537 #94] DEBUG -- : DataMigrate::DataSchemaMigration Load (11.1ms) SELECT "data_migrations"."version" FROM "data_migrations" ORDER BY "data_migrations"."version" ASC
D, [2025-02-24T20:09:11.262093 #94] DEBUG -- : ActiveRecord::InternalMetadata Load (2.5ms) SELECT * FROM "ar_internal_metadata" WHERE "ar_internal_metadata"."key" = $1 ORDER BY "ar_internal_metadata"."key" ASC LIMIT 1 [[nil, "environment"]]
D, [2025-02-24T20:09:11.264993 #94] DEBUG -- : (1.3ms) SELECT pg_advisory_unlock(1212213197400985920)
D, [2025-02-24T20:09:11.266767 #94] DEBUG -- : DataMigrate::DataSchemaMigration Load (0.3ms) SELECT "data_migrations"."version" FROM "data_migrations" ORDER BY "data_migrations"."version" ASC
Running seeds...
[dotenv] Set DATABASE_PORT
[dotenv] Loaded .env.development
D, [2025-02-24T20:09:17.267918 #117] DEBUG -- : ActiveRecord::SchemaMigration Load (51.7ms) SELECT "schema_migrations"."version" FROM "schema_migrations" ORDER BY "schema_migrations"."version" ASC
D, [2025-02-24T20:09:17.785376 #117] DEBUG -- : User Exists? (2.6ms) SELECT 1 AS one FROM "users" LIMIT $1 [["LIMIT", 1]]
=> Booting Puma
=> Rails 8.0.1 application starting in development
=> Run `bin/rails server --help` for more startup options
[dotenv] Set DATABASE_PORT
[dotenv] Loaded .env.development
2025-02-24T20:09:22.395Z pid=133 tid=1gt INFO: Sidekiq 7.3.8 connecting to Redis with options {size: 10, pool_name: "internal", url: "redis://dawarich_redis:6379/0"}
I, [2025-02-24T20:09:22.398073 #133] INFO -- : Enqueued Cache::CleaningJob (Job ID: f2aeb8ad-1be0-4f7d-8936-53de30a0b05a) to Sidekiq(default)
I, [2025-02-24T20:09:22.398615 #133] INFO -- : ↳ config/environment.rb:12:in '<main>'
I, [2025-02-24T20:09:22.416602 #133] INFO -- : Enqueued Cache::PreheatingJob (Job ID: 82ca27f0-5a96-4668-a302-36722141ed88) to Sidekiq(default)
I, [2025-02-24T20:09:22.419481 #133] INFO -- : ↳ config/environment.rb:15:in '<main>'
[133] Puma starting in cluster mode...
[133] * Puma version: 6.6.0 ("Return to Forever")
[133] * Ruby version: ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +YJIT +PRISM [aarch64-linux-musl]
[133] * Min threads: 5
[133] * Max threads: 5
[133] * Environment: development
[133] * Master PID: 133
[133] * Workers: 2
[133] * Restarts: (✔) hot (✖) phased
[133] * Preloading application
[133] * Listening on http://[::]:3000
[133] Use Ctrl-C to stop
[133] - Worker 0 (PID: 160) booted in 0.01s, phase: 0
[133] - Worker 1 (PID: 164) booted in 0.01s, phase: 0
D, [2025-02-24T20:09:33.576308 #160] DEBUG -- : ActiveRecord::SchemaMigration Load (9.7ms) SELECT "schema_migrations"."version" FROM "schema_migrations" ORDER BY "schema_migrations"."version" ASC
D, [2025-02-24T20:09:34.809272 #160] DEBUG -- : User Load (1.5ms) SELECT "users".* FROM "users" WHERE "users"."api_key" IS NULL LIMIT $1 [["LIMIT", 1]]
D, [2025-02-24T20:09:34.843878 #160] DEBUG -- : ↳ app/controllers/api_controller.rb:16:in 'ApiController#current_api_user'
I, [2025-02-24T20:09:34.885464 #160] INFO -- : {"method":"GET","path":"/api/v1/health","format":"*/*","controller":"Api::V1::HealthController","action":"index","status":200,"allocations":8971,"duration":162.69,"view":0.39,"db":17.3}
Additional context
- Reverse geocoding of the 1st batch was still ongoing when populating the watched directory with the 2nd batch of GPX files.
- I've seen the same behavior on older versions of Dawarich. The whole system runs 100% stable if Dawarich is not installed or if Dawarich is not killed by a huge import. By 100% stable I mean that it runs for months without interruption, except for the occasional restart when installing OS updates. The other Docker containers are also frequently updated to the latest version. In particular I run Immich, Owntracks, rpi-monitor and an apache2 container for hosting a little intranet page.
so, raspi is stable, but when running dawarich raspi is not stable... how about temperatures and memory+cpu utilisation? power supply ok? (i mean really ok: thick copper cable, not plastic, beefy power supply)... dawarich is kinda resource hungry, not sure if you have reverse geocoding enabled, but that is another thing that can trigger big load (after import)... and another instance of photon might be too much for a raspi. so i'd run putty for serious problems (it usually dumps serious problems on console) and watch top, maybe on a second console, to see what happened last before death
tl;dr: i don't think you can blame dawarich for the raspi freeze, dawarich does not operate on hw level, check your hardware and kernel
Hi Korenchkin,
thanks for your response.
Yes, the Raspi is stable w/o Dawarich. The issue also exists if all other containers are stopped. Power supply is OK (a proper power supply, not a cell phone charger) and the wiring as well. Reverse geocoding is enabled, but it does not run on a self-hosted solution; I am using komoot instead.
Maybe "crash" is the wrong word. What I observe is that I am not able to connect to the raspi anylonger. Neither by putty/SSL nor by browser to ddwarich, Portainer or dawarich's sidekiq page. When I do a power cycle the raspi is responsive for a while but after some time, maybe 10 minutes or so, enters that state again. In that time window I can connect to Portainer and stop the dawarich stack. Then everything is fine, all services are running fine. Once I start the dawarich stack, the system will become unresponsive again. This is a clear indication that the root cause is connected to dawarich, and that the information that causes the entry into the undesired state is stored persistently.
My workaround is to not put too many GPX files into the watched directory at once and, once the 1st batch is imported and some time has passed, delete the GPX files and add the next batch. If I do that, I am able to import all ~4M geopoints. So it might be connected to the way Dawarich handles a big number of GPX files being imported.
It might be due to resource usage being too high. The Raspi is a low-performer, that's clear. Maybe it is possible to split a huge number of imports into smaller batches; a rough sketch of my manual batching follows below. I can provide my empirical information on what worked and what did not work, if this is desired.
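For illustration, this is roughly what my manual batching would look like as a script. The paths, batch size and pause are assumptions about my setup, not anything Dawarich prescribes:

# Feed GPX files into the watched folder in batches instead of all at once.
# SRC, WATCHED and BATCH are placeholders; adjust to your own directories.
SRC=/home/pi/gpx_staging
WATCHED=/srv/dawarich/watched/myuser
BATCH=200
count=0
for f in "$SRC"/*.gpx; do
  cp "$f" "$WATCHED/"
  count=$((count + 1))
  if [ $((count % BATCH)) -eq 0 ]; then
    sleep 1800   # give Dawarich time to drain the import queue
  fi
done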
Ah, yes: the hardware is fine, a new Raspi5, and the kernel... well, it's the official Raspberry Pi OS running on it, with the latest updates that were available at that time via apt-get installed. Besides installing updates I do not mess with the OS.
What kind of storage do you use? An SD card wouldn't do well, eMMC is suboptimal too. The RaPi5 can do NVMe.
Why should an SD card not do well? I use an SD card. Brand new, speed grade A2. Pretty fast. Besides, a Raspi 4 does not provide support for NVMe. Is Dawarich not supposed to operate on a Raspi 4?
An SD card is not efficient for IO workloads. Have a look at processes in uninterruptible sleep:
ps -eo pid,stat,comm
Importing many points is IO-expensive and memory-expensive. If you have swap on, then memory-expensive becomes IO-expensive.
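You can watch that happen with standard Linux tooling; nothing here is Dawarich-specific:

# Is swap configured, and how much of it is in use?
swapon --show
free -h
# Watch swap-in/swap-out (si/so) and IO-wait (wa) while the import runs;
# nonzero si/so together with high wa means memory pressure became IO pressure.
vmstat 5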
Raspi 4 does not provide support for NVMe.
Your issue quotes Raspi 5B, 8GB RAM. A typo, maybe?
Pretty fast.
It is for sequential R/W, but by design it struggles when IOPS are needed. Note that SD cards don't have that many write cycles; you don't want them to serve as DB media or as a place for swap. They're good for reading / and occasionally writing logs and /tmp/.
Note that unlike the RaPi5, the RaPi4 has its network controller inserted into the USB. Although a USB SSD must be way better (testing needed) and it would definitely be much more reliable.
USB-C on the RaPi4 is power only, no data.
Raspi 4 does not provide support for NVMe
It actually has PCIe 2.0 x1 (the USB controller lives there) and you could wire an NVMe port onto it, but you shan't.
UPD: Since your Dawarich is containerised, you can run it on your PC/laptop for the import and, when the DB is ready, respawn it on the RaPi with the same volumes.
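One way to move the prepared data, assuming the stack keeps Postgres in a named Docker volume (the volume name below is a placeholder):

# On the PC, after the import has finished and the DB container is stopped:
docker run --rm -v dawarich_db_data:/data -v "$PWD":/backup alpine \
  tar czf /backup/dawarich_db_data.tgz -C /data .
# On the RaPi: recreate the volume, unpack into it, then start the stack.
docker volume create dawarich_db_data
docker run --rm -v dawarich_db_data:/data -v "$PWD":/backup alpine \
  tar xzf /backup/dawarich_db_data.tgz -C /data

Mind that PostgreSQL's on-disk format is not guaranteed portable across architectures; x86-64 to aarch64 usually works (both little-endian, 64-bit), but a pg_dump/pg_restore is the safer route.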
One more thing: if you have more D-processes (uninterruptible sleep) than you have CPU cores, then you'll most likely experience the system 'hanging' even though the CPU doesn't really compute anything. You won't SSH to the host because the CPU waits for processes to finish their IO, and you cannot stop them with any signal. If swap is in use, the problem becomes even worse. You'll see the load average going to the Moon (LA > CPU_CORES is suboptimal, LA >> CPU_CORES is catastrophic).
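To put numbers on that (plain procps, nothing exotic):

# How many processes are stuck in uninterruptible sleep, vs. the core count?
ps -eo stat | grep -c '^D'
nproc
# If the 1-minute load average far exceeds the core count while CPU usage
# stays low, the 'load' is processes waiting on IO, not computing.
uptime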
I know of quite a few of the SD card disadvantages stated above. Since the SD card is fairly new it should not be worn out yet. Nonetheless, during the last restore of a backup (yesterday evening) I ran a check, which yielded no errors.
I am using a Raspi5 for the Docker/Portainer environment running Dawarich. I just mentioned the Raspi4 because it does not expose an interface for connecting an NVMe drive. If Dawarich is supposed to run from an NVMe, the Raspi4 would be off the compatibility list, and maybe that'd be worth stating somewhere in the documentation.
What should I read from the output of ps -eo pid,stat,comm, or how should I interpret that output?
I still do not see why it should make a difference whether I import 800 GPX files in one go and then 1200 GPX files, instead of importing 2000 GPX files in one go. To me the explanations of SD card disadvantages, while true in themselves, do not make too much sense in that context.
SD card is slow as hell for database operations (even the 'fast' ones), SSD over USB will help a TON (don't buy cheap/DRAM-less ones), a UASP USB-to-SATA adapter helps too, make sure it accepts TRIM (or do blkdiscard/trim on SATA and then make the partition smaller, keep several tens of GB free, i would keep 50 GB of 240 GB free)... that you can do even with TRIM support; this will solve storage problems
- might not be your problem in this situation, just step 1 that generally helps, but skip this if you are confident -
Step 2: check what really breaks. Either there are so many writes it just won't let you log in (but it should connect on port 22, it just won't finish the SSH handshake), or the OOM killer kills ssh (port 22 is not open, a console on HDMI might help), or the kernel froze (no ping). None of this is a problem with dawarich.
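something like this from a second machine narrows it down (hostname is a placeholder, adjust):

# does the tcp port still accept connections even if login never completes?
nc -vz raspi.local 22
# does the kernel still answer at all?
ping -c 3 raspi.local
# while it's still alive: is the oom killer already firing?
sudo dmesg | grep -iE 'oom|out of memory'
# after a reboot, the previous boot's kernel log
# (needs persistent journald, otherwise check /var/log/kern.log*):
journalctl -k -b -1 | grep -i oom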
An SD card has no cache, one data line, a basic controller. That's how they're made. They're not intended to be used as random-access write data storage; they're for sequential operations like putting a photo on a camera. "Speed" and "Grade" are not directly related to IOPS. The difference between importing 700 GPX and 1200 GPX would be in memory consumption -> swap usage -> SD-card bottleneck. Or in more parallel threads working with DB files. An SD card, unlike a SATA or NVMe SSD, can work with one block of data at a time. Swap should be OFF if it's on the SD card.
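On stock Raspberry Pi OS the swap file is managed by dphys-swapfile, so (assuming the default setup) turning it off looks like:

# disable the SD-card-backed swap file
sudo dphys-swapfile swapoff
sudo systemctl disable --now dphys-swapfile
# verify nothing is swapping anymore
swapon --show
free -h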
What should I read from the output of ps -eo pid,stat,comm, or how should I interpret that output?
You need to look for processes in uninterruptible sleep. State 'D', AFAIR.
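Concretely, filtering the earlier listing down to the interesting rows:

# show only processes whose state starts with D (uninterruptible sleep);
# the comm column tells you whether it's postgres, ruby, kworker, ...
ps -eo pid,stat,comm | awk '$2 ~ /^D/'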
If Dawarich is supposed to execute from an NVME, Raspi4 would be out of the compatibility list, and maybe that'd be worth being stated somewhere in the documentation.
You need to figure out the bottleneck for your workload first. The RaPi4 is great for some applications, but it's really not an architectural pinnacle and has plenty of compromises. The biggest one, as I wrote, is the single PCIe 2.0 lane that is used for the USB, and the USB manages everything else (NIC, etc). The RaPi5 is better; I use OrangePi's with an NVMe SSD, and importing 12,000,000 points was challenging. Though, you can always run the import on a big PC and migrate it to the RaPi for read purposes.
@shaman007: Thanks for your explanation. I guess we are leaving the point of this ticket. This is not about improving the performance of my Raspi5. This is about improving Dawarich such that it does not stall a system when importing a huge number of GPX files. I understand that an SD card does not offer optimum performance. I still do not see that this can be the root cause for the stall Dawarich causes.
Your explanation that the higher the number of GPX files to be imported, the higher the resource usage, does not make sense to me. The GPX files are imported sequentially. Maybe 4 in parallel. So maximum resource usage will occur when 4 really big single GPX files are imported at the same time. Replace 4 by the true number, please. So there should not be a difference between importing 800, 1200 or 2000 GPX files in one go.
I agree that excessive resource usage may have the effect of the Raspi not being responsive any longer. To me it looks like the excessive resource usage is more likely to be found in the import procedure. I'd guess that some resources are not properly freed after an individual GPX file is imported but probably only after the whole batch import has completed. That would explain why it works with 800 and 1200 GPX files but not with 2000 GPX files. If I upgraded my system with an NVMe, the import of 2000 files might no longer cause a stall. But that's merely masking the problem, not solving it. I also see that fixing this is not a low-hanging fruit and that there are other issues which need fixing with a way higher priority, in particular since there is a nice and easy workaround for this import trouble.
we need to be clear on this: dawarich does not have this problem, you do. we are trying to help you, yet you still refuse. you don't even know what happens, but your raspi is stable. so either you provide something useful, or i'm suggesting we close this to stop wasting time...
Hi Korenchkin,
I believe there is a misunderstanding here.
This is not a support request. I found a workaround that works fine for me and probably for anybody else.
I filed this report because I believe there is a point in Dawarich which works sub-optimally, since it leads to non-intuitive behavior of the software's functionality. I do not demand this being fixed. It is up to you, the maintainers, to decide whether you wish to look into this or dismiss the topic. I thought it might be valuable for you to see how people out in the wild handle Dawarich and which malfunctions they encounter when doing so, allowing you to harden Dawarich for some corner cases you might not yet have thought of, thus allowing an even broader audience to enjoy this fine product.
Since a folder is exposed whose content is subject to import into Dawarich, Dawarich should be able to handle any number of files stored there. I found that this does not always work, and I found that this is 100% reproducible on the versions on which I attempted it. This might be a nearly ideal scenario for analysing the issue and engineering a fix.
For me, being an electronics engineer doing bare-metal embedded software for a living for 20+ years now, this issue smells like a bug in Dawarich. If something works for a couple of hundred files but not for 2000, this is likely to be a flaw in the software. Maybe the root cause is inefficient usage of resources. Maybe some resources are released later than possible, perhaps only at the end of a batch-processing job, so the resource consumption piles up with each file imported and, if the batch job does not end before a critical level is reached, the system runs into that state where the machine becomes unresponsive. Maybe the relatively poor performance of an SD card is a catalyst.
I have little knowledge of the magic you work with in such database-oriented server software, so my nose might be wrong here.
If you feel that the issue is worth pursuing further, I'll happily contribute whatever information you request (maybe you'll need to tell me where and how to extract it). If you feel this is not an issue worth looking into, that is also fine; then please close this issue.
I am fully aware that this is not a commercial product and that you, who develop, improve and maintain this great project, most likely do so on a non-profit basis. I also fully understand that there might be more important things that require your attention. I am fully aware that I am not in a position to demand anything from you - I am not a customer. I am grateful that you are building a software package that allows me to visualize the GPX files I recorded over the last few years, and I am curious what improvements and features you are going to add in the future. If you feel I offended you, please accept my apologies - no offence was intended.
we do know that dawarich is a bit power hungry, maybe when features are stabilised the dev will try some optimizations. but still, one thing needs to be said (again and again): when the device running dawarich crashes, it is a device problem, not dawarich. you first have to research why your raspi is dead: is it not enough ram, or slow storage? those 2 problems can make a poorly configured device crash (not specifically your problem, just the generic default), and by poorly i mean you control how much ram you give it and whether disk io starves it hard (this is what we are trying to check). and if it is not those 2 problems, then your raspi is not stable.. i dare you to find any other problem...
again, software cannot crash the os if it is not touching hardware (ports/isa/whatever). for example, if you see a bsod on windows running a cpu benchmark or any other cpu-heavy app, it is a hw or os problem, not your software
if you do not trust me, at least do one of these 3. you first need to find out what crashes your pi: use an hdmi console and usb keyboard, or a serial console (3.3v, might need kernel parameters, raspi-config or something like it can set it? you work with embedded, so you have a serial converter within hand's reach, i do), or use nmon over any console - search for it, c = cpu, d = disk (and - for quicker updates, if i remember correctly) over putty and watch what happens before it dies (really good utility)
and yes, from time to time i reimport my data from owntracks (owntracks because i do not trust dawarich that much and it is lightweight sw to log into, also i like to have some replayability), and i did it on version 0.24.x (i have an i5-7600k on proxmox, so nothing groundbreaking) and it worked really well. it took more than 24h on my points, with my local komoot
also, i'm not offended and do not mean to offend, it is my take-it-or-leave-it. i'm not the dev here, just found some great software this world needed and am giving back to the community by helping one level down from me :) so take care @solderdot72, hopefully you'll find out what happens, and don't be afraid to update when you have any news
First, I want to thank you all, @solderdot72 @shaman007 and @Korenchkin. The issue is well described, logs are there, context is provided. Also, I actually learned some new stuff from this thread, so thank you, @shaman007, for that specifically.
It's important that @solderdot72 highlighted that this issue is not a demand to fix the problem, but a piece of information provided for consideration. I'm running my own instance on fairly strong hardware, so I usually don't encounter problems such as this one. Dawarich indeed is far from perfect in many aspects, but for me the most important thing right now is that it solves the problem. Many problems, actually, at least in my case, and probably different people use it for different goals.
Eventually, I hope to bring Dawarich to a state where it's less resource-intensive, more stable and in general well optimized. We're not there yet, but with every month passing, I introduce something new so it can do its work better, faster and in a more reliable way. We'll be there.
In the meantime, I want to thank you all one more time for a well-written problem report, technical insights into how Raspi works, and civilized dialog in general. This community rocks.
@Korenchkin: You are right, I do not know what is happening, so the term "crash" might be wrong. There might be better words to describe the state the Raspi enters. If it is clear that this is not caused by Dawarich, then I suggest closing this ticket, because it cannot be resolved by changing Dawarich. Providing support to people facing problems with their Raspis that are unrelated to Dawarich is not the point of this GitHub space; that should be done elsewhere. Does that make sense to you?