Higher RAM/CPU usage of PocketBase after a few days
Hi, I have to say that I'm not familiar with PocketBase as a DB backend, but is it normal that it uses way more RAM over time, like a memory leak? It also constantly uses around 30% of one CPU.
For the DB container I don't see any error logs. The only spammy log is from Meilisearch. Is it normal that it fires multiple requests per second the whole time (even when no user is active on the instance)?
wanderer-search | 2025-09-08T18:33:41.840136Z INFO HTTP request{method=GET host="search:7700" route=/keys query_parameters= user_agent=Meilisearch Go (v0.29.0) status_code=200}: meilisearch: close time.busy=142µs time.idle=116µs
wanderer-search | 2025-09-08T18:33:44.358815Z INFO HTTP request{method=GET host="search:7700" route=/keys query_parameters= user_agent=Meilisearch Go (v0.29.0) status_code=200}: meilisearch: close time.busy=136µs time.idle=273µs
wanderer-search | 2025-09-08T18:33:44.361262Z INFO HTTP request{method=GET host="search:7700" route=/keys query_parameters= user_agent=Meilisearch Go (v0.29.0) status_code=200}: meilisearch: close time.busy=79.8µs time.idle=172µs
wanderer-search | 2025-09-08T18:33:44.361960Z INFO HTTP request{method=GET host="search:7700" route=/keys query_parameters= user_agent=Meilisearch Go (v0.29.0) status_code=200}: meilisearch: close time.busy=145µs time.idle=48.4µs
wanderer-search | 2025-09-08T18:33:45.630789Z INFO HTTP request{method=GET host="search:7700" route=/keys query_parameters= user_agent=Meilisearch Go (v0.29.0) status_code=200}: meilisearch: close time.busy=126µs time.idle=75.5µs
wanderer-search | 2025-09-08T18:33:45.633494Z INFO HTTP request{method=GET host="search:7700" route=/keys query_parameters= user_agent=Meilisearch Go (v0.29.0) status_code=200}: meilisearch: close time.busy=147µs time.idle=96.8µs
wanderer-search | 2025-09-08T18:33:45.648363Z INFO HTTP request{method=GET host="search:7700" route=/keys query_parameters= user_agent=Meilisearch Go (v0.29.0) status_code=200}: meilisearch: close time.busy=204µs time.idle=596µs
wanderer-search | 2025-09-08T18:33:49.415707Z INFO HTTP request{method=GET host="localhost:7700" route=/health query_parameters= user_agent=curl/8.11.0 status_code=200}: meilisearch: close time.busy=158µs time.idle=17.2µs
wanderer-search | 2025-09-08T18:33:49.886035Z INFO HTTP request{method=GET host="search:7700" route=/keys query_parameters= user_agent=Meilisearch Go (v0.29.0) status_code=200}: meilisearch: close time.busy=82.1µs time.idle=231µs
wanderer-search | 2025-09-08T18:33:49.888630Z INFO HTTP request{method=GET host="search:7700" route=/keys query_parameters= user_agent=Meilisearch Go (v0.29.0) status_code=200}: meilisearch: close time.busy=58.2µs time.idle=131µs
wanderer-search | 2025-09-08T18:33:49.890524Z INFO HTTP request{method=GET host="search:7700" route=/keys query_parameters= user_agent=Meilisearch Go (v0.29.0) status_code=200}: meilisearch: close time.busy=58.0µs time.idle=125µs
wanderer-search | 2025-09-08T18:33:50.069866Z INFO HTTP request{method=GET host="search:7700" route=/keys query_parameters= user_agent=Meilisearch Go (v0.29.0) status_code=200}: meilisearch: close time.busy=59.4µs time.idle=131µs
wanderer-search | 2025-09-08T18:33:50.276375Z INFO HTTP request{method=GET host="search:7700" route=/keys query_parameters= user_agent=Meilisearch Go (v0.29.0) status_code=200}: meilisearch: close time.busy=92.1µs time.idle=133µs
wanderer-search | 2025-09-08T18:33:50.281617Z INFO HTTP request{method=GET host="search:7700" route=/keys query_parameters= user_agent=Meilisearch Go (v0.29.0) status_code=200}: meilisearch: close time.busy=105µs time.idle=61.4µs
wanderer-search | 2025-09-08T18:33:50.455015Z INFO HTTP request{method=GET host="search:7700" route=/keys query_parameters= user_agent=Meilisearch Go (v0.29.0) status_code=200}: meilisearch: close time.busy=105µs time.idle=61.6µs
The current uptime of the containers is two weeks.
Let me know if you need additional info/logfiles ^^
Version
v0.18.1
Oh, I forgot the memory output:
free -h
               total        used        free      shared  buff/cache   available
Mem:           3.7Gi       3.0Gi       209Mi       5.0Mi       739Mi       770Mi
Swap:             0B          0B          0B
Could also be that my VPS is just too small for wanderer, but it's basically a private instance with currently 3 users ^^
No, it's definitely not normal, and I had also noticed it on the demo instance. I haven't checked yet if Meilisearch is also constantly indexing like in your case (which is also not normal, btw). I'll investigate further later this week.
Thanks, and it's nothing urgent, I'm quite relieved that I'm not the only one :D
I see the same issue; after blocking Claude's IPs, the issue is fixed.
Welcome to the Fediverse :)
@CYBERNEURONES wait, so it's an LLM scraper?! Mhh, I let Caddy write an access.log now.. the first few requests are valid fedi requests.. well, I'll keep an eye on it. Thanks for the hint ;)
Or not.. but wth, why these requests?
"level":"info","ts":1758028189.283021,"logger":"http.log.access.log0","msg":"handled request","request":{"remote_ip":"20.171.207.214","remote_port":"50092","proto":"HTTP/2.0","method":"GET","host":"irl.n0id.space","uri":"/profile/@[email protected]/users/followers","headers":{"X-Openai-Host-Hash":["956024041"],"Accept":["*/*"],"From":["gptbot(at)openai.com"],"User-Agent":["Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; GPTBot/1.2; +https://openai.com/gptbot)"],"Accept-Encoding":["gzip, br, deflate"]},"tls":{"resumed":false,"version":772,"cipher_suite":4865,"proto":"h2","server_name":"irl.n0id.space"}},"user_id":"
Ohhh, I see.. if I call the URL, I get all followers of that profile, even though it's not on my instance: https://irl.n0id.space/profile/@[email protected]/users/followers (I censored the user name).. but yeah, you can try this with any profile yourself xD
@Flomp Pleroma/Akkoma have implemented something like "only authenticated users can call specific APIs"; would something like that be possible/feasible for wanderer too?
Here is my filter: https://git.cyber-neurones.org/farias/iptablesApache2. I drop a lot of IPs, and now I have normal traffic.
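For anyone who would rather drop those IPs at the reverse proxy than with iptables: if you run Caddy in front of wanderer, its remote_ip matcher can express a similar block list. A minimal sketch; the CIDR is only an example taken from the 20.171.x.x GPTBot request above, replace it with whatever ranges actually show up in your own access log:

(ipblock) {
    # example range only (GPTBot request above); adjust to your own logs
    @badIPs remote_ip 20.171.0.0/16
    handle @badIPs {
        respond "Access denied" 403 {
            close
        }
    }
}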
I also saw this behaviour on the demo instance. I resolved it with a simple robots.txt for now:
User-agent: *
Disallow: /profile/
Disallow: /api/
This seems to prevent most bots from accessing profiles.
I tried setting the nobot tag in Caddy now, and I also use this Caddy approach to block known bots: https://darthvi.com/post/forbidden-for-robots/
Related and potential fix: #584
High load on my instance as well. Blocking bots cuts a lot of traffic, but it doesn't help enough. My 2-minute probe still fails regularly.
Serving the mentioned robots.txt helps!
For those who can't wait for the next release and are using Caddy, here are some fragments to put in your Caddyfile:
(robots) {
    handle /robots.txt {
        respond <<TXT
        User-agent: *
        Disallow: /profile/
        Disallow: /api/
        TXT 200
    }
}
https://wanderer.example.com {
    import robots
}
@erikvanoosten I also added a 403 for bots with specific user agents, because many of them ignore robots.txt.. this helped me a lot:
@botForbidden header_regexp User-Agent "(?i)AdsBot-Google|Amazonbot|anthropic-ai|Applebot|Applebot-Extended|AwarioRssBot|AwarioSmartBot|Bytespider|CCBot|ChatGPT|ChatGPT-User|Claude-Web|ClaudeBot|cohere-ai|DataForSeoBot|Diffbot|FacebookBot|Google-Extended|GPTBot|ImagesiftBot|magpie-crawler|omgili|Omgilibot|peer39_crawler|PerplexityBot|YouBoto|semrush|babbar"
handle @botForbidden {
    respond /* "Access denied" 403 {
        close
    }
}
I extend the list from time to time when I see something in my logs.
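Since the list keeps changing, it might also help to keep it in a named snippet like the (robots) fragment above, so each site block only needs an import and there is a single place to edit. A rough sketch with a shortened list (the snippet name botblock is just my own pick):

(botblock) {
    # shortened UA list; extend with the same entries as above
    @botForbidden header_regexp User-Agent "(?i)Amazonbot|Bytespider|CCBot|ChatGPT|ClaudeBot|GPTBot|PerplexityBot"
    handle @botForbidden {
        respond /* "Access denied" 403 {
            close
        }
    }
}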
Perhaps this observation helps? Loading the wanderer homepage (on my instance) consistently takes about 2 seconds when logged in; when not logged in, it consistently takes about 10-11 seconds.
Hi, since I got this problem again (crawlers circumvented my previous efforts and now disguise themselves as valid browser requests), I wrote a regex for Caddy which blocks external profiles; local ones should still work:
@forbot path_regexp /profile.@(.*@.*)
handle @forbot {
    respond /* "Access denied" 403 {
        close
    }
}
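For completeness, a rough sketch of how these fragments could fit together in one site block. The upstream name/port (wanderer-web:3000) is only a placeholder for whatever your wanderer frontend is reachable as in your compose setup, and (robots)/(botblock) refer to the snippets from the comments above:

https://wanderer.example.com {
    import robots
    import botblock

    # block follower/following listings of external profiles
    @forbot path_regexp /profile.@(.*@.*)
    handle @forbot {
        respond /* "Access denied" 403 {
            close
        }
    }

    # everything that wasn't blocked above goes to the wanderer frontend
    reverse_proxy wanderer-web:3000
}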