bitmagnet
Performance decrease after upgrading to 0.7.0
Have you checked the roadmap on the website, and existing issues, before opening a duplicate issue? Yes
Describe the bug I've upgraded to bitmagnet 0.7.0 and now bitmagnet is really slow: the UI is unresponsive (the loading "bar" keeps spinning), I can't use search (no response, always loading), and there are a lot of errors in the logs.
To Reproduce
Run bitmagnet with docker compose up, using the docker-compose.yml from the docs: https://bitmagnet.io/setup/installation.html.
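For reference, the commands used are roughly the following (assuming the example docker-compose.yml from the page above is saved in the working directory):

# Start the stack defined by the example docker-compose.yml in the current directory
docker compose up -d

# Follow the bitmagnet service logs to watch for the errors described above
docker compose logs -f bitmagnet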
I have about 2 000 000 torrents indexed; everything was working fine until 0.7.0. I'm using an Intel N100 with 16GB RAM.
Expected behavior Bitmagnet should work and be responsive.
General (please complete the following information):
- Bitmagnet version: 0.7.0
- OS and version: Ubuntu 22.04
- Browser and version (if issue is with WebUI): Firefox 122.0.1
Additional context Logs: bitmagnet.log
Hi, can I ask what version you upgraded from? If it was prior to 0.5.0 you may have the reindex job running that was part of this upgrade....
The /metrics endpoint will tell you if you have a lot in the queue...
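A quick way to pull just the queue counters, assuming bitmagnet's web server is listening on its default port 3333 on localhost:

# Fetch the Prometheus metrics and keep only the queue job gauges
curl -s http://localhost:3333/metrics | grep '^bitmagnet_queue'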
Hi @mgdigital !
I've upgraded from 0.6.1.
P.S. I've also set the issue summary; I missed it for some reason, sorry :)
Could I ask you to go to the /metrics endpoint and paste here anything starting with bitmagnet_queue? I suspect you have jobs running in the queue that are slowing things down.
Hi!
I've checked the metrics endpoint; here is the output:
# HELP bitmagnet_dht_ktable_hashes_added Total number of hashes added to routing table.
# TYPE bitmagnet_dht_ktable_hashes_added counter
bitmagnet_dht_ktable_hashes_added 466
# HELP bitmagnet_dht_ktable_hashes_count Number of hashes in routing table.
# TYPE bitmagnet_dht_ktable_hashes_count gauge
bitmagnet_dht_ktable_hashes_count 466
# HELP bitmagnet_dht_ktable_hashes_dropped Total number of hashes dropped from routing table.
# TYPE bitmagnet_dht_ktable_hashes_dropped counter
bitmagnet_dht_ktable_hashes_dropped 0
# HELP bitmagnet_dht_ktable_nodes_added Total number of nodes added to routing table.
# TYPE bitmagnet_dht_ktable_nodes_added counter
bitmagnet_dht_ktable_nodes_added 465
# HELP bitmagnet_dht_ktable_nodes_count Number of nodes in routing table.
# TYPE bitmagnet_dht_ktable_nodes_count gauge
bitmagnet_dht_ktable_nodes_count 369
# HELP bitmagnet_dht_ktable_nodes_dropped Total number of nodes dropped from routing table.
# TYPE bitmagnet_dht_ktable_nodes_dropped counter
bitmagnet_dht_ktable_nodes_dropped 96
# HELP bitmagnet_dht_responder_query_concurrency Number of concurrent DHT queries.
# TYPE bitmagnet_dht_responder_query_concurrency gauge
bitmagnet_dht_responder_query_concurrency{query="find_node"} 0
bitmagnet_dht_responder_query_concurrency{query="get_peers"} 0
bitmagnet_dht_responder_query_concurrency{query="ping"} 0
# HELP bitmagnet_dht_responder_query_duration_seconds A histogram of successful DHT query durations in seconds.
# TYPE bitmagnet_dht_responder_query_duration_seconds histogram
bitmagnet_dht_responder_query_duration_seconds_bucket{query="find_node",le="0.1"} 1
bitmagnet_dht_responder_query_duration_seconds_bucket{query="find_node",le="0.15000000000000002"} 1
bitmagnet_dht_responder_query_duration_seconds_bucket{query="find_node",le="0.22500000000000003"} 1
bitmagnet_dht_responder_query_duration_seconds_bucket{query="find_node",le="0.3375"} 1
bitmagnet_dht_responder_query_duration_seconds_bucket{query="find_node",le="0.5062500000000001"} 1
bitmagnet_dht_responder_query_duration_seconds_bucket{query="find_node",le="+Inf"} 1
bitmagnet_dht_responder_query_duration_seconds_sum{query="find_node"} 0.000128385
bitmagnet_dht_responder_query_duration_seconds_count{query="find_node"} 1
bitmagnet_dht_responder_query_duration_seconds_bucket{query="get_peers",le="0.1"} 14
bitmagnet_dht_responder_query_duration_seconds_bucket{query="get_peers",le="0.15000000000000002"} 14
bitmagnet_dht_responder_query_duration_seconds_bucket{query="get_peers",le="0.22500000000000003"} 14
bitmagnet_dht_responder_query_duration_seconds_bucket{query="get_peers",le="0.3375"} 14
bitmagnet_dht_responder_query_duration_seconds_bucket{query="get_peers",le="0.5062500000000001"} 14
bitmagnet_dht_responder_query_duration_seconds_bucket{query="get_peers",le="+Inf"} 14
bitmagnet_dht_responder_query_duration_seconds_sum{query="get_peers"} 0.00142689
bitmagnet_dht_responder_query_duration_seconds_count{query="get_peers"} 14
bitmagnet_dht_responder_query_duration_seconds_bucket{query="ping",le="0.1"} 3
bitmagnet_dht_responder_query_duration_seconds_bucket{query="ping",le="0.15000000000000002"} 3
bitmagnet_dht_responder_query_duration_seconds_bucket{query="ping",le="0.22500000000000003"} 3
bitmagnet_dht_responder_query_duration_seconds_bucket{query="ping",le="0.3375"} 3
bitmagnet_dht_responder_query_duration_seconds_bucket{query="ping",le="0.5062500000000001"} 3
bitmagnet_dht_responder_query_duration_seconds_bucket{query="ping",le="+Inf"} 3
bitmagnet_dht_responder_query_duration_seconds_sum{query="ping"} 6.4249e-05
bitmagnet_dht_responder_query_duration_seconds_count{query="ping"} 3
# HELP bitmagnet_dht_responder_query_success_total A counter of successful DHT queries.
# TYPE bitmagnet_dht_responder_query_success_total counter
bitmagnet_dht_responder_query_success_total{query="find_node"} 1
bitmagnet_dht_responder_query_success_total{query="get_peers"} 14
bitmagnet_dht_responder_query_success_total{query="ping"} 3
# HELP bitmagnet_dht_server_query_concurrency Number of concurrent DHT queries.
# TYPE bitmagnet_dht_server_query_concurrency gauge
bitmagnet_dht_server_query_concurrency{query="find_node"} 5
bitmagnet_dht_server_query_concurrency{query="get_peers"} 24
bitmagnet_dht_server_query_concurrency{query="ping"} 10
bitmagnet_dht_server_query_concurrency{query="sample_infohashes"} 90
# HELP bitmagnet_dht_server_query_duration_seconds A histogram of successful DHT query durations in seconds.
# TYPE bitmagnet_dht_server_query_duration_seconds histogram
bitmagnet_dht_server_query_duration_seconds_bucket{query="find_node",le="0.1"} 320
bitmagnet_dht_server_query_duration_seconds_bucket{query="find_node",le="0.15000000000000002"} 379
bitmagnet_dht_server_query_duration_seconds_bucket{query="find_node",le="0.22500000000000003"} 419
bitmagnet_dht_server_query_duration_seconds_bucket{query="find_node",le="0.3375"} 501
bitmagnet_dht_server_query_duration_seconds_bucket{query="find_node",le="0.5062500000000001"} 511
bitmagnet_dht_server_query_duration_seconds_bucket{query="find_node",le="+Inf"} 514
bitmagnet_dht_server_query_duration_seconds_sum{query="find_node"} 61.92984066599999
bitmagnet_dht_server_query_duration_seconds_count{query="find_node"} 514
bitmagnet_dht_server_query_duration_seconds_bucket{query="get_peers",le="0.1"} 2162
bitmagnet_dht_server_query_duration_seconds_bucket{query="get_peers",le="0.15000000000000002"} 2646
bitmagnet_dht_server_query_duration_seconds_bucket{query="get_peers",le="0.22500000000000003"} 3148
bitmagnet_dht_server_query_duration_seconds_bucket{query="get_peers",le="0.3375"} 3552
bitmagnet_dht_server_query_duration_seconds_bucket{query="get_peers",le="0.5062500000000001"} 3659
bitmagnet_dht_server_query_duration_seconds_bucket{query="get_peers",le="+Inf"} 3704
bitmagnet_dht_server_query_duration_seconds_sum{query="get_peers"} 452.8172425899996
bitmagnet_dht_server_query_duration_seconds_count{query="get_peers"} 3704
bitmagnet_dht_server_query_duration_seconds_bucket{query="ping",le="0.1"} 212
bitmagnet_dht_server_query_duration_seconds_bucket{query="ping",le="0.15000000000000002"} 251
bitmagnet_dht_server_query_duration_seconds_bucket{query="ping",le="0.22500000000000003"} 296
bitmagnet_dht_server_query_duration_seconds_bucket{query="ping",le="0.3375"} 372
bitmagnet_dht_server_query_duration_seconds_bucket{query="ping",le="0.5062500000000001"} 382
bitmagnet_dht_server_query_duration_seconds_bucket{query="ping",le="+Inf"} 387
bitmagnet_dht_server_query_duration_seconds_sum{query="ping"} 54.32839968100002
bitmagnet_dht_server_query_duration_seconds_count{query="ping"} 387
bitmagnet_dht_server_query_duration_seconds_bucket{query="sample_infohashes",le="0.1"} 632
bitmagnet_dht_server_query_duration_seconds_bucket{query="sample_infohashes",le="0.15000000000000002"} 745
bitmagnet_dht_server_query_duration_seconds_bucket{query="sample_infohashes",le="0.22500000000000003"} 889
bitmagnet_dht_server_query_duration_seconds_bucket{query="sample_infohashes",le="0.3375"} 1159
bitmagnet_dht_server_query_duration_seconds_bucket{query="sample_infohashes",le="0.5062500000000001"} 1191
bitmagnet_dht_server_query_duration_seconds_bucket{query="sample_infohashes",le="+Inf"} 1204
bitmagnet_dht_server_query_duration_seconds_sum{query="sample_infohashes"} 177.20904398400006
bitmagnet_dht_server_query_duration_seconds_count{query="sample_infohashes"} 1204
# HELP bitmagnet_dht_server_query_error_total A counter of failed DHT queries.
# TYPE bitmagnet_dht_server_query_error_total counter
bitmagnet_dht_server_query_error_total{query="find_node"} 81
bitmagnet_dht_server_query_error_total{query="get_peers"} 97
bitmagnet_dht_server_query_error_total{query="ping"} 92
bitmagnet_dht_server_query_error_total{query="sample_infohashes"} 1170
# HELP bitmagnet_dht_server_query_success_total A counter of successful DHT queries.
# TYPE bitmagnet_dht_server_query_success_total counter
bitmagnet_dht_server_query_success_total{query="find_node"} 514
bitmagnet_dht_server_query_success_total{query="get_peers"} 3704
bitmagnet_dht_server_query_success_total{query="ping"} 387
bitmagnet_dht_server_query_success_total{query="sample_infohashes"} 1204
# HELP bitmagnet_meta_info_requester_concurrency Number of concurrent meta info requests.
# TYPE bitmagnet_meta_info_requester_concurrency gauge
bitmagnet_meta_info_requester_concurrency 130
# HELP bitmagnet_meta_info_requester_duration_seconds Duration of successful meta info requests in seconds.
# TYPE bitmagnet_meta_info_requester_duration_seconds histogram
bitmagnet_meta_info_requester_duration_seconds_bucket{le="0.005"} 0
bitmagnet_meta_info_requester_duration_seconds_bucket{le="0.01"} 0
bitmagnet_meta_info_requester_duration_seconds_bucket{le="0.025"} 0
bitmagnet_meta_info_requester_duration_seconds_bucket{le="0.05"} 0
bitmagnet_meta_info_requester_duration_seconds_bucket{le="0.1"} 4
bitmagnet_meta_info_requester_duration_seconds_bucket{le="0.25"} 18
bitmagnet_meta_info_requester_duration_seconds_bucket{le="0.5"} 47
bitmagnet_meta_info_requester_duration_seconds_bucket{le="1"} 62
bitmagnet_meta_info_requester_duration_seconds_bucket{le="2.5"} 85
bitmagnet_meta_info_requester_duration_seconds_bucket{le="5"} 92
bitmagnet_meta_info_requester_duration_seconds_bucket{le="10"} 97
bitmagnet_meta_info_requester_duration_seconds_bucket{le="+Inf"} 97
bitmagnet_meta_info_requester_duration_seconds_sum 110.69934851000001
bitmagnet_meta_info_requester_duration_seconds_count 97
# HELP bitmagnet_meta_info_requester_error_total Total number of failed meta info requests.
# TYPE bitmagnet_meta_info_requester_error_total counter
bitmagnet_meta_info_requester_error_total 2150
# HELP bitmagnet_meta_info_requester_success_total Total number of successful meta info requests.
# TYPE bitmagnet_meta_info_requester_success_total counter
bitmagnet_meta_info_requester_success_total 97
# HELP bitmagnet_process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE bitmagnet_process_cpu_seconds_total counter
bitmagnet_process_cpu_seconds_total 7.22
# HELP bitmagnet_process_max_fds Maximum number of open file descriptors.
# TYPE bitmagnet_process_max_fds gauge
bitmagnet_process_max_fds 1.048576e+06
# HELP bitmagnet_process_open_fds Number of open file descriptors.
# TYPE bitmagnet_process_open_fds gauge
bitmagnet_process_open_fds 144
# HELP bitmagnet_process_resident_memory_bytes Resident memory size in bytes.
# TYPE bitmagnet_process_resident_memory_bytes gauge
bitmagnet_process_resident_memory_bytes 2.22298112e+08
# HELP bitmagnet_process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE bitmagnet_process_start_time_seconds gauge
bitmagnet_process_start_time_seconds 1.70785552761e+09
# HELP bitmagnet_process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE bitmagnet_process_virtual_memory_bytes gauge
bitmagnet_process_virtual_memory_bytes 1.498509312e+09
# HELP bitmagnet_process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE bitmagnet_process_virtual_memory_max_bytes gauge
bitmagnet_process_virtual_memory_max_bytes 1.8446744073709552e+19
# HELP bitmagnet_queue_jobs_total Number of tasks enqueued; broken down by queue and status.
# TYPE bitmagnet_queue_jobs_total gauge
bitmagnet_queue_jobs_total{queue="process_torrent",status="failed"} 9
bitmagnet_queue_jobs_total{queue="process_torrent",status="pending"} 4
bitmagnet_queue_jobs_total{queue="process_torrent",status="processed"} 7
bitmagnet_queue_jobs_total{queue="process_torrent",status="retry"} 20
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 1.6664e-05
go_gc_duration_seconds{quantile="0.25"} 3.894e-05
go_gc_duration_seconds{quantile="0.5"} 6.841e-05
go_gc_duration_seconds{quantile="0.75"} 8.7219e-05
go_gc_duration_seconds{quantile="1"} 0.000182685
go_gc_duration_seconds_sum 0.000703693
go_gc_duration_seconds_count 10
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 726
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.22.0"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 1.26927616e+08
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 2.54656864e+08
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 1.541259e+06
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 1.908716e+06
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 4.585464e+06
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 1.26927616e+08
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 5.619712e+07
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 1.36052736e+08
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 582258
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 2.809856e+06
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 1.92249856e+08
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 1.7078555640643454e+09
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 0
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 2.490974e+06
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 4800
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 15600
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 1.05616e+06
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 1.25664e+06
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 1.59517424e+08
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 1.086213e+06
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 1.327104e+07
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 1.327104e+07
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 2.14006072e+08
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
go_threads 12
However, this is after restarting docker compose.
I also see a lot of "slow" query entries in the logs:
EDITED TO REMOVE LOGS CONTAINING CONTENT METADATA
It looks like every query is "slow".
I also see these errors:
bitmagnet | ERROR queue server/server.go:213 job failed {"queue": "process_torrent", "error": "job exceeded its 3m0s timeout: context deadline exceeded"}
bitmagnet | github.com/bitmagnet-io/bitmagnet/internal/queue/server.(*serverHandler).handleJob.func1
bitmagnet | /build/internal/queue/server/server.go:213
bitmagnet | github.com/bitmagnet-io/bitmagnet/internal/queue/server.(*serverHandler).handleJob.(*Query).Transaction.func2
bitmagnet | /build/internal/database/dao/gen.go:188
bitmagnet | gorm.io/gorm.(*DB).Transaction
bitmagnet | /go/pkg/mod/gorm.io/[email protected]/finisher_api.go:647
bitmagnet | github.com/bitmagnet-io/bitmagnet/internal/database/dao.(*Query).Transaction
bitmagnet | /build/internal/database/dao/gen.go:188
bitmagnet | github.com/bitmagnet-io/bitmagnet/internal/queue/server.(*serverHandler).handleJob
bitmagnet | /build/internal/queue/server/server.go:179
bitmagnet | github.com/bitmagnet-io/bitmagnet/internal/queue/server.(*serverHandler).start.func2
bitmagnet | /build/internal/queue/server/server.go:168
There doesn't seem to be any excessive CPU usage: 2-3% for bitmagnet worker run, and 0% for postgres. Is it possible that using Postgres as the queue backend is causing some kind of congestion?
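For what it's worth, a simple way to snapshot per-container resource usage while reproducing the slowdown (container names depend on the compose project):

# One-off snapshot of CPU, memory and I/O for each running container
docker stats --no-stream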
Are you using Docker, and if so could you paste your docker-compose file? I'm wondering whether you're missing the 'shm_size: 1g' on Postgres (it was recently added to the example); if so, adding it might make a difference...
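A quick way to sanity-check this, as a sketch (the container name bitmagnet-postgres-1 is a guess based on default compose naming):

# Check that the compose file declares shm_size on the postgres service
grep -n 'shm_size' docker-compose.yml

# Confirm the running container actually picked it up (Docker's default is 64MB, i.e. 67108864 bytes)
docker inspect --format '{{ .HostConfig.ShmSize }}' bitmagnet-postgres-1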
Hi!
As I mentioned in the issue description, I'm using the docker compose file from your docs. I downloaded it today; it has the new shm option for Postgres.
Sorry I don't have access to this computer right now, but it is verbatim from your docs.
No probs. I've asked on the Discord if anyone is having similar issues, as I can't see this myself. When you get the chance, maybe try restarting everything, including your machine, to see if that helps. Like you say, it's weird that you have basically nothing in the queue but things have slowed down...
(Also sorry for asking questions you already answered, was on phone)
Hi @mgdigital !
I've rebooted my computer and all the containers, but I still see a lot of exceptions in the logs:
EDITED TO REMOVE LOGS CONTAINING CONTENT METADATA
I can remove the DB, but that would be side-stepping the problem IMHO.
Thanks for all your help!
Hi @garar , sorry you're still having issues. Based on the feedback I've had on Discord, this is not a general problem that people are experiencing, but I'd still like to get to the bottom of it (BTW you are welcome to join the Discord and others might have advice or ideas: https://discord.gg/6mFNszX8qM).
The only similar issue is with a user who has changed some settings from the defaults (namely dht_crawler.save_pieces and dht_crawler.save_files_threshold). Can I confirm whether you've changed any config values from the defaults?
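A rough way to check for overrides, assuming any were set via the compose file (the grep pattern is just a convenience):

# Render the resolved compose configuration and look for any dht_crawler settings
docker compose config | grep -i dht_crawler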
It's also worth looking specifically at your Postgres logs. I don't think the Bitmagnet logs will tell us much; we can already see that queries are taking longer than they should. Are there any clues or error messages in the Postgres logs from startup to shutdown?
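One way to get the Postgres side of the story is to log slow statements; a minimal sketch, assuming the compose service is called postgres and the default postgres superuser:

# Log any statement that takes longer than one second
docker compose exec postgres psql -U postgres -c "ALTER SYSTEM SET log_min_duration_statement = '1s';"

# Reload the configuration so the setting takes effect without a restart
docker compose exec postgres psql -U postgres -c "SELECT pg_reload_conf();"

# Then watch the Postgres container logs for the slow statements
docker compose logs -f postgres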
As a last resort you could start with a fresh Postgres DB. You can take a backup and restore it into a fresh DB using the guide here: https://bitmagnet.io/tutorials/backup-restore-merge.html
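The guide has the exact steps, but the rough shape of a dump and restore looks like this (the service name postgres, database name bitmagnet and postgres user are assumptions; adjust to your setup):

# Dump the existing database in custom format to a file on the host
docker compose exec -T postgres pg_dump -U postgres -Fc bitmagnet > bitmagnet.dump

# After creating a fresh, empty bitmagnet database, restore the dump into it
docker compose exec -T postgres pg_restore -U postgres -d bitmagnet < bitmagnet.dump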
Hi!
Unfortunately, backing up, deleting and restoring didn't help; the exceptions in the logs persist. I can see that the WAL write is slow:
2024-02-16 08:08:02.421 UTC [50] LOG: checkpoint complete: wrote 641 buffers (3.9%); 0 WAL file(s) added, 0 removed, 0 recycled; write=64.138 s, sync=0.017 s, total=64.165 s; sync files=46, longest=0.002 s, average=0.001 s; distance=3921 kB, estimate=6591 kB; lsn=20/9FE47008, redo lsn=20/9FE3BDD8
2024-02-16 08:11:58.520 UTC [50] LOG: checkpoint starting: time
2024-02-16 08:12:40.108 UTC [50] LOG: checkpoint complete: wrote 416 buffers (2.5%); 0 WAL file(s) added, 1 removed, 0 recycled; write=41.551 s, sync=0.020 s, total=41.588 s; sync files=43, longest=0.003 s, average=0.001 s; distance=2481 kB, estimate=6180 kB; lsn=20/A00A8408, redo lsn=20/A00A8388
2024-02-16 08:16:58.209 UTC [50] LOG: checkpoint starting: time
2024-02-16 08:18:28.974 UTC [50] LOG: checkpoint complete: wrote 902 buffers (5.5%); 0 WAL file(s) added, 0 removed, 0 recycled; write=90.735 s, sync=0.021 s, total=90.766 s; sync files=49, longest=0.003 s, average=0.001 s; distance=5393 kB, estimate=6102 kB; lsn=20/A05F8D40, redo lsn=20/A05ECA48
2024-02-16 08:21:59.074 UTC [50] LOG: checkpoint starting: time
From what I understand, it takes 90 seconds to write data from the write-ahead log? Interesting. I'm not a Postgres expert, but this is an SSD drive, so there should be no problem. Also, why did it only start happening now? :)
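A quick way to confirm whether checkpoints are actually the bottleneck is to look at the cumulative checkpoint statistics; a sketch, assuming PostgreSQL 16 or earlier (where these live in pg_stat_bgwriter) and that the service, user and database names below match your setup:

# How many checkpoints were timed vs. requested, and total time spent writing/syncing them (in ms)
docker compose exec postgres psql -U postgres -d bitmagnet -c "SELECT checkpoints_timed, checkpoints_req, checkpoint_write_time, checkpoint_sync_time FROM pg_stat_bgwriter;"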
I will now delete the database, start fresh, and see if it starts happening again.
I probably should just analyze the PostgreSQL processes, but I'm a little too busy right now. I will report back in a week if I see anything, good or bad.
Thanks!
I can reproduce these performance problems with a fresh v0.7.1 instance and only ~700 torrents (12MB database) loaded from the /import endpoint, with the DHT crawler disabled. Nothing unusual on the host running bitmagnet (disk I/O, RAM usage, Postgres WAL r/w rate... are all normal).
I cannot compare with previous versions as I've only started using bitmagnet recently.
Switching from 10 to 100 items per page in the web interface seems to reliably reproduce this problem. The browser tab where the bitmagnet web interface is opened seems to consume 100% of one CPU core. It also makes the tab unresponsive to page reloads, the history back button, etc., so I would assume the problem is client-side. Firefox ESR 115.7.0esr-1~deb12u1.
Hi @nodiscc, the web app definitely needs some optimisation, Angular profiling etc. It does become sluggish when increasing the pagination limit (though I don't see it becoming completely unresponsive). I may look at frontend optimisations next; I'm sure there is some low-hanging fruit there. Unfortunately FE stuff isn't my area of expertise!
OP's issue is a little more mysterious and is characterised by slow SQL queries. Also, there have been no web app changes since 0.6.0, and what I just described definitely predates this, so I'm not convinced OP's issue is related to anything in the web app, but they may be able to correct me...
Just a few notes:
I originally installed 0.0.6, I think, and it ran fine until I had a server issue.
I came back and installed it with the exact same config as last time, but installed 0.7.0 as a clean install. It was slow, searches would make the whole thing die, and in general it seemed to die randomly. After a day or two I only had ~1000 torrents indexed (using my own TMDB_API_KEY). I just about gave up, but rolled back to 0.6.x and it ran fine.
I have since come back to :latest (after 0.7.4 or .5) and it is running just dandy with over 6M torrents indexed.