Seednode Operator
This role is responsible for operating one or more Bisq seednodes.
Docs: none, other than the above
Team: @bisq-network/seednode-operators
2018.04 report
Running 6 Bitcoin and 2 LTC instances. Digital Ocean updates their servers frequently with security patches, which causes restarts that kill the seed nodes (there is no cron job for autostart). I am following the old email notifications and got alerted soon enough to restart the seed nodes in such cases.
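A cron-based autostart, as hinted at above, could look like the following sketch. The script path, file names, and flags are illustrative assumptions, not the actual deployment:

```shell
# Sketch: restart the seednode automatically after a host reboot
# (e.g. after Digital Ocean applies a security patch and reboots the VM).
# All paths below are placeholders.

# Hypothetical start script, /root/bisq-seednode/start.sh:
#   #!/bin/sh
#   cd /root/bisq-seednode
#   nohup java -jar seednode-all.jar >> seednode.log 2>&1 &

# crontab entry (runs once at every boot):
#   @reboot /root/bisq-seednode/start.sh
```

The `@reboot` schedule keyword is supported by common cron implementations (e.g. Vixie cron / cronie); a systemd unit with `Restart=always` would be an alternative.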
2018.05 report
Running 6 Bitcoin and 2 LTC instances.
2018.05 report
Running 1 Bitcoin instance.
bisq-network/compensation#76
2018.05 report
Running 1 Bitcoin instance. Hosting: Linode, in a Docker container.
bisq-network/compensation#80
2018.06 report
Running 1 seednode instance. Hosting: Linode, in a Docker container.
- Fixed a Docker bug where the Linux image had no locale set up, which crashed the container.
- Manfred found a restart bug; it was not noticed on my node because the Docker container restarts automatically, so it seems not relevant for my node.
bisq-network/compensation#83
2018.06 report
Running 1 Bitcoin instance
- Updated to the new version
bisq-network/compensation#88
2018.06 report
Running 6 seednode instances. Updated to the new version.
bisq-network/compensation#92
@Emzy @mrosseel You mixed this role up with the Bitcoin node operator role...
I've updated the description of this role issue and updated the @bisq-network/seednode-operators team to reflect current status.
2018.07 report
Running 6 seednode instances.
/cc bisq-network/compensation#93
2018.07 report
Running 1 Bitcoin seednode instance
/cc bisq-network/compensation#100
2018.07 report
Running 1 seednode instance. Hosting: Linode, in a Docker container.
After last month's Docker fixes, no further issues were detected. Nothing to report.
bisq-network/compensation#105
2018.08 report
Running 1 Bitcoin seednode instance. Hosting: Hetzner VM on my dedicated server.
/cc bisq-network/compensation#111
2018.08 report
Running 6 seednode instances.
/cc bisq-network/compensation#112
2018.08 report
Running 1 seednode instance. Hosting: Linode, in a Docker container.
Nothing to report
bisq-network/compensation#116
2018.09 report
Running 6 seednode instances.
/cc bisq-network/compensation#125
2018.09 report
Running 1 Bitcoin seednode instance. Hosting: Hetzner VM on my dedicated server.
/cc bisq-network/compensation#136
2018.09 report
Running 1 seednode instance. Hosting: Linode, in a Docker container.
Nothing to report
bisq-network/compensation#141
2018.10 report
Running 6 seednode instances.
/cc bisq-network/compensation#155
2018.10 report
Running 1 seednode instance. Hosting: Linode, in a Docker container.
Nothing to report
bisq-network/compensation#157
2018.10 report
Running 1 Bitcoin seednode instance. Hosting: Hetzner VM on my dedicated server.
/cc bisq-network/compensation#163
2018.11 report
Running 1 Bitcoin seednode instance. Hosting: Hetzner VM on my dedicated server.
/cc bisq-network/compensation#175
2018.11 report
Running 6 seednode instances. Just started 2 new ones for testnet (DAO).
/cc bisq-network/compensation#180
2018.11 report
Running 1 seednode instance. Hosting: Linode, in a Docker container.
Nothing to report
bisq-network/compensation#181
2018.11 report
Running 6 mainnet nodes and 2 testnet nodes (DAO).
/cc bisq-network/compensation#189
2018.12 report
Running 1 Bitcoin seednode instance. Hosting: Hetzner VM on my dedicated server.
/cc bisq-network/compensation#191
We had a severe incident yesterday with all seed nodes.
The reason was that I updated the --maxMemory program argument from 512 to 1024 MB. My servers have 4 GB RAM and run 2 nodes each, so I thought that should be OK. But it was not: it caused out-of-memory errors, and nodes became stuck (kill -9 was required to stop them).
I increased the maxMemory setting because I saw that the nodes restarted every 2-3 hours (earlier it was about once a day). The seed nodes check the memory they consume, and if it hits maxMemory they automatically restart. That is a workaround for a potential memory leak which seems to occur only on Linux (and/or on seed nodes). At least on OSX with the normal Bisq app I could never reproduce it; I could even run the app with about 100 connections, which never worked on my Linux boxes. So I assume some OS setting is causing it. We researched it a bit in the past but never found the real reason (we never dedicated enough effort to it; we should prioritize that old issue in the near future).
The situation was discovered late at night when a user posted a GitHub issue saying he had no arbitrators. Checking the monitor page alerted me: basically all nodes were without data, and most were unresponsive. From my hoster's stats I saw that the situation had started somewhere in the previous 12-24 hours.
The 2 nodes from Mike and Stephan remained responsive (as they had not changed anything) but were also missing data: since they also restart every few hours, they connect to other seeds to gather the data, and as the other seeds had lost data over time, they became corrupted as well.
It was a lesson that it is not a good idea to change too much, and to change all seeds at the same time! The good thing is that in the end it recovered quite quickly, and the network is quite resilient even if all seeds fail (as was more or less the case here).
To recover, I started one seed locally and removed all other seed addresses (in the code), so after a while it connected to persisted peers (normal Bisq apps). From those it got the data still present in the network. I then used that seed as a dedicated seed (via --seedNodes) for the other seeds to start up again, so my seeds all became filled with data again. Mike's and Stephan's seeds needed a few hours after restarting until they were up to date again (so the too-fast restart interval was a benefit here).
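The recovery steps described above could be sketched roughly as follows. The onion address, port, and jar path are placeholders, not the real values:

```shell
# Recovery procedure sketch (all names below are illustrative):
#
# 1. On one machine, start a seednode with the hard-coded seed list
#    emptied in the code, so it bootstraps from persisted normal peers
#    and re-collects whatever data is still live in the network.
#
# 2. Point every other seednode at that recovered seed only:
#
#    java -jar seednode-all.jar \
#        --seedNodes=<recovered-seed>.onion:<port> \
#        --maxConnections=30 --maxMemory=1024
#
# 3. Once the seeds are filled with data again, restore the normal
#    seed list and restart them with their usual settings.
```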
I upgraded my servers to 8 GB (4 GB per node) and will now test more carefully how far I can go with the --maxConnections and --maxMemory settings. Currently I run 4 nodes with --maxConnections=30 --maxMemory=1024 and 2 with --maxConnections=25 --maxMemory=750. Stephan told me he already had 4 GB anyway, with --maxConnections=30 --maxMemory=1024, which seems a safe setting. Mike has not responded so far, but I assume he has lower settings, as his node recovered quite fast (restarted faster).
What we should do:
- Better alerting/monitoring: we need to get an alert from the monitoring in severe cases like this. Passively looking at the monitor page is not enough. Alerts have to be good enough to avoid false positives (like the email alerts we receive from our simple Tor connection monitoring, which I tend to ignore, as 99.9% of the time there is nothing severe).
- Improvements in code for more resilience: when a node starts up, it connects to a few seed nodes for initial data; that was added for more resilience in case one seed node is out of date. We should extend that to include normal persisted non-seed-node peers as well, so that if all seeds are failing (like in this incident) the network still exchanges the live data at startup. Only first-time users would have a problem then.
- Investigate memory increase/limitations: investigate the reason for the memory increase (it might be an OS setting, like a limit on some network resources).
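The first point could start from something as small as the following sketch: fetch the monitor page and classify the result, emitting an alert line only for genuinely bad states. The URL, the "no data" marker, and the alert wording are assumptions, since the real monitor page format is not specified here:

```shell
# Classify a fetched monitor status into "alert" vs "ok".
# "DOWN" stands for a failed fetch; the "no data" substring is a
# hypothetical marker for seeds that respond but carry no data.
check_seed() {
  case "$1" in
    DOWN|*"no data"*) echo "ALERT: seednode monitor unhealthy" ;;
    *) echo "ok" ;;
  esac
}

# Example usage; in a real cron job the argument would come from e.g.
#   body=$(curl -fsS --max-time 30 "$MONITOR_URL" || echo DOWN)
check_seed "DOWN"                 # prints the ALERT line
check_seed "6 nodes reporting"    # prints "ok"
```

Piping the ALERT line into a mailer or pager, instead of a page one has to look at, is the difference between passive and active monitoring.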
I reread issue https://github.com/bisq-network/bisq/issues/599, where a user also reported abnormal memory consumption under Ubuntu, and where I myself reported low memory consumption under Debian Stretch. @Emzy says he uses Debian Stretch for his seednode (and never reported a memory issue, AFAIK).
So I wonder if this memory leak issue could be specific to Ubuntu (and could maybe simply be solved by running under Debian)?
2019.01 report
We had issues with heap memory (see above), but it is resolved now; we added more VM arguments and increased the maxMemory program argument.
java -XX:+UseG1GC -Xms512m -Xmx4000m -jar /root/bisq/seednode/build/libs/seednode-all.jar --maxConnections=30 --maxMemory=3000 ...
The -XX:+UseG1GC argument tells the JVM to use another garbage collector, which behaves better according to @freimair.
The heap memory defined in -Xmx must be about 20-30% larger than the amount set in maxMemory.
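As a quick sanity check on that rule, the figures from the command above (-Xmx4000m against --maxMemory=3000) work out to roughly 33% headroom:

```shell
# Headroom of -Xmx (4000 MB) over --maxMemory (3000 MB), in percent.
xmx_mb=4000
maxmemory_mb=3000
headroom=$(( (xmx_mb - maxmemory_mb) * 100 / maxmemory_mb ))
echo "headroom: ${headroom}%"   # prints "headroom: 33%"
```

So the deployed setting sits just above the stated 20-30% range, on the safe side.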
Also started 2 more seed nodes for the DAO (4 in total).
/cc bisq-network/compensation#205
2019.01 report
Running 1 Bitcoin seednode instance. Hosting: Hetzner VM on my dedicated server.
/cc bisq-network/compensation#212