No plots shown under Farming
Everything is running fine on a fresh Machinaris install, changing from bare metal to a Docker environment, but the large node is not showing the plots.
Which log can I provide for you? :)
Hi, this can happen when you have duplicated plots, but other causes are possible. Please share a few configurations, including your `docker-compose.yml` and all details of your OS, etc.
Then please share the output of `chia farm summary` from a shell inside the container. Then also provide the Server Log (aka `apisrv.log`) from the Machinaris controller.
```
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.4 LTS
Release:        20.04
Codename:       focal
```
Dual-socket HPE DL385 with 4 RAID controllers and 8 D8000 JBODs, 848x 18 TB disks in total.
```
gts-data@chia-archive01:~/.machinaris/mainnet/log$ sudo docker exec -it machinaris bash
[sudo] password for gts-data:
root@chia-archive01:/chia-blockchain# chia farm summary
Farming status: Farming
Total chia farmed: 85.270841009395
User transaction fees: 0.020841009395
Block rewards: 85.25
Last height farmed: 2517205
Local Harvester
   112352 plots of size: 10.859 PiB
Plot count for all harvesters: 112352
Total size of plots: 10.859 PiB
Estimated network space: 22.039 EiB
Expected time to win: 10 hours and 45 minutes
Note: log into your key using 'chia wallet show' to see rewards for each key
```
```
gts-data@chia-archive01:~/machinaris$ cat docker-compose.yml
version: '3.7'
services:
  machinaris:
    image: ghcr.io/guydavis/machinaris:latest
    container_name: machinaris
    hostname: chia-archive01
    restart: always
    volumes:
      - "/home/gts-data/.machinaris:/root/.chia"
      - "/media/chia01:/plots1"
      - "/media/chia02:/plots2"
      - "/media/chia03:/plots3"
      - "/media/chia04:/plots4"
      - "/media/chia05:/plots5"
      - "/media/chia06:/plots6"
      - "/media/chia07:/plots7"
      - "/media/chia08:/plots8"
      - "/media/chia09:/plots9"
      - "/media/chia10:/plots10"
      - "/media/chia11:/plots11"
      - "/media/chia12:/plots12"
      - "/media/chia13:/plots13"
      - "/media/chia14:/plots14"
    environment:
      - TZ=Europe/Berlin
      - mode=fullnode
      - worker_address=192.168.20.147
      - plots_dir=/plots1:/plots2:/plots3:/plots4:/plots5:/plots6:/plots7:/plots8:/plots9:/plots10:/plots11:/plots12:/plots13:/plots14
      - blockchains=chia
    ports:
      - 8926:8926
      - 8927:8927
      - 8444:8444
      - 8447:8447
```
Okay, thanks for that log output. I've just enabled some additional debugging in the `:develop` image, so please try switching from `:latest` to `:develop` in your `docker-compose.yml` and run that, watching for log output in `apisrv.log` about duplicated plots on the same host.
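For reference, the only change needed is the image tag in the compose file shown above (a minimal sketch; everything else stays as posted):

```yaml
services:
  machinaris:
    # switch from :latest to :develop to pick up the extra debug logging
    image: ghcr.io/guydavis/machinaris:develop
```

Then `docker-compose pull && docker-compose up -d` fetches and runs the new image.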
I did, and restarted; it takes a while with that number of plots.
Checking the server console, I see this:
As the server has tons of memory, is this maybe a Docker config issue, not enough memory for the container? Dual 64-core Epyc and 256 GB memory.
Here is the new log: apisrv.log
After a reboot of the machine (just to make sure all memory is free), it is the same problem as after the fresh install. The plots are not loaded; it just stops at some point and does not farm. Last time it took several container restarts to farm successfully. The bare-metal installation did not have this issue.
Understood. Please `docker-compose pull` to get the most recent `:develop` image just committed. Given the number of plots you have (over 100k), let's narrow the scope of the problem down rather than trying to load everything at once. Please edit your `docker-compose.yml` to remove most of the volumes listed above, starting with just one volume initially. Then restart the container, monitor with `docker stats` from the host OS, and then:
1. Wait at least 15 minutes after launch for the container to get running smoothly.
2. Check that `chia farm summary` is correct for those volumes mounted. Run that command inside the container.
3. Verify that the Farming page lists those plots and that the total count in the table matches `farm summary`.
4. Edit your `docker-compose.yml` to add a few more volumes, restart the container, and go back to step 1 above.
Repeat this process, incrementally adding more drives and plots, watching memory usage via `docker stats`. This way we can see if there is a memory issue at some point. Please see the wiki for more details on scaling Machinaris to that many plots.
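For the monitoring step, a one-shot view of the container's memory from the host could look like this (assuming the container name `machinaris` from the compose file above):

```bash
# snapshot (no streaming) of CPU and memory for just the machinaris container
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}" machinaris
```

Drop `--no-stream` to watch usage update live while the plots load.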
Update: I'd be more than happy to provide additional support for you on our Discord.
I reduced to just one plot folder and the issue is the same: apisrv.log apisrv-part2.log
Alright, in the 20 lines of logs you provided in your last message, I find no issue. At this pace we could be corresponding for days, which would be frustrating for you. When very large farmers such as yourself have needed assistance in the past, I have offered remote sessions on our Discord via TeamViewer. Let me know on Discord if you are interested.
I'd like to get a better picture of your setup, as your Unraid post refers to another Unraid system as the fullnode, yet you are also running this fullnode on Linux (?).
Hopefully an in-person session would let me understand why you're not using workers.
Thanks for the offer, will do this soon. The Unraid installation was my small one at home; the 16 PB farmer is in the datacenter ;)
Hi! If you are still running Machinaris in your data center, please try the latest `machinaris:develop`, which makes use of bulk record insert for plot status. I've also instrumented memory usage logging in this section of the code.
Looking forward to your feedback.
EDIT: Here's a useful log search for the new output:

```bash
grep "PLOT STATUS" /root/.chia/machinaris/logs/apisrv.log
```
Thanks for the hint. I just restarted the container to pull the new version.
Hi! I created a test harness to simulate storing 120000 plot status records into the Sqlite3 database at `/root/.chia/machinaris/dbs/plots.db`. I'll never have a farm this big, but my test harness simulates tracking that many plots, similar to your farm. This "full insert" only happens on launch and infrequently thereafter; most status updates only look for newly arrived plots (by recent time).
I found that the full batch insert of that many rows, on a scheduled background thread, took one minute and used, at peak, just over 500 MB of memory (in that Machinaris Python process). The `plots.db` file did grow to 25 MB on disk after a few tests. This test was done on my old HP tower (many years old now) with a cheap SATA SSD.
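For illustration, a minimal sketch of that kind of bulk-insert timing test, assuming a hypothetical table layout (the real `plots.db` schema and the actual Machinaris code may differ):

```python
import sqlite3
import time

conn = sqlite3.connect("plots-test.db")
conn.execute("""CREATE TABLE IF NOT EXISTS plots (
    plot_id TEXT PRIMARY KEY,  -- hypothetical columns, not the real schema
    dir     TEXT,
    file    TEXT,
    size    INTEGER)""")

# Simulate 120000 plot status records, roughly one k32 plot each (~108 GB).
rows = [(f"plot-{i:06d}", f"/plots{i % 14 + 1}",
         f"plot-k32-{i:06d}.plot", 108_000_000_000)
        for i in range(120_000)]

start = time.time()
with conn:  # one transaction: a single commit for the whole batch
    conn.executemany("INSERT OR REPLACE INTO plots VALUES (?, ?, ?, ?)", rows)
print(f"Inserted {len(rows)} rows in {time.time() - start:.2f}s")
conn.close()
```

The key point is `executemany` inside a single transaction; committing per row would multiply both the runtime and the disk churn.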
Thanks to the server-side data table paging, the Farming page of the Machinaris WebUI then loaded in just a couple of seconds. Filtering and paging forward also took only a few seconds to complete.
Thanks again for pushing me to test this scenario. I look forward to your feedback.
Following up on our Discord conversation about scaling Machinaris for your very large farm: we discussed disabling Forktools in your `docker-compose.yml` to optimize the first 15-30 minutes after container launch against more than 100,000 plots.
Suggested environment variables:
- bladebit_skip_build=true
- madmax_skip_build=true
- forktools_skip_build=true
- plots_check_analyze_skip=true
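In the compose file posted above, these would sit alongside the existing environment entries, e.g. (a sketch based on that file; `plots_dir` unchanged and omitted here for brevity):

```yaml
    environment:
      - TZ=Europe/Berlin
      - mode=fullnode
      - worker_address=192.168.20.147
      - blockchains=chia
      # skip optional tool builds and plot-check analysis to speed up launch
      - bladebit_skip_build=true
      - madmax_skip_build=true
      - forktools_skip_build=true
      - plots_check_analyze_skip=true
```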
Based on Discord discussions with @grobalt, the improvements in Machinaris v0.8.4, combined with the Scaling guidance, seem to have addressed the issue. Thanks for the assistance in making Machinaris run better on such a huge farm!
This is amazing! Went from 30-ish minutes to literally 2 minutes to start now! Thank you!!