ord
"ord --index-sats index" is incredibly slow
Running on macOS 13.2 with an M1 Max CPU; ord data is on the fast internal SSD. Running ord 0.5.0 with --index-sats seems to start fast but then slows exponentially; it looks to be on track to take multiple days or more, and it's hard to know how slow it will get toward the end. Is there any way to make this faster?
bitcoin.conf:
txindex=1
rest=1
server=1
--index-sats can take up to 2 days, even on an M1 Max. The underlying reason is that the fragmentation of the sat ranges increases over time. We're still looking into what exactly the bottleneck is (memory or RPC calls), but at the moment there isn't anything you can do about it.
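As a toy illustration of why sat-range fragmentation compounds over time (this is a sketch of the idea, not ord's actual code or data structures): ord tracks which sat ranges live in each output, and every spend that splits value across multiple outputs also splits the ranges, so the number of ranges the index must shuffle per block keeps growing.

```python
# Toy sketch of sat-range fragmentation -- NOT ord's real implementation.
# Each "spend" here splits one sat range across two outputs, so the
# total number of ranges the index must track doubles per round.

def split_range(start, end):
    """Split one sat range into two halves, as a two-output spend might."""
    mid = (start + end) // 2
    return [(start, mid), (mid, end)]

ranges = [(0, 50 * 100_000_000)]  # one 50 BTC coinbase range, in sats
for _ in range(3):  # three rounds of "every range gets spent once"
    ranges = [piece for r in ranges for piece in split_range(*r)]

print(len(ranges))  # the original single range has become 8
```

Real chains don't split so uniformly, but the direction is the same: later blocks touch many more, smaller ranges than early ones, so per-block indexing work climbs.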
There are some open PRs we haven't merged yet (#1516 and #1636) that people claim help, so you could pull those and try indexing with them.
Yeah, those don't seem to apply to --index-sats.
We're still looking into what exactly the bottleneck is (memory or rpc calls)
With #1516, --index-sats uses zero RPC calls. The bottleneck I think is disk IO and memory. @raphjaph Any thoughts on https://github.com/casey/ord/issues/1630?
Answered on that issue
The bottleneck I think is disk IO and memory.
From an end-user perspective, and based on a cursory examination of Activity Monitor on macOS, I think it's disk IO. I'm 30-some hours into an --index-sats run and I see the ord process using:
CPU: 50-70%
RAM: 2 GB of 32 GB available
Disk writes: 1.73 TB (!!)
Disk reads: 450 GB
So after hearing of a Windows user with a substantially similarly specced machine who finished the whole --index-sats process in less than a day, I decided to do some rough math. I'm clocking about 750 blocks/hour in the 475,000 block range, with about 300,000 blocks left to index. My napkin math says I have over 400 hours to go, and that assumes the speed stays constant from here, which doesn't seem to be the case. So that's over two weeks... I don't think this is working quite right!
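The napkin math above does check out; a quick sketch using only the figures from that comment:

```python
# ETA from the numbers reported above: ~750 blocks/hour with
# roughly 300,000 blocks still to index.
blocks_remaining = 300_000
blocks_per_hour = 750

hours = blocks_remaining / blocks_per_hour
days = hours / 24
print(hours, round(days, 1))  # 400.0 hours, ~16.7 days -- "over two weeks"
```

And that's the optimistic case, since the throughput was still falling at that point.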
What kind of hard drive are you using?
ord data is on the built-in SSD. bitcoin data on USB-C SSD.
Just searched the repo for "tb" to see if anyone else had noticed this very same thing. I restarted the indexing process (not the --index-sats index, just the regular ord index) at around 450k blocks and it's now at 763k. Just in that block range I have 1 TB written.
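A rough back-of-the-envelope on those numbers (my arithmetic from the figures in this thread, not anything measured inside ord):

```python
# ~1 TB written while indexing blocks ~450k through ~763k.
bytes_written = 1_000_000_000_000
blocks = 763_000 - 450_000  # 313,000 blocks

mb_per_block = bytes_written / blocks / 1_000_000
print(round(mb_per_block, 1))  # ~3.2 MB written per block
```

That's several times the typical on-disk size of blocks in that height range, which would be consistent with write amplification in the database layer rather than raw block data alone.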
@Prestonsr for a normal sync can you please help test/benchmark my improvement branch? https://github.com/casey/ord/issues/1648#issuecomment-1426565987
Some testers and I have managed to sync with it in under an hour.
I observed the slowdown on Mac back in November, after I had sped up the indexing. There does seem to be some significant factor making indexing a lot slower on Macs than on comparable (in terms of RAM, disk, and CPU speed) Linux or even Windows machines. It may be related to the unsolved #819.
I'm trying to pin down the regression. PR https://github.com/casey/ord/pull/703 is my last confirmed reference point before this performance regression appeared on Mac.
If there's anything I can do to help with testing, I'm happy to. I've aborted my long-running --index-sats attempt for now.
It takes weeks for me on a VPS.
@so7ow I managed to sync with --index-sats in just under 24 hours. Check out my branch https://github.com/andrewtoth/ord/tree/skip-values.
So, I'm a little git-illiterate. Would this be the correct process to build?
git clone https://github.com/andrewtoth/ord
cd ord
git checkout skip-values
cargo build --release
Looks right to me.
Ok, I have it running now. It went pretty quickly up to block 250,000 or so, but now it appears to have slowed radically and is still chugging toward 300,000. I'll let it run for now, but if it gets to more than, say, ~2 days I'm going to kill it again!
Lol it's been 3 hours. Sounds like it's making good progress.
Sorry, I'm just jaded from my previous run! 🫠
@so7ow how is it looking today?
I'm approaching block 500,000 after ~21 hours.
Ahh dang. I was well into 600ks after 12 hours. You sure you are on the right commit?
Pretty sure... I shared my steps above and it compiled cleanly. ~~Is there a git command I can run to show you where I am?~~
ord % git branch
  master
* skip-values
Ok... well, was this further than you got after 21 hours last time, at least?
Also, thank you for testing!
Oh yes, it's definitely faster. I didn't track very carefully, so I can't really quantify it, but up above I had been running for at least a full day and was only up to around block 475,000. Progress! I'm just afraid it's going to slow exponentially toward the end and still take a week. But I'll try to be patient. :)
@andrewtoth I'm at ~33 hours and working through the 535,000s now.
@andrewtoth ~44 hours and we're in the 585,000s.
~56 hours - 620,000 blocks and counting
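For what it's worth, here's the sync rate between the checkpoints reported in this thread (my arithmetic from the numbers above; the hour marks are approximate, so treat the rates as rough):

```python
# Blocks/hour between the progress checkpoints reported above:
# ~21h -> 500k, ~33h -> 535k, ~44h -> 585k, ~56h -> 620k.
checkpoints = [(21, 500_000), (33, 535_000), (44, 585_000), (56, 620_000)]

for (h0, b0), (h1, b1) in zip(checkpoints, checkpoints[1:]):
    rate = (b1 - b0) / (h1 - h0)
    print(f"{h0}h -> {h1}h: {rate:.0f} blocks/hour")
```

Even at a few thousand blocks/hour this is far above the ~750 blocks/hour reported on the unpatched build earlier in the thread, though the rate still wobbles block-range to block-range.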
Dang. Now's the hard part 😬