
Accumulator for the Bitcoin UTXO set

47 utreexo issues

I ran into this on the test server:

```
On block : 426901
On block : 427001
On block : 427101
^CUser exit signal received. Exiting...
Program timed out. Force...
```

Currently there are ~~three~~ two separate functions to retrieve a node from the pollard. It would be nice to have a single function that combines their functionality. [readPos](https://github.com/mit-dci/utreexo/blob/d335a124a99071547636559d3e4b3113d21049e0/accumulator/pollard.go#L341) gets...

Resuming when switching between cowforest and the other forest types causes errors, since they are essentially different forest types. I guess we could translate the forests during restarts, but just making...

https://github.com/mit-dci/utreexo/blob/d335a124a99071547636559d3e4b3113d21049e0/accumulator/utils.go#L249 A quick

```go
func TestInForest(t *testing.T) {
	if inForest(14, 5, 1) {
		t.Fatal("inForest(14, 5, 1) should be false")
	}
}
```

passes

For testnet3, the `proof.dat` file is around 13 GB. For mainnet, it would be hundreds of gigabytes. It's not ideal to have it as one big file.
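One way to avoid a single huge file is to shard the proof data by block height. A minimal sketch; both the 100,000-block chunk size and the `proof_%07d.dat` naming scheme are hypothetical, chosen only to illustrate the idea, not utreexo's actual layout:

```go
package main

import "fmt"

// proofFileName maps a block height to a sharded proof file, one file
// per 100,000 blocks. Chunk size and naming are illustrative only.
func proofFileName(height uint32) string {
	const blocksPerFile = 100000
	return fmt.Sprintf("proof_%07d.dat", height/blocksPerFile*blocksPerFile)
}

func main() {
	fmt.Println(proofFileName(0))      // proof_0000000.dat
	fmt.Println(proofFileName(426901)) // proof_0400000.dat
}
```

With a scheme like this, serving or pruning proofs for a height range touches only the relevant shard instead of seeking within one multi-gigabyte file.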

When we ctrl-break out of genproofs, shutdown can take more than 60 seconds. `saveBridgeNodeData()` calls `WriteForestToDisk()` in some cases, as well as `WriteMiscData()`, which calls `close()` - those are the main candidates for...

Per @adiabat's suggestion, I collected data on how many `swapNodes()` calls happened in each row. This was done by putting a println:

```go
fmt.Printf("swapNodes on row:%05d, forestRows:%05d\n", r, f.rows)
```

above...
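Instead of printing one line per call, the per-row tallies could also be aggregated in memory and dumped once at the end. A sketch only; the `swapCounts` type and `record` method are hypothetical and not part of utreexo, with `r` standing in for the row argument seen in the Printf above:

```go
package main

import "fmt"

// swapCounts tallies swapNodes() calls per row in memory instead of
// emitting one log line per call.
type swapCounts map[uint8]uint64

// record bumps the counter for row r.
func (s swapCounts) record(r uint8) { s[r]++ }

func main() {
	counts := make(swapCounts)
	// Hypothetical sequence of rows on which swapNodes() fired.
	for _, r := range []uint8{0, 0, 1, 3, 0, 1} {
		counts.record(r)
	}
	fmt.Printf("row 0: %d, row 1: %d, row 3: %d\n", counts[0], counts[1], counts[3])
	// row 0: 3, row 1: 2, row 3: 1
}
```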

Current master isn't working with the server that we have up. Maybe set up a master server.

This was part of my experiments in #180. Simulating pollard modification will be helpful for deriving partial proofs without adding extra round trips for every block during IBD. This PR...

Right now the parent hashing function is just `sha256(left, right)` (where `,` means the two child hashes are concatenated). Many attacks can be prevented by committing to more data in the hash. We...
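To make the contrast concrete, here is a minimal sketch of the current scheme next to one possible hardening. `committedParentHash` and the choice of committing the node's position are hypothetical illustrations, not a proposal from the repo; which extra data to commit is exactly the open question in this issue:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// parentHash is the scheme described above: sha256 over the two
// child hashes stuck together.
func parentHash(left, right [32]byte) [32]byte {
	return sha256.Sum256(append(left[:], right[:]...))
}

// committedParentHash additionally commits the node's position in the
// forest, so the same children hashed at a different location yield a
// different parent. Position is one illustrative choice of extra data.
func committedParentHash(left, right [32]byte, position uint64) [32]byte {
	var pos [8]byte
	binary.LittleEndian.PutUint64(pos[:], position)
	buf := append(append(left[:], right[:]...), pos[:]...)
	return sha256.Sum256(buf)
}

func main() {
	var l, r [32]byte
	l[0], r[0] = 1, 2
	// Same children at different positions now hash differently.
	fmt.Println(committedParentHash(l, r, 0) == committedParentHash(l, r, 1)) // false
	fmt.Println(parentHash(l, r) == parentHash(l, r))                        // true
}
```

Binding the hash to its context like this is the same idea as tagged or domain-separated hashing: a valid (hash, children) triple from one spot in the tree can no longer be replayed elsewhere.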