
Others file - append to others.txt for all non-matches. Work has been done, so might as well build a huge DB

Open GoZippy opened this issue 1 year ago • 7 comments

In the main function, below the while loop, I added an else statement to track iterations, count, and save (append) to an others.txt file. The thing is growing very, very fast with addresses.

Thinking about how big it is getting - any suggestions for ways to automatically start a new file after so many MB or GB, or after a set number of iterations?

Maybe after 10,000 entries write to a new file: others1, others2, others3, etc. I can then run another server to import and alpha-sort, similar to how the source database is provided from loyce.club, and save to a searchable index in a bloom filter for later fast lookups and routine balance checks?
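A minimal sketch of what the count-based rollover could look like. The `RotatingWriter` name, the filename scheme, and the `max_entries` default are mine for illustration; this is not code from Plutus itself.

```python
import os

class RotatingWriter:
    """Append lines to others.txt, then others-2.txt, others-3.txt, ...
    rolling over after max_entries lines per file (sketch; names are
    illustrative, not from the Plutus source)."""

    def __init__(self, basename="others", max_entries=10_000):
        self.basename = basename
        self.max_entries = max_entries
        self.index = 1
        self.count = 0
        self.fh = open(self._path(), "a")

    def _path(self):
        suffix = "" if self.index == 1 else f"-{self.index}"
        return f"{self.basename}{suffix}.txt"

    def write(self, line):
        self.fh.write(line + "\n")
        self.count += 1
        if self.count >= self.max_entries:
            # finish the current write, close out, open the next file
            self.fh.close()
            self.index += 1
            self.count = 0
            self.fh = open(self._path(), "a")

    def close(self):
        self.fh.close()
```

The sorted-file-plus-bloom-filter layout mirrors how the loyce.club address dumps are usually consumed: sort once, then do cheap membership checks.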

Just wondering how far this can be taken with brute force over time... how big the pool might get - and, for that matter, whether I can hack together a getwork pool that can generate tasks instead of making the main service generate the random seeds to process.

Also - why did you remove all the detailed comments from the code? To speed things up? Just curious - the code I looked at a while back was pretty easy to follow and learn from... loved it.

GoZippy avatar Jan 18 '23 21:01 GoZippy

FYI, with my one server running, the others file grows at about 1 MB per second. I have 42 more servers doing nothing; I would love to see if there is a way to make them all work together and not duplicate the same work....

GoZippy avatar Jan 18 '23 21:01 GoZippy

Adding and contributing to a global list of already computed private keys is possible.

It could save time. I will have to test it out.

Isaacdelly avatar Jan 20 '23 04:01 Isaacdelly

I have code to save to others.txt.... the file size grows very fast.

Currently working on wrapping the multiprocess parallel compute loop into a pyopencl container so we can test speed on those devices - which will make the file grow even faster... so I am thinking about a nested loop to check the others.txt file size: if it is more than 500 MB, finish the current write, close out that file, and open others-2, incrementing from there....
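The size check described here could use the file handle's own position instead of re-statting the file on every write. This is a sketch; the 500 MB threshold and `others-N.txt` naming come from the comment above, while the `append_with_rollover` helper and its `state` dict are assumptions of mine.

```python
def append_with_rollover(line, state, max_bytes=500 * 1024 * 1024):
    """Append one line; once the file passes max_bytes, finish the
    current write, close the file, and open the next others-N.txt.
    state holds the open handle and the current file index."""
    fh = state["fh"]
    fh.write(line + "\n")
    # fh.tell() is the byte position after the write, i.e. the file
    # size in append mode, so no extra os.stat() call is needed
    if fh.tell() >= max_bytes:
        fh.close()
        state["index"] += 1
        state["fh"] = open(f"others-{state['index']}.txt", "a")
```

Checking `fh.tell()` after each write means the rollover never splits a line across two files, which matters if another server is importing the closed files.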

Then I can use another server with access to the file system to pull the latest filename minus one and process it for importing into a common DB that can be sorted quickly and searched. Then we can add a bloom filter on top and set up some tables to speed up checking the master DB... anyhow, this is just for me to learn right now, and I am working on several optimizations...
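For the bloom-filter layer, a stdlib-only sketch is below. The sizing defaults are arbitrary placeholders; in practice `m_bits` and `k` would be tuned to the expected entry count and acceptable false-positive rate. Remember a bloom filter can report false positives but never false negatives, so a hit still needs a confirming lookup in the master DB.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter sketch: m bits, k hash positions derived
    from sha256 with different salts. Illustrative only; real
    deployments would size m and k from the expected item count."""

    def __init__(self, m_bits=8_000_000, k=7):
        self.m = m_bits
        self.k = k
        self.bits = bytearray(m_bits // 8 + 1)

    def _positions(self, item):
        # derive k independent bit positions from salted sha256
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        # True may be a false positive; False is always definitive
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))
```

A "not in filter" answer lets the balance-check routine skip the expensive DB query entirely, which is where the speedup comes from.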

I am experimenting with the concurrent.futures module instead of multiprocessing.Pool for better async execution of callables at a higher level, to simplify things... Executor.map() behaves much like Pool.map(), and Executor.submit() returns a concurrent.futures.Future object, which should help prevent bottlenecks.
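One way the switch might look. `check_key` here is a hypothetical stand-in for the real per-seed work (derive a key, test the address against the database); the point is that `as_completed()` hands back results as workers finish, without blocking on submission order.

```python
from concurrent.futures import ProcessPoolExecutor, as_completed

def check_key(seed):
    """Hypothetical stand-in for the real per-seed work
    (derive key, check the resulting address)."""
    return seed, seed % 2 == 0  # placeholder "matched" result

if __name__ == "__main__":
    with ProcessPoolExecutor() as ex:
        futures = [ex.submit(check_key, s) for s in range(100)]
        # as_completed yields futures in finish order, so a slow
        # worker never holds up the others' results
        for fut in as_completed(futures):
            seed, matched = fut.result()
            if not matched:
                pass  # e.g. append to others.txt here
```

`ProcessPoolExecutor` sits on top of multiprocessing, so the raw throughput is similar; the gain is mostly the simpler Future-based API.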

Looking into integrating Pool.imap() instead of Pool.map(), as well as the Pool.imap_unordered() method. Unordered might be faster since writing is not held up...
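The unordered variant sketched below; `derive` is a placeholder for the real work. `imap_unordered()` yields each result as soon as any worker finishes it, so the writer loop is never stalled waiting on an earlier, slower chunk the way `Pool.map()` would be.

```python
from multiprocessing import Pool

def derive(seed):
    """Placeholder for the real per-seed derivation work."""
    return seed * seed

if __name__ == "__main__":
    with Pool() as pool:
        # results arrive in completion order, not submission order;
        # chunksize batches seeds to cut inter-process overhead
        for result in pool.imap_unordered(derive, range(1000),
                                          chunksize=64):
            pass  # append result to others.txt as it arrives
```

The trade-off is losing output ordering, which is harmless here since the files get sorted later during the import step anyway.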

Also looking at how shared memory might help with efficiency...
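On the shared-memory idea: `multiprocessing.shared_memory` (Python 3.8+) lets workers attach to one block by name instead of each process copying large read-only data, e.g. a bloom filter's bit array. The `roundtrip` helper is my own illustration of the attach-by-name step a worker would perform.

```python
from multiprocessing import shared_memory

def roundtrip(payload: bytes) -> bytes:
    """Write bytes into a shared block, reattach by name exactly as a
    worker process would, read them back, then release the segment.
    Illustrative helper, not from the Plutus code."""
    shm = shared_memory.SharedMemory(create=True, size=len(payload))
    shm.buf[:len(payload)] = payload
    # a worker in another process attaches with just the name:
    view = shared_memory.SharedMemory(name=shm.name)
    out = bytes(view.buf[:len(payload)])
    view.close()
    shm.close()
    shm.unlink()  # creator is responsible for freeing the segment
    return out
```

With N worker processes this avoids N copies of the lookup structure, which matters once the filter itself is hundreds of MB.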

Anyhow - the others file gets very large - but it's not completely unfathomable to think a community like yours could join forces to build an impressive list of every possible address, over time.

Thoughts on where to upload completed others.txt files for processing and importing into the global list? I could set up a website for that too... and integrate the upload process on a single core stream, then return that core to hashing again... no real idea yet how best to make it work.

Thinking about building a reverse pool to get work instead of generating randoms on the processing nodes... it could be as simple as taking a block and sending a reference string that identifies the start and stop of that work block, then submitting the completed file and asking for the next block...
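The getwork-style scheme described here could be sketched as a coordinator that hands out fixed-size key ranges identified by a `start-stop` reference string and marks them done when a node submits. Everything below (class name, block size, method names) is illustrative, not an existing protocol.

```python
import itertools

class WorkCoordinator:
    """Sketch of a getwork-style pool coordinator: hands out
    sequential key-range blocks with a 'start-stop' reference
    string, and retires them when a node submits results."""

    def __init__(self, block_size=1_000_000):
        self.block_size = block_size
        self._counter = itertools.count()
        self.pending = {}   # ref string -> (start, stop)
        self.done = set()   # ref strings already completed

    def get_work(self):
        """Hand the next untouched block to a node."""
        n = next(self._counter)
        start = n * self.block_size
        stop = start + self.block_size
        ref = f"{start}-{stop}"
        self.pending[ref] = (start, stop)
        return ref, start, stop

    def submit(self, ref):
        """Node reports a block finished; returns False if unknown."""
        if ref in self.pending:
            del self.pending[ref]
            self.done.add(ref)
            return True
        return False
```

Because blocks are assigned sequentially rather than drawn at random on each node, the 42 idle servers could all pull from one coordinator without ever duplicating a range; stale `pending` entries could later be re-issued if a node dies mid-block.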

so many cool ideas out there...

GoZippy avatar Jan 20 '23 14:01 GoZippy

Was also working on a console display of useful info for the user too... have some issues in my code right now (I was writing code at 3 AM, not so smartly).

GoZippy avatar Jan 20 '23 14:01 GoZippy

the file says invalid after downloading

Trackhawk2023 avatar Mar 12 '23 12:03 Trackhawk2023

what file? more details please

GoZippy avatar Mar 13 '23 00:03 GoZippy


Re the distributed database table... looking for ideas on doing a mini project chain with minimal security to publish all known key pairs to test against live chains? Looking for ways to store all the data in a decentralized way - but I have over 10TB of key pairs now... lol. Just having fun learning...

GoZippy avatar Mar 13 '23 00:03 GoZippy