randomness-testing-toolkit
Randomness testing toolkit automates running and evaluating statistical testing batteries
Easy setup of the whole infrastructure on a single machine using Docker containers. It should provide the same experience as the currently hosted service.
It would be convenient to have the following utils on the front-end server (147.251.253.249):
- [ ] `screen` so we can run experiments without having to keep ssh running
- ...
http://147.251.253.249/ViewResults/Experiment/6997/ (the same result occurred with an 80 GB experiment, which was deleted because of the issue). "Too long" here probably means indefinitely.
At least two projects would use database access to results: @rozsa117's web service and a generator of results for papers (I am writing something like that now). Ideal result: RTT stores...
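A minimal sketch of what consuming such a results database could look like; the table and column names (`results`, `experiment_id`, `battery`, `test_name`, `p_value`) and the SQLite backend are assumptions for illustration, not the actual RTT schema:

```python
import sqlite3

# Hypothetical schema; the real RTT results storage may use a different
# database engine and different table/column names.
conn = sqlite3.connect("rtt_results.db")

rows = conn.execute(
    """
    SELECT battery, test_name, p_value
    FROM results
    WHERE experiment_id = ?
    ORDER BY battery, test_name
    """,
    (6997,),
).fetchall()

for battery, test_name, p_value in rows:
    print(f"{battery:12s} {test_name:30s} p = {p_value:.4g}")

conn.close()
```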
The form for filtering results uses keyboard shortcuts bound to the arrow keys, which complicates usage (the arrow keys become unusable for normal navigation). Ideally, turn them off.
Allow the user to set a priority (1-5?), so that long-term experiments can be postponed in favor of other experiments. Example: I was running 1000 experiments over the CLI, while...
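A sketch of how priority-aware scheduling could pick the next experiment; the `Job` fields and the "lower value = higher priority" convention are assumptions for illustration, not the actual RTT job model:

```python
import heapq
from dataclasses import dataclass, field
from datetime import datetime

@dataclass(order=True)
class Job:
    # Lower priority value runs sooner; ties are broken by submission time.
    priority: int
    submitted: datetime
    experiment_id: int = field(compare=False)

queue = []
heapq.heappush(queue, Job(5, datetime(2019, 1, 1, 10, 0), 3266))  # long-term batch
heapq.heappush(queue, Job(1, datetime(2019, 1, 1, 12, 0), 7001))  # urgent experiment

next_job = heapq.heappop(queue)
print(f"Next to run: experiment {next_job.experiment_id} (priority {next_job.priority})")
```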
From Lubo's thesis: The reference implementation described in [9] is not used in RTT because it is considerably slower than its optimized counterparts. The faster implementation used in RTT,...
When some of the tests in a battery fail with an extreme p-value (e.g. 1e-300), signal it for the whole battery with (for example) a blue background meaning "You might want to rerun...
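A minimal sketch of the proposed per-battery flag, assuming test results are available as (test name, p-value) pairs; the threshold below is an assumption for illustration, not an RTT constant:

```python
# Threshold below which a p-value counts as "extreme" (an assumption, not an
# RTT constant); such results often indicate a broken run rather than a real failure.
EXTREME_P_VALUE = 1e-100

def battery_needs_rerun(test_pvalues):
    """Return True if any test in the battery produced an extreme p-value."""
    return any(p < EXTREME_P_VALUE for p in test_pvalues.values())

results = {"Frequency": 0.42, "BlockFrequency": 1e-300, "Runs": 0.07}
if battery_needs_rerun(results):
    # The UI could render the whole battery with, e.g., a blue background here.
    print("You might want to rerun this battery.")
```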
http://147.251.253.249/ViewResults/Experiment/3266/ For publication it would be more convenient to restart the jobs that did not finish correctly.
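A sketch of how only the unfinished jobs of an experiment could be re-queued; the `jobs` table, the status values, and the SQLite backend are assumptions, not the actual RTT interface:

```python
import sqlite3

# Hypothetical jobs table; the real RTT schema and status values may differ.
conn = sqlite3.connect("rtt_results.db")
unfinished = conn.execute(
    "SELECT id FROM jobs WHERE experiment_id = ? AND status != 'finished'",
    (3266,),
).fetchall()

for (job_id,) in unfinished:
    # Re-queue the individual job instead of re-creating the whole experiment.
    conn.execute("UPDATE jobs SET status = 'pending' WHERE id = ?", (job_id,))

conn.commit()
conn.close()
```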
How much do Dieharder / NIST STS results vary when run multiple times? EDIT: How do battery settings influence test results?
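One way to quantify the run-to-run variability would be to repeat a battery several times and look at the spread of the number of failed tests. The sketch below uses a placeholder `run_battery()` that fakes p-values; the significance level and the number of p-values per run are assumptions, not battery defaults:

```python
import random
import statistics

ALPHA = 0.01  # significance level per test (assumption)

def run_battery(seed):
    """Placeholder for an actual Dieharder / NIST STS run; returns fake p-values."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(100)]  # arbitrary number of per-test p-values

failed_counts = []
for seed in range(10):  # repeat the battery 10 times
    pvalues = run_battery(seed)
    failed_counts.append(sum(p < ALPHA for p in pvalues))

print(f"failed tests per run: {failed_counts}")
print(f"mean = {statistics.mean(failed_counts):.2f}, "
      f"stdev = {statistics.stdev(failed_counts):.2f}")
```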