
Feature proposal: Ability to save game results after every game or game pair.

Open Claes1981 opened this issue 3 years ago • 4 comments

Is there any other reason for this than some computation time between iterations? That is, will the result be different compared to doing 25 times more iterations with 2 games per iteration?

The main reason is indeed the computation time between iterations. But there is also an additional consideration. In high dimensions, it becomes more and more difficult to constrain the kernel hyperparameters using marginal prior distributions (see Michael Betancourt’s excellent tutorial). By reducing the noise as much as possible, we are constraining the space of models consistent with the data and making it easier for the model to fit the data, ultimately reducing the number of iterations needed.

Originally posted by @kiudee in https://github.com/kiudee/chess-tuning-tools/issues/87#issuecomment-680967041

I can personally tolerate computation time that is not longer than game playing time, I think. (In my current experiment I estimate that 2 games take around 1 hour to play. :) )

However, if the total number of required games increases considerably with only 2 games per iteration compared to 50 games per iteration, it would be very useful if you could somehow save the results in between, in case you need to pause the experiment during iterations. Losing an average of about 12 hours of games every time you need to do something else on the computer is less desirable...

Claes1981 avatar Aug 27 '20 17:08 Claes1981

In this CLOP software the game results are saved after each played game, and you can abort and resume the tuning after each game.

I have no idea if that would be easy to translate to or implement in Chess Tuning Tools.

Claes1981 avatar Aug 27 '20 17:08 Claes1981

True, currently we don’t save the results of individual matches. I will note it as a potential improvement (albeit one which is not that straightforward to implement).

For games this long it is of course fine to use fewer games per iteration. The model even works for just 1 round (2 games). The "50-100" games recommendation was aimed more at shorter games, or at many parallel long time control games on fishtest, where you would want to avoid long timeouts in between iterations. I have already used chess-tuning-tools for up to 2000 iterations, where the slowdown is ~1-2 minutes of computation per iteration.

kiudee avatar Aug 27 '20 21:08 kiudee

Thanks, I will continue to use 2 games per iteration then, hoping that the total required number of games won't increase too dramatically compared to more games per iteration.

In my current 6 parameter experiment, I think I would be satisfied if I could reach a result where there is about an 80% chance that the "Current optimum" parameters lie within 5 Elo of the true absolute optimum.

Claes1981 avatar Aug 28 '20 11:08 Claes1981

For the record: I briefly mentioned a very general idea of how to implement this feature in this comment (some loop inside the main optimization loop), but I haven't yet figured out how to implement it with the current out_exp and run_match syntaxes.

The idea is to store the score received so far in a file if interrupted during the iteration, and to call run_match repeatedly for every game pair. I assume the noise estimation should then be calculated over all games of the iteration combined, once the iteration is finished. Maybe that is possible somehow by calling the counts_to_penta function from cli.py; I don't know yet.
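The idea above can be sketched roughly as follows. This is only an illustration, not the actual chess-tuning-tools code: `play_game_pair` is a hypothetical stand-in for a per-pair call to `run_match` (whose real signature differs), and the JSON checkpoint format is made up for the example. The point is just the loop structure: persist the running counts after every game pair, resume from the checkpoint on restart, and only aggregate for noise estimation once the whole round has finished.

```python
import json
import os

CHECKPOINT = "round_results.json"


def play_game_pair(pair_index):
    # Placeholder for the real engine match. In chess-tuning-tools this
    # would be a call to run_match for one game pair; here we just return
    # fixed (wins, losses, draws) counts so the sketch is runnable.
    return (1, 0, 1)


def load_checkpoint():
    # Resume from partial results if a previous run was interrupted
    # mid-iteration; otherwise start the round from scratch.
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"pairs_done": 0, "wins": 0, "losses": 0, "draws": 0}


def run_round(total_pairs):
    state = load_checkpoint()
    for i in range(state["pairs_done"], total_pairs):
        w, l, d = play_game_pair(i)
        state["wins"] += w
        state["losses"] += l
        state["draws"] += d
        state["pairs_done"] = i + 1
        # Persist after every pair, so an interruption loses at most
        # one pair of games instead of the whole iteration.
        with open(CHECKPOINT, "w") as f:
            json.dump(state, f)
    # Round complete: noise estimation would now be computed over the
    # combined counts of the whole iteration (e.g. something along the
    # lines of counts_to_penta), and the checkpoint cleared so the next
    # iteration starts fresh.
    os.remove(CHECKPOINT)
    return state
```

If the process dies between pairs, the next call to `run_round` picks up at `pairs_done` rather than replaying the finished games, which is the behavior CLOP provides out of the box.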

Update: I have now implemented saving and resuming after each round in my fork. (There might still be bugs though...)

Claes1981 avatar Jul 28 '21 18:07 Claes1981