
Can the newest code implement stm komi?

Open pangafu opened this issue 4 years ago • 5 comments

I noticed that in patch-39, the return type of `Network::gather_features` changed from `std::vector` to `std::vector<uint8_t>`.

So is there a way to implement the stm komi code? Thanks!

pangafu avatar Oct 11 '19 07:10 pangafu

I think separating the GPU workers and batch sizes may work better for stm komi. Can you implement stm komi on top of the newest code?

pangafu avatar Oct 11 '19 07:10 pangafu

In order to change the stm (color) planes in patch-39, you need to modify the fourth and fifth parameters of forward0 (btm and wtm): https://github.com/alreadydone/lz/blob/70a8aff1aedc9e2af5556abd928cc480fe358078/src/OpenCLScheduler.cpp#L338 https://github.com/alreadydone/lz/blob/70a8aff1aedc9e2af5556abd928cc480fe358078/src/Network.cpp#L815-L816 https://github.com/alreadydone/lz/blob/70a8aff1aedc9e2af5556abd928cc480fe358078/src/UCTSearch.cpp#L416 https://github.com/alreadydone/lz/blob/70a8aff1aedc9e2af5556abd928cc480fe358078/src/UCTSearch.cpp#L957

When I get a chance I'll try to implement dynamic komi over patch-39, and you are certainly welcome to implement it in the meantime.

Regarding workers and batch sizes: the official branch uses search threads that can send positions to any of the GPUs, while my branch (patch-39 etc.) assigns dedicated worker threads to each GPU and allows the number of worker threads and the batch size to be configured separately for each GPU. My approach reduces contention between threads and achieves a higher n/s with many GPUs, but I don't see why it would be better for stm komi.

alreadydone avatar Oct 11 '19 17:10 alreadydone

The official branch searches too wide when the batch size is large, and stm komi is not well trained, so many low-playout positions in the search will produce bad values. Limiting the worker number may therefore make the search more reasonable.

I'll wait for your stm komi code. Thanks a lot!

pangafu avatar Oct 12 '19 00:10 pangafu

Also, in my tests on patch-39 with the official weights, if I increase the worker number above 2 (e.g. to 3), GPU usage increases and pos/s increases too, but it can't beat worker number = 2.

So I think the current weights have many faulty values at low-playout positions, because a low playout count means the network is not well trained there, and searching too wide may expose more of those faults.

pangafu avatar Oct 12 '19 00:10 pangafu

Also, in my stm komi tests with the official branch's stm komi code, using 4 or 8 GPUs with batch size > 8 gives lower handicap capability than 1 GPU with batch size 2 or 3 run for a long time.

So maybe stm komi is not suited to searching that wide.

pangafu avatar Oct 12 '19 00:10 pangafu