Junyan Xu
FYI, there's this [recent work](https://nanothoughts.substack.com/p/reflecting-on-reflexion) that achieves 88% pass@1 on HumanEval.
I ran the same agent three times, and only in my last try did it get past the second thinking step: https://gist.github.com/alreadydone/82d8a00cb418fb29540c9e4d2e69dfe2
Those empty functions remind me of [AI functions](https://www.askmarvin.ai/guide/concepts/ai_functions/) :) By the way, I'm very excited to see many of [my thoughts](https://github.com/alreadydone/contents/issues/3#issue-1609619158) being implemented here.
Is there interest in using a "batch manager" such as https://bors.tech/? It's used in [an open-source project on GitHub](https://github.com/leanprover-community/mathlib) that I participated in. Basically, once maintainers approve a PR, they can...
I mentioned Galactica [here](https://twitter.com/Junyan_Xu/status/1663744311761018880). The 30B version apparently performs better than LLaMA 65B on MATH. (Edit: Oh I see [it's now in TODO](https://github.com/FranxYao/chain-of-thought-hub/blob/main/resources/todo.md))
Registry data can be exported and imported as .reg files in text form, IIRC; I don't know how convenient that is in this case, though.
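As a rough sketch of what that text form looks like (the key path and value names below are hypothetical examples, not from the project in question), a subtree can be round-tripped with `reg export` / `reg import`:

```
; exported with:    reg export "HKEY_CURRENT_USER\Software\ExampleApp" example.reg
; re-imported with: reg import example.reg
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\ExampleApp]
"InstallPath"="C:\\ExampleApp"
"EnableLogging"=dword:00000001
```

Since the format is plain text, the exported file can be edited by hand before re-importing.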
maintainer merge
Your nets are 128x10 and 192x15, respectively. These are trained with 7.5 komi, right? @hzyhhzy tested the 9x9 net with his series of nets (using --noponder -p50 -r10 -t1 -m10)...
I don't know them personally, but at least two people in [this QQ group](https://jq.qq.com/?_wv=1027&k=5DPdsJM) have trained 13x13 nets.
The QQ group is "Leelaz7路&9路&13路" (group number 225292625); a link was actually posted above.