ToRA
ToRA is a series of Tool-integrated Reasoning LLM Agents designed to solve challenging mathematical reasoning problems by interacting with tools [ICLR'24].
The sampling temperatures do not seem to be mentioned in either the paper or the OpenReview discussion.
Due to resource constraints, I am particularly interested in comparing the performance of ToRA-Code with self-consistency at k=10 and k=20. Could you kindly provide the results for these values...
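For context, self-consistency at a given k is typically implemented as majority voting over the final answers of k independently sampled solutions. A minimal sketch below illustrates the voting step; the function name `self_consistency_vote` is illustrative and not from the ToRA codebase, and generating/parsing the k sampled traces is assumed to happen upstream:

```python
from collections import Counter

def self_consistency_vote(answers):
    """Return the most common final answer among k sampled solutions.

    `answers` is a list of final answers extracted from k independently
    sampled reasoning traces (sampling and answer extraction are assumed
    to happen elsewhere).
    """
    if not answers:
        return None
    counts = Counter(answers)
    best_answer, _ = counts.most_common(1)[0]
    return best_answer

# Example with k = 5 sampled answers:
print(self_consistency_vote(["42", "42", "41", "42", "40"]))  # prints 42
```

Raising k from 10 to 20 only adds more votes to the same tally, so the marginal accuracy gain usually shrinks as k grows.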
Now that models from various LLM providers are achieving gold-medal performance on the IMO, I would love to see an updated version of ToRA do the same.