
Inconsistencies in Auxiliary Construction Predictions

Open yihong1120 opened this issue 1 year ago • 0 comments

Dear Authors,

I hope this message finds you well. I am reaching out to discuss a rather intriguing aspect of the AlphaGeometry solver that I've encountered while exploring the remarkable capabilities of your software, as described in the repository.

Whilst the solver has demonstrated profound proficiency in deducing geometric proofs, I've observed some inconsistencies in its auxiliary construction predictions during problem solving. Specifically, the quality and relevance of the auxiliary points or lines proposed by the language model vary between runs, which occasionally leads to suboptimal proof paths or, in some cases, an inability to reach a conclusion within the stipulated time frame.

I am particularly interested in understanding the underlying factors that contribute to these disparities. Is it a consequence of the inherent stochastic nature of the language model, or could it be attributed to the training data and the model's exposure to a diverse range of geometric configurations during its learning phase?
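To make concrete what I mean by stochasticity: if the decoder samples from a softmax with a non-zero temperature, the same input can yield different auxiliary constructions on different runs, whereas greedy (temperature-zero) decoding is deterministic. The sketch below is purely illustrative and makes no claim about AlphaGeometry's actual decoder; the candidate names and scores are hypothetical.

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Sample an index from softmax(logits / temperature); greedy if temperature is 0."""
    if temperature == 0:
        # Greedy decoding: always pick the highest-scoring candidate.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)                     # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = rng.random()
    cum = 0.0
    for i, e in enumerate(exps):
        cum += e / total
        if r < cum:
            return i
    return len(exps) - 1

# Hypothetical scores a model might assign to candidate auxiliary constructions.
candidates = ["midpoint", "foot of perpendicular", "circumcentre", "reflection"]
logits = [2.1, 1.9, 0.5, 0.2]

rng = random.Random(0)
sampled = {candidates[sample_token(logits, 1.0, rng)] for _ in range(20)}
greedy = candidates[sample_token(logits, 0.0, rng)]
print(sampled)  # temperature 1.0: several distinct constructions across draws
print(greedy)   # temperature 0: always "midpoint", the argmax
```

Under sampling, near-tied candidates ("midpoint" vs "foot of perpendicular" here) are both drawn often, which would naturally produce the run-to-run variation I observed.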

Moreover, I am curious about the potential strategies one might employ to enhance the consistency and reliability of the auxiliary construction predictions. Would augmenting the training dataset with a broader spectrum of problems or incorporating additional fine-tuning stages be a viable approach to address this issue?
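One decoding-side mitigation I had in mind is self-consistency style voting: draw several samples and keep the construction proposed most often. Again, this is only a sketch of the idea with a hypothetical sampler standing in for the language model's proposal step, not a description of your pipeline.

```python
import random
from collections import Counter

def most_consistent(sample_fn, k=16, rng=None):
    """Self-consistency voting: draw k proposals and return the most frequent one."""
    rng = rng or random.Random()
    votes = Counter(sample_fn(rng) for _ in range(k))
    return votes.most_common(1)[0][0]

# Hypothetical stand-in for the model's auxiliary-construction sampler.
def propose_construction(rng):
    return rng.choices(
        ["midpoint", "foot of perpendicular", "circumcentre"],
        weights=[0.5, 0.3, 0.2],
    )[0]

print(most_consistent(propose_construction, k=64, rng=random.Random(42)))
```

With a fixed seed the vote is reproducible, and as k grows the majority proposal stabilises, trading extra decoding compute for consistency; whether that helps here, versus training-side changes, is exactly what I am asking about.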

Your insights into these questions would be incredibly valuable, not only for the advancement of automated theorem proving but also for the broader AI and machine learning community's understanding of model predictability and performance optimisation.

Thank you for your time and the impressive work you've shared with the community.

Best regards, yihong1120

yihong1120 · Jan 22 '24 06:01