
Improve caching mechanism for prompt generation and evaluation

Open ZenomHunter123 opened this issue 2 years ago • 3 comments

Changes:

  • Refactor the existing caching mechanism in evals/utils.py to use a more efficient and flexible data structure, such as an LRU cache, to store prompt and evaluation results (a minimal sketch follows this list).
  • Introduce a new cache eviction policy to ensure that only the most relevant and frequently used data is retained in the cache, optimizing memory usage.
  • Update the cache invalidation logic to handle edge cases where data may not be properly updated, causing inconsistencies when running evaluations.
  • Add unit tests to verify the functionality and performance of the new caching mechanism.
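
To make the proposal concrete, here is a minimal sketch of what such an LRU cache could look like. The class name `LRUCache`, the `capacity` default, and the `get`/`put`/`invalidate` methods are illustrative assumptions, not the actual evals/utils.py interface:

```python
from collections import OrderedDict
from typing import Any, Hashable


class LRUCache:
    """Minimal LRU cache sketch for prompt/evaluation results.

    Illustrative only; the real interface in evals/utils.py may differ.
    """

    def __init__(self, capacity: int = 1024) -> None:
        self.capacity = capacity
        self._store: "OrderedDict[Hashable, Any]" = OrderedDict()

    def get(self, key: Hashable, default: Any = None) -> Any:
        if key not in self._store:
            return default
        # Mark the entry as most recently used.
        self._store.move_to_end(key)
        return self._store[key]

    def put(self, key: Hashable, value: Any) -> None:
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = value
        # Evict the least recently used entry when over capacity.
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)

    def invalidate(self, key: Hashable) -> None:
        # Drop a stale entry so a fresh result is fetched next time.
        self._store.pop(key, None)
```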
Benefits:

The proposed changes will provide the following benefits:

  • Reduced API calls: By caching prompt and evaluation results more efficiently, we can minimize the number of API calls required during evaluation, reducing costs and speeding up the process (see the usage sketch after this list).
  • Improved resource optimization: An LRU cache with an appropriate eviction policy ensures the cache always holds the most relevant data while optimizing memory usage.
  • Increased evaluation consistency: Properly handling edge cases in cache invalidation will produce more consistent evaluation results and minimize discrepancies caused by stale data in the cache.
  • Easier maintenance and extensibility: Refactoring the caching mechanism and adding comprehensive unit tests will make it easier for future contributors to understand and extend the caching functionality, ensuring the continued success of the Evals framework.
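
As a rough illustration of the first benefit, a caller could consult the cache before issuing an API request. This assumes the `LRUCache` sketch above; `call_model_api` is a hypothetical stand-in for the real completion call, and `evaluate_prompt` is not an existing evals function:

```python
# Hypothetical usage of the LRUCache sketch above.
cache = LRUCache(capacity=512)


def call_model_api(prompt: str) -> str:
    # Stand-in for the real (expensive) completion call.
    return f"completion for: {prompt}"


def evaluate_prompt(prompt: str) -> str:
    cached = cache.get(prompt)
    if cached is not None:
        return cached  # Cache hit: no API call is made.
    result = call_model_api(prompt)
    cache.put(prompt, result)
    return result


# Only the first call pays for the API request; the second is served from cache.
print(evaluate_prompt("2 + 2 = ?"))
print(evaluate_prompt("2 + 2 = ?"))
```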

I kindly request that you review this pull request and consider granting me GPT-4 access upon acceptance. My email address associated with this contribution is: [email protected]

ZenomHunter123 avatar Mar 15 '23 02:03 ZenomHunter123

@ZenomHunter123 It looks like you reversed the branches in your PR. This PR would result in merging main INTO your branch UpdateEvalTemplate.

M1kep avatar Mar 15 '23 05:03 M1kep

Thank you for the reply.

Best Regards, Kristian


ZenomHunter123 avatar Mar 15 '23 15:03 ZenomHunter123

@ZenomHunter123 Will you correct the target branch?

Ein-Tim avatar Mar 17 '23 08:03 Ein-Tim

@andrew-openai This PR targets the wrong branch and the author hasn't responded. I suggest closing it.

Ein-Tim avatar Mar 18 '23 09:03 Ein-Tim