gpt-engineer
Run the benchmark with GPT 3.5 over different `--steps-config`
We have scripts/benchmark.py.
If we run it over more configs and store the results in RESULTS.md, we will clearly see what works and what does not.
It would also be great to have the script ask "did it work?" after each run and record the answer in a markdown table like benchmark/RESULTS.md (and maybe append some metadata to that file!)
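The "did it work?" idea could be sketched like this. This is a hypothetical helper, not code from scripts/benchmark.py; the function name, column layout, and metadata choice are all assumptions:

```python
import datetime


def record_result(results_path, config_name, model):
    # Ask the human reviewer whether the generated code actually worked.
    answer = input("Did it work? [y/n] ").strip().lower()
    worked = "✅" if answer == "y" else "❌"
    # Append one markdown table row, with the date as simple metadata.
    row = f"| {datetime.date.today()} | {model} | {config_name} | {worked} |\n"
    with open(results_path, "a") as f:
        f.write(row)
```

Each benchmark run would then append one row to the table in benchmark/RESULTS.md, so results accumulate across configs over time.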
Can we use gpt-engineer with GPT-3.5?
Yes, but it will not work as well.
```
$ gpt-engineer --model gpt-3.5-turbo
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.11/bin/gpt-engineer", line 8, in
```
Can we use gpt-engineer with GPT-3.5?
I would assume you would just change the model in the main.py file.
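A minimal sketch of that kind of change, assuming main.py picks the model name somewhere before constructing the AI wrapper. The function and variable names here are hypothetical, not gpt-engineer's actual code:

```python
import os

# Hypothetical default; gpt-engineer's real entry point may differ.
DEFAULT_MODEL = "gpt-4"


def choose_model(cli_model=None):
    # A CLI flag wins, then an environment override, then the default.
    # gpt-3.5-turbo is cheaper but follows the generation steps less reliably.
    return cli_model or os.getenv("MODEL", DEFAULT_MODEL)
```

With a helper like this, passing `--model gpt-3.5-turbo` on the command line (as in the traceback above) or exporting a `MODEL` variable would both switch models without editing source.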
This is ✅
Applying changes to neither AI.py nor main.py helps; I still get an invalid API token / model version error during Azure chat model creation: openai_api_version=os.getenv("OPENAI_API_VERSION", "2023-05-15"),
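For Azure setups, the error may come from the environment rather than the code. A hedged configuration sketch: apart from `OPENAI_API_VERSION`, which appears in the snippet above, these variable names follow the openai-python library's Azure conventions and are not confirmed against gpt-engineer's source:

```shell
# Assumed Azure OpenAI environment; replace the placeholders with your
# own resource name and key before running gpt-engineer.
export OPENAI_API_TYPE="azure"
export OPENAI_API_VERSION="2023-05-15"
export OPENAI_API_BASE="https://<your-resource>.openai.azure.com"
export OPENAI_API_KEY="<your-key>"
```

If these are unset or stale, the client can fail with invalid-token or version errors even when the Python code itself is correct.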