Jeb
I would be. I'm not sure I understand you fully, but what I do understand from this statement: "How do we measure ourselves and know that we are making sure...
I believe that we should implement a CLI for gpt-engineer. I am still researching and putting a plan together so this message is a bit premature, but I already know...
> Hi @jebarpg
>
> what happened? 😅

I closed the old PR and created a new one.
You will also need to update the model on line 16 of `gpt-engineer/scripts/rerun_edited_message_logs.py`: https://github.com/AntonOsika/gpt-engineer/blob/28cb9dfeaf6e764c5795bd0e3675d2fd1eaf6243/scripts/rerun_edited_message_logs.py#L16 Otherwise no files will be created when running main.py; only all_output.txt will have the chat text...
> You will also need to update the gpt-engineer\scripts\rerun_edited_message_logs.py model line 16
> https://github.com/AntonOsika/gpt-engineer/blob/28cb9dfeaf6e764c5795bd0e3675d2fd1eaf6243/scripts/rerun_edited_message_logs.py#L16
>
> otherwise there will not be any files created when running...
Actually, it would probably solve the issue generically if the user were prompted with a list of available models to choose from as a fallback, instead of forcing a specific...
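A minimal sketch of that fallback idea, assuming nothing about gpt-engineer's internals — the model list, function names, and prompt wording here are all illustrative, not the project's actual API:

```python
from typing import List, Optional

# Illustrative model list; the real set would come from the API or config.
AVAILABLE_MODELS = ["gpt-4", "gpt-3.5-turbo", "gpt-3.5-turbo-16k"]


def resolve_model(requested: str, available: List[str]) -> Optional[str]:
    """Return the requested model if it is available, otherwise None."""
    return requested if requested in available else None


def choose_fallback_model(requested: str,
                          available: List[str] = AVAILABLE_MODELS) -> str:
    """Prompt the user to pick a fallback when the requested model is missing."""
    resolved = resolve_model(requested, available)
    if resolved is not None:
        return resolved
    print(f"Model {requested!r} is unavailable. Choose a fallback:")
    for index, name in enumerate(available, start=1):
        print(f"  {index}. {name}")
    choice = int(input("Enter a number: "))
    return available[choice - 1]
```

The point is that the hard-coded model name becomes a soft preference: if it resolves, nothing changes; if not, the user picks instead of the run failing.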
> Thanks @patillacode for following up here!
>
> I don't see the need for 16k tokens. Has anyone had problems with this?
>
> Also, duplicate code not needed?...
I made a proposal here: https://github.com/AntonOsika/gpt-engineer/pull/123#discussion_r1233150492 @AntonOsika @patillacode
@patillacode @AntonOsika The error `gpt_engineer/main.py:21:91: E501 Line too long (107 > 90 characters)` is not from anything my commit changed. The errors `scripts/rerun_edited_message_logs.py:12:1: E402 Module level import not at top...
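For context, a rough sketch of what the E501 check is doing (the 90-character limit is an assumption taken from the `(107 > 90 characters)` in the error message; E402 just means a module-level import appears after other top-level code):

```python
# Assumed from the error message; flake8's default is 79 unless configured.
MAX_LINE_LENGTH = 90


def violates_e501(line: str, limit: int = MAX_LINE_LENGTH) -> bool:
    """Flag a line longer than the configured limit, like flake8's E501."""
    return len(line) > limit


# The flagged line in main.py was 107 characters, so:
# violates_e501("x" * 107) -> True
```

So the E501 hit on `main.py:21` predates this branch: any line over 90 characters trips it regardless of which commit touched the file.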
> Hi @jebarpg
>
> I marked this PR as a "draft" since you are still making changes. We don't want to trigger CI on every commit nor get spammed!...