Request errors on initial use: Missing valid openai response
I'm really excited about the concept here, but after the initial setup I ran into some errors on Linux Mint 20.3 (Ubuntu Focal).
Steps taken:
- Install with `npm i -g smol-dev-js`
- Run `smol-dev-js setup`
- Chose OpenAI (on the waitlist for Anthropic!)
- Entered API key
- Set remaining settings for the project I was in
- Run `smol-dev-js run`
Output from terminal:
$ smol-dev-js run
--------------------
🗣️ [ai]: hi its me, the ai dev ! you said you wanted
here to help you with your project, which is a ....
--------------------
CityCoins are cryptocurrencies that allow you to support your favorite cities while earning Stacks and Bitcoin.
--------------------
🗣️ [ai]: What would you like me to do? (PS: this is not a chat system, there is no chat memory prior to this point)
❔ [you]: … Suggest something please
🗣️ [ai]: (node:147273) ExperimentalWarning: The Fetch API is an experimental feature. This feature could change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
## Unable to handle prompt for ...
{"model":"gpt-4","temperature":0,"max_tokens":7905,"top_p":1,"frequency_penalty":0,"presence_penalty":0,"messages":[{"role":"system","content":"You are an assistant, who can only reply in JSON object, reply with a yes (in a param named 'reply') if you understand"},{"role":"assistant","content":"{\"reply\":\"yes\"}"},{"role":"user","content":"[object Object]\n[object Object]\n[object Object]\n[object Object]\n[object Object]"}]}
## Recieved error ...
[invalid_request_error] undefined
## Unable to handle prompt for ...
{"model":"gpt-4","temperature":0,"max_tokens":7873,"top_p":1,"frequency_penalty":0,"presence_penalty":0,"messages":[{"role":"system","content":"You are an assistant, who can only reply in JSON object, reply with a yes (in a param named 'reply') if you understand"},{"role":"assistant","content":"{\"reply\":\"yes\"}"},{"role":"user","content":"[object Object]\n[object Object]\n[object Object]\n[object Object]\n[object Object]"},{"role":"user","content":"Please update your answer, and respond with only a single JSON object, in the requested format. No apology is needed."}]}
## Recieved error ...
[invalid_request_error] undefined
Error: Missing valid openai response, please check warn logs for more details
at getChatCompletion (/home/whoabuddy/.nvm/versions/node/v18.12.1/lib/node_modules/smol-dev-js/node_modules/ai-bridge/src/openai/getChatCompletion.js:290:8)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async Object.promiseGenerator (/home/whoabuddy/.nvm/versions/node/v18.12.1/lib/node_modules/smol-dev-js/node_modules/ai-bridge/src/AiBridge.js:249:11)
Last Completion null
Error: Missing valid openai response, please check warn logs for more details
at getChatCompletion (/home/whoabuddy/.nvm/versions/node/v18.12.1/lib/node_modules/smol-dev-js/node_modules/ai-bridge/src/openai/getChatCompletion.js:290:8)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async Object.promiseGenerator (/home/whoabuddy/.nvm/versions/node/v18.12.1/lib/node_modules/smol-dev-js/node_modules/ai-bridge/src/AiBridge.js:249:11)
Also noticed `Recieved` is misspelled in the error above :microscope:
How can I check the warn logs that were mentioned?
Can you check whether your OpenAI account has access to the GPT-4 model from your API key? List all the available models for your account with curl:

```shell
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```

and see whether the response contains the `gpt-4` model.
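If you prefer to check programmatically, here is a minimal sketch in plain Node. The `hasModel` helper is hypothetical (not part of smol-dev-js or the OpenAI SDK); it just scans the `data` array that the `/v1/models` endpoint returns:

```javascript
// Hypothetical helper (not part of smol-dev-js): check a /v1/models
// response body for a given model id.
function hasModel(modelsResponse, modelId) {
  return (modelsResponse.data || []).some((m) => m.id === modelId);
}

// Example with a trimmed-down response body:
const sample = { data: [{ id: "gpt-3.5-turbo" }, { id: "gpt-4" }] };
console.log(hasModel(sample, "gpt-4"));     // → true
console.log(hasModel(sample, "gpt-4-32k")); // → false
```

Feed it the parsed JSON from the curl call above to confirm whether `gpt-4` is on your account.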
+1. I have subscribed at OpenAI, have an API key, and have access to a very long list of models, including `gpt-3.5` and `gpt-3.5-turbo`.
However, no access to `gpt-4`.
The prerequisites "get API key" and "pay-as-you-go account with billing" are NOT enough for gpt-4 to be usable as the configured OpenAI model.
Thanks @jojonoparat. Looking at the list, gpt-4 isn't on there; I assumed it would be, but I've just signed up for the waitlist. Is that the problem? Could this work with 3.5?
I have not tried it myself, but maybe you could hard-code it in this file by adding or changing a line like this:

```javascript
model = "gpt-3.5-turbo" // <-- your available model name
```
The current prompts were not designed for gpt-3.5's context size, so they may produce very unexpected behaviour.
thank you both for the quick replies
@jojonoparat that makes sense; looking there, you could also use a prompt (per L35) that would override the config :sunglasses:
@PicoCreator in that case I'll leave it as-is and hope for access to Claude or GPT-4 soon!
It would be helpful if there was:
- ~~a small warning / set of instructions to check available models in readme~~ I see the update now!
- an error message that indicates the gpt-4 model isn't found / available
While waiting for 4.0, I tried the override (hard-coded to `gpt-3.5-turbo-16k-0613`).
No change, except the literal value in the error message...
## Recieved error ...
[invalid_request_error] undefined
## Unable to handle prompt for ...
{"model":"gpt-3.5-turbo-16k-0613","temperature":0,"max_tokens":3923,"top_p":1,"frequency_penalty":0,"presence_penalty":0,"messages":[{"role":"system","content":"You are an assistant, who can only reply in JSON object, reply with a yes (in a param named 'reply') if you understand"},{"role":"assistant","content":"{\"reply\":\"yes\"}"},{"role":"user","content":"[object Object]\n[object Object]\n[object Object]\n[object Object]\n[object Object]"},{"role":"user","content":"Please update your answer, and respond with only a single JSON object, in the requested
I'm having the same problem. The weird thing is, the prompt works at first and GPT correctly lays out the plan. But once I actually confirm the plan and it tries to execute it, it breaks. So yeah, seems to be ai-bridge.
🗣️ [ai]: Working on the plan ...
🗣️ [ai]: Studying 0 dependencies (in parallel)
🗣️ [ai]: Performing any required modules install / file moves / deletion
🗣️ [ai]: Studying 0 dependencies (awaiting in parallel)
🗣️ [ai]: Preparing summaries for smol-er sub-operations ...
## Unable to handle prompt for ...
{"model":"gpt-4","temperature":0,"max_tokens":5394,"top_p":1,"frequency_penalty":0,"presence_penalty":0,"messages":[{"role":"user","content":"You are an AI developer...."}]}
## Recieved error ...
[tokens] undefined
Error: Missing valid openai response, please check warn logs for more details
at getChatCompletion (/Users/liam/.asdf/installs/nodejs/19.2.0/lib/node_modules/smol-dev-js/node_modules/ai-bridge/src/openai/getChatCompletion.js:290:8)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async Object.promiseGenerator (/Users/liam/.asdf/installs/nodejs/19.2.0/lib/node_modules/smol-dev-js/node_modules/ai-bridge/src/AiBridge.js:249:11)
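For what it's worth, the `[tokens]` error above looks like the requested `max_tokens` (5394) plus the prompt exceeds the model's context window. A minimal sketch of the kind of clamping that would avoid this, assuming gpt-4's 8192-token window (the function name and shape are hypothetical, not ai-bridge's actual API):

```javascript
// Hypothetical sketch (not ai-bridge's actual API): clamp max_tokens so
// that prompt + completion fit inside the model's context window.
function clampMaxTokens(requestedMaxTokens, promptTokens, contextWindow) {
  const available = contextWindow - promptTokens;
  if (available <= 0) {
    throw new Error(
      `Prompt (${promptTokens} tokens) already fills the ${contextWindow}-token context window`
    );
  }
  return Math.min(requestedMaxTokens, available);
}

// e.g. gpt-4's 8192-token window with a ~3000-token prompt:
console.log(clampMaxTokens(5394, 3000, 8192)); // → 5192
```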