agentic
add support to send array of messages by role
Describe the feature
I have observed that this library, to simplify the integration, only allows sending text as a message.
However, OpenAI's chat API allows you to send an array of messages:
[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Who won the world series in 2020?"},
{"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
{"role": "user", "content": "Where was it played?"}
]
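For context, here is a minimal sketch of the request-body shape that the OpenAI chat completions endpoint (`POST /v1/chat/completions`) expects when you send a message array directly. The `buildChatRequest` helper is a hypothetical illustration, not part of this library:

```typescript
// Sketch of the body for POST https://api.openai.com/v1/chat/completions.
// `buildChatRequest` is a hypothetical helper, not part of this library.
type ChatRole = 'system' | 'user' | 'assistant'

interface ChatMessage {
  role: ChatRole
  content: string
}

function buildChatRequest(
  messages: ChatMessage[],
  model = 'gpt-3.5-turbo'
): { model: string; messages: ChatMessage[] } {
  return { model, messages }
}

const request = buildChatRequest([
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'Who won the world series in 2020?' },
  { role: 'assistant', content: 'The Los Angeles Dodgers won the World Series in 2020.' },
  { role: 'user', content: 'Where was it played?' }
])

console.log(JSON.stringify(request, null, 2))
```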
If you find it interesting and don't have time to do it, I could open a PR.
@bernatvadell we're already sending an array of messages under the hood and handle creating this array for you, taking into account token limits.
If people want to bypass this and access the underlying chat completions API directly, we could add another method to do so, but I don't really see it being too useful in practice.
Like @transitive-bullshit said, all you need is to change the "system" role message (if you want), the rest has been already handled under the hood. To ask chatGPT API to follow the conversation, use "parentMessageId" as shown in this demo. https://github.com/transitive-bullshit/chatgpt-api/blob/main/demos/demo-conversation.ts
I'm not sure if the 'system' role message will handle my use case. Like @bernatvadell illustrates, it's sometimes useful to 'prime' a conversation with all three roles (system, user, assistant). This is especially useful when using few-shot examples as a means of implementing 'tool formers'.
as an example primed dialog:
[
{"role": "system", "content": "you are a scoring system that will provide a score between 1 and 10 based on the similarity of two words provided. You will return two and only two items in this format [{score, reason}] where reason is the justification for the score"},
{"role": "user", "content": "[apple, orange]"},
{"role": "assistant", "content": "[{score: 6, reason: 'both are common fruits eaten by many people throughout the world'}]"}
]
With that 'priming of the pump', ChatGPT will continue to respond to two-word pairs in the same syntax it was shown in the 'assistant' role above.
Forgive me if I'm misunderstanding and this is inherently supported already.
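Concretely, this kind of few-shot priming amounts to prepending the preset turns before each new user message. A hedged sketch (`primeConversation` and `preset` are hypothetical names, not library API):

```typescript
interface Message {
  role: 'system' | 'user' | 'assistant'
  content: string
}

// Hypothetical few-shot "preset" dialog, abbreviated from the example above.
const preset: Message[] = [
  { role: 'system', content: 'You are a scoring system for word-pair similarity.' },
  { role: 'user', content: '[apple, orange]' },
  { role: 'assistant', content: "[{score: 6, reason: 'both are common fruits'}]" }
]

// Prepend the primed turns before the new pair to score, so the model
// continues in the same format it was shown.
function primeConversation(preset: Message[], userInput: string): Message[] {
  return [...preset, { role: 'user', content: userInput }]
}

const messages = primeConversation(preset, '[car, truck]')
console.log(messages.map((m) => m.role).join(',')) // system,user,assistant,user
```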
I am also confused on how to use all three roles for my use case
This is a great use case, and I'm planning on adding support for sending messages
directly soon.
If anyone gets there before me and opens a PR, that's awesome :)
I followed the format of systemMessage and added a new presetMessage field, which works for me. I'm not sure if there are any boundary cases I haven't considered. If possible, I will submit a PR.
https://github.com/transitive-bullshit/chatgpt-api/blob/5fef0f6eadbbc4cfcb83df2d752c4b263dececcc/src/chatgpt-api.ts#L326
if (systemMessage) {
  messages.push({
    role: 'system',
    content: systemMessage
  })
}
if (presetMessage) {
  messages.push(...presetMessage)
}
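Self-contained, the logic in that snippet could be exercised like this. This is only a sketch: `presetMessage` is the commenter's proposed field, not a released API, and `buildMessages` is a hypothetical wrapper:

```typescript
interface ChatMessage {
  role: 'system' | 'user' | 'assistant'
  content: string
}

// Mirrors the snippet above: an optional system message first, then any
// preset (few-shot) messages, then the live user turn.
function buildMessages(
  systemMessage: string | undefined,
  presetMessage: ChatMessage[] | undefined,
  userText: string
): ChatMessage[] {
  const messages: ChatMessage[] = []
  if (systemMessage) {
    messages.push({ role: 'system', content: systemMessage })
  }
  if (presetMessage) {
    messages.push(...presetMessage)
  }
  messages.push({ role: 'user', content: userText })
  return messages
}

const msgs = buildMessages(
  'You are a helpful assistant.',
  [
    { role: 'user', content: 'hi' },
    { role: 'assistant', content: 'hello!' }
  ],
  'Who won the world series in 2020?'
)
console.log(msgs.length) // 4
```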
agreed that I'd love to support this as a priority feature. I don't think it'd be too difficult to implement.
Or, depending on #489, this may be the only way to invoke this API going forward with a new major version bump.
we may still want to support the easier abstraction which handles state for you (a lot of users of this package want this because the state management part is tricky for them), but I 100% agree that we should at least expose an API which supports sending the messages array directly.
thanks for bringing this up @bernatvadell 🙏 my comment here applies to this issue as well
PRs very welcome btw 💯 ideally smaller, more focused PRs to start out with
Looking forward to this feature... it's very useful. Multiple system messages can effectively improve ChatGPT's performance.
@transitive-bullshit thanks for this library. Wondering if there are any updates on supporting roles?
This project is undergoing a major revamp; closing out old issues as part of the prep process.
The chatgpt package is pretty outdated at this point, and message array support is sorely lacking. I recommend that you use the openai package or the openai-fetch package instead.