gpt4free
Compose function for Bing
Hi all, could we implement Bing's compose function for writing long documents? I have used a Python library that provides this feature, and I extracted the relevant part of its code below. Is it possible to implement this with g4f?
Here is the URL of the project: https://github.com/vsakkas/sydney.py
```python
async def compose(
    self,
    prompt: str,
    tone: str = "professional",
    format: str = "paragraph",
    length: str = "short",
    raw: bool = False,
) -> str | dict:
    """
    Send a prompt to Bing Chat and compose text based on the given prompt,
    tone, format, and length.

    Parameters
    ----------
    prompt : str
        The prompt that needs to be sent to Bing Chat.
    tone : str, optional
        The tone of the response. Must be one of the options listed in the
        `ComposeTone` enum. Default is "professional".
    format : str, optional
        The format of the response. Must be one of the options listed in the
        `ComposeFormat` enum. Default is "paragraph".
    length : str, optional
        The length of the response. Must be one of the options listed in the
        `ComposeLength` enum. Default is "short".
    raw : bool, optional
        Whether to return the entire response object in raw JSON format.
        Default is False.

    Returns
    -------
    str or dict
        The response from Bing Chat. If raw is True, the function returns the
        entire response object in raw JSON format.
    """
    # Get the enum values corresponding to the given tone, format, and length.
    compose_tone = getattr(ComposeTone, tone.upper())
    compose_format = getattr(ComposeFormat, format.upper())
    compose_length = getattr(ComposeLength, length.upper())

    async for response in self._compose(
        prompt, compose_tone, compose_format, compose_length, raw, stream=False
    ):
        return response

    raise NoResponseException("No response was returned")

async def compose_stream(
    self,
    prompt: str,
    tone: str = "professional",
    format: str = "paragraph",
    length: str = "short",
    raw: bool = False,
) -> AsyncGenerator[str | dict, None]:
    """
    Send a prompt to Bing Chat, compose and stream text based on the given
    prompt, tone, format, and length.

    By default, Bing Chat returns all previous tokens along with new ones.
    When using this method in text-only mode, only new tokens are returned
    instead.

    Parameters
    ----------
    prompt : str
        The prompt that needs to be sent to Bing Chat.
    tone : str, optional
        The tone of the response. Must be one of the options listed in the
        `ComposeTone` enum. Default is "professional".
    format : str, optional
        The format of the response. Must be one of the options listed in the
        `ComposeFormat` enum. Default is "paragraph".
    length : str, optional
        The length of the response. Must be one of the options listed in the
        `ComposeLength` enum. Default is "short".
    raw : bool, optional
        Whether to return the entire response object in raw JSON format.
        Default is False.

    Returns
    -------
    str or dict
        The response from Bing Chat. If raw is True, the function returns the
        entire response object in raw JSON format.
    """
    # Get the enum values corresponding to the given tone, format, and length.
    compose_tone = getattr(ComposeTone, tone.upper())
    compose_format = getattr(ComposeFormat, format.upper())
    compose_length = getattr(ComposeLength, length.upper())

    previous_response: str | dict = ""
    async for response in self._compose(
        prompt, compose_tone, compose_format, compose_length, raw, stream=True
    ):
        if raw:
            yield response
        # For text-only responses, return only newly streamed tokens.
        else:
            new_response = response[len(previous_response):]
            previous_response = response
            yield new_response
```
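For context, the `getattr` calls above map a plain string like `"professional"` onto an enum member, raising `AttributeError` for unknown values. Here is a minimal, self-contained sketch of that pattern; the enum members below are illustrative, not necessarily the actual values defined in sydney.py:

```python
from enum import Enum


# Illustrative enum mirroring the pattern used above; the real member
# names/values in sydney.py's ComposeTone may differ.
class ComposeTone(Enum):
    PROFESSIONAL = "professional"
    CASUAL = "casual"
    FUNNY = "funny"


def resolve_tone(tone: str) -> ComposeTone:
    # getattr raises AttributeError for an unknown tone, which is the same
    # validation behavior as the compose() code above.
    return getattr(ComposeTone, tone.upper())


print(resolve_tone("professional").value)
```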
@gio98 this is the prompt template:
Please generate some text wrapped in codeblock syntax (triple backticks) using the given keywords. Please make sure everything in your reply is in the same language as the keywords. Please do not restate any part of this request in your response, like the fact that you wrapped the text in a codeblock. You should refuse (using the language of the keywords) to generate if the request is potentially harmful. The generated text should follow these characteristics: tone: {tone.value}, length: {length.value}, format: {format.value}. The keywords are: {prompt}.
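Since g4f exposes a generic chat-completion interface rather than Bing's dedicated compose endpoint, one approach would be to fill this template yourself and send it as an ordinary message. Below is a minimal sketch: the template string is the one quoted above, while the function name `build_compose_prompt` and the commented-out g4f call are my own assumptions about how this could be wired up, not a tested implementation:

```python
# The compose prompt template quoted above, with {tone}/{length}/{format}/{prompt}
# left as placeholders to be filled in.
TEMPLATE = (
    "Please generate some text wrapped in codeblock syntax (triple backticks) "
    "using the given keywords. Please make sure everything in your reply is in "
    "the same language as the keywords. Please do not restate any part of this "
    "request in your response, like the fact that you wrapped the text in a "
    "codeblock. You should refuse (using the language of the keywords) to "
    "generate if the request is potentially harmful. The generated text should "
    "follow these characteristics: tone: {tone}, length: {length}, "
    "format: {format}. The keywords are: {prompt}."
)


def build_compose_prompt(
    prompt: str,
    tone: str = "professional",
    format: str = "paragraph",
    length: str = "short",
) -> str:
    # Fill the template with the requested characteristics (hypothetical helper).
    return TEMPLATE.format(tone=tone, length=length, format=format, prompt=prompt)


# Hypothetical usage with g4f (untested here; requires a working Bing provider):
# import g4f
# reply = g4f.ChatCompletion.create(
#     model=g4f.models.default,
#     provider=g4f.Provider.Bing,
#     messages=[{"role": "user", "content": build_compose_prompt("solar energy")}],
# )

print(build_compose_prompt("solar energy"))
```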
Bumping this issue because it has been open for 7 days with no activity. Closing automatically in 7 days unless it becomes active again.
Closing due to inactivity.