H Lohaus

Results: 272 comments by H Lohaus

Do you know which provider returns the HTML? You can try using another provider.

There are many other LLaMA providers.
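When a provider returns raw HTML (usually a block or error page), one option is to fall back through several providers until one yields plain text. A minimal sketch of that fallback loop, assuming a `create`-style callable such as `g4f.ChatCompletion.create`; the helper name and the HTML check are mine, not part of g4f:

```python
# Sketch: try providers in order until one returns usable text instead of HTML.
# `create` stands in for a call like g4f.ChatCompletion.create; the HTML check
# is a deliberately crude heuristic, not a g4f feature.
def first_good_response(create, providers, messages):
    for provider in providers:
        try:
            text = create(provider=provider, messages=messages)
        except Exception:
            continue  # provider down or rate-limited: try the next one
        if text and not text.lstrip().startswith("<"):
            return provider, text
    return None, None
```

The same loop works with `g4f.Provider` classes in `providers`, since the library accepts a provider object on each call.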

You can have a chat context: simply add your context to the messages. A Conversation API is not planned or wanted. Only Bing has a Conversation API in g4f, but...

Every provider supports conversations via messages.
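The conversation state lives entirely in the `messages` list: each prior user and assistant turn is appended, and the whole list is sent on every call. A minimal sketch (the `add_turn` helper is illustrative, not a g4f function):

```python
# Sketch: chat context is just the accumulated `messages` list.
def add_turn(messages, role, content):
    """Append one chat turn; the full list is re-sent on every create() call."""
    messages.append({"role": role, "content": content})
    return messages

history = []
add_turn(history, "user", "My name is Alice.")
add_turn(history, "assistant", "Nice to meet you, Alice!")
add_turn(history, "user", "What is my name?")

# The provider then sees the whole history, e.g.:
# response = g4f.ChatCompletion.create(model=g4f.models.default, messages=history)
```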

Try: https://github.com/xtekky/gpt4free#proxy-support

```python
response = await g4f.ChatCompletion.create_async(
    model=g4f.models.default,
    provider=g4f.Provider.Opchatgpts,
    messages=[{"role": "user", "content": "Hello"}],
    proxy="http://host:port",  # or socks5://user:pass@host:port
)
```

The errors are logged to the browser console. What are the error messages?

Try updating your installation.

Use ChatCompletion, not Completion. Then you can use another provider.
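Moving from Completion to ChatCompletion mostly means wrapping the plain prompt string in a `messages` list. A sketch of that conversion (the helper is illustrative, not part of g4f):

```python
# Sketch: Completion takes a bare prompt string; ChatCompletion takes messages.
def prompt_to_messages(prompt):
    """Wrap a Completion-style prompt into the ChatCompletion messages format."""
    return [{"role": "user", "content": prompt}]

# Used with any provider that supports ChatCompletion, e.g.:
# response = g4f.ChatCompletion.create(
#     model=g4f.models.default,
#     messages=prompt_to_messages("Hello"),
# )
```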

Please fix that: Why did you set G4F_PROXY? Why do you use https://mirror.ghproxy.com? The Mac script only works with root privileges.

You can patch the provider for image generation:

```python
import g4f
from g4f.Provider.bing.create_images import patch_provider

for response in g4f.ChatCompletion.create(
    model=g4f.models.default,
    provider=g4f.Provider.Bing,
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
    patch_provider=patch_provider,
):
    print(response, end="", flush=True)
```