wingeva1986

Results: 13 comments of wingeva1986

I checked the website after using the API and got the result below: ![Screenshot_20230828_233959_com example hikerview](https://github.com/snowby666/poe-api-wrapper/assets/35324882/cd50ef61-1ff3-435e-95f7-f861d6b6ab50) The answer was GPT-3.5 style, which didn't realize that 鲁迅 and 周树人 are the same person...

> I see. Maybe this bug is related to the OpenAI API on the Poe server? I will do some research on this later

I think it's not a bug but...

It's unsafe to expose a route for this, and using one given token to query Poe may cause problems in apps where the key should be ignored.

It seems it doesn't work properly because the stop tokens mechanism isn't supported. https://help.openai.com/en/articles/5072263-how-do-i-use-stop-sequences To use stop tokens with the OpenAI API, you include the stop parameter in the API request. For example, if you want to generate a completion and stop generation as soon as the model outputs the word "stop", you can include the following parameters in the request:

```python
import openai

openai.api_key = "your_api_key"

response = openai.Completion.create(
    engine="davinci-codex",
    prompt="请问你今天过得怎么样?",
    max_tokens=50,
    n=1,
    stop=["stop"]
)
```

In this example, the API stops generating when "stop" appears in the generated text. You can use multiple stop sequences by adding them to the stop parameter, e.g. stop=["stop", "end",...
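If the endpoint simply ignores the stop parameter, one workaround is to truncate the completion on the client side. A minimal sketch of that idea (the helper name and example strings are made up for illustration, not part of any library):

```python
def truncate_at_stop(text, stop_sequences):
    """Cut the completion at the earliest occurrence of any stop sequence."""
    cut = len(text)
    for seq in stop_sequences:
        idx = text.find(seq)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

# Emulate stop=["stop", "end"] against a backend that ignored the parameter.
raw = "I'm doing fine today. stop And here is extra text."
print(truncate_at_stop(raw, ["stop", "end"]))  # -> "I'm doing fine today. "
```

This only trims already-generated text; unlike a real server-side stop, it doesn't save any tokens.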

> I hope the next update can add a strategy to exclude high-latency nodes, with a configurable latency threshold; then running the filtered IPs through load balancing would be perfect. Filtering out high-latency nodes could also be added directly into load balancing!

High latency doesn't necessarily mean low speed, and vice versa. The best approach would be load balancing that combines real-time speed with latency; otherwise, test the nodes yourself and maintain a group manually.
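A rough sketch of combining the two metrics into one score (the weights and normalization below are arbitrary assumptions, not values from any proxy tool):

```python
def node_score(latency_ms, speed_mbps, latency_weight=0.4, speed_weight=0.6):
    """Higher is better: reward measured speed and penalize measured latency."""
    latency_term = 100.0 / (1.0 + latency_ms / 100.0)  # shrinks as latency grows
    return speed_weight * speed_mbps + latency_weight * latency_term

nodes = [
    {"name": "node-a", "latency_ms": 300, "speed_mbps": 80},  # slow ping, fast download
    {"name": "node-b", "latency_ms": 50, "speed_mbps": 20},   # fast ping, slow download
]
best = max(nodes, key=lambda n: node_score(n["latency_ms"], n["speed_mbps"]))
print(best["name"])  # node-a
```

With these weights node-a wins despite its higher latency, which matches the point that latency alone is not a good filter.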

https://www.perplexity.ai (better than GPT-3.5)

> Are you considering migrating these Python apps to be used with TypeScript, or creating a fork project?

You can use an HTTP request to call the Python service from your TS...
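A minimal sketch of that approach on the Python side, assuming Flask is used to put the Python client behind an HTTP route (the /ask endpoint and payload shape are illustrative, not an existing API):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/ask", methods=["POST"])
def ask():
    # The TypeScript app POSTs {"prompt": "..."} and gets a JSON answer back.
    prompt = request.get_json(force=True).get("prompt", "")
    answer = "echo: " + prompt  # placeholder for the real Python client call
    return jsonify({"answer": answer})

if __name__ == "__main__":
    app.run(port=8000)
```

The TS side then only needs a plain fetch/axios POST to http://localhost:8000/ask with a JSON body.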

> @wingeva1986 which version of python are you using?

3.7

Does this project support third-party OpenAI interfaces (such as poe.com)? If it does, are there any other requirements for these interfaces, such as message format, context memory, and number of...

You must implement an OpenAI message wrapper and its stop tokens function; then it will work with LangChain.
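A minimal sketch of the message-wrapper half, assuming a hypothetical poe_client.send(prompt) backend call (the flattening scheme is an assumption, not how any library actually does it); the stop-token half could reuse the truncation idea sketched in the earlier comment:

```python
class FakePoeClient:
    """Hypothetical stand-in for the real Poe client call."""
    def send(self, prompt):
        return "This is the assistant's reply."

poe_client = FakePoeClient()

def to_prompt(messages):
    """Flatten OpenAI-style chat messages into a single prompt string."""
    return "\n".join("{}: {}".format(m["role"], m["content"]) for m in messages)

def chat_completion(messages):
    """Accept an OpenAI-shaped message list and return the backend's reply."""
    return poe_client.send(to_prompt(messages))

print(chat_completion([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Are 鲁迅 and 周树人 the same person?"},
]))
```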