Daniel
@varunshenoy Are you planning to add the ability to use it with the SGLang OpenAI-like server?
@colesmcintosh Thanks. As far as I can tell, the Mistral API is not compatible with OpenAI: they have their own Python SDK, and their API does not work through the OpenAI client...
There is no CODE_MAX_STRING_LENGTH in docker-compose.yaml in the sandbox section. I also increased this variable everywhere (in the environment and other files), but the result did not change.
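For reference, one place to pass such a variable through would be an `environment` entry on the sandbox service in docker-compose.yaml (a sketch only; the service name and value here are assumptions, check your own compose file):

```yaml
services:
  sandbox:
    environment:
      # Hypothetical placement; variable name taken from the comment above.
      CODE_MAX_STRING_LENGTH: "800000"
```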
Hello @johnbean393, any updates on this feature? I may also be able to help (planning / use cases / maybe some Python implementation).
I added logging in stt.py and got repeated words in transcribe_file:

```python
segments, transcription_info = whisper.transcribe(
    file.file,
    task=Task.TRANSCRIBE,
    language=language,
    initial_prompt=prompt,
    word_timestamps="word" in timestamp_granularities,
    temperature=temperature,
    vad_filter=vad_filter,
    hotwords=hotwords,
)
```

@fedirz any thoughts...
@fedirz I don’t think it’s faster-whisper, because I could not reproduce this issue with the faster-whisper framework alone. But while debugging faster-whisper-server I found that the issue comes from stt.py: whisper.transcribe(…
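To make the symptom concrete, here is a small standalone helper (not part of faster-whisper-server; just an illustration I wrote) that flags consecutive duplicate words in a transcript string, which is the kind of repetition I'm seeing in the output:

```python
def find_repeated_words(text: str) -> list[str]:
    """Return words that appear twice in a row (case-insensitive),
    the symptom observed in the transcription output above."""
    words = text.split()
    return [
        word for prev, word in zip(words, words[1:])
        if prev.lower() == word.lower()
    ]

# Example with an artificially duplicated word:
print(find_repeated_words("the quick quick brown fox"))  # → ['quick']
```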
No, I’ve tested it with different files.
Any updates on this issue?
+1 for an OpenAI-compatible provider in the UI
@crazywoola Here is the YAML of the chatflow demo. In this chatflow, the parameter-extraction node has memory turned on and it does not work (success 0); if you turn memory off, it works with success...