Sebastian.W

Showing 28 issues by Sebastian.W

### Self Checks
- [X] This is only for bug reports; if you would like to ask a question, please head to [Discussions](https://github.com/langgenius/dify/discussions/categories/general).
- [X] I have searched for existing...

🐞 bug
🌊 feat:workflow

# Description

This PR extends API support for the LocalAI backend. The following API will be added:
- rerank

This PR fixes #3524.

## Type of Change

Please delete options that are...
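As a rough illustration of what a rerank call could look like, here is a hypothetical request body in the common Jina-style rerank shape; the field names and model name are assumptions for illustration, not something stated in the PR text.

```python
import json

# Hypothetical Jina-style rerank request body (field names and model
# name are assumptions, not confirmed by the PR).
payload = {
    "model": "bge-reranker-base",  # assumed reranker model name
    "query": "What does Dify do?",
    "documents": [
        "Dify is an open-source LLM application platform.",
        "An unrelated sentence.",
    ],
    "top_n": 1,  # return only the best-matching document
}
body = json.dumps(payload)  # serialized request body
print(sorted(payload))
```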

size:L
⚙️ feat:model-runtime

```log
~/repo/FastChat$ python -m fastchat.serve.model_worker --model-path ~/repo/models/Qwen-14B-Chat-Int4 --gptq-wbits 4 --gptq-groupsize 128 --model-names gpt-3.5-turbo
2023-09-28 14:36:05 | INFO | model_worker | args: Namespace(host='localhost', port=21002, worker_address='http://localhost:21002', controller_address='http://localhost:21001', model_path='~/repo/models/Qwen-14B-Chat-Int4', revision='main', device='cuda', gpus=None, num_gpus=1,...
```

In short, vLLM depends on pydantic >= 2:

```
pydantic >= 2.0  # Required for OpenAI server.
```

On the other hand, `fastchat/serve/openai_api_server.py` depends on v1.x:

```python
try:
    from pydantic.v1...
```
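The usual way to bridge this conflict is a compatibility import: pydantic 2.x vendors the legacy 1.x API under the `pydantic.v1` namespace, so a try/except fallback runs on both major versions. A minimal sketch (the request class and its fields are hypothetical, for illustration only):

```python
# Compatibility import: pydantic 2.x exposes the legacy API under
# pydantic.v1, while pydantic 1.x exposes it at the top level.
try:
    from pydantic.v1 import BaseModel  # pydantic >= 2
except ImportError:
    from pydantic import BaseModel     # pydantic 1.x

class CompletionRequest(BaseModel):
    # Hypothetical request fields, for illustration only.
    model: str
    prompt: str = ""

req = CompletionRequest(model="gpt-3.5-turbo")
print(req.model)
```

This keeps a single codebase importable whether the environment resolved pydantic to 1.x or 2.x.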

**Is your feature request related to a problem? Please describe.**
No.

**Describe the solution you'd like**
Please support the `internlm/internlm-xcomposer2-vl-7b-4bit` model. They have already provided a 4-bit quantized model, and there is...

enhancement

I tried to load the internvl-chat-v1.5-int8 quantized model with LocalAI's AutoGPTQ backend, using the sample inference code provided in InternVL's README. Loading the model fails with:

```
could not load model (no success): Unexpected err=TypeError("internvl_chat isn't supported yet.")
```

I checked the model files on HF; `internvl_chat` appears to be defined in `config.json`, as follows:

```
"model_type": "internvl_chat",
```

The local pip dependencies are:
- transformers: 4.40.1
- torch: 2.1.2
- ...
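The error suggests the loader dispatches on `model_type` from `config.json` against a registry of known architectures, and `internvl_chat` is not in it. An illustrative sketch (not LocalAI's actual code; the registry contents are hypothetical):

```python
# Illustrative sketch of dispatching on config.json's "model_type":
# an unregistered type raises, matching the error message above.
SUPPORTED_MODEL_TYPES = {"llama", "qwen2", "chatglm"}  # hypothetical registry

def load_model(config: dict) -> str:
    model_type = config["model_type"]
    if model_type not in SUPPORTED_MODEL_TYPES:
        raise TypeError(f"{model_type} isn't supported yet.")
    return f"<{model_type} model>"

try:
    load_model({"model_type": "internvl_chat"})
except TypeError as err:
    print(err)
```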

### Summary
Support the Qwen-VL model.

### Details
Qwen-VL may be the only VL model that supports Chinese OCR. Could you please support this model in WasmEdge?

### Appendix
The model repo:...

enhancement
c-WASI-NN

### Self Checks
- [X] This is only for bug reports; if you would like to ask a question, please head to [Discussions](https://github.com/langgenius/dify/discussions/categories/general).
- [X] I have searched for existing...

### Start Date
_No response_

### Implementation PR
_No response_

### Reference Issues
_No response_

### Summary
AutoGPTQ doesn't support macOS till...

question

`path_outputs` is introduced in config.txt for storing generated images, but Fooocus-API ignores this setting and writes images to a hard-coded folder, `output_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), '../..', 'outputs', 'files'))`, in `file_utils.py`. I think...
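One possible shape of a fix: honor `path_outputs` when it is set and only fall back to the current hard-coded default otherwise. A minimal sketch; `resolve_output_dir` and the example path are hypothetical, not the project's actual API:

```python
import os

# Hypothetical helper: prefer path_outputs from config.txt, fall back
# to the current Fooocus-API hard-coded default from file_utils.py.
def resolve_output_dir(config: dict) -> str:
    configured = config.get("path_outputs")
    if configured:
        return os.path.abspath(configured)
    return os.path.abspath(
        os.path.join(os.path.dirname(__file__), "../..", "outputs", "files")
    )

print(resolve_output_dir({"path_outputs": "/data/fooocus/outputs"}))
```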