
ModuleNotFoundError: No module named 'resource'

Open Lathezero opened this issue 9 months ago • 0 comments

Command used:

python -m vllm.entrypoints.openai.api_server --served-model-name ui-tars --model "E:\xiazai\bytedance-research.UI-TARS-7B-DPO.Q6_K.gguf" --host 0.0.0.0 --port 8000
Traceback (most recent call last):
  File "<frozen runpy>", line 189, in _run_module_as_main
  File "<frozen runpy>", line 112, in _get_module_details
  File "D:\miniconda3\envs\uitars\Lib\site-packages\vllm\__init__.py", line 3, in <module>
    from vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs
  File "D:\miniconda3\envs\uitars\Lib\site-packages\vllm\engine\arg_utils.py", line 11, in <module>
    from vllm.config import (CacheConfig, CompilationConfig, ConfigFormat,
  File "D:\miniconda3\envs\uitars\Lib\site-packages\vllm\config.py", line 22, in <module>
    from vllm.model_executor.layers.quantization import (QUANTIZATION_METHODS,
  File "D:\miniconda3\envs\uitars\Lib\site-packages\vllm\model_executor\__init__.py", line 1, in <module>
    from vllm.model_executor.parameter import (BasevLLMParameter,
  File "D:\miniconda3\envs\uitars\Lib\site-packages\vllm\model_executor\parameter.py", line 7, in <module>
    from vllm.distributed import get_tensor_model_parallel_rank
  File "D:\miniconda3\envs\uitars\Lib\site-packages\vllm\distributed\__init__.py", line 1, in <module>
    from .communication_op import *
  File "D:\miniconda3\envs\uitars\Lib\site-packages\vllm\distributed\communication_op.py", line 6, in <module>
    from .parallel_state import get_tp_group
  File "D:\miniconda3\envs\uitars\Lib\site-packages\vllm\distributed\parallel_state.py", line 38, in <module>
    import vllm.distributed.kv_transfer.kv_transfer_agent as kv_transfer
  File "D:\miniconda3\envs\uitars\Lib\site-packages\vllm\distributed\kv_transfer\kv_transfer_agent.py", line 15, in <module>
    from vllm.distributed.kv_transfer.kv_connector.factory import (
  File "D:\miniconda3\envs\uitars\Lib\site-packages\vllm\distributed\kv_transfer\kv_connector\factory.py", line 3, in <module>
    from .base import KVConnectorBase
  File "D:\miniconda3\envs\uitars\Lib\site-packages\vllm\distributed\kv_transfer\kv_connector\base.py", line 14, in <module>
    from vllm.sequence import IntermediateTensors
  File "D:\miniconda3\envs\uitars\Lib\site-packages\vllm\sequence.py", line 16, in <module>
    from vllm.inputs import SingletonInputs, SingletonInputsAdapter
  File "D:\miniconda3\envs\uitars\Lib\site-packages\vllm\inputs\__init__.py", line 7, in <module>
    from .registry import (DummyData, InputContext, InputProcessingContext,
  File "D:\miniconda3\envs\uitars\Lib\site-packages\vllm\inputs\registry.py", line 13, in <module>
    from vllm.transformers_utils.tokenizer import AnyTokenizer
  File "D:\miniconda3\envs\uitars\Lib\site-packages\vllm\transformers_utils\tokenizer.py", line 16, in <module>
    from vllm.utils import make_async
  File "D:\miniconda3\envs\uitars\Lib\site-packages\vllm\utils.py", line 15, in <module>
    import resource
ModuleNotFoundError: No module named 'resource'
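(For reference: `resource` is a POSIX-only module in the Python standard library, so the unconditional `import resource` in `vllm/utils.py` can never succeed on Windows. A minimal check:)

```python
import sys

# The stdlib 'resource' module only exists on Unix-like systems, which is
# why vLLM's import of it fails under Windows (sys.platform == "win32").
if sys.platform == "win32":
    print("'resource' is unavailable on Windows")
else:
    import resource
    # Smoke test: query the open-file-descriptor limit.
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    print("RLIMIT_NOFILE:", soft, hard)
```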

Separately, I want to serve my GGUF model using an API server script written by an AI:

from fastapi import FastAPI, Request
import uvicorn
from llama_cpp import Llama
import json

app = FastAPI()
model_path = r"E:\xiazai\bytedance-research.UI-TARS-7B-DPO.Q6_K.gguf"
model = Llama(model_path=model_path, n_ctx=2048)

@app.post("/v1/chat/completions")  # note: the route must be /v1/chat/completions
async def create_chat_completion(request: Request):
    data = await request.json()
    
    messages = data.get('messages', [])
    prompt = ""
    for message in messages:
        if message.get('role') == 'user':
            content = message.get('content', [])
            if isinstance(content, list):
                for item in content:
                    if item.get('type') == 'text':
                        prompt += item.get('text', '')
            else:
                prompt += content
    
    response = model.create_completion(
        prompt=prompt,
        max_tokens=data.get('max_tokens', 128),
        temperature=data.get('temperature', 0.7),
    )
    
    return {
        "id": "chatcmpl-123",
        "object": "chat.completion",
        "created": 1677858242,
        "model": "ui-tars-7b-dpo",
        "choices": [{
            "message": {
                "role": "assistant",
                "content": response['choices'][0]['text']
            },
            "index": 0,
            "finish_reason": "stop"
        }]
    }

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
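(The prompt-flattening loop inside the handler can be sanity-checked on its own, without loading the model. The function name `flatten_messages` below is only for illustration:)

```python
# Standalone copy of the handler's prompt-flattening logic: concatenate
# the text parts of all user messages, accepting both the OpenAI
# list-of-parts content format and a plain string.
def flatten_messages(messages):
    prompt = ""
    for message in messages:
        if message.get('role') == 'user':
            content = message.get('content', [])
            if isinstance(content, list):
                for item in content:
                    if item.get('type') == 'text':
                        prompt += item.get('text', '')
            else:
                prompt += content
    return prompt

print(flatten_messages([
    {"role": "user", "content": [{"type": "text", "text": "hello"}]},
    {"role": "user", "content": " world"},
]))  # → hello world
```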

Then, in the desktop app, I configured: model: E:\xiazai\bytedance-research.UI-TARS-7B-DPO.Q6_K.gguf; url: http://0.0.0.8000/v1; api key: empty.

With this configuration, after sending a message to TARS there is never any response: the desktop terminal produces output, but the server terminal shows no activity at all. When I change the url to http://0.0.0.8000, the server terminal instead logs a 404 route-not-found error. I'd appreciate help figuring out which step I got wrong.
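(One likely culprit, independent of the server script: `http://0.0.0.8000/v1` is missing the colon before the port, so `0.0.0.8000` is parsed as a hostname rather than as host `0.0.0.0` plus port `8000`, and the client never reaches the server at all. A quick check with the standard library, using `http://127.0.0.1:8000/v1` as an example of a correctly formed URL:)

```python
from urllib.parse import urlsplit

# Without the colon, "0.0.0.8000" is treated as a hostname and no
# port is recognized at all.
print(urlsplit("http://0.0.0.8000/v1").hostname)  # 0.0.0.8000
print(urlsplit("http://0.0.0.8000/v1").port)      # None

# With the colon restored, the port parses as expected.
print(urlsplit("http://127.0.0.1:8000/v1").port)  # 8000
```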

Lathezero · Mar 24 '25 13:03