yuebo
Try copying the compiled .so into the current chatglm_cpp directory:

```bash
cp ./build/lib.linux-x86_64-cpython-310/chatglm_cpp/_C.cpython-310-x86_64-linux-gnu.so ./chatglm_cpp/_C.cpython-310-x86_64-linux-gnu.so
```
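As a quick sanity check after copying (a minimal sketch; it assumes you run it from the repository root so the local `chatglm_cpp` package with the copied extension is the one that gets imported):

```bash
# Assumes the current directory is the repository root containing ./chatglm_cpp.
# If the copied _C extension is found, the import succeeds and the package path is printed.
python -c "import chatglm_cpp; print(chatglm_cpp.__file__)"
```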
You can use the following code in Python:

```python
from qwen_cpp import Pipeline

pipeline = Pipeline("/path_to_models/qwen7b-ggml.bin", "/path_to_tiktoken/Qwen-7B-Chat/qwen.tiktoken")

# Blocking call: returns the full reply at once
result1 = pipeline.chat(["Hello"])
print(result1)

# Streaming call: returns a generator that yields chunks as they are produced
result2 = pipeline.chat(["Hello"], stream=True)
for item in result2:
    print(item)
```
re2 needs extra CMake arguments to build successfully on x64; you can try `CMAKE_ARGS="-DGGML_CUBLAS=ON -DBUILD_SHARED_LIBS=ON -DCMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc" pip install .`
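If the build still fails, running pip in verbose mode exposes the underlying CMake/re2 error (a sketch of the same command with pip's standard `-v` flag added):

```bash
# Same CMake arguments as above; -v makes pip print the full CMake/compiler output
# so the actual re2 build error is visible instead of being swallowed by pip.
CMAKE_ARGS="-DGGML_CUBLAS=ON -DBUILD_SHARED_LIBS=ON -DCMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc" pip install . -v
```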