exec: python: not found

F42J opened this issue 1 year ago · 8 comments

LocalAI version: latest-aio-cpu/latest-cpu (tested with both)

Environment, CPU architecture, OS, and Version: Linux Desktop-j42f 6.8.0-44-generic #44-Ubuntu SMP PREEMPT_DYNAMIC Tue Aug 13 13:35:26 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

Describe the bug

Requests to the embeddings endpoint cause an HTTP 500 error with the following error message when used with a custom model:

{"error":{"code":500,"message":"grpc service not ready","type":""}}

The error seems to be caused by a missing Python install, as indicated by the following debug output:

8:12PM DBG GRPC(pytorch_model.bin-127.0.0.1:38153): stderr /build/backend/python/sentencetransformers/../common/libbackend.sh: line 180: exec: python: not found

To Reproduce

  1. Set up a Docker-hosted LocalAI instance with the following Dockerfile and model configuration

Dockerfile:

FROM localai/localai:latest-cpu
RUN apt-get update && apt-get install -y wget && \
    wget -O /build/models/pytorch_model.bin https://huggingface.co/medicalai/ClinicalBERT/resolve/main/pytorch_model.bin
ENV DEBUG=true
COPY models/* /build/models/

Model configuration:

name: clinicalbert
backend: sentencetransformers
embeddings: true
parameters:
  model: pytorch_model.bin

  2. Build and launch the Docker container without any further options

  3. Request an embedding from the embeddings endpoint using model=clinicalbert (see the example request below)
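For reference, such a request looks roughly like this (assuming the default port 8080; the payload matches the input shown in the logs below):

curl http://localhost:8080/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"model": "clinicalbert", "input": "person with fever"}'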

Expected behavior

A correct embedding should be returned.

Logs

10:16PM DBG Request received: {"model":"clinicalbert","language":"","translate":false,"n":0,"top_p":null,"top_k":null,"temperature":null,"max_tokens":null,"echo":false,"batch":0,"ignore_eos":false,"repeat_penalty":0,"repeat_last_n":0,"n_keep":0,"frequency_penalty":0,"presence_penalty":0,"tfz":null,"typical_p":null,"seed":null,"negative_prompt":"","rope_freq_base":0,"rope_freq_scale":0,"negative_prompt_scale":0,"use_fast_tokenizer":false,"clip_skip":0,"tokenizer":"","file":"","size":"","prompt":null,"instruction":"","input":"person with fever","stop":null,"messages":null,"functions":null,"function_call":null,"stream":false,"mode":0,"step":0,"grammar":"","grammar_json_functions":null,"backend":"","model_base_name":""}
10:16PM DBG guessDefaultsFromFile: not a GGUF file
10:16PM DBG Parameter Config: &{PredictionOptions:{Model:pytorch_model.bin Language: Translate:false N:0 TopP:0xc0005fe678 TopK:0xc0005fe680 Temperature:0xc0005fe688 Maxtokens:0xc0005fe6c8 Echo:false Batch:0 IgnoreEOS:false RepeatPenalty:0 RepeatLastN:0 Keep:0 FrequencyPenalty:0 PresencePenalty:0 TFZ:0xc0005fe6c0 TypicalP:0xc0005fe6b8 Seed:0xc0005fe6f0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:clinicalbert F16:0xc0005fe670 Threads:0xc0005fe658 Debug:0xc0005fe8d0 Roles:map[] Embeddings:0xc0005fe4e0 Backend:sentencetransformers TemplateConfig:{Chat: ChatMessage: Completion: Edit: Functions: UseTokenizerTemplate:false JoinChatMessagesByCharacter:} PromptStrings:[] InputStrings:[person with fever] InputToken:[] functionCallString: functionCallNameString: ResponseFormat: ResponseFormatMap:map[] FunctionsConfig:{DisableNoAction:false GrammarConfig:{ParallelCalls:false DisableParallelNewLines:false MixedMode:false NoMixedFreeString:false NoGrammar:false Prefix: ExpectStringsAfterJSON:false PropOrder: SchemaType:} NoActionFunctionName: NoActionDescriptionName: ResponseRegex:[] JSONRegexMatch:[] ReplaceFunctionResults:[] ReplaceLLMResult:[] CaptureLLMResult:[] FunctionNameKey: FunctionArgumentsKey:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0xc0005fe6b0 MirostatTAU:0xc0005fe698 Mirostat:0xc0005fe690 NGPULayers:0xc0005fe6d0 MMap:0xc0005fe6d8 MMlock:0xc0005fe6d9 LowVRAM:0xc0005fe6d9 Grammar: StopWords:[] Cutstrings:[] TrimSpace:[] TrimSuffix:[] ContextSize:0xc0005fe650 NUMA:false LoraAdapter: LoraBase: LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 MMProj: FlashAttention:false NoKVOffloading:false RopeScaling: ModelType: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{CUDA:false PipelineType: SchedulerType: EnableParameters: CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder: ControlNet:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} TTSConfig:{Voice: VallE:{AudioPath:}} CUDA:false DownloadFiles:[] Description: Usage:}
10:16PM INF Loading model 'pytorch_model.bin' with backend sentencetransformers
10:16PM DBG Loading model in memory from file: /build/models/pytorch_model.bin
10:16PM DBG Loading Model pytorch_model.bin with gRPC (file: /build/models/pytorch_model.bin) (backend: sentencetransformers): {backendString:sentencetransformers model:pytorch_model.bin threads:12 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc0002d7208 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh mamba:/build/backend/python/mamba/run.sh openvoice:/build/backend/python/openvoice/run.sh parler-tts:/build/backend/python/parler-tts/run.sh rerankers:/build/backend/python/rerankers/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
10:16PM DBG Loading external backend: /build/backend/python/sentencetransformers/run.sh
10:16PM DBG Loading GRPC Process: /build/backend/python/sentencetransformers/run.sh
10:16PM DBG GRPC Service for pytorch_model.bin will be running at: '127.0.0.1:40125'
10:16PM DBG GRPC Service state dir: /tmp/go-processmanager2441360389
10:16PM DBG GRPC Service Started
10:16PM DBG GRPC(pytorch_model.bin-127.0.0.1:40125): stdout Initializing libbackend for build
10:16PM DBG GRPC(pytorch_model.bin-127.0.0.1:40125): stderr /build/backend/python/sentencetransformers/../common/libbackend.sh: line 91: uv: command not found
10:16PM DBG GRPC(pytorch_model.bin-127.0.0.1:40125): stdout virtualenv created
10:16PM DBG GRPC(pytorch_model.bin-127.0.0.1:40125): stdout virtualenv activated
10:16PM DBG GRPC(pytorch_model.bin-127.0.0.1:40125): stdout activated virtualenv has been ensured
10:16PM DBG GRPC(pytorch_model.bin-127.0.0.1:40125): stderr /build/backend/python/sentencetransformers/../common/libbackend.sh: line 97: /build/backend/python/sentencetransformers/venv/bin/activate: No such file or directory
10:16PM DBG GRPC(pytorch_model.bin-127.0.0.1:40125): stderr /build/backend/python/sentencetransformers/../common/libbackend.sh: line 180: exec: python: not found
10:17PM ERR failed starting/connecting to the gRPC service error="rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:40125: connect: connection refused""
10:17PM DBG GRPC Service NOT ready
10:17PM ERR Server error error="grpc service not ready" ip=172.17.0.1 latency=40.428444674s method=POST status=500 url=/v1/embeddings

Additional context

F42J avatar Sep 16 '24 22:09 F42J

CPU images don't have Python installed. It's by design

Nyralei avatar Sep 16 '24 22:09 Nyralei

Thanks for the information. If the CPU images don't have Python by design, what is the recommended way to run models like the clinicalbert project on a server only equipped with a CPU? Is one of the other backends also capable of using these models without requiring Python?

F42J avatar Sep 16 '24 23:09 F42J

You need to add Python and the installation of your chosen backend, in your case sentencetransformers (https://github.com/mudler/LocalAI/tree/master/backend/python/sentencetransformers), to your Dockerfile. There is an install.sh that sets up the venv and installs the requirements.
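For illustration, a minimal sketch of such a Dockerfile (the apt package names are assumptions for the Ubuntu-based image; the backend path matches the logs above):

FROM localai/localai:latest-cpu
# Provide a Python interpreter for the external backend
RUN apt-get update && apt-get install -y python3 python3-pip python-is-python3
# Run the backend's install.sh, which sets up the venv and installs the requirements
RUN bash /build/backend/python/sentencetransformers/install.sh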

Nyralei avatar Sep 16 '24 23:09 Nyralei

Unfortunately I still cannot make it work. Once Python, uv, and the sentencetransformers backend are installed (using the backend install script), it fails because of the missing module backend_pb2. It looks like this is a dependency on another backend, but I couldn't figure out which backends I have to build additionally.

F42J avatar Sep 17 '24 06:09 F42J

Try this:

FROM localai/localai:latest-cpu

ENV CONDA_DIR /opt/conda
RUN curl https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -o ~/miniconda.sh && \
    /bin/bash ~/miniconda.sh -b -p /opt/conda

ENV PATH $CONDA_DIR/bin:$PATH

RUN pip install grpcio-tools==1.66.0 \
    uv

RUN make -C backend/python/sentencetransformers protogen \
    && make -C backend/python/sentencetransformers
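The protogen target is what generates the missing backend_pb2 module: it compiles backend.proto into Python gRPC stubs using the grpcio-tools installed above. The manual equivalent would look roughly like this (the include path is an assumption; the Makefile knows the real one):

python3 -m grpc_tools.protoc -I <dir-containing-backend.proto> --python_out=. --grpc_python_out=. backend.proto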

Also, specifying a local file doesn't seem to work; the SentenceTransformer backend receives a model_name like sentence-transformers/pytorch_model.bin and throws an error:

ERR Server error error="could not load model (no success): Unexpected err=OSError(\"sentence-transformers/pytorch_model.bin is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'\\nIf this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`\"), type(err)=<class 'OSError'>" ip=172.17.0.1 latency=6.441716341s method=POST status=500 url=/embeddings

So for now the YAML file should look like this:

name: clinicalbert
backend: sentencetransformers
embeddings: true
parameters:
  model: medicalai/ClinicalBERT

Nyralei avatar Sep 17 '24 09:09 Nyralei

Another way would be to use the GPU images, which come with Python and the backends already installed. See also: https://localai.io/basics/container/#standard-container-images

mudler avatar Sep 17 '24 10:09 mudler

Thanks for the help, the GPU images then worked. I had assumed they would only work on systems with GPUs.

The issue pointed out by @Nyralei does occur with the local model; however, the modified YAML also doesn't seem to work for me. Is there an incompatibility with the sentencetransformers backend that causes the following message?

DBG GRPC(medicalai/ClinicalBERT-127.0.0.1:36351): stderr No sentence-transformers model found with name medicalai/ClinicalBERT. Creating a new one with mean pooling.

(I'm generally not that experienced with this, so thanks a lot in advance for any help.)

F42J avatar Sep 17 '24 23:09 F42J

DBG GRPC(medicalai/ClinicalBERT-127.0.0.1:36351): stderr No sentence-transformers model found with name medicalai/ClinicalBERT. Creating a new one with mean pooling.

This message appears when sentence-transformers starts to download the model. Is there network activity? Did it create a directory with the model in the models path?
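Two quick ways to check from the host (the container name is a placeholder; /build/models matches the Dockerfile earlier in this thread):

# Cumulative network I/O of the container (NET I/O column)
docker stats --no-stream <container-name>
# Whether a model directory was created in the models path
docker exec <container-name> ls -lR /build/models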

Nyralei avatar Sep 18 '24 04:09 Nyralei

I have the same problem with the docker image localai/localai:master-cublas-cuda12. I was able to resolve it by installing python-is-python3 in the container.

However, the next problem is:

local-ai   | 1:32PM DBG GRPC(jina-embedding-127.0.0.1:45343): stdout Initializing libbackend for transformers
local-ai   | 1:32PM DBG GRPC(jina-embedding-127.0.0.1:45343): stderr ./../common/libbackend.sh: line 94: uv: command not found
local-ai   | 1:32PM DBG GRPC(jina-embedding-127.0.0.1:45343): stdout virtualenv created
local-ai   | 1:32PM DBG GRPC(jina-embedding-127.0.0.1:45343): stdout virtualenv activated
local-ai   | 1:32PM DBG GRPC(jina-embedding-127.0.0.1:45343): stdout activated virtualenv has been ensured
local-ai   | 1:32PM DBG GRPC(jina-embedding-127.0.0.1:45343): stderr ./../common/libbackend.sh: line 100: /build/backend/python/transformers/venv/bin/activate: No such file or directory
local-ai   | 1:32PM DBG GRPC(jina-embedding-127.0.0.1:45343): stderr Traceback (most recent call last):
local-ai   | 1:32PM DBG GRPC(jina-embedding-127.0.0.1:45343): stderr   File "/build/backend/python/transformers/backend.py", line 15, in <module>
local-ai   | 1:32PM DBG GRPC(jina-embedding-127.0.0.1:45343): stderr     import backend_pb2
local-ai   | 1:32PM DBG GRPC(jina-embedding-127.0.0.1:45343): stderr ModuleNotFoundError: No module named 'backend_pb2'

This I was able to resolve by installing uv. However, after that, there is still a dependency missing:

local-ai   | 1:45PM DBG GRPC(jina-embedding-127.0.0.1:35317): stdout Initializing libbackend for transformers
local-ai   | 1:45PM DBG GRPC(jina-embedding-127.0.0.1:35317): stderr Using CPython 3.10.12 interpreter at: /usr/bin/python3.10
local-ai   | 1:45PM DBG GRPC(jina-embedding-127.0.0.1:35317): stderr Creating virtual environment at: venv
local-ai   | 1:45PM DBG GRPC(jina-embedding-127.0.0.1:35317): stdout virtualenv created
local-ai   | 1:45PM DBG GRPC(jina-embedding-127.0.0.1:35317): stdout virtualenv activated
local-ai   | 1:45PM DBG GRPC(jina-embedding-127.0.0.1:35317): stdout activated virtualenv has been ensured
local-ai   | 1:45PM DBG GRPC(jina-embedding-127.0.0.1:35317): stderr Traceback (most recent call last):
local-ai   | 1:45PM DBG GRPC(jina-embedding-127.0.0.1:35317): stderr   File "/build/backend/python/transformers/backend.py", line 15, in <module>
local-ai   | 1:45PM DBG GRPC(jina-embedding-127.0.0.1:35317): stderr     import backend_pb2
local-ai   | 1:45PM DBG GRPC(jina-embedding-127.0.0.1:35317): stderr ModuleNotFoundError: No module named 'backend_pb2'
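For anyone retracing these steps, the two workarounds above amount to roughly this sketch (not a complete fix, since the backend_pb2 error persists; whether pip is on the PATH in this image is an assumption):

FROM localai/localai:master-cublas-cuda12
# Restore a `python` binary pointing at python3
RUN apt-get update && apt-get install -y python-is-python3
# Provide uv, which libbackend.sh uses to create the backend venv
RUN pip install uv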

Fr0d0Beutl1n avatar May 29 '25 13:05 Fr0d0Beutl1n

@Fr0d0Beutl1n Python dependencies are available only in the extras images, as documented in https://localai.io/basics/container/#standard-container-images

mudler avatar May 29 '25 22:05 mudler

That is contradictory to

If you are running LocalAI from the containers you are good to go and should be already configured for use.

https://localai.io/features/embeddings/

Or am I missing something? Is there some way to run sentencetransformers embedding models without the extras image?

I tried using localai/localai:master-cublas-cuda12-extras, and now the error looks like this:

local-ai   | 6:52AM DBG GRPC(jina-embedding-127.0.0.1:42283): stdout Initializing libbackend for transformers
local-ai   | 6:52AM DBG GRPC(jina-embedding-127.0.0.1:42283): stdout virtualenv activated
local-ai   | 6:52AM DBG GRPC(jina-embedding-127.0.0.1:42283): stdout activated virtualenv has been ensured
local-ai   | 6:52AM DBG GRPC(jina-embedding-127.0.0.1:42283): stderr /build/backend/python/transformers/venv/lib/python3.10/site-packages/google/protobuf/runtime_version.py:98: UserWarning: Protobuf gencode version 5.29.0 is exactly one major version older than the runtime version 6.31.0 at backend.proto. Please update the gencode to avoid compatibility violations in the next runtime release.
local-ai   | 6:52AM DBG GRPC(jina-embedding-127.0.0.1:42283): stderr   warnings.warn(
local-ai   | 6:52AM DBG [WatchDog] Watchdog checks for busy connections
local-ai   | 6:52AM DBG GRPC(jina-embedding-127.0.0.1:42283): stderr /build/backend/python/transformers/venv/lib/python3.10/site-packages/transformers/utils/hub.py:111: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transformers. Use `HF_HOME` instead.
local-ai   | 6:52AM DBG GRPC(jina-embedding-127.0.0.1:42283): stderr   warnings.warn(
local-ai   | 6:52AM DBG GRPC(jina-embedding-127.0.0.1:42283): stderr /build/backend/python/transformers/venv/lib/python3.10/site-packages/pygame/pkgdata.py:25: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
local-ai   | 6:52AM DBG GRPC(jina-embedding-127.0.0.1:42283): stderr   from pkg_resources import resource_stream, resource_exists
local-ai   | 6:52AM DBG GRPC(jina-embedding-127.0.0.1:42283): stderr Server started. Listening on: 127.0.0.1:42283
local-ai   | 6:52AM DBG GRPC Service Ready
local-ai   | 6:52AM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:0xc0002f4e58} sizeCache:0 unknownFields:[] Model:jinaai/jina-embeddings-v3 ContextSize:1024 Seed:2057670989 NBatch:512 F16Memory:false MLock:false MMap:true VocabOnly:false LowVRAM:false Embeddings:true NUMA:false NGPULayers:9999999 MainGPU: TensorSplit: Threads:6 LibrarySearchPath: RopeFreqBase:0 RopeFreqScale:0 RMSNormEps:0 NGQA:0 ModelFile:/models/jinaai/jina-embeddings-v3 PipelineType: SchedulerType: CUDA:false CFGScale:0 IMG2IMG:false CLIPModel: CLIPSubfolder: CLIPSkip:0 ControlNet: Tokenizer: LoraBase: LoraAdapter: LoraScale:0 NoMulMatQ:false DraftModel: AudioPath: Quantization: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 LoadFormat: DisableLogStatus:false DType: LimitImagePerPrompt:0 LimitVideoPerPrompt:0 LimitAudioPerPrompt:0 MMProj: RopeScaling: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0 Type:SentenceTransformer FlashAttention:false NoKVOffload:false ModelPath:/models LoraAdapters:[] LoraScales:[] Options:[] CacheTypeKey: CacheTypeValue: GrammarTriggers:[] Reranking:false}
local-ai   | 6:52AM DBG GRPC(jina-embedding-127.0.0.1:42283): stderr Error: No module named 'custom_st'

Fr0d0Beutl1n avatar May 29 '25 22:05 Fr0d0Beutl1n

This seems to be a problem specific to jina-embeddings-v3.

Apparently, it needs sentence-transformers>=3.1.0 and trust_remote_code.

https://huggingface.co/jinaai/jina-embeddings-v3/discussions/33

The remote code in question appears to be this file: https://huggingface.co/jinaai/jina-embeddings-v3/blob/main/custom_st.py

I read a bit of the code and found that it can be enabled as follows (trust_remote_code is what lets the Hugging Face stack execute the custom_st.py bundled with the model repository, which resolves the error above):

name: jina-embeddings-v3
backend: sentencetransformers
embeddings: True
model_base_name: jina-embeddings-v3
trust_remote_code: True
parameters:
  model: jinaai/jina-embeddings-v3

Is there documentation that describes these options?

Fr0d0Beutl1n avatar Jun 01 '25 07:06 Fr0d0Beutl1n

This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 5 days.

github-actions[bot] avatar Sep 04 '25 02:09 github-actions[bot]

This issue was closed because it has been stalled for 5 days with no activity.

github-actions[bot] avatar Sep 10 '25 02:09 github-actions[bot]