
How to configure a local large model

Open · bk007lz opened this issue 1 month ago · 1 comment

strix -t https://70.179.6.240/

Environment configuration:

declare LLM_API_BASE="http://70.189.82.120:52001/v1"
declare LLM_API_KEY="XXXXXX"
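
One pitfall worth ruling out first: in bash, a top-level `declare` creates a shell variable without exporting it, so the `strix` child process never sees the value. A minimal sketch of the setup with exported variables, assuming an OpenAI-compatible local server; the `openai/` provider prefix is LiteLLM's convention for such endpoints, and the model name here is a placeholder:

```bash
# export (not just declare) so the strix process inherits the variables
export LLM_API_BASE="http://70.189.82.120:52001/v1"
export LLM_API_KEY="XXXXXX"                 # placeholder key, as in the report

# hypothetical model name; LiteLLM routes OpenAI-compatible servers via the openai/ prefix
export STRIX_LLM="openai/your-model-name"

strix -t https://70.179.6.240/
```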

However, the following error was reported. How exactly should the local large model be configured?

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm._turn_on_debug()'.

Provider List: https://docs.litellm.ai/docs/providers

🛡️ STRIX STARTUP ERROR

❌ LLM CONNECTION FAILED

Could not establish connection to the language model.
Please check your configuration and try again.

Error: litellm.APIError: APIError: OpenAIException -

ERROR: The requested URL could not be retrieved

The following error was encountered while trying to retrieve the URL:
http://10.32.136.230:8285/v1/chat/completions

Access Denied.

Access control configuration prevents your request from being allowed at this time. Please contact your service provider if you feel this is incorrect.

Your cache administrator is root.

Generated Thu, 20 Nov 2025 06:47:04 GMT by wuxshcsitd08801.novalocal (squid/4.9)
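
Note that the error body is not a response from the model server at all: it is a Squid "Access Denied" page (squid/4.9 on wuxshcsitd08801.novalocal), which suggests the request to 10.32.136.230:8285 is being routed through an HTTP proxy that refuses it. A sketch of how to rule the proxy out, assuming it is picked up from the standard environment variables:

```bash
# check whether a proxy is configured in the current shell
env | grep -i proxy

# exempt the local model hosts from proxying (addresses taken from the report above)
export no_proxy="10.32.136.230,70.189.82.120,localhost,127.0.0.1"
export NO_PROXY="$no_proxy"

# or clear the proxy settings entirely for this session
unset http_proxy https_proxy HTTP_PROXY HTTPS_PROXY
```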

bk007lz commented on Nov 20, 2025

For Ollama I use, for example:

LLM_API_BASE="http://192.168.7.7:11434"
LLM_API_KEY="1234"
STRIX_LLM="ollama/mistral"

and it works for me.
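
It may also help to confirm the Ollama server answers and has the model pulled before launching Strix. A small check, reusing the values from the comment above (Ollama does no authentication by default, so the key can be any dummy value):

```bash
export LLM_API_BASE="http://192.168.7.7:11434"
export LLM_API_KEY="1234"               # dummy value; Ollama has no auth by default
export STRIX_LLM="ollama/mistral"

# list the locally pulled models; "mistral" must appear here
curl http://192.168.7.7:11434/api/tags
```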

h1tch01 commented on Nov 20, 2025