Add better local model support for LM Studio + document current workaround using `humanify openai --baseURL`
- https://lmstudio.ai/
Your local AI toolkit. Download and run Llama, DeepSeek, Mistral, and Phi on your computer.
- https://lmstudio.ai/models
Model Catalog
- https://lmstudio.ai/docs/app
Docs
- https://lmstudio.ai/docs/app/api
LM Studio as a Local LLM API Server
You can serve local LLMs from LM Studio's Developer tab, either on localhost or on the network. LM Studio's APIs can be used through an OpenAI compatibility mode, an enhanced REST API, or a client library like lmstudio-js.
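The OpenAI compatibility mode is what makes the current workaround from the title possible: point humanify's `openai` command at LM Studio's local server instead of api.openai.com. A minimal sketch, assuming LM Studio's default port 1234 with a model already loaded; the file name is a placeholder, and the key value is arbitrary because LM Studio does not validate it:

```bash
humanify openai \
  --apiKey="local" \
  --baseURL="http://localhost:1234/v1" \
  minified-file.js
```

Depending on the humanify version, a matching `--model` value may also be needed; check `humanify openai --help` for the exact flags.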
- https://lmstudio.ai/docs/app/api/endpoints/openai
OpenAI Compatibility API
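The same mechanism works from any OpenAI client library. A minimal sketch in TypeScript using the `openai` npm package; the model name is a placeholder for whatever is loaded in LM Studio:

```ts
import OpenAI from "openai";

// Point the stock OpenAI client at LM Studio's local server.
// http://localhost:1234/v1 is LM Studio's default; the API key is not validated.
const client = new OpenAI({
  baseURL: "http://localhost:1234/v1",
  apiKey: "lm-studio",
});

const completion = await client.chat.completions.create({
  model: "llama-3.2-1b-instruct", // placeholder: whichever model is loaded
  messages: [{ role: "user", content: "Suggest a readable name for the minified variable `a`." }],
});

console.log(completion.choices[0].message.content);
```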
- https://lmstudio.ai/docs/app/api/endpoints/rest
LM Studio REST API (beta)
LM Studio now has its own REST API, in addition to the OpenAI compatibility mode. The REST API includes enhanced stats such as tokens per second and time to first token (TTFT), as well as rich information about models: loaded vs. unloaded state, max context, quantization, and more.
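A sketch of querying that model metadata over the beta REST API; `/api/v0/models` is the documented listing endpoint, but the response field names used here are assumptions, so check the docs for the authoritative schema:

```ts
// List models known to LM Studio, including load state and quantization.
const res = await fetch("http://localhost:1234/api/v0/models");
const { data } = await res.json();
for (const model of data) {
  // Field names (state, quantization, max_context_length) are assumptions.
  console.log(model.id, model.state, model.quantization, model.max_context_length);
}
```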
- https://lmstudio.ai/docs/app/api/structured-output
Structured Output
You can enforce a particular response format from an LLM by providing a JSON schema to the /v1/chat/completions endpoint, via LM Studio's REST API (or via any OpenAI client).
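For a tool like humanify this is the interesting part, since rename suggestions can be forced into a fixed JSON shape. A sketch using the OpenAI-style `response_format` with a JSON schema; the schema and model name are illustrative:

```ts
import OpenAI from "openai";

const client = new OpenAI({ baseURL: "http://localhost:1234/v1", apiKey: "lm-studio" });

const response = await client.chat.completions.create({
  model: "llama-3.2-1b-instruct", // placeholder
  messages: [{ role: "user", content: "Propose a descriptive name for the variable `a`." }],
  // Constrain the reply to a strict JSON object: { "newName": string }
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "rename",
      strict: true,
      schema: {
        type: "object",
        properties: { newName: { type: "string" } },
        required: ["newName"],
        additionalProperties: false,
      },
    },
  },
});

console.log(JSON.parse(response.choices[0].message.content ?? "{}"));
```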
- https://lmstudio.ai/docs/typescript
lmstudio-js (TypeScript SDK)
The SDK provides a set of programmatic tools to interact with LLMs, embedding models, and agentic flows.
- https://github.com/lmstudio-ai/lmstudio-js
LM Studio TypeScript SDK
lmstudio-js is LM Studio's official JavaScript/TypeScript client SDK. It supports both browsers and any Node-compatible environment, and lets you:
- Use LLMs to respond in chats or predict text completions
- Define functions as tools, and turn LLMs into autonomous agents that run completely locally
- Load, configure, and unload models from memory
- Generate embeddings for text, and more
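A minimal quickstart sketch against the SDK's documented surface (`LMStudioClient`, `llm.model`, `respond`); the model key is a placeholder:

```ts
import { LMStudioClient } from "@lmstudio/sdk";

// Connects to a running LM Studio instance on its default local port.
const client = new LMStudioClient();

// Get a handle to a model, loading it if necessary; the key is a placeholder.
const model = await client.llm.model("llama-3.2-1b-instruct");

const result = await model.respond("Suggest a readable name for the minified variable `a`.");
console.log(result.content);
```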
- https://github.com/lmstudio-ai/lmstudio-js#why-use-lmstudio-js-over-openai-sdk
Why use lmstudio-js over the openai SDK? OpenAI's SDK is designed for use with OpenAI's proprietary models. As such, it is missing many features that are essential for using LLMs in a local environment, such as:
- Managing loading and unloading models from memory
- Configuring load parameters (context length, GPU offload settings, etc.)
- Speculative decoding
- Getting information (such as context length, model size, etc.) about a model
- ...and more
In addition, while the openai SDK is automatically generated, lmstudio-js is designed from the ground up to be clean and easy to use for TypeScript/JavaScript developers.
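To illustrate the first two points, a rough sketch of explicit load/unload with lmstudio-js. `client.llm.load` and `model.unload` are part of the SDK, but the exact shape of the load-time config is an assumption here, so treat the option names as placeholders:

```ts
import { LMStudioClient } from "@lmstudio/sdk";

const client = new LMStudioClient();

// Explicitly load a model with load-time parameters (the contextLength option
// name is an assumption; see the lmstudio-js docs for the current config shape).
const model = await client.llm.load("llama-3.2-1b-instruct", {
  config: { contextLength: 8192 },
});

console.log((await model.respond("Hello!")).content);

// Free the model's memory once done.
await model.unload();
```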
See Also
- https://github.com/jehna/humanify/issues/400
- https://github.com/jehna/humanify/issues/84
- https://github.com/jehna/humanify/issues/14
- https://github.com/jehna/humanify/issues/416