
LM Studio CLI

137 issues, sorted by recently updated

./LM-Studio-0.3.6-8-x64.AppImage [17681:0109/083534.143284:FATAL:setuid_sandbox_host.cc(163)] The SUID sandbox helper binary was found, but is not configured correctly. Rather than run without sandboxing I'm aborting now. You need to make sure that /tmp/.mount_LM-StuLZG250/chrome-sandbox is...
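This is the standard Chromium SUID-sandbox failure that Electron AppImages hit on some Linux setups. Below is a minimal sketch of the two usual workarounds; `--no-sandbox` and `--appimage-extract` are standard Chromium/AppImage flags rather than LM Studio options, and the `squashfs-root` paths assume the default extraction directory.

```bash
# Option 1: skip the Chromium SUID sandbox entirely
./LM-Studio-0.3.6-8-x64.AppImage --no-sandbox

# Option 2: extract the AppImage, then give chrome-sandbox the
# root-owned mode-4755 permissions the error message asks for
./LM-Studio-0.3.6-8-x64.AppImage --appimage-extract
sudo chown root:root squashfs-root/chrome-sandbox
sudo chmod 4755 squashfs-root/chrome-sandbox
./squashfs-root/AppRun
```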

Hello, this isn't a problem report, but I think it could help many people. Yesterday I wanted to download LM Studio on Ubuntu Linux (I had no graphical environment, just the CLI)...
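For reference, fetching the AppImage entirely from the shell looks like this. A sketch only: the URL below is a placeholder for the current Linux download link published on lmstudio.ai, not a stable endpoint.

```bash
# Placeholder URL -- copy the real Linux link from the lmstudio.ai download page
wget https://lmstudio.ai/<current-linux-appimage-url>
chmod +x LM-Studio-*.AppImage
```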

I'm new, so it's probably just me? I fired up a Google VM to try to run LM Studio headlessly after enjoying it on my laptop. After much ado, I got it...
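Once LM Studio (and its bundled `lms` CLI) is installed, headless operation goes through documented `lms` subcommands; a minimal sketch:

```bash
lms server start   # start the local API server without the GUI
lms ls             # list models downloaded to this machine
lms ps             # show which models are currently loaded
```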

I use another computer to store those large models and set it up as an SMB server. When I set the model path to "\\192.168.1.101\models", everything went well until the server computer...
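A common mitigation for flaky network model paths is to mount the share at a fixed local path and point LM Studio's models directory there, so the app never handles the network path itself. A sketch for a Linux client (on Windows, mapping the share to a drive letter plays the same role; `<user>` is a placeholder):

```bash
# Mount the SMB share at a stable mount point
sudo mkdir -p /mnt/models
sudo mount -t cifs //192.168.1.101/models /mnt/models -o username=<user>,vers=3.0
# Then set LM Studio's models directory to /mnt/models
```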

In this article, AMD announces support for using their AI processors with LM Studio: https://manilastandard.net/tech/tech-news/314557714/experience-the-deepseek-r1-distilled-reasoning-models-on-amd-ryzen-ai-radeon.html However, not only is the integrated Radeon 780M graphics no longer supported on...

First, I want to thank all the contributors to this project. LM Studio is an amazing tool, and I hope we continue making it better and better. Now, that in...

enhancement

I get the same error when starting. It sees the additional memory but cannot work with it: https://github.com/LostRuins/koboldcpp/issues/1493

LMS Version 0.3.14 (0.3.14)
Model: mlx-community/Kimi-VL-A3B-Thinking-4bit
https://model.lmstudio.ai/download/mlx-community/Kimi-VL-A3B-Thinking-4bit
The following error message is reported when loading the model:
```
Failed to load model
Error when loading model: ValueError: Model type kimi_vl...
```

Can you make it possible for us to log in to Hugging Face and download models that are gated? `google/gemma-3-27b-it-qat-q4_0-gguf`, for example.
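In the meantime, one workaround is to authenticate with the Hugging Face CLI and download the gated GGUF straight into LM Studio's models folder. A sketch, assuming the default models directory on recent versions (`~/.lmstudio/models`; older installs use `~/.cache/lm-studio/models`):

```bash
huggingface-cli login   # use a token for an account that has accepted the model's gate
huggingface-cli download google/gemma-3-27b-it-qat-q4_0-gguf \
  --local-dir ~/.lmstudio/models/google/gemma-3-27b-it-qat-q4_0-gguf
```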

Is it possible to remove the hard requirement for AVX? I understand it's necessary for CPU inference, but why would it be necessary for GPU-only inference? I have a Phenom II...
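For anyone checking their own hardware, the AVX variants a CPU exposes can be listed from procfs on Linux; this is a standard check, nothing LM Studio-specific, and a Phenom II will print nothing because it predates AVX:

```bash
grep -o 'avx[0-9a-z_]*' /proc/cpuinfo | sort -u
```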