bug: Jan stops any instance of llama-server upon starting

Open • ramonpzg opened this issue 5 months ago • 1 comment

TODO: Add details on the issue

ramonpzg • Jun 23 '25 03:06

Hi, I encountered this issue when running my own instance of llama-server and then starting Jan.

On starting Jan, my server instance was unexpectedly terminated. After some investigation, I found the cause appears to be the clean_up() call in src-tauri/src/core/setup.rs:

pub fn setup_sidecar(app: &App) -> Result<(), String> {
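    // clean_up() runs unconditionally at startup, before Jan spawns its own sidecar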
    clean_up();
    let app_handle = app.handle().clone();
    let app_handle_for_spawn = app_handle.clone();

This clean_up() function indiscriminately kills any process named llama-server, regardless of whether it was spawned by Jan. Here’s the relevant part of the function:

//
// Clean up function to kill the sidecar process
//
pub fn clean_up() {
    #[cfg(windows)]
    {
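        // taskkill -f (force) -im (image name) terminates every process whose
        // image name matches, regardless of which parent started it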
        use std::os::windows::process::CommandExt;
        let _ = std::process::Command::new("taskkill")
            .args(["-f", "-im", "llama-server.exe"])
            .creation_flags(0x08000000)
            .spawn();
        let _ = std::process::Command::new("taskkill")
            .args(["-f", "-im", "cortex-server.exe"])
            .creation_flags(0x08000000)
            .spawn();
    }
    #[cfg(unix)]
    {
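        // pkill -f matches against the full command line, so this kills ANY
        // process whose command line contains "llama-server", Jan's or not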
        let _ = std::process::Command::new("pkill")
            .args(["-f", "llama-server"])
            .spawn();
        let _ = std::process::Command::new("pkill")
            .args(["-f", "cortex-server"])
            .spawn();
    }
    log::info!("Clean up function executed, sidecar processes killed.");
}

To confirm this, I commented out the clean_up() call locally, and Jan no longer killed my external llama-server process on startup.

Would it be possible to scope this more narrowly so that it only terminates processes Jan itself started (e.g. by tracking the Child handle or PID)? A rough sketch of the idea is below.
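For illustration only, here is a minimal sketch of that approach. Everything in it is hypothetical (SIDECAR_CHILD and spawn_sidecar are invented names, not Jan's actual code); it just shows cleanup operating on a tracked Child handle instead of a name pattern:

use std::process::{Child, Command};
use std::sync::Mutex;

// Hypothetical global holding the Child handle of the sidecar Jan spawned.
static SIDECAR_CHILD: Mutex<Option<Child>> = Mutex::new(None);

pub fn spawn_sidecar(path: &str) -> std::io::Result<()> {
    let child = Command::new(path).spawn()?;
    *SIDECAR_CHILD.lock().unwrap() = Some(child);
    Ok(())
}

// Scoped cleanup: terminates only the tracked child, leaving any
// externally started llama-server untouched.
pub fn clean_up() {
    if let Some(mut child) = SIDECAR_CHILD.lock().unwrap().take() {
        let _ = child.kill(); // SIGKILL on Unix, TerminateProcess on Windows
        let _ = child.wait(); // reap the child so its PID is not left as a zombie
    }
}

The key difference is that clean_up() then acts only on a handle Jan owns, so an externally launched llama-server can never match by accident.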

Happy to help explore a fix or contribute a PR if needed.

JovenSoh • Jun 23 '25 07:06

Fixed in 0.6.6 with dbdc03158300dea06c1f3ff025fe4c7ceff66969 and subsequent commits; the backend now manages its own llama-server state.

qnixsynapse • Jul 21 '25 03:07