Xuan Son Nguyen

Results: 19 issues by Xuan Son Nguyen

I've been looking for a lightweight input remapper. Your project is absolutely amazing; it's so efficient and so easy to work with. I created a forked version of the project....

### System information

Fedora 36, using AppImage version

### What happens?

- Create Spotify app as described in the "Configuring" section of the README
- Copy/paste client ID + secret and...

The `llama_kv_cache_seq_shift` or `llama_kv_cache_seq_rm` function (or both of them) is broken with cache type q4_0 for K. In `main.cpp`, these functions are used for "context swapping", meaning we can...

bug-unconfirmed
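For context, here is a paraphrased sketch of how the two calls above are combined in `main.cpp` for context swapping (variable names follow the example code; this is not a complete program):

```cpp
// Context swapping (paraphrased sketch): once n_past fills the context
// window, keep the first n_keep tokens, discard the oldest half of the
// remainder, and shift what survives back into place.
const int n_left    = n_past - n_keep - 1;
const int n_discard = n_left / 2;

// remove the discarded span from sequence 0 of the KV cache
llama_kv_cache_seq_rm   (ctx, 0, n_keep + 1, n_keep + n_discard + 1);
// slide the remaining cells back by n_discard positions
llama_kv_cache_seq_shift(ctx, 0, n_keep + 1 + n_discard, n_past, -n_discard);

n_past -= n_discard;
```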

# Motivation

This subject was already brought up in https://github.com/ggerganov/llama.cpp/issues/4216 , but my initial research failed. Recently, I discovered a new line of models designed specifically for this usage: https://github.com/MeetKai/functionary...

enhancement

**Description:** On Windows, `SetBrightness` controls the flag `DWMWA_USE_IMMERSIVE_DARK_MODE`, as shown here: https://github.com/leanflutter/window_manager/blob/main/windows/window_manager.cpp#L1002 The value of `DWMWA_USE_IMMERSIVE_DARK_MODE` is hard-coded to 19 in the same file: https://github.com/leanflutter/window_manager/blob/main/windows/window_manager.cpp#L31 However, in the...
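The snippet is cut off above, but for reference: current Windows SDK headers define `DWMWA_USE_IMMERSIVE_DARK_MODE` as 20, while the value 19 was only honored on pre-20H1 (pre-build-19041) Windows 10 releases. A minimal, hypothetical sketch of the underlying call (not the plugin's actual code):

```cpp
// Hypothetical sketch of toggling immersive dark mode via DWM.
// Windows SDK 10.0.19041+ defines DWMWA_USE_IMMERSIVE_DARK_MODE as 20;
// the undocumented value 19 only worked on pre-20H1 builds.
#include <windows.h>
#include <dwmapi.h>
#pragma comment(lib, "dwmapi.lib")

void SetImmersiveDarkMode(HWND hwnd, bool enabled) {
    const BOOL value = enabled ? TRUE : FALSE;
    DwmSetWindowAttribute(hwnd, 20 /* DWMWA_USE_IMMERSIVE_DARK_MODE */,
                          &value, sizeof(value));
}
```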

## Motivation

While we already have [support for known chat templates](https://github.com/ggerganov/llama.cpp/wiki/Templates-supported-by-llama_chat_apply_template), it is sometimes not enough for users who want to:
- Use their own fine-tuned model
- Or, use...

enhancement
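As a sketch of what "bring your own template" could look like with the existing API (assuming the `llama_chat_apply_template` signature from `llama.h`; the grow-and-retry dance follows the header's usage notes):

```cpp
// Minimal sketch: apply a user-supplied template string instead of the
// model's built-in one, via llama_chat_apply_template.
#include <string>
#include <vector>
#include "llama.h"

std::string format_chat(const std::string & tmpl) {
    std::vector<llama_chat_message> msgs = {
        {"system", "You are a helpful assistant."},
        {"user",   "Hello!"},
    };
    std::vector<char> buf(2048);
    // a non-null tmpl overrides whatever template the model ships with
    int32_t n = llama_chat_apply_template(nullptr, tmpl.c_str(),
                                          msgs.data(), msgs.size(),
                                          /*add_ass=*/true,
                                          buf.data(), buf.size());
    if (n < 0) {
        return ""; // template failed to apply
    }
    if ((size_t) n > buf.size()) {
        buf.resize(n); // output was truncated: grow and re-apply
        n = llama_chat_apply_template(nullptr, tmpl.c_str(),
                                      msgs.data(), msgs.size(),
                                      true, buf.data(), buf.size());
    }
    return std::string(buf.data(), n);
}
```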

I don't know if it's a good idea or not. This is still WIP and not tested; it would be nice if someone could test it out.

```
usage: ./merge ./path/model_1 CONFIG1 ./path/model_2...
```

help wanted
demo

# Motivation

Since the day I added `llama_chat_apply_template` in #5538, I have been thinking about adding it to `main.cpp` to replace the current `-cml` option. However, it is not as...

enhancement

Resolve #6391. The core idea is to use `llama_chat_apply_template` to apply the template twice: with and without the last user message. Then, we find the diff between the two output strings...
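A simplified sketch of that diff idea (hypothetical helper names; the actual PR handles more edge cases):

```cpp
// Sketch: render the conversation without and with the newest message,
// then take the suffix the second rendering adds. apply_template() is an
// assumed wrapper around llama_chat_apply_template, defined elsewhere.
#include <string>
#include <vector>
#include "llama.h"

std::string apply_template(const std::vector<llama_chat_message> & msgs,
                           bool add_ass); // assumed helper

std::string format_last_message(const std::vector<llama_chat_message> & msgs) {
    std::vector<llama_chat_message> prev(msgs.begin(), msgs.end() - 1);
    const std::string without_last = apply_template(prev, /*add_ass=*/false);
    const std::string with_last    = apply_template(msgs, /*add_ass=*/true);
    // the formatted last message is exactly the diff between the two outputs
    return with_last.substr(without_last.size());
}
```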

Based on the discussion from https://github.com/ggerganov/llama.cpp/issues/6391#issuecomment-2068353974 , we introduce an `enum llama_chat_template` for templates and a family of functions:

```cpp
/// Get the Jinja template saved inside the given model
/// @param...
```