Carl-Robert Linnupuu

Results: 106 comments by Carl-Robert Linnupuu

Hmm, if you're connecting via the Ollama provider, then this parameter isn't configurable. I'm unable to reproduce this issue; I've tried multiple models, including Llama 3.1 8B with the most...

Okay, I think I found the issue. Streaming on Ollama's `POST /api/chat` endpoint seems to be broken. I will change the underlying API call to use the OpenAI-compatible `/v1/chat/completions` endpoint instead.
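For context on why the switch matters: the OpenAI-compatible `/v1/chat/completions` endpoint streams responses as server-sent events (`data: {...}` lines terminated by a `data: [DONE]` marker), whereas Ollama's native `/api/chat` streams newline-delimited JSON objects. A minimal sketch of consuming the SSE framing on the client side — the payloads and class name below are illustrative, not taken from the plugin:

```java
import java.util.ArrayList;
import java.util.List;

public class SseParseSketch {
    // Collect the JSON payloads from OpenAI-style SSE lines,
    // skipping blank keep-alive lines and the final [DONE] marker.
    static List<String> dataPayloads(List<String> rawLines) {
        List<String> payloads = new ArrayList<>();
        for (String line : rawLines) {
            if (!line.startsWith("data:")) continue; // ignore blanks and comments
            String payload = line.substring("data:".length()).trim();
            if (!"[DONE]".equals(payload)) payloads.add(payload);
        }
        return payloads;
    }

    public static void main(String[] args) {
        List<String> stream = List.of(
            "data: {\"choices\":[{\"delta\":{\"content\":\"Hel\"}}]}",
            "",
            "data: {\"choices\":[{\"delta\":{\"content\":\"lo\"}}]}",
            "data: [DONE]");
        System.out.println(dataPayloads(stream).size()); // prints 2: two content chunks
    }
}
```

Each retained payload is then parsed as JSON and its `choices[0].delta.content` fragment appended to the streamed response.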

Making the conversations project-scoped is probably the best move. There's just one thing that worries me - will it still be possible to search, filter, and attach conversations from other...

FYI: to make the conversations sync across all projects, you'll probably need to implement an application-level listener that handles the syncing - https://plugins.jetbrains.com/docs/intellij/plugin-listeners.html#defining-application-level-listeners
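Per the linked JetBrains docs, an application-level listener is declared in `plugin.xml` under `<applicationListeners>`. The topic and listener class below are hypothetical placeholders for illustration, not the plugin's actual classes:

```xml
<idea-plugin>
  <applicationListeners>
    <!-- Hypothetical names: a listener reacting to conversation changes app-wide -->
    <listener class="com.example.ConversationSyncListener"
              topic="com.example.ConversationTopicListener"/>
  </applicationListeners>
</idea-plugin>
```

The listener class implements the topic's interface and is notified regardless of which open project published the event, which is what makes cross-project syncing possible.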

> No. And this is opinionated, but I don't think it should be either. If a conversation is relevant to multiple projects, then it seems to me the correct solution...

Thanks for the detailed explanation and the migration options. I agree with your reasoning on the project-scoped approach. I like **option 1** for the migration. However, I think we should...

I noticed this behaviour after I upgraded the `llama.cpp` submodule. However, it doesn't seem to happen when running the extension locally. I haven't had time to dive into this yet.

This will be fixed in the next version along with some other improvements. https://github.com/user-attachments/assets/fdbcbdb6-8a8b-4042-8791-021f0186ed00

Hmm, I think this is also a question for the users in general. How would you like it to behave? Would you rather have chats per project or shared across...

No, you shouldn't. The initial fetch is correct, though. However, we shouldn't throw an exception on connection errors.
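One way to avoid surfacing connection failures is to wrap the fetch and fall back to an empty result instead of throwing. This is a sketch only — the helper name and shapes below are illustrative, not the plugin's actual API:

```java
import java.net.ConnectException;
import java.util.List;
import java.util.concurrent.Callable;

public class SafeFetchSketch {
    // Run a fetch, returning a fallback value instead of propagating errors.
    static <T> T fetchOrDefault(Callable<T> fetch, T fallback) {
        try {
            return fetch.call();
        } catch (Exception e) {
            // Log and swallow connection problems rather than throwing to the caller.
            System.err.println("Fetch failed: " + e.getMessage());
            return fallback;
        }
    }

    public static void main(String[] args) {
        // Simulate an unreachable server: the caller still gets a usable (empty) list.
        List<String> models = fetchOrDefault(
            () -> { throw new ConnectException("server unreachable"); },
            List.of());
        System.out.println(models.isEmpty()); // prints true: fell back gracefully
    }
}
```

In practice one might catch only `ConnectException`/`IOException` and still rethrow programming errors, but the principle is the same: a down server degrades to an empty list, not a thrown exception.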