Devis Lucato
> Thank you @dluc for the answer. Now I'm experimenting some weird situations in which a question that had a similarity of 0.79 with `text-embedding-ada-002`, now has only 0.33 with...
hi @0x7c13

> We could theoretically convert the existing code to use Parallel.ForEach instead to drastically improve the embedding speed since the embedding for partitionFiles are not logically coupled.

there's...
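A minimal sketch of the parallelization idea being discussed. All names here are illustrative stand-ins (not from the Kernel Memory codebase), and the embedding call is simulated; a real embedding service typically rate-limits requests, which is why the concurrency is capped:

```csharp
// Sketch only: running independent embedding calls in parallel with
// Parallel.ForEachAsync. GetEmbeddingAsync is a stand-in for the real
// embedding generator; partition file names are illustrative.
using System.Collections.Concurrent;

var partitionFiles = new[] { "partition-0.txt", "partition-1.txt", "partition-2.txt" };
var embeddings = new ConcurrentDictionary<string, float[]>();

// Cap concurrency so we don't flood the embedding service with requests.
var options = new ParallelOptions { MaxDegreeOfParallelism = 4 };

await Parallel.ForEachAsync(partitionFiles, options, async (file, ct) =>
{
    embeddings[file] = await GetEmbeddingAsync(file, ct);
});

Console.WriteLine($"Generated {embeddings.Count} embeddings");

// Stand-in for a real call to the embedding service.
static async Task<float[]> GetEmbeddingAsync(string file, CancellationToken ct)
{
    await Task.Delay(10, ct);          // simulate network latency
    return new float[] { file.Length }; // dummy vector
}
```

Whether this is safe in practice depends on the service's throttling limits, which is likely the caveat the truncated reply goes on to raise.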
hi @doggy8088 you should be able to reuse [SK Gemini connector](https://github.com/microsoft/semantic-kernel/tree/main/dotnet/src/Connectors/Connectors.Google/Services), via `WithSemanticKernelTextEmbeddingGenerationService`
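A configuration sketch of that wiring. The exact package, class, and parameter names vary between Semantic Kernel / Kernel Memory versions, so treat every identifier here as an assumption to verify against the linked connector source:

```csharp
// Configuration sketch only: class and method names may differ across
// SK / Kernel Memory versions -- check the linked connector source.
using Microsoft.KernelMemory;
using Microsoft.SemanticKernel.Connectors.Google;

var geminiEmbeddings = new GoogleAITextEmbeddingGenerationService(
    modelId: "text-embedding-004",    // illustrative model id
    apiKey: "<your-gemini-api-key>");

var memory = new KernelMemoryBuilder()
    .WithSemanticKernelTextEmbeddingGenerationService(
        geminiEmbeddings,
        new SemanticKernelConfig())
    .Build<MemoryServerless>();
```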
@vvdb-architecture I've noticed something similar but could not reproduce it; I'd suggest reporting it to the LLamaSharp project. They will probably ask for logs
If you add a call to `NativeLibraryConfig.Instance.WithLogs()` you should see logs about the backend selection. For instance, if you run the code here https://github.com/microsoft/kernel-memory/tree/llamatest the console should contain some useful...
Considering that the service is also packaged as a Docker image, even if we add a comment, the Docker image will have all the LLamaSharp packages, and the issue will...
The runtime detection was available last year too, but it never worked in my tests: the runtime always used the CPU. It might be about the way assemblies are loaded and persist...
> Tables in markdown need to be chunked in a single embedding, it doesn't make sense to split the content strictly based on token limit.

hi @Licantrop0, it's not that...
Thanks for the details. The existing chunker is a sample with its limitations, and we welcome improvements. The behavior with markdown files is a bare-minimum implementation, and there are...
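One possible improvement along those lines, sketched below: a chunker that treats a markdown table as an indivisible block so it always lands in a single chunk. This is not the Kernel Memory chunker; token counting is approximated by word count, where a real implementation would use the pipeline's tokenizer:

```csharp
// Sketch: split markdown into chunks by an approximate token budget,
// but never split a table (consecutive "|..." lines) across chunks.
using System.Text;

static List<string> Chunk(string markdown, int maxTokens)
{
    // First, group consecutive table lines into single indivisible blocks.
    var blocks = new List<string>();
    var table = new StringBuilder();
    foreach (var line in markdown.Split('\n'))
    {
        if (line.TrimStart().StartsWith("|")) { table.AppendLine(line); continue; }
        if (table.Length > 0) { blocks.Add(table.ToString()); table.Clear(); }
        blocks.Add(line + "\n");
    }
    if (table.Length > 0) blocks.Add(table.ToString());

    // Then pack blocks into chunks, starting a new chunk on overflow.
    var chunks = new List<string>();
    var current = new StringBuilder();
    int currentTokens = 0;
    foreach (var block in blocks)
    {
        int tokens = block.Split((char[]?)null, StringSplitOptions.RemoveEmptyEntries).Length;
        if (currentTokens > 0 && currentTokens + tokens > maxTokens)
        {
            chunks.Add(current.ToString());
            current.Clear();
            currentTokens = 0;
        }
        current.Append(block);
        currentTokens += tokens;
    }
    if (current.Length > 0) chunks.Add(current.ToString());
    return chunks;
}

var doc = "intro text here\n| a | b |\n|---|---|\n| 1 | 2 |\nmore text after";
var result = Chunk(doc, maxTokens: 6);
for (int i = 0; i < result.Count; i++)
    Console.WriteLine($"--- chunk {i} ---\n{result[i]}");
```

With this budget the table exceeds the chunk size on its own, so it becomes its own chunk rather than being split mid-row.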
We welcome PRs in that direction; in the meantime, you could fetch the web page with custom code and upload the corresponding HTML file.
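A sketch of that workaround: fetch the page yourself, save the HTML to disk, then import the file. The final import call is illustrative and assumes a configured `IKernelMemory` instance with `ImportDocumentAsync`; adjust it to your setup:

```csharp
// Sketch: fetch a web page with custom code, save the HTML,
// then upload the resulting file to Kernel Memory.
using System.Net.Http;

using var http = new HttpClient();
string html = await http.GetStringAsync("https://example.com/");

string path = Path.Combine(Path.GetTempPath(), "page.html");
await File.WriteAllTextAsync(path, html);
Console.WriteLine($"Saved {html.Length} bytes to {path}");

// Then upload the saved file, e.g. (assumes a configured IKernelMemory "memory"):
// await memory.ImportDocumentAsync(path, documentId: "example-page");
```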