Results: 21 comments of Moncef Arajdal

@thomashacker Yes, they're both set, and now I can see them in the frontend, but they don't work for some reason. How can I use Ollama for both generation and...
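
For reference, here's the minimal check I run to confirm the variables are actually visible to the process; the variable names below are just my assumption of what the setup expects, so substitute whatever your configuration really uses:

```python
import os

# Hypothetical variable names -- replace with the ones your Verba setup actually reads.
for var in ("OLLAMA_URL", "OLLAMA_MODEL"):
    print(f"{var} = {os.environ.get(var, '<not set>')}")
```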

@bakongi I've done the same as you, but I can't figure out where to select this custom embedder in Verba's frontend. Any suggestions, please?

@bakongi I installed Verba using `pip install goldenverba`, as shown in the documentation.

@bakongi I made the changes exactly in the files that you mentioned. Regarding "I think you should make changes in python shared library folder where verba is installed": can you please...
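
In case it helps anyone else, this is how I locate the installed copy of the package, so edits land in the code that actually gets imported (plain Python, nothing Verba-specific):

```python
import pathlib
import goldenverba

# Print the directory of the installed goldenverba package (e.g. inside site-packages).
print(pathlib.Path(goldenverba.__file__).parent)
```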

@bakongi One more thing: the new embedding model that I added doesn't seem to be downloaded from Hugging Face. My guess is that an API key should be configured, or does sentence_transformers...
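
From what I understand, sentence_transformers pulls public models from Hugging Face automatically, and a token (e.g. via the HF_TOKEN environment variable) should only matter for gated or private models. A minimal sketch of what I expected to happen, using a public model id as an example:

```python
from sentence_transformers import SentenceTransformer

# Public models are downloaded and cached automatically on first use;
# no API key is required unless the model is gated or private.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
embeddings = model.encode(["Hello, Verba!"])
print(embeddings.shape)
```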

I see. I've installed Verba with `pip install goldenverba` in a virtual environment created using python venv, and the venv is located in the project directory. Is this correct?
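
To make sure the interpreter I'm running is really the one from that venv, I check it like this:

```python
import sys

# If the venv is active, both paths should point inside the project's venv directory.
print("interpreter:", sys.executable)
print("environment:", sys.prefix)
```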

So what should I do in this case for the project to run correctly?

I'm also looking for this feature. I tried to hack the Llama generator to use the falcon-7b model from Hugging Face, but it doesn't seem to be working. If anyone has...
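
For context, the kind of change I attempted was swapping the model id for a plain transformers load along these lines (just a sketch of the idea, not Verba's actual generator code):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

# Sketch: load falcon-7b directly via transformers.
model_id = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision helps fit the 7B model in memory
    device_map="auto",           # requires the accelerate package
    trust_remote_code=True,
)
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(generator("Hello, Falcon!", max_new_tokens=20)[0]["generated_text"])
```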

I was getting the same error when using MiniLMEmbedder on my Mac, which doesn't have a CUDA GPU. So I tried @f0rmiga's solution and updated my code like this:...
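
The gist of the change (a sketch of the device-fallback idea, not my exact diff) was to stop assuming CUDA and pick whatever device is actually available:

```python
import torch

# Fall back gracefully when CUDA isn't available (e.g. on Apple Silicon or CPU-only Macs).
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

print(f"Using device: {device}")
# The embedder's model and input tensors are then moved with .to(device).
```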

@dreispt I have migrated the module and tested it locally, and it works fine. However, I'm facing a problem with the test cases. Normally the super()._login method takes login and password...