Saifeddine ALOUI
If you have time, can you add some sliders to select the temperature and so on? Just put them into a form with an update button that sends all of this to...
Perfect. PR accepted. Thanks. I'll have to add the backend wizardry to retrieve the discussions from the database and populate a list of entries.
If you want, there is already a REST API that supports ctransformers: https://github.com/ParisNeo/lollms It allows you to generate text using a distributed or centralized architecture with multiple service nodes and...
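Just to illustrate the idea, here is a minimal sketch of calling such a REST API from Python. The endpoint path and payload fields below are hypothetical placeholders, not the actual lollms routes, so adapt them to the real API:

```python
import requests

# Hypothetical endpoint and payload; adapt to the actual lollms REST API routes.
LOLLMS_URL = "http://localhost:9600/generate"  # placeholder host/path

payload = {
    "prompt": "Explain what a latent space is in one sentence.",
    "temperature": 0.7,   # sampling parameters sent alongside the prompt
    "max_tokens": 128,
}

response = requests.post(LOLLMS_URL, json=payload, timeout=120)
response.raise_for_status()
print(response.json())
```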
Sorry, I forgot about this issue. It was fixed long ago. LoLLMs now supports AWQ models without any problem. Thanks.
Hi. Sorry, I didn't see your message. I just upgraded everything to the new CUDA 12.1 and torch 2.1 and reinstalled transformers. Since then everything works fine.
The latent space of the encoder output.
I was thinking of building an animation that shows the model's encoder output moving inside a 2D or 3D projection of the latent space, with a background of text chunks...
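As a rough sketch of the idea (assuming the embeddings are already available as vectors), one could project them to 2D with PCA and plot them next to their text chunks; the data below is a placeholder:

```python
import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

# Assume `chunks` is a list of text snippets and `embeddings` is an
# (n_chunks x dim) array produced by an embed() call on the model.
chunks = ["chunk one", "chunk two", "chunk three"]
embeddings = np.random.rand(len(chunks), 384)  # placeholder vectors

# Project the high-dimensional latent vectors down to 2D for display.
points = PCA(n_components=2).fit_transform(embeddings)

fig, ax = plt.subplots()
ax.scatter(points[:, 0], points[:, 1])
for (x, y), text in zip(points, chunks):
    ax.annotate(text, (x, y), fontsize=8)
ax.set_title("2D projection of encoder outputs (sketch)")
plt.show()
```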
I thought about it, and maybe just exposing the embed function of llamacpp would already be useful for me.
I need to give it text and have it return the embeddings for that input text. Can you expose that in the model?
In the llama-cpp-python binding, they have an embed function on their model: https://abetlen.github.io/llama-cpp-python/ The ctransformers binding also has an embed method: https://github.com/marella/ctransformers I think they use llama.cpp in the background.
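For reference, this is roughly what those embed calls look like on the binding side (the model paths are placeholders):

```python
from llama_cpp import Llama
from ctransformers import AutoModelForCausalLM

# llama-cpp-python: load the model with embedding support enabled.
llm = Llama(model_path="./models/llama-2-7b.Q4_K_M.gguf", embedding=True)
vector = llm.embed("Hello, latent space!")  # returns the embedding for the input text
print(len(vector))

# ctransformers exposes a similar embed() method on its model object.
ct_llm = AutoModelForCausalLM.from_pretrained(
    "./models/llama-2-7b.Q4_K_M.gguf", model_type="llama"
)
ct_vector = ct_llm.embed("Hello, latent space!")
print(len(ct_vector))
```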