ctransformers
Python bindings for Transformer models implemented in C/C++ using the GGML library.
Heads-up: the LangChain link in your README, https://python.langchain.com/v0.1/docs/ecosystem/integrations/ctransformers/, is broken.
Hi, I just want to run this simple code on GPU:

```python
from langchain_community.llms import CTransformers

llm = CTransformers(
    model="./airoboros-mistral2.2-7b.Q4_K_S.gguf",
    model_type="mistral",
    gpu_layers=32,
    verbose=True,
)
print(llm.invoke('AI is going to'))
```

As you...
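If the LangChain wrapper is the suspect, the underlying ctransformers call takes `gpu_layers` directly. A minimal sketch that only assembles the keyword arguments (so it runs without the multi-gigabyte model file); the path and layer count are placeholders from the question above, and `gpu_layers` only offloads anything when ctransformers was installed with GPU support (e.g. `pip install ctransformers[cuda]`):

```python
def ctransformers_gpu_kwargs(model_path: str, model_type: str, gpu_layers: int) -> dict:
    """Assemble kwargs for ctransformers' AutoModelForCausalLM.from_pretrained.

    With a CPU-only build of ctransformers, gpu_layers is silently ignored,
    which looks exactly like "it won't use my GPU".
    """
    return {
        "model_path_or_repo_id": model_path,
        "model_type": model_type,
        "gpu_layers": gpu_layers,
    }


kwargs = ctransformers_gpu_kwargs("./airoboros-mistral2.2-7b.Q4_K_S.gguf", "mistral", 32)

# Usage (needs the model file and a CUDA build of ctransformers):
# from ctransformers import AutoModelForCausalLM
# llm = AutoModelForCausalLM.from_pretrained(**kwargs)
# print(llm("AI is going to"))
```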
I am trying to dockerize an LLM (Mistral) on my laptop using ctransformers. For some reason I am seeing the strange error below, yet the same does not happen...
I am trying to run this example code:

```python
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained("marella/gpt-2-ggml")
```

The model is loaded and then the cell just keeps running without outputting anything. Does anyone know what the...
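The cell may not be stuck at all: on CPU, a call like `llm("...")` returns nothing until the entire completion has been generated, which can take a while. One way to see progress is to cap the length and stream tokens. A hedged sketch; the parameter names follow ctransformers' generation settings, and the model-loading lines are commented out since they download weights:

```python
# Generation settings that make a "silent" run visible: stop after a few
# tokens instead of a long default, and yield tokens as they are produced
# rather than waiting for the whole completion.
gen_kwargs = {
    "max_new_tokens": 20,  # stop early for a quick sanity check
    "stream": True,        # yield tokens one at a time
}

# Usage (downloads the model on first run):
# from ctransformers import AutoModelForCausalLM
# llm = AutoModelForCausalLM.from_pretrained("marella/gpt-2-ggml")
# for token in llm("AI is going to", **gen_kwargs):
#     print(token, end="", flush=True)
```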
Hi, is there something wrong with using it this way? I can't run the GGUF models I want to try.

```python
from ctransformers import AutoModelForCausalLM

model_name = "SanctumAI/Llama-3.2-3B-Instruct-GGUF"
gguf_file =...
```
```python
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "ibm-granite/granite-3.1-2b-instruct", model_type="gpt2"
)
print(llm("Tell me the difference between thinking in humans and in LLMs."))
```

The above doesn't work. How can I use it?
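A likely cause in both of the last two questions: ctransformers only loads GGML/GGUF checkpoints, so pointing it at a repo of safetensors weights (like `ibm-granite/granite-3.1-2b-instruct`) fails no matter what `model_type` is passed, and an architecture its GGML backend does not implement (Granite may be one) will not load even as GGUF. For repos that hold several GGUF files, `model_file` selects the specific quantization. A sketch that only builds the kwargs; the repo is taken from the question above and the file name is hypothetical, so check the repo's actual file list:

```python
def gguf_from_pretrained_kwargs(repo_id: str, gguf_file: str, model_type: str) -> dict:
    """Kwargs for ctransformers' from_pretrained when a Hugging Face repo
    contains multiple GGUF files: model_file picks one of them."""
    return {
        "model_path_or_repo_id": repo_id,
        "model_file": gguf_file,
        "model_type": model_type,
    }


kwargs = gguf_from_pretrained_kwargs(
    "SanctumAI/Llama-3.2-3B-Instruct-GGUF",  # repo from the question above
    "llama-3.2-3b-instruct.Q4_K_M.gguf",     # hypothetical file name
    "llama",
)

# Usage (downloads the file):
# from ctransformers import AutoModelForCausalLM
# llm = AutoModelForCausalLM.from_pretrained(**kwargs)
```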