
Python bindings for Transformer models implemented in C/C++ using the GGML library.

106 ctransformers issues

Are there any instructions for compiling the DLLs from scratch? I am thinking of just running through the GitHub workflow line by line on the command line, but I want...

Any ideas as to why the first generation for a model instance is good, but if I try to run that same instance with a new prompt it either returns...
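
A minimal sketch of re-prompting a single instance, assuming the standard ctransformers call signature; the model repo and file names below are placeholders. Recent versions document a `reset` option (default `True`) that clears the cached context between calls, which is the first thing to check when a second generation degrades:

```python
from ctransformers import AutoModelForCausalLM

# Placeholder repo/file names; substitute your own GGUF model.
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Llama-2-7B-Chat-GGUF",
    model_file="llama-2-7b-chat.Q4_K_M.gguf",
    model_type="llama",
)

# First generation.
print(llm("Explain GGML in one sentence.", max_new_tokens=64))

# Second prompt on the same instance: reset=True clears leftover state
# from the first call so it does not bleed into the new generation.
print(llm("Now explain GGUF in one sentence.", max_new_tokens=64, reset=True))
```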

I am trying to run a Llama 2 GGUF model on Windows 11 version 22H2. I have Python 3.11 installed on my local machine. Below is the code: ``` import gradio as gr...
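
For reference, a minimal sketch of wiring ctransformers into Gradio on this kind of setup; the model repo and file names are placeholders, not the poster's actual code:

```python
import gradio as gr
from ctransformers import AutoModelForCausalLM

# Placeholder repo/file names; point these at the GGUF model you downloaded.
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Llama-2-7B-Chat-GGUF",
    model_file="llama-2-7b-chat.Q4_K_M.gguf",
    model_type="llama",
)

def generate(prompt: str) -> str:
    # Simple single-turn completion; tune max_new_tokens to taste.
    return llm(prompt, max_new_tokens=256)

gr.Interface(fn=generate, inputs="text", outputs="text").launch()
```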

FileNotFoundError: Could not find module 'C:\Users\IR\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\ctransformers\lib\avx2\ctransformers.dll' (or one of its dependencies). Try using the full path with constructor syntax. What are the dependencies of ctransformers.dll? Or, should I need to...
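
This error often means either the CPU lacks AVX2 or a runtime dependency of the DLL (typically the Microsoft Visual C++ redistributable) is missing. As a diagnostic, `from_pretrained` accepts a `lib` argument to select a different prebuilt binary; a sketch, with a hypothetical local model path:

```python
from ctransformers import AutoModelForCausalLM

# If the default avx2 build fails to load, the "avx" and "basic" builds
# are the documented fallbacks for CPUs without AVX2 support.
llm = AutoModelForCausalLM.from_pretrained(
    "path/to/model.gguf",  # hypothetical local path
    model_type="llama",
    lib="basic",
)
```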

Hi, I couldn't install the library with HIPBLAS because of missing CUDA stuff. Turns out there was an extra option for compiling with CUBLAS when CT_HIPBLAS is defined. Also, should...

Hello, community! Recently I have witnessed the rise of `Llama.cpp` & `ctransformers` and how they have let anyone run LLMs on their personal computer. I am having some...

Allow saving to a defined cache folder (similar to huggingface)
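
Until something like that lands, one workaround is to download the model file yourself with `huggingface_hub` and load it from a local path; a sketch with placeholder repo/file names:

```python
from huggingface_hub import hf_hub_download
from ctransformers import AutoModelForCausalLM

# Placeholder repo/file names; local_dir redirects the download away
# from the default huggingface cache into a folder you choose.
path = hf_hub_download(
    repo_id="TheBloke/Llama-2-7B-Chat-GGUF",
    filename="llama-2-7b-chat.Q4_K_M.gguf",
    local_dir="./models",
)
llm = AutoModelForCausalLM.from_pretrained(path, model_type="llama")
```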

C:\Windows\System32>interpreter --local Open Interpreter will use Code Llama for local execution. Use your arrow keys to set up the model. [?] Parameter count (smaller is faster, larger is more capable):...

!pip install ctransformers ctransformers[gptq] ERROR: Could not find a version that satisfies the requirement exllama==0.1.0; extra == "gptq" (from ctransformers[gptq]) (from versions: none) ERROR: No matching distribution found for exllama==0.1.0;...

Below is my simple code for text generation. The problem is that when the prompt is a little bigger (as is the case here) the generation goes wild and just keeps repeating...
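
Runaway repetition on longer prompts is commonly a context-length or sampling issue. A sketch of the usual knobs, assuming a hypothetical local model path; `context_length`, `repetition_penalty`, `last_n_tokens`, and `stop` are all documented ctransformers parameters:

```python
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "path/to/model.gguf",  # hypothetical local path
    model_type="llama",
    context_length=2048,   # make sure the longer prompt fits in the context window
)

long_prompt = "..."  # the longer prompt that triggers the repetition

# Raising repetition_penalty and widening last_n_tokens are the usual
# first levers against looping output; a stop sequence bounds the run.
text = llm(
    long_prompt,
    max_new_tokens=512,
    repetition_penalty=1.2,
    last_n_tokens=128,
    stop=["</s>"],
)
print(text)
```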