
igpu

Open — ayttop opened this issue 6 months ago • 1 comment

```python
from llama_cpp import Llama

# Note: the path must be a raw string (r"...") — otherwise "\U" in
# "C:\Users" is an invalid escape sequence and a SyntaxError in Python 3.
llm = Llama(
    model_path=r"C:\Users\ArabTech\Desktop\4\phi-3.5-mini-instruct-q4_k_m.gguf",
    n_gpu_layers=-1,
    verbose=True,
)
output = llm(
    "Q: Who is Napoleon Bonaparte? A: ",
    max_tokens=1024,
    stop=["\n"],  # stop sequence: end generation at a newline
)
print(output)
```

I tried both `n_gpu_layers=-1` and `n_gpu_layers=32`.

Neither works on an Intel iGPU.

How can I offload the model onto an Intel iGPU?
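One likely cause, as a sketch rather than a confirmed diagnosis: the default `pip install llama-cpp-python` wheel is a CPU-only build, so `n_gpu_layers` has no effect regardless of its value. For Intel GPUs (including iGPUs), llama.cpp offers a SYCL backend that requires the Intel oneAPI toolkit to be installed and sourced; reinstalling the package with the corresponding CMake flag would look roughly like this:

```shell
# Assumes Intel oneAPI Base Toolkit is installed and its environment is
# active (on Linux: `source /opt/intel/oneapi/setvars.sh`).
# Rebuild llama-cpp-python from source with the SYCL backend enabled:
CMAKE_ARGS="-DGGML_SYCL=on -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx" \
  pip install --force-reinstall --no-cache-dir llama-cpp-python
```

With `verbose=True`, a GPU-enabled build should log the detected device and lines like `offloaded 32/33 layers to GPU` at load time; if the log shows no backend device, the build is still CPU-only. The Vulkan backend (`-DGGML_VULKAN=on`) is an alternative that also supports Intel iGPUs without the oneAPI toolchain.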

ayttop avatar Aug 26 '24 03:08 ayttop