
'ChatOllama' object has no attribute 'model_name'

Open BenjaminRosell opened this issue 1 year ago

I am trying to use Paper QA with some models locally.

When trying to create a Docs() object, I get an AttributeError saying that the ChatOllama object has no attribute model_name.

This is my code:

from paperqa import Docs
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_community.chat_models import ChatOllama

# have tried a few models
model = "llama3"
llm = ChatOllama(model=model, base_url="http://localhost:11434")
embeddings = OllamaEmbeddings(base_url="http://localhost:11434", model=model)

# Demonstrate Ollama and langchain are working
print(llm.invoke("Who was the first US President?"))

docs = Docs(llm="langchain", client=llm, embedding_client=embeddings)
docs.add("I Pencil.pdf")
answer = docs.query("Are pencils made of wood?")  # Docs exposes query(), not invoke()
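
A speculative workaround sketch: the error suggests paper-qa's langchain path reads model_name off the client, which ChatOpenAI defines but ChatOllama does not (it exposes model instead). Mirroring model under that name in a small subclass may get past the AttributeError; this is an assumption about paper-qa's internals, not a confirmed fix:

from langchain_community.chat_models import ChatOllama

class ChatOllamaNamed(ChatOllama):
    # hypothetical shim: expose the `model` field under the attribute
    # name paper-qa appears to look up
    @property
    def model_name(self) -> str:
        return self.model

llm = ChatOllamaNamed(model="llama3", base_url="http://localhost:11434")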

BenjaminRosell avatar May 13 '24 20:05 BenjaminRosell

Any solution? I have the same question.

yanwun avatar Jul 22 '24 03:07 yanwun

Hi, got it working with ollama with the following setup:

from paperqa import Docs, OpenAILLMModel
from openai import AsyncOpenAI

local_client = AsyncOpenAI(
    base_url='http://localhost:11434/v1',
    api_key='ollama',
)

docs = Docs(
    client=local_client,
    embedding="nomic-embed-text",
    llm_model=OpenAILLMModel(
        config=dict(
            model="llama3.1", temperature=0.1, frequency_penalty=1.5, max_tokens=512,
        )
    ),
    summary_llm_model=OpenAILLMModel(
        config=dict(
            model="llama3.1", temperature=0.1, frequency_penalty=1.5, max_tokens=512,
        )
    ),
)
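
For completeness, a usage sketch against this setup; the file name is illustrative, and the Answer attributes are assumed from the v4 API:

docs.add("my_paper.pdf")  # hypothetical local PDF
answer = docs.query("Are pencils made of wood?")
print(answer.formatted_answer)  # Answer object; formatted_answer assumed from v4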

Madnex avatar Aug 13 '24 09:08 Madnex

> Hi, got it working with ollama with the following setup: […]

Hi @Madnex, I'm using your code suggestion but I get this error:

This does not look like a text document: PersonInfoReport-14030127_143225.pdf. Pass disable_check to ignore this error.

from paperqa import Docs, OpenAILLMModel
from openai import AsyncOpenAI

local_client = AsyncOpenAI(
    base_url='http://localhost:11434/v1',
    api_key='ollama',
)

docs = Docs(
    client=local_client,
    embedding="nomic-embed-text",
    llm_model=OpenAILLMModel(
        config=dict(
            model="llama3.1", temperature=0.1, frequency_penalty=1.5, max_tokens=512,
        )
    ),
    summary_llm_model=OpenAILLMModel(
        config=dict(
            model="llama3.1", temperature=0.1, frequency_penalty=1.5, max_tokens=512,
        )
    ),
)

docs.add('PersonInfoReport-14030127_143225.pdf')
answer = docs.query("Where does he live?")

sctrueew avatar Aug 18 '24 07:08 sctrueew

> This does not look like a text document: PersonInfoReport-14030127_143225.pdf. Pass disable_check to ignore this error.

Did you try with a different PDF? It sounds like that one isn't a valid text PDF. Maybe it's a scanned document? Then you'll need to run some OCR on it first 🤔
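
As the error message itself suggests, you can also skip the text-document check; a sketch, assuming the disable_check keyword named in the message belongs to Docs.add (only sensible if the PDF really contains extractable text):

docs.add('PersonInfoReport-14030127_143225.pdf', disable_check=True)  # bypasses the heuristic, per the error message

For a genuinely scanned PDF, running a tool like ocrmypdf over it first to add a text layer is probably the better route.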

Madnex avatar Aug 18 '24 08:08 Madnex

Thank you @Madnex for your help here. In general, we have just released paper-qa version 5, which completely outsources all LLM management to BerriAI/litellm, so I am going to close this out.

If anyone still has an issue, please reopen a new issue using paper-qa>=5
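
For anyone migrating, a rough sketch of the equivalent Ollama setup on paper-qa>=5; the Settings fields follow the v5 README, while the exact model names and litellm routing config here are assumptions, not taken from this thread:

from paperqa import Settings, ask

# hypothetical litellm routing config: "ollama/..." models go to a local Ollama server
local_llm_config = dict(
    model_list=[
        dict(
            model_name="ollama/llama3.1",
            litellm_params=dict(
                model="ollama/llama3.1",
                api_base="http://localhost:11434",
            ),
        )
    ]
)

answer = ask(
    "Are pencils made of wood?",
    settings=Settings(
        llm="ollama/llama3.1",
        llm_config=local_llm_config,
        summary_llm="ollama/llama3.1",
        summary_llm_config=local_llm_config,
        embedding="ollama/nomic-embed-text",
    ),
)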

jamesbraza avatar Sep 11 '24 17:09 jamesbraza