Results 18 comments of PlebeiusG

This is my intent as well - will report back with my progress. I currently have it running successfully with Ollama on an M2 MacBook.

1. I used the provided example (https://github.com/langchain-ai/langgraph/blob/main/examples/agent_executor/base.ipynb)
2. I imported the Ollama LLM generator: `from langchain_community.llms.ollama import Ollama`
3. I changed the code from `llm = ChatOpenAI(model="gpt-3.5-turbo-1106", streaming=True)` to `llm...

After digging deeper, I found a wrapper for Ollama that should add function-calling capability: `from langchain_experimental.llms.ollama_functions import OllamaFunctions`. Now, setting up my LLM as `llm = OllamaFunctions()`, I get...
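For context on what that wrapper consumes: function-calling layers like `OllamaFunctions` take OpenAI-style function schemas. A minimal sketch of such a schema follows - the function name and parameters here are hypothetical illustrations, not taken from this thread:

```python
# Hypothetical OpenAI-style function schema; the name and fields below are
# illustrative examples only, not from the original discussion.
get_weather_schema = {
    "name": "get_current_weather",
    "description": "Return the current weather for a given city",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. 'Paris'"},
        },
        "required": ["city"],
    },
}
```

You would then attach a schema like this to the wrapper, e.g. with something like `llm.bind(functions=[get_weather_schema])`, before invoking the model.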

Not sure what you mean by "having to import any LangChain module." You're talking about LangGraph - right? So you'd have to import that much, but it doesn't force you...

Still reporting the same error as above. Great to see this repo updated, though!

Another use case is using LangChain/LangGraph. Invoking a runnable often creates new threads under the hood, which causes a ScriptRunContext error when using StreamlitCallbackHandler. As LangChain is, what I imagine...
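The root cause can be sketched with plain stdlib threading (no Streamlit or LangChain required): state kept in thread-local storage - which is how Streamlit tracks its ScriptRunContext - does not automatically follow into threads a library spawns, which is why Streamlit provides `add_script_run_ctx` to attach the context to a thread manually.

```python
import threading

# Thread-local storage set up on the "main" (script) thread, standing in for
# something like Streamlit's per-thread ScriptRunContext.
ctx = threading.local()
ctx.value = "script-run-context"

seen = {}

def worker():
    # A freshly spawned thread gets its own empty thread-local storage,
    # so the attribute set on the main thread is simply not there -
    # analogous to the "missing ScriptRunContext" warning.
    seen["in_thread"] = getattr(ctx, "value", None)

t = threading.Thread(target=worker)
t.start()
t.join()

print(seen["in_thread"])  # None: the context did not follow us into the thread
```

Because LangChain/LangGraph spawn such threads internally, the callback handler running inside them can't see the script's context unless it is explicitly re-attached.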

I like ChatGPT's UI. I'm sure it could be improved, but there are no suggestions listed here.

I wonder if we could focus on creating a web portal to interact with the chatbot, instead of a Qt application.

I can't tell if the CLI version is good to go - I'm getting an error when I run it:

```
Traceback (most recent call last):
  File "/Users/micah/Downloads/A.I./gpt4all/gpt4all-bindings/cli/app.py", line 118, in app()...
```

bump. Also running into this - doesn't seem compatible with langgraph.

```python
# ...
graph_config = RunnableConfig()
st_callback = StreamlitCallbackHandler(answer_container)
graph_config['callbacks'] = [st_callback]
graph_config['hyperparameters'] = st.session_state.graph_hyperparameters
graph_config['metadata'] = {
    "conversation_id":...
```