langchain-prefect
Tools for using Langchain with Prefect
Bumps [JamesIves/github-pages-deploy-action](https://github.com/jamesives/github-pages-deploy-action) from 4.4.1 to 4.4.3.

Release notes sourced from JamesIves/github-pages-deploy-action's releases.

v4.4.3 — What's Changed
- Bump @types/node from 18.8.0 to 18.8.4 by @dependabot in JamesIves/github-pages-deploy-action#1239
- Bump webfactory/ssh-agent from 0.5.4 to...
I'm just wondering whether this is possible, or whether it would need some major changes.
I tried to use a local LLM as follows:

```python
import asyncio

from langchain_ollama import OllamaLLM
from langchain_prefect.plugins import RecordLLMCalls

llm = OllamaLLM(model="qwen2.5:latest")


async def record_call_using_LLM_agenerate():
    """async func"""
    await llm.agenerate(...
```
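For illustration, the wrapping pattern being attempted above can be sketched with pure-Python stand-ins: `record_llm_calls` and `agenerate` below are hypothetical stubs standing in for `RecordLLMCalls` and `OllamaLLM.agenerate` (the real code needs a running Ollama server and a Prefect backend, which this sketch does not assume):

```python
import asyncio
from contextlib import contextmanager

# Log of call events, standing in for what Prefect would record.
calls = []


@contextmanager
def record_llm_calls():
    """Hypothetical stand-in for langchain_prefect.plugins.RecordLLMCalls:
    a context manager that records LLM calls made inside its scope."""
    calls.append("call started")
    try:
        yield
    finally:
        calls.append("call finished")


async def agenerate(prompts):
    """Hypothetical stand-in for OllamaLLM.agenerate: echoes each prompt."""
    await asyncio.sleep(0)  # yield control, as a real async call would
    return [f"echo: {p}" for p in prompts]


async def record_call_using_llm_agenerate():
    """Run the async generate call inside the recording context."""
    with record_llm_calls():
        return await agenerate(["Tell me a joke."])


result = asyncio.run(record_call_using_llm_agenerate())
```

The key point of the pattern is that the `await`ed call happens inside the context manager's scope, so entry and exit bracket the LLM call even though the call itself is asynchronous.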