punitchauhan771
> @punitchauhan771, Langchain currently does not support ollama as an embedding provider. The reason, probably, is that ollama currently [does not have an OpenAI-compatible (/v1) embedding endpoint.](https://github.com/ollama/ollama/issues/2416) Hi,...
@mindwellsolutions by default the agent uses gpt-4; if you want to use your Gemini model, you can provide the Gemini LLM inside the agent definition, e.g.:

```
from langchain_google_genai import...
```
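In case it helps, here is a minimal sketch of what I mean, assuming a crewAI `Agent`; the role/goal/backstory values are just placeholders:

```
from crewai import Agent
from langchain_google_genai import ChatGoogleGenerativeAI

# Gemini chat model (needs GOOGLE_API_KEY set in the environment)
gemini_llm = ChatGoogleGenerativeAI(model="gemini-pro", temperature=0.1)

# passing llm explicitly stops the agent from falling back to the default gpt-4
researcher = Agent(
    role="Researcher",
    goal="Research the requested topic",
    backstory="An analyst who looks up current information.",
    llm=gemini_llm,
    verbose=True,
)
```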
Hi @mindwellsolutions, I don't think there is a way to throttle the agent activity. However, if you want to know how many requests your agent made, you can use...
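Roughly something like this (a sketch, assuming your crew object is called `crew`; depending on your LangChain version the import may live under `langchain_community.callbacks` instead):

```
from langchain.callbacks import get_openai_callback

with get_openai_callback() as cb:
    result = crew.kickoff()  # `crew` is whatever Crew you defined in your setup

print(cb)                      # summary: Tokens Used, Successful Requests, Total Cost
print(cb.successful_requests)  # number of LLM requests the agent made
```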
Hi @mindwellsolutions, the solution I provided works for Gemini as well, though it doesn't count tokens 🙂.
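If you do need token numbers for Gemini, one option is to count them on the model itself; this is just a sketch, assuming `get_num_tokens` is available on `ChatGoogleGenerativeAI`, and I haven't checked it against billing:

```
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-pro")

prompt = "What were the biggest announcements at CES 2024?"
response = llm.invoke(prompt)

# the OpenAI callback reports 0 tokens for non-OpenAI models,
# so count them on the Gemini side instead
print("prompt tokens:", llm.get_num_tokens(prompt))
print("response tokens:", llm.get_num_tokens(response.content))
```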
@mindwellsolutions, if possible can you provide me the code snippet? Because I just ran the code, and this is the response I got when I used Gemini:

```
Tokens Used:...
```
> @edisonzf2020 Thanks for your comment. I got time to test out the Gemini API in crewAI further over the weekend, and as you mentioned, it seems to be having issues...
Hi @mindwellsolutions, Thank you for providing a detailed issue. I tried fetching the latest CES 2024 info, and this is the response that I got, e.g. when I used a...
Hi @mindwellsolutions, Sure, here is the code:

```
# importing necessary modules
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.agents import load_tools
from langchain.utilities import SerpAPIWrapper
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from...
```
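The full snippet is longer, but a condensed sketch along the same lines looks roughly like this (I've used `initialize_agent` here for brevity instead of the hand-built prompt, and a SerpAPI key is assumed):

```
import os
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.agents import AgentType, initialize_agent, load_tools

os.environ.setdefault("SERPAPI_API_KEY", "<your-serpapi-key>")
os.environ.setdefault("GOOGLE_API_KEY", "<your-google-api-key>")

llm = ChatGoogleGenerativeAI(model="gemini-pro", temperature=0.1)

# "serpapi" gives the agent a web-search tool backed by SerpAPIWrapper
tools = load_tools(["serpapi"], llm=llm)

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)

print(agent.run("What were the major announcements at CES 2024?"))
```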
Hi @FeelsDaumenMan, you can use @janda-datascience's code 🙂.
@mindwellsolutions, I tried this with gpt-3.5-turbo as well; it has the same problem: the agent hallucinates and thinks it cannot access the tools or search. I guess lower-level models...