LimSim
Can this simulator run with a local Ollama model?
I have deployed gemma:2b locally via Ollama. Can I run this simulator with a local model?
Thanks for using LimSim++! If you have deployed an LLM locally, you can refer to our ExampleLLMAgentCloseLoop.py code. In that file we build a GPT-4-based driver agent using LangChain; you can look up how LangChain calls the local gemma:2b model and then build your own driver agent following our code.
I found the following reference link on the web for you:
- https://blog.csdn.net/2301_79342058/article/details/136637557
- https://ai.google.dev/gemma/docs/integrations/langchain

Of course, you can also define your own agent without LangChain; you just need to define the agent's inputs and outputs and it will work fine.
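To illustrate the last point, here is a minimal sketch of a backend-agnostic driver agent. The `DriverAgent` class, the prompt text, and the action labels are all illustrative assumptions, not part of LimSim++'s actual interface; the real inputs and outputs should follow ExampleLLMAgentCloseLoop.py. Because the agent only expects a prompt-in/text-out callable, the same class can wrap a local Ollama model or any other LLM.

```python
class DriverAgent:
    """Wraps any text-in/text-out LLM callable so the backend is swappable.

    Hypothetical sketch: LimSim++'s real agent interface is defined in
    ExampleLLMAgentCloseLoop.py, not here.
    """

    def __init__(self, llm):
        # llm: any function mapping a prompt string to a response string
        self.llm = llm

    def decide(self, scenario_description: str) -> str:
        # Illustrative prompt and action set; adapt to the simulator's needs.
        prompt = (
            "You are a driving agent. Given the scenario below, answer with "
            "one action: ACCELERATE, DECELERATE, KEEP, LANE_LEFT, or "
            "LANE_RIGHT.\n\n" + scenario_description
        )
        return self.llm(prompt)


# To use a local gemma:2b via Ollama with LangChain (assumes the
# langchain-community package is installed and an Ollama server is running):
#   from langchain_community.llms import Ollama
#   agent = DriverAgent(Ollama(model="gemma:2b").invoke)

# For demonstration without a running model, a stub backend:
def stub_llm(prompt: str) -> str:
    return "KEEP"


agent = DriverAgent(stub_llm)
print(agent.decide("Highway, clear traffic ahead."))  # -> KEEP
```

The point of the wrapper is that swapping GPT-4 for a local model only changes the callable you pass in, not the agent logic itself.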
This issue was marked stale due to no recent activity.