
Can this simulator run with a local Ollama model?

Open baibingren opened this issue 1 year ago • 2 comments

I deployed gemma:2b locally with Ollama. Can I run this simulator with the local model?

baibingren avatar Mar 21 '24 07:03 baibingren

Thanks for using LimSim++. If you have deployed an LLM locally, you can refer to our ExampleLLMAgentCloseLoop.py code. In it, we built a GPT-4-based driver agent using LangChain; you can look up how LangChain calls the local gemma:2b model and then build your own driver agent following our code. I found the following reference links on the web for you:

  • https://blog.csdn.net/2301_79342058/article/details/136637557
  • https://ai.google.dev/gemma/docs/integrations/langchain

Of course, you can also define your own agent without LangChain; you just need to define the agent's inputs and outputs and it will work fine.
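To make the suggestion above concrete, here is a minimal sketch of such a local-LLM driver agent. It assumes the langchain-community package is installed and a local Ollama server is running with gemma:2b pulled; the `build_prompt`/`decide` functions and the observation fields are hypothetical illustrations, not part of LimSim++ itself.

```python
def build_prompt(observation: dict) -> str:
    """Turn a structured scene observation into a text prompt (the agent's input)."""
    return (
        "You are a driving agent. Current scene:\n"
        f"- ego speed: {observation['speed']} m/s\n"
        f"- lane index: {observation['lane']}\n"
        f"- gap to leading vehicle: {observation['gap']} m\n"
        "Reply with exactly one action from "
        "[ACCELERATE, DECELERATE, KEEP, CHANGE_LEFT, CHANGE_RIGHT]."
    )

def decide(observation: dict) -> str:
    """Query the local gemma:2b model through LangChain's Ollama wrapper (the agent's output)."""
    # Lazy import so the prompt logic above stays testable without Ollama installed.
    from langchain_community.llms import Ollama  # assumes langchain-community is installed
    llm = Ollama(model="gemma:2b")  # talks to the local Ollama server
    return llm.invoke(build_prompt(observation))

if __name__ == "__main__":
    # Requires a running `ollama serve` with gemma:2b pulled.
    print(decide({"speed": 12.5, "lane": 1, "gap": 30.0}))
```

The key point is the last sentence of the comment above: the simulator only cares about the agent's inputs (the scene observation) and outputs (the chosen action), so any backend that fills the `decide` role will work.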

fudaocheng avatar Mar 22 '24 02:03 fudaocheng

This issue has been marked stale due to no recent activity.

github-actions[bot] avatar May 22 '24 01:05 github-actions[bot]