
langchain integration

luca-saggese opened this issue 1 year ago · 2 comments

Hello, I'm trying to use the LangChain integration but I cannot figure out how to use it. I'm following some examples from LangChain:

import { LLM } from "llama-node";
import { LLamaRS } from "llama-node/dist/llm/llama-rs.js";
import readline from "readline";
import fs from "fs";
import path from "path";
import { SerpAPI } from 'langchain/tools';
import {initializeAgentExecutorWithOptions} from 'langchain/agents';
import { Calculator } from 'langchain/tools/calculator';
import { LLamaEmbeddings } from "llama-node/dist/extensions/langchain.js";

const SERPAPI_KEY = '';

const model = path.resolve(process.cwd(), "./ggml-vic7b-q4_1.bin"); 
const llama = new LLM(LLamaRS);
llama.load({ path: model });

const tools =[
    new SerpAPI(SERPAPI_KEY,{
        hl:'en',
        gl:'us'
    }),
    new Calculator(),
]

const executor = await initializeAgentExecutorWithOptions(tools, llama, {
    agentType: 'chat-zero-shot-react-description'
});
console.log('initialized')
const ret = await executor.call({
    input: "Who is Olivia Wilde's boyfrient? What is his age raised to the 0.23 power?"
});

console.log('ret:', ret.output);

but I got:

TypeError: this.llm.generatePrompt is not a function
    at LLMChain._call (file:///Users/lvx/dalai/node_modules/langchain/dist/chains/llm_chain.js:80:48)
    at async LLMChain.call (file:///Users/lvx/dalai/node_modules/langchain/dist/chains/base.js:65:28)
    at async LLMChain.predict (file:///Users/lvx/dalai/node_modules/langchain/dist/chains/llm_chain.js:98:24)
    at async ChatAgent._plan (file:///Users/lvx/dalai/node_modules/langchain/dist/agents/agent.js:197:24)
    at async AgentExecutor._call (file:///Users/lvx/dalai/node_modules/langchain/dist/agents/executor.js:82:28)
    at async AgentExecutor.call (file:///Users/lvx/dalai/node_modules/langchain/dist/chains/base.js:65:28)
    at async file:///Users/lvx/dalai/agent.js:35:13

I understand this happens because the llama-node LLM does not implement this function. Is there a method I can call to make it compatible, or do I have to create a translation (adapter) class myself?

luca-saggese avatar May 11 '23 16:05 luca-saggese

For now, you have to manually adapt the generate function to LangChain yourself.
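
For anyone hitting the same error: since LangChain only drives objects that implement its LLM interface, one way to do that manual adaptation is a small wrapper class. The sketch below is untested and makes assumptions about both APIs: it assumes LangChain's custom-LLM base class is exported from "langchain/llms/base" (implement `_call` and `_llmType`), and that llama-node's `createCompletion(params, onToken)` streams tokens to a callback. The completion parameter names (`numPredict`, `temp`, etc.) are illustrative and may differ between llama-node versions and backends.

import { LLM } from "llama-node";
import { LLamaRS } from "llama-node/dist/llm/llama-rs.js";
// LangChain's base class for custom LLMs (aliased to avoid clashing with llama-node's LLM).
import { LLM as BaseLLM } from "langchain/llms/base";
import path from "path";

class LlamaNodeLLM extends BaseLLM {
    constructor(llama, fields) {
        super(fields ?? {});
        this.llama = llama; // a loaded llama-node LLM instance
    }

    _llmType() {
        return "llama-node";
    }

    // LangChain passes the fully rendered prompt; return the completion text.
    // Stop sequences are ignored in this sketch.
    async _call(prompt, _stop) {
        let output = "";
        await this.llama.createCompletion(
            {
                prompt,
                numPredict: 512, // illustrative defaults, tune for your model
                temp: 0.2,
                topP: 0.95,
                topK: 40,
                repeatPenalty: 1.1,
            },
            (response) => {
                // Accumulate streamed tokens; depending on the version you may
                // need to strip a trailing end-of-generation marker here.
                output += response.token;
            }
        );
        return output;
    }
}

// Usage: wrap the loaded model and hand the wrapper to LangChain instead of `llama`.
const model = path.resolve(process.cwd(), "./ggml-vic7b-q4_1.bin");
const llama = new LLM(LLamaRS);
llama.load({ path: model });
const lcLlama = new LlamaNodeLLM(llama);
// const executor = await initializeAgentExecutorWithOptions(tools, lcLlama, { ... });

Note that a local 7B chat model may still struggle with the chat-zero-shot-react-description agent format, so the agent may fail to parse tool calls even once the type error is gone.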

hlhr202 avatar May 12 '23 10:05 hlhr202

I'm not as familiar with gpt4all, but I noticed they are adding LangChain support; maybe there is some overlap: https://github.com/hwchase17/langchainjs/pull/1204

matthoffner avatar May 19 '23 19:05 matthoffner