
typescript: Could not find any implementations for build variant: default

Open vuthanhtrung2010 opened this issue 9 months ago • 1 comment

Bug Report

(screenshot attached: IMG_20240509_125237)

Example Code

import { createCompletion, DEFAULT_DIRECTORY, DEFAULT_LIBRARIES_DIRECTORY, loadModel } from './src/gpt4all.js';
(async (): Promise<void> => {
    const model = await loadModel('mistral-7b-instruct-v0.1.Q4_0.gguf', { verbose: true, device: 'cpu', nCtx: 2048 });
    const chat = await model.createChatSession({
        // any completion options set here will be used as default for all completions in this chat session
        temperature: 0.8,
        // a custom systemPrompt can be set here. note that the template depends on the model.
        // if unset, the systemPrompt that comes with the model will be used.
        systemPrompt: "### System:\nYou are an advanced mathematician.\n\n",
    });
    // create a completion using a string as input
    const res1 = await createCompletion(chat, "What is 1 + 1?");
    console.debug(res1.choices[0].message);
    // multiple messages can be input to the conversation at once.
    // note that if the last message is not of role 'user', an empty message will be returned.
    await createCompletion(chat, [
        {
            role: "user",
            content: "What is 2 + 2?",
        },
        {
            role: "assistant",
            content: "It's 5.",
        },
    ]);
    const res3 = await createCompletion(chat, "Could you recalculate that?");
    console.debug(res3.choices[0].message);
    model.dispose();
})();

Steps to Reproduce

1. Git clone the repo
2. npm i and run the build script in the typescript folder
3. Run my index.ts above
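In shell form, the reproduction roughly looks like this (the repo URL comes from the thread; the exact build-script name and TypeScript runner are assumptions, since the report only says "run build script"):

```shell
# Sketch of the reported reproduction steps (script names assumed)
git clone https://github.com/nomic-ai/gpt4all.git
cd gpt4all/gpt4all-bindings/typescript
npm i
npm run build        # the "build script in typescript folder" from step 2
npx tsx index.ts     # run the example above; runner is an assumption
```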

Expected Behavior

It should build and run without errors.

Your Environment

  • Bindings version (e.g. "Version" from pip show gpt4all): latest (freshly recloned multiple times; not sure how to check the version with the JS bindings)
  • Operating System: Ubuntu 22.04
  • Chat model used (if applicable): Mistral 7b

vuthanhtrung2010 avatar May 09 '24 05:05 vuthanhtrung2010

Hello!

If you want to work on the bindings, make sure you follow the instructions here: https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/typescript#build-from-source

  • Check that you ran git submodule update --init --recursive and that gpt4all-backend/llama.cpp-mainline exists.
  • Then run npm run build:backend to build the backend and create the necessary runtime artifacts.
  • Finally, run node scripts/prebuild.js to build the bindings.
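Put together, the three steps above look roughly like this (run from the repo root; the cd into the typescript folder before the npm commands is an assumption, since the comment doesn't state the working directory):

```shell
# Build the gpt4all TypeScript bindings from source
git submodule update --init --recursive
ls gpt4all-backend/llama.cpp-mainline    # should exist after the submodule update
cd gpt4all-bindings/typescript           # assumed working directory for the npm steps
npm run build:backend                    # builds the backend and its runtime artifacts
node scripts/prebuild.js                 # builds the bindings themselves
```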

Then it should run. Maybe we can simplify the build sometime.

iimez avatar May 09 '24 17:05 iimez

Creating LLModel: {
  llmOptions: {
    model_name: 'orca-mini-3b-gguf2-q4_0.gguf',
    model_path: '/Users/mohaurammala/.cache/gpt4all',
    library_path: '/Users/mohaurammala/Documents/Dev-Work/DevOps/Gesh1/gpt4all/gpt4all-bindings/typescript/src',
    device: 'gpu',
    nCtx: 2048,
    ngl: 100
  },
  modelConfig: {
    systemPrompt: '### System:\n' + 'You are an AI assistant that follows instruction extremely well. Help as much as you can.',
    promptTemplate: '### User:\n%1\n\n### Response:\n',
    order: 'n',
    md5sum: '0e769317b90ac30d6e09486d61fefa26',
    name: 'Mini Orca (Small)',
    filename: 'orca-mini-3b-gguf2-q4_0.gguf',
    filesize: '1979946720',
    requires: '2.5.0',
    ramrequired: '4',
    parameters: '3 billion',
    quant: 'q4_0',
    type: 'OpenLLaMa',
    description: 'Small version of new model with novel dataset
      • Very fast responses
      • Instruction based
      • Explain tuned datasets
      • Orca Research Paper dataset construction approaches
      • Cannot be used commercially',
    url: 'https://gpt4all.io/models/gguf/orca-mini-3b-gguf2-q4_0.gguf',
    path: '/Users/mohaurammala/.cache/gpt4all/orca-mini-3b-gguf2-q4_0.gguf'
  }
}
/Users/mohaurammala/Documents/Dev-Work/DevOps/Gesh1/gpt4all/gpt4all-bindings/typescript/src/gpt4all.js:77
const llmodel = new LLModel(llmOptions);
                ^

Error: Could not find any implementations for build variant: default
    at loadModel (/Users/mohaurammala/Documents/Dev-Work/DevOps/Gesh1/gpt4all/gpt4all-bindings/typescript/src/gpt4all.js:77:21)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async file:///Users/mohaurammala/Documents/Dev-Work/DevOps/Gesh1/gpt4all/gpt4all-bindings/typescript/src/index.mjs:3:15
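This error is raised when loadModel cannot find the compiled native backend libraries under library_path. A quick way to narrow it down is to list which expected files are absent; the helper below is hypothetical (not part of the gpt4all API, and the artifact names vary by platform and version):

```typescript
// Hypothetical diagnostic helper: report which expected native artifacts
// are missing from the bindings' library_path. Not part of gpt4all itself.
import { existsSync } from "node:fs";
import { join } from "node:path";

function findMissingArtifacts(libraryPath: string, expectedNames: string[]): string[] {
  // Keep only the names that do NOT exist under libraryPath.
  return expectedNames.filter((name) => !existsSync(join(libraryPath, name)));
}
```

For example, you might call it with the library_path printed in the log above and the artifact filenames you expect your build to have produced; any names it returns point at a build step that did not run.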

I followed the instructions at https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/typescript#build-from-source and got the same error.

mohaurammala avatar May 14 '24 19:05 mohaurammala

Fixed. Thanks!

vuthanhtrung2010 avatar May 15 '24 14:05 vuthanhtrung2010