
Npm run generate has failed

Open RonanJackson opened this issue 1 year ago • 11 comments

> node src/controllers/engine/generate.mjs

Generating storage context...
No valid data found at path: cache/doc_store.json starting new store.
No valid data found at path: cache/index_store.json starting new store.
No valid data found at path: ./cache/vector_store.json starting new store.
file:///Users/name/Projects/test/backend/node_modules/llamaindex/env.mjs:2167
            throw new Error("Not implemented");
                  ^

Error: Not implemented

I have tried using different versions of Node and different variations of the npx create-llama@latest questions, but I end up in the same spot. I am using the example PDF option and storing the data in the file system.

Running on macOS 14.2.1, M1 chip.

RonanJackson avatar Jan 26 '24 17:01 RonanJackson

This is a really impressive project!

I'm afraid I'm getting exactly the same thing on WSL. Here are the options I'm selecting:

(base) ➜  ~ npx create-llama@latest
✔ What is your project named? … example
✔ Which template would you like to use? › Chat without streaming
✖ Which framework would you like to use? › Express
Exiting.
(base) ➜  ~ npx create-llama@latest
✔ What is your project named? … example
✔ Which template would you like to use? › Chat with streaming
✔ Which framework would you like to use? › NextJS
✔ Which UI would you like to use? › Shadcn
✔ Which model would you like to use? › gpt-3.5-turbo
✔ Which data source would you like to use? › Use an example PDF
✔ Would you like to use a vector database? › No, just store the data in the file system
✔ Please provide your OpenAI API key (leave blank to skip): … MY_API_KEY
✔ Would you like to use ESLint? … Yes
✔ How would you like to proceed? › Generate code and install dependencies (~2 min)
Creating a new LlamaIndex app in /home/tajd/example.

I think the last working line for me is here.

TAJD avatar Jan 27 '24 23:01 TAJD

I'm running into the same issue (Windows).

Here was my "create-llama" setup:

"
[2/4] Fetching packages...
[3/4] Linking dependencies...
[4/4] Building fresh packages...
success Installed "[email protected]" with binaries:
      - create-llama
√ What is your project named? ... prse
√ Which template would you like to use? » Chat with streaming
√ Which framework would you like to use? » NextJS
√ Which UI would you like to use? » Shadcn
√ Which model would you like to use? » gpt-4-vision-preview
√ Which data source would you like to use? » Use a local PDF file
√ Would you like to use a vector database? » PostgreSQL
√ Please provide your OpenAI API key (leave blank to skip): ... <OPENAI_KEY>
√ Would you like to use ESLint? ... No / Yes
√ How would you like to proceed? » Generate code and install dependencies (~2 min)
Creating a new LlamaIndex app in C:\Users\userNAME\OneDrive\Documents\GitHub\llamachat_test\PROJECT.

What's wrong?

jbbae avatar Jan 28 '24 08:01 jbbae

@jbbae

Your OpenAI API key is exposed in your comment; I would recommend disabling the key.

setianke avatar Jan 28 '24 11:01 setianke

I believe I found a fix for generate.mjs under Windows.

Import fs:

import fs from "fs/promises";

For the SimpleDirectoryReader, pass fs in the options object:

const documents = await new SimpleDirectoryReader().loadData({
    directoryPath: STORAGE_DIR,
    fs, // pass fs here
});

Then run npm run generate.

That worked for me. Happy coding!
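
For context, here's roughly what the relevant part of generate.mjs looks like with the change applied. This is a sketch, not the exact template code: the constants (STORAGE_DIR, STORAGE_CACHE_DIR) and the surrounding setup may differ in your generated app.

import fs from "fs/promises";
import {
  SimpleDirectoryReader,
  VectorStoreIndex,
  storageContextFromDefaults,
} from "llamaindex";

const STORAGE_DIR = "./data"; // source documents, e.g. the example PDF
const STORAGE_CACHE_DIR = "./cache"; // persisted doc/index/vector stores

async function generateDatasource() {
  console.log("Generating storage context...");
  // Persist the doc store, index store, and vector store under ./cache
  const storageContext = await storageContextFromDefaults({
    persistDir: STORAGE_CACHE_DIR,
  });
  // Passing Node's fs module explicitly is the fix: without it, the
  // library appears to fall back to a default environment shim that
  // throws the "Not implemented" error from env.mjs seen above
  const documents = await new SimpleDirectoryReader().loadData({
    directoryPath: STORAGE_DIR,
    fs,
  });
  await VectorStoreIndex.fromDocuments(documents, { storageContext });
}

await generateDatasource();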

setianke avatar Jan 28 '24 12:01 setianke

Encountering the same issue on a MacBook Pro M1 Max, macOS Sonoma 14.0.

➜  rag-app git:(develop) npm run generate

> [email protected] generate
> node app/api/chat/engine/generate.mjs

Generating storage context...
No valid data found at path: cache/doc_store.json starting new store.
No valid data found at path: cache/index_store.json starting new store.
No valid data found at path: ./cache/vector_store.json starting new store.
(node:45539) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
(Use `node --trace-deprecation ...` to show where the warning was created)
file:///Users/USER_NAME/dev/projects/ai-projects/rag-app/node_modules/llamaindex/env.mjs:2167
            throw new Error("Not implemented");

I am using the "Use a local PDF file" option and storing the data in the file system.

maherbel avatar Jan 28 '24 16:01 maherbel

Thanks, we will take a look. Appreciate the bug report!

yisding avatar Jan 28 '24 17:01 yisding

There were some issues with output generation. Try @0.1.3.
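
To pin that version, set it in package.json and reinstall:

"dependencies": {
  "llamaindex": "0.1.3"
}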

himself65 avatar Jan 28 '24 18:01 himself65

I'm still having an issue with the index_store.json file not being created by the npm run generate command. I've just started with this project: I ran the npx create-llama command and went through the steps with the provided 101.pdf file in the data directory, but I can't run any queries.

Every time I run the generate command, it reports that the file wasn't found and that it's starting a new store:

> [email protected] generate
> tsx app/api/chat/engine/generate.ts

Using 'openai' model provider
Generating storage context...
No valid data found at path: cache/index_store.json starting new store.
Storage context successfully generated in 0.294s.
Finished generating storage.

But the file is not created. When I try to run a query in the UI I get this error in the terminal:

No valid data found at path: cache/index_store.json starting new store.
[LlamaIndex] Error: Cannot initialize VectorStoreIndex without nodes or indexStruct
    at VectorStoreIndex.init (webpack-internal:///(rsc)/./node_modules/llamaindex/dist/indices/vectorStore/index.js:426:19)
    at async getDataSource (webpack-internal:///(rsc)/./app/api/chat/engine/index.ts:19:12)
    at async createChatEngine (webpack-internal:///(rsc)/./app/api/chat/engine/chat.ts:10:19)
    at async POST (webpack-internal:///(rsc)/./app/api/chat/route.ts:59:28)
    at async /workspaces/personal-llama/personal-llama-project/node_modules/next/dist/compiled/next-server/app-route.runtime.dev.js:6:55038
    at async ek.execute (/workspaces/personal-llama/personal-llama-project/node_modules/next/dist/compiled/next-server/app-route.runtime.dev.js:6:45808)
    at async ek.handle (/workspaces/personal-llama/personal-llama-project/node_modules/next/dist/compiled/next-server/app-route.runtime.dev.js:6:56292)
    at async doRender (/workspaces/personal-llama/personal-llama-project/node_modules/next/dist/server/base-server.js:1377:42)
    at async cacheEntry.responseCache.get.routeKind (/workspaces/personal-llama/personal-llama-project/node_modules/next/dist/server/base-server.js:1599:28)
    at async DevServer.renderToResponseWithComponentsImpl (/workspaces/personal-llama/personal-llama-project/node_modules/next/dist/server/base-server.js:1507:28)
    at async DevServer.renderPageComponent (/workspaces/personal-llama/personal-llama-project/node_modules/next/dist/server/base-server.js:1931:24)
    at async DevServer.renderToResponseImpl (/workspaces/personal-llama/personal-llama-project/node_modules/next/dist/server/base-server.js:1969:32)
    at async DevServer.pipeImpl (/workspaces/personal-llama/personal-llama-project/node_modules/next/dist/server/base-server.js:920:25)
    at async NextNodeServer.handleCatchallRenderRequest (/workspaces/personal-llama/personal-llama-project/node_modules/next/dist/server/next-server.js:272:17)
    at async DevServer.handleRequestImpl (/workspaces/personal-llama/personal-llama-project/node_modules/next/dist/server/base-server.js:816:17)
    at async /workspaces/personal-llama/personal-llama-project/node_modules/next/dist/server/dev/next-dev-server.js:339:20
    at async Span.traceAsyncFn (/workspaces/personal-llama/personal-llama-project/node_modules/next/dist/trace/trace.js:154:20)
    at async DevServer.handleRequest (/workspaces/personal-llama/personal-llama-project/node_modules/next/dist/server/dev/next-dev-server.js:336:24)
    at async invokeRender (/workspaces/personal-llama/personal-llama-project/node_modules/next/dist/server/lib/router-server.js:174:21)
    at async handleRequest (/workspaces/personal-llama/personal-llama-project/node_modules/next/dist/server/lib/router-server.js:353:24)
    at async requestHandlerImpl (/workspaces/personal-llama/personal-llama-project/node_modules/next/dist/server/lib/router-server.js:377:13)
    at async Server.requestListener (/workspaces/personal-llama/personal-llama-project/node_modules/next/dist/server/lib/start-server.js:141:13)
 POST /api/chat 500 in 9447ms
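
For reference, per the stack trace, getDataSource in app/api/chat/engine/index.ts loads the persisted index roughly like this (paraphrased, not the exact template code):

import { storageContextFromDefaults, VectorStoreIndex } from "llamaindex";

const STORAGE_CACHE_DIR = "./cache";

export async function getDataSource() {
  // Re-open whatever `npm run generate` persisted under ./cache
  const storageContext = await storageContextFromDefaults({
    persistDir: STORAGE_CACHE_DIR,
  });
  // With no cache/index_store.json on disk there is no indexStruct to
  // load, so init throws "Cannot initialize VectorStoreIndex without
  // nodes or indexStruct"
  return await VectorStoreIndex.init({ storageContext });
}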

Downgrading llamaindex to version @0.1.3 as suggested above by @himself65 did not work. Is there something else I'm missing?

joji-harada avatar Jul 19 '24 19:07 joji-harada

@joji-harada can you try the latest version of create-llama again? If that doesn't work, please open an issue at https://github.com/run-llama/create-llama/issues describing the exact way you generated the app.

marcusschiesser avatar Jul 25 '24 12:07 marcusschiesser

@joji-harada I'm getting the same error with the latest version. Have you figured out a solution?

MidasXIV avatar Sep 29 '24 20:09 MidasXIV

I will put this issue on my roadmap next week

himself65 avatar Sep 29 '24 21:09 himself65

The issue still persists with 0.7.0. Any fixes so far?

shivaprabhu avatar Oct 20 '24 17:10 shivaprabhu

Under Windows, DEFAULT_INDEX_STORE_PERSIST_FILENAME should point to a folder without a leading dot. In .env, replace STORAGE_CACHE_DIR=.cache with STORAGE_CACHE_DIR=cache, or use another VectorStore.
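
That is, in .env:

# fails under Windows (leading dot)
STORAGE_CACHE_DIR=.cache

# works
STORAGE_CACHE_DIR=cache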

albertomario avatar Dec 08 '24 18:12 albertomario

The issue is caused by VectorStoreIndex.fromDocuments failing after documents have already been added to the doc store (e.g. because embeddings cannot be generated due to a missing API key), which leaves the cache in a partially written state. A workaround is to delete the cache folder before running the generate script again.
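
If you want to script that workaround, a minimal sketch (assuming the default ./cache location) in a Node .mjs file:

import fs from "node:fs/promises";

// Remove the partially written cache so the next `npm run generate`
// starts from a clean state; force: true makes this a no-op when the
// folder doesn't exist
await fs.rm("./cache", { recursive: true, force: true });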

The root cause is fixed in https://github.com/run-llama/LlamaIndexTS/pull/1588 and will be available with the next release.

marcusschiesser avatar Jan 07 '25 05:01 marcusschiesser