web-llm
"caches is not defined"
I'm trying to run the getting started code. I'm using Vite.
I see "caches is not defined".
Browser: Chrome Version 120.0.6099.216 (Official Build) (arm64)
I'm on an M3 Max.
Code:
```ts
import * as webllm from "@mlc-ai/web-llm";

// We use labels to intentionally keep the UI simple.
function setLabel(id: string, text: string) {
  const label = document.getElementById(id);
  if (label == null) {
    throw Error("Cannot find label " + id);
  }
  label.innerText = text;
}

async function main() {
  // Create a ChatModule.
  const chat = new webllm.ChatModule();

  // This callback allows us to report initialization progress.
  chat.setInitProgressCallback((report: webllm.InitProgressReport) => {
    setLabel("init-label", report.text);
  });

  // You can also try out "RedPajama-INCITE-Chat-3B-v1-q4f32_1".
  await chat.reload("Llama-2-7b-chat-hf-q4f32_1");

  const generateProgressCallback = (_step: number, message: string) => {
    setLabel("generate-label", message);
  };

  const prompt0 = "What is the capital of Canada?";
  setLabel("prompt-label", prompt0);
  const reply0 = await chat.generate(prompt0, generateProgressCallback);
  console.log(reply0);

  const prompt1 = "Can you write a poem about it?";
  setLabel("prompt-label", prompt1);
  const reply1 = await chat.generate(prompt1, generateProgressCallback);
  console.log(reply1);

  console.log(await chat.runtimeStatsText());
}

main();
```
Also tested in Firefox, same error.
just check your browser version https://caniuse.com/mdn-api_caches
I'm on Chrome 120, which appears to be supported.
related to #144
I think that caches is flaky and the project should use IndexedDB instead
@DavidGOrtega I think caches are a good choice. What do you expect IndexedDB to do better?
@christianliebel caches are problematic, please read #144. IndexedDB is better for this use case. That said, I'd prefer to have a storage abstraction there, allowing us to use whichever backend we like.
The issue from #144 (quota exceeded) would very likely also apply to IndexedDB (see Storage quotas and eviction criteria). If there’s not enough free disk space (or other restrictions apply), IndexedDB also can’t store that data. Caches were designed precisely for use cases like this. In my opinion, adding an IndexedDB adapter would only increase the complexity without significant gain. By the way, there’s the StorageManager API that allows you to estimate whether there’s enough free space to cache the model locally.
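For reference, a minimal sketch of that pre-flight check using the StorageManager API (the model size below is a hypothetical placeholder, not the real artifact size):

```ts
// Rough pre-flight check before downloading a model.
// MODEL_SIZE_BYTES is a hypothetical placeholder for the real artifact size.
const MODEL_SIZE_BYTES = 4 * 1024 ** 3; // ~4 GiB

async function canCacheModel(): Promise<boolean> {
  if (!("storage" in navigator) || !navigator.storage.estimate) {
    return false; // StorageManager API not available in this browser
  }
  const { quota = 0, usage = 0 } = await navigator.storage.estimate();
  return quota - usage > MODEL_SIZE_BYTES;
}
```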
@christianliebel that's not necessarily true. In my testing across several OSes and Chrome versions, IndexedDB allows me to store the model while caches do not.
As I said, a high-level storage abstraction would fix this. Have you read the title of this issue? "caches is not defined"
Yes, I have read the title. The "caches" property should be available on both Chrome and Firefox. I expect something else to be the culprit here: maybe other dependencies, a bundling issue, etc. I'd suggest performing a deeper analysis to identify the root cause, and I'd be happy to assist.
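One quick thing worth ruling out (my assumption, not a confirmed diagnosis): the Cache API is only exposed in secure contexts, so a page served over plain HTTP from a non-localhost host has no `caches` global at all, which produces exactly this error:

```ts
// Diagnostic: check whether the page is a secure context and
// whether the Cache API global is exposed on it.
console.log("secure context:", window.isSecureContext);
console.log("caches available:", typeof caches !== "undefined");
// If isSecureContext is false (e.g. plain HTTP on a LAN IP),
// "caches is not defined" is expected browser behavior.
```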
Update: both the Cache API and IndexedDB cache are supported as of 0.2.31. Users can choose either with AppConfig.useIndexedDBCache. For more, see the PR:
- https://github.com/mlc-ai/web-llm/pull/352
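A minimal sketch of opting into the IndexedDB cache; `useIndexedDBCache` comes from the PR above, while `prebuiltAppConfig` and the `ChatModule.reload` signature follow the getting-started code in this thread, so verify them against your version:

```ts
import * as webllm from "@mlc-ai/web-llm";

// Flip the cache backend for all models in the prebuilt config.
const appConfig: webllm.AppConfig = {
  ...webllm.prebuiltAppConfig,
  useIndexedDBCache: true, // false (the default) keeps the Cache API
};

const chat = new webllm.ChatModule();
// reload() accepts an optional AppConfig as its third argument.
await chat.reload("Llama-2-7b-chat-hf-q4f32_1", undefined, appConfig);
```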
We also exposed some cache-related utils (e.g. deleting a model's weights or wasm from the cache, checking whether a model exists in the cache). See their usage in examples/cache-usage.
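And a hedged sketch of those cache utils (function names as I recall the exports; confirm them in examples/cache-usage):

```ts
import * as webllm from "@mlc-ai/web-llm";

const modelId = "Llama-2-7b-chat-hf-q4f32_1";
const appConfig = webllm.prebuiltAppConfig;

// Check whether the model's artifacts are already cached locally.
const cached = await webllm.hasModelInCache(modelId, appConfig);
console.log(modelId, cached ? "is cached" : "is not cached");

// Remove everything stored for this model (weights, wasm, config).
if (cached) {
  await webllm.deleteModelAllInfoInCache(modelId, appConfig);
}
```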
Closing; feel free to open new ones if issues persist