web-llm
[Tracking] Model loading/caching enhancements
Overview
There have been many great suggestions from the community regarding loading and caching model weights. This tracker issue compiles the suggestions and keeps track of the progress.
Action Items
- [x] C0: Make ArtifactCache (https://github.com/apache/tvm/blob/main/web/src/runtime.ts#L991) an interface ArtifactCache in a new file artifact_cache.ts
- Provide implementation ArtifactCacheBasic (our existing approach)
- Provide parallel download methods in the same file (C1)
- Optionally, allow injection of additional ArtifactCache classes that can be implemented via other means.
- https://github.com/apache/tvm/pull/16525
- [x] C1: Parallelize weight shard downloads on the tvmjs side
- https://github.com/mlc-ai/web-llm/issues/280
- https://github.com/apache/tvm/pull/16525
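The gist of C1 is fetching several weight shards concurrently instead of one by one. A concurrency-limited sketch follows; the helper name, the injected fetchShard, and the default limit are illustrative, not the actual tvmjs change:

```typescript
// Illustrative concurrency-limited parallel download of weight shards.
// `fetchShard` is a stand-in for whatever fetches a single shard.
async function downloadShards<T>(
  urls: string[],
  fetchShard: (url: string) => Promise<T>,
  concurrency = 4,
): Promise<T[]> {
  const results: T[] = new Array(urls.length);
  let next = 0;
  // Each worker repeatedly claims the next unclaimed index; since JS is
  // single-threaded, `next++` cannot race between workers.
  async function worker(): Promise<void> {
    while (next < urls.length) {
      const i = next++;
      results[i] = await fetchShard(urls[i]);
    }
  }
  const workers = Array.from(
    { length: Math.min(concurrency, urls.length) },
    () => worker(),
  );
  await Promise.all(workers);
  return results;
}
```

Results are written by index, so shard order is preserved even though completion order is not.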
- [x] C2: Add a helper function to delete cache storage (part of C0)
- https://github.com/mlc-ai/web-llm/issues/267
- This applies to both the model library and the weights. Especially when we update the model libraries, it can be tricky to pick up the newest version, since the names stay the same.
- TVM-side support for deleting model weights is added via https://github.com/apache/tvm/pull/16525
- The WebLLM-side API is still missing
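For C2, the browser's Cache Storage API already exposes `caches.keys()` and `caches.delete(name)`; a helper along these lines could clear everything an app cached. The injected CacheStorageLike shape and the prefix convention are assumptions for the sketch (injection also makes it runnable outside a browser):

```typescript
// Minimal shape of the parts of CacheStorage this sketch uses; in the
// browser, the global `caches` object satisfies it.
interface CacheStorageLike {
  keys(): Promise<string[]>;
  delete(name: string): Promise<boolean>;
}

// Delete every cache whose name starts with `prefix`, e.g. all caches a
// web-llm app created for model libraries and weights.
async function deleteCachesWithPrefix(
  storage: CacheStorageLike,
  prefix: string,
): Promise<number> {
  const names = await storage.keys();
  let deleted = 0;
  for (const name of names) {
    if (name.startsWith(prefix) && (await storage.delete(name))) {
      deleted++;
    }
  }
  return deleted; // number of caches actually removed
}
```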
- [x] C3: Switch to IndexedDB for caching
- https://github.com/mlc-ai/web-llm/issues/144
- https://github.com/mlc-ai/web-llm/issues/257
- Currently, in some environments, the cache storage may be too small to hold the entire set of weights
- This should be an implementation of the C0 interface, namely ArtifactCacheIndexDB
- [x] C4: Allow using local models
- https://github.com/mlc-ai/web-llm/issues/282
- Downloading model weights is arguably the largest overhead of our project; providing alternatives would be helpful.
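One illustrative way C4 could work is to rewrite remote artifact URLs to a locally hosted copy (e.g. files served by a local HTTP server). The function name and the base URLs below are hypothetical, not the API that actually landed:

```typescript
// Hypothetical URL rewrite for local models: if `remoteUrl` lives under
// `remoteBase`, serve it from `localBase` instead; otherwise leave it
// untouched so non-model requests still go to the network.
function toLocalUrl(
  remoteUrl: string,
  remoteBase: string,
  localBase: string,
): string {
  if (!remoteUrl.startsWith(remoteBase)) return remoteUrl;
  return localBase + remoteUrl.slice(remoteBase.length);
}
```

Under this scheme the download/caching machinery is unchanged; only the origin of the bytes moves from a CDN to localhost.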
Great! I was thinking of grouping all of them too. I would implement a facade, and then we can implement specific storages. I suggest using level if we are able to just use a package.
I can definitely take this one! I'm actually taking it on for a project of mine. My question would be: are we attacking TVMjs or web-llm? Right now the cache mechanism is the one implemented in TVMjs. I know for sure that mlc-ai is related to TVM, but I'm afraid of how long it takes to deploy something in TVM. Currently they have 220 PRs, some of them from 2021 (including yours) @CharlieFRuan
> I can definitely take this one! I'm actually taking it for a project of mine.
@DavidGOrtega Thanks for offering help! You are referring to item C3, right?
> My question would be: are we attacking TVMjs or web-llm?
I think we should make the changes in TVMjs. I wouldn't be too worried about getting things merged there:)
Also cc @DiegoCao who is looking into IndexedDB as well.
Hi David, thanks for offering the help! I'm looking into TVMjs as well, and we need to make the changes in the TVMjs module.
@CharlieFRuan I think all of them are in TVMjs: parallelise downloads and change the cache layer to something much more agnostic. As I said, I would use level because we can then make it work with different caches. It's a facade.
Let me know what works for you, @DiegoCao. What do you want to pick?
I think we can go with your suggestion and use level here. Looking forward to the change!
@DavidGOrtega I can work on C2 first and work on C4 after you have built the IndexedDB and level parts.
Perfect, so I'll do a PR for C1 and another for C3.
Sorry for chiming in late. For the caching layer, ideally we would like something that comes with minimal dependencies. Specifically, we should:
- Make ArtifactCache (https://github.com/apache/tvm/blob/main/web/src/runtime.ts#L991) an interface ArtifactCache in a new file artifact_cache.ts
- Provide implementations ArtifactCacheIndexDB and ArtifactCacheBasic (via caches)
- Provide parallel download methods in the same file
- Optionally, allow injection of additional ArtifactCache classes that can be implemented via other means.

This way the default implementation won't come with an extra dependency, and the IndexedDB variant can use the browser's native IndexedDB API.
Received, will do a PR for C0: migrating the old ArtifactCache to an interface, plus an implementation of the existing basic approach.
Another nice feature, especially if you're doing C4, would be to allow inspecting items in the cache and "downloading" them, i.e. copying them out of the cache to disk (so that users don't have to find them wherever their browser stores files).
I would be interested in contributing to C4, for both web-llm and web-sd.
@DiegoCao unless you already have some progress on it, I can take a crack at it this weekend. Let me know if I'm late to the party.
Hi @ethrx, thanks! It requires some changes on the TVM side, and I have started working on it.
Hi! Thank you for this issue, quite helpful to see a glimpse of the future here.
Are there any plans to allow for resumable downloads and/or add the ability to cancel a download? This would prove particularly useful for folks on unstable connections, but I am wondering whether there are technical limitations that prevent the library from doing that at the moment.
Hi @germain-gg! I believe the downloads are currently resumable, as the weights are broken into shards (e.g. ~105 shards for Llama-3-8B). Each shard is cached as soon as it finishes downloading. To see the effect, try loading a model in the demo page, then refresh or close the browser and load it again; you'll see the download resume rather than start over.
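The resume behaviour described above falls out of per-shard caching: each shard is fetched only if it is not already in the cache, so a restarted download skips whatever previously completed. A simplified sketch, with a Map standing in for the browser cache and a hypothetical fetchShard callback:

```typescript
// Simplified sketch of shard-level resume. Shards already in the cache
// are skipped, so interrupting and restarting the download only costs
// the shards that had not finished.
async function downloadWithResume(
  shardUrls: string[],
  cache: Map<string, ArrayBuffer>,
  fetchShard: (url: string) => Promise<ArrayBuffer>,
): Promise<void> {
  for (const url of shardUrls) {
    if (cache.has(url)) continue; // resume: this shard already finished
    cache.set(url, await fetchShard(url));
  }
}
```

Cancellation is not shown here; in a real implementation the fetches could additionally take an AbortSignal so an in-flight shard can be stopped.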