David Kell
Sure, happy to. Am I correct in saying that the latest version only works with Graphene 3? In which case we'll probably have to keep a 0.1.0 fork going as...
If you don't need the SSR feature, you can just use a [dynamic import](https://nextjs.org/docs/advanced-features/dynamic-import#with-no-ssr) with SSR disabled.
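For example, a minimal sketch (the `HeavyChart` component name and path are hypothetical, but `next/dynamic` with `ssr: false` is the documented API):

```tsx
import dynamic from 'next/dynamic';

// Rendered only on the client; Next.js skips this component during SSR,
// so code that touches `window` or `document` at import time won't crash the server.
const HeavyChart = dynamic(() => import('../components/HeavyChart'), {
  ssr: false,
});

export default function Dashboard() {
  return <HeavyChart />;
}
```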
Here you go @conao3 - are you happy to merge in @necaris?
Here's how I'm currently solving this (adapted from [usage](https://github.com/evo-design/evo#usage) in the README):

```python
from evo import Evo
import torch

device = 'cuda:0'

evo_model = Evo('evo-1-131k-base')
model, tokenizer = evo_model.model, evo_model.tokenizer
...
```
I had a similar experience. I was able to get inference working for 2k-length sequences on an A100 80GB (e.g. available on Paperspace), although around 2.5-3k I would get OOM. I...
Quoting from this issue, https://github.com/evo-design/evo/issues/24:

> Prompting with longer sequences requires sharding for the model, which is currently not supported

So I think if you want to generate embeddings for...
https://github.com/evo-design/evo/issues/32
I had this issue, and updating to the latest version of `transformers` fixed it for me.
We would be interested in implementing this. We are building an offline-first Electron app with React, and we'd like to bind the state to a local SQLite database and sync...
@endpress Yes, please go ahead and contribute! From our end, we've been using the LokiJS/IndexedDB adaptor in the Electron app, and this is working well. We'll probably hold off writing...
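For anyone finding this thread later, here's a rough sketch of that setup. I'm assuming the adaptor refers to WatermelonDB's `LokiJSAdapter` (an assumption on my part; the thread doesn't name the library, and the schema/table here are hypothetical):

```ts
import { Database, appSchema, tableSchema } from '@nozbe/watermelondb';
import LokiJSAdapter from '@nozbe/watermelondb/adapters/lokijs';

// Hypothetical single-table schema for illustration.
const schema = appSchema({
  version: 1,
  tables: [
    tableSchema({
      name: 'tasks',
      columns: [{ name: 'title', type: 'string' }],
    }),
  ],
});

// LokiJS keeps the working set in memory and persists to IndexedDB,
// which runs in an Electron renderer without native SQLite bindings.
const adapter = new LokiJSAdapter({
  schema,
  useWebWorker: false,
  useIncrementalIndexedDB: true,
});

export const database = new Database({
  adapter,
  modelClasses: [], // model classes for the tables above would go here
});
```

The appeal over a SQLite adaptor in the renderer is avoiding native module rebuilds for Electron, at the cost of holding the database in memory.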