ollama-js
digest mismatch when executing 'create()'
I get a consistent error when creating a new model from a local GGUF file. It only happens when creating the first model: if Ollama already has any models loaded, the problem does not reproduce. Running the same command through the Ollama CLI works without error, so the issue only appears when using this library. Am I missing a preliminary step? Here is my code:
const response = await ollama.create({
  model: 'my-model',
  modelfile: `FROM ${modelPath}`,
});
where modelPath is an absolute path to a GGUF file on my machine. The same operation works fine through the Ollama CLI. Response:
Failed to parse error response as JSON
ResponseError: digest mismatch, expected "sha256:06227ae08c77f2f6c8b16931c88510498cbc3c2a0acbb85e62186d614efb3737", got "sha256:9b3e16499cb74ed9e4824383fc96e014028d53e5679b68d4334bbb9df55afd57"
at file:///Users/<my_path>/node_modules/ollama/dist/utils.js:58:15
at Generator.next (<anonymous>)
at fulfilled (file:///Users/<my_path>/node_modules/ollama/dist/utils.js:4:58)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
error: 'digest mismatch, expected "sha256:06227ae08c77f2f6c8b16931c88510498cbc3c2a0acbb85e62186d614efb3737", got "sha256:9b3e16499cb74ed9e4824383fc96e014028d53e5679b68d4334bbb9df55afd57"',
status_code: 400
}
Steps to reproduce:
- Install and run ollama
- Make sure there are no models loaded by running ollama list (important)
- Create a new Ollama model using this library's create() method:
const response = await ollama.create({
  model: Core.model_name,
  modelfile: Core.modelfile,
});
Same here.
ResponseError: digest mismatch, expected "sha256:c2ca99d853de276fb25a13e369a0db2fd3782eff8d28973404ffa5ffca0b9267", got "sha256:44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a"
I only get this while using the ollama package; POST-ing to the API directly works fine.
The file's sha256 matches the expected digest exactly:
wolf@wolf-mint:/usr/share/ollama/.ollama/models/okuu$ sha256sum Meta-Llama-3-8B-Instruct.Q6_K.gguf
c2ca99d853de276fb25a13e369a0db2fd3782eff8d28973404ffa5ffca0b9267 Meta-Llama-3-8B-Instruct.Q6_K.gguf
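For reference, here is a minimal sketch of that direct POST, assuming a local server on the default port and the /api/create request shape that takes a modelfile string, which is what this library version targets (the model name is a placeholder):

const res = await fetch('http://127.0.0.1:11434/api/create', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    name: 'my-model', // placeholder name
    modelfile: 'FROM /usr/share/ollama/.ollama/models/okuu/Meta-Llama-3-8B-Instruct.Q6_K.gguf',
    stream: false, // return a single JSON response instead of a stream
  }),
});
if (!res.ok) throw new Error(`create failed: ${res.status} ${await res.text()}`);
console.log(await res.json());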
Hey all, an update here. This is a bug in the ollama-js library. The problem stems from this bit of code that opens a stream to upload the file: https://github.com/ollama/ollama-js/blob/main/src/index.ts#L93
The stream is being closed early, so the checksum is computed over an incomplete file. I'm still working on a fix, but if anyone has ideas for a good one, please let me know.
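To illustrate the failure mode: the server hashes whatever bytes actually arrive, so a stream that ends early produces a different digest than the full file would. A minimal sketch using Node's built-in crypto module (modelPath is a placeholder):

import { createHash } from 'node:crypto';
import { readFile } from 'node:fs/promises';

const modelPath = '/path/to/model.gguf'; // placeholder

// Hash the whole file in one go; this is the digest the server expects.
const bytes = await readFile(modelPath);
const fullDigest = 'sha256:' + createHash('sha256').update(bytes).digest('hex');

// Hashing only a prefix, as happens when the upload stream closes early,
// yields a completely different digest.
const half = bytes.subarray(0, Math.floor(bytes.length / 2));
const truncatedDigest = 'sha256:' + createHash('sha256').update(half).digest('hex');

console.log(fullDigest === truncatedDigest); // false, hence the mismatch error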
Hi there, is there any update here? I'm also running into this. Thank you!
Hi folks,
Could this issue be related to the AMD Threadripper processor? While searching for this error, I noticed that a few other people with AMD Threadripper processors seem to have hit the same problem. I am unable to download most models or create models from files. I've already tested on two Debian-based distributions (Pop!_OS and Ubuntu 22.04) with no luck. I'm not behind any proxy, and models download just fine on my M2 MacBook Pro.
Here are the specs of my AI rig:
- OS: Pop!_OS 22.04 LTS x86_64
- Kernel: 6.9.3-76060903-generic
- CPU: AMD Ryzen Threadripper 3970X (64) @ 3.700GHz
- GPU: NVIDIA GeForce RTX 3090 (x2)
- Memory: 128666MiB
Hi @kr1ps, your AMD processor would be unrelated to this issue. This is an issue with how the create request is made in JavaScript. I'd suggest opening an issue in the ollama/ollama repo, and checking that Ollama has write access in your case.
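On the write-access point, here is a quick check from Node, assuming the default Linux model directory shown earlier in this thread:

import { accessSync, constants } from 'node:fs';

// Throws if the current user cannot write to the Ollama model directory.
accessSync('/usr/share/ollama/.ollama/models', constants.W_OK);
console.log('model directory is writable');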
Ok, thanks for the advice.