
Error: invalid file magic when importing Safetensors models

Open amnweb opened this issue 2 years ago • 10 comments

What is the issue?

```
$ ollama create test -f Modelfile
transferring model data
creating model layer
Error: invalid file magic
```

This happens for all the Safetensors models I try to import.

Modelfile content:

```
FROM ./model.safetensors
```
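For context, "invalid file magic" generally means the first bytes of the file don't match what the importer expects. As a rough sketch (the `detect_format` helper below is hypothetical, not part of ollama), the two common formats can be told apart by their leading bytes: GGUF files start with the ASCII magic `GGUF`, while safetensors files start with an 8-byte little-endian header length followed by a JSON header:

```python
import struct

def detect_format(path):
    """Hypothetical helper: guess a model file's on-disk format from its first bytes."""
    with open(path, "rb") as f:
        head = f.read(16)
    if head[:4] == b"GGUF":
        # GGUF files begin with the ASCII magic "GGUF"
        return "gguf"
    if len(head) >= 9:
        # safetensors files begin with an 8-byte little-endian header length,
        # immediately followed by a JSON header that opens with "{"
        (hdr_len,) = struct.unpack("<Q", head[:8])
        if hdr_len > 0 and head[8:9] == b"{":
            return "safetensors"
    return "unknown"
```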

[Screenshot attached: 2024-03-19 141058]

What did you expect to see?

I expected it to work :)

Steps to reproduce

No response

Are there any recent changes that introduced the issue?

No response

OS

Windows

Architecture

amd64

Platform

No response

Ollama version

0.1.29

GPU

Nvidia

GPU info

No response

CPU

Intel

Other software

No response

amnweb avatar Mar 19 '24 13:03 amnweb

What safetensors model were you trying to import? Right now only Mistral and Mistral fine tunes are supported. More are coming soon though!

pdevine avatar Mar 19 '24 13:03 pdevine

Oh, okay, I didn't know that. I was just testing random models to see how they work :D

I tried three or four text models, but none of them worked; in any case, none of them were Mistral.

amnweb avatar Mar 19 '24 13:03 amnweb

Sorry about that! I have Gemma now working, but haven't yet sent out the PR. I'll add an error message saying that the other models aren't yet supported.

pdevine avatar Mar 19 '24 13:03 pdevine

@amnweb can you list which models you tried? I just realized there should be code to catch that.

pdevine avatar Mar 19 '24 14:03 pdevine

I think the last one I tried was https://huggingface.co/google-bert/bert-base-uncased/tree/main or this one: https://huggingface.co/pysentimiento/robertuito-sentiment-analysis/tree/main

I've already deleted them from disk and can't remember what else I downloaded.

Edit: By the way, this one as well; I just found it in my history: https://huggingface.co/stabilityai/stable-diffusion-2-1/tree/main

amnweb avatar Mar 19 '24 15:03 amnweb

Can you post one of the Modelfiles? I'm trying to figure out whether you converted/quantized these yourself or got ollama to convert the safetensors files.

pdevine avatar Mar 19 '24 15:03 pdevine

I downloaded model.safetensors from https://huggingface.co/google-bert/bert-base-uncased/resolve/main/model.safetensors, created a Modelfile containing `FROM ./model.safetensors`, and ran the command from the terminal. If I understand correctly, ollama should convert the safetensors file, or am I thinking wrong?
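For what it's worth, a valid safetensors file carries all of its tensor names in a JSON header, and a converter recognizes an architecture from those names; a BERT checkpoint uses tensor names a Llama/Mistral converter won't recognize. A minimal sketch for inspecting that header (the `read_safetensors_header` helper is illustrative, not ollama's code):

```python
import json
import struct

def read_safetensors_header(path):
    """Illustrative helper: return the JSON header of a safetensors file,
    which maps tensor names to their dtype, shape, and data offsets."""
    with open(path, "rb") as f:
        (hdr_len,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(hdr_len))

# Listing the tensor names shows which architecture the checkpoint belongs to;
# names the converter doesn't expect are why a BERT checkpoint can't be imported.
```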

amnweb avatar Mar 19 '24 15:03 amnweb

I'm having this same issue with ollama in Docker, using the codellama-7b-instruct.Q4_0.gguf file from https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GGUF. Here is my Modelfile:


```
FROM /models/CodeLlama-7B-Instruct-GGUF/codellama-7b-instruct.Q4_0.gguf
TEMPLATE """[INST] <<SYS>>{{ .System }}<</SYS>>

{{ .Prompt }} [/INST]"""
PARAMETER rope_frequency_base 1000000
PARAMETER stop [INST]
PARAMETER stop [/INST]
PARAMETER stop <<SYS>>
PARAMETER stop <</SYS>>
```

mroark1m avatar Apr 23 '24 02:04 mroark1m

> I'm having this same issue with ollama in docker and using https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GGUF codellama-7b-instruct.Q4_0.gguf file.. here is my modelfile

I had a corrupt file.
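A truncated or corrupted download is a common cause of "invalid file magic" for GGUF files as well. One way to confirm this, using only the Python standard library, is to hash the local file and compare the result against the SHA-256 checksum shown on the model's Hugging Face file page:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 of a file in 1 MiB chunks so large model files
    never need to fit in memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Compare the result against the checksum listed on the Hugging Face file page;
# a mismatch means the download is incomplete or corrupted.
```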

mroark1m avatar Apr 23 '24 03:04 mroark1m

@amnweb sorry for the slow response! I somehow lost track of this. Unfortunately, I don't believe any of the models you tried to import will work inside the llama.cpp runner.

pdevine avatar Apr 23 '24 03:04 pdevine

I'm having this same issue with ollama. I'm running it on Linux 20.04.

ninghairong avatar Jun 14 '24 07:06 ninghairong