returned non-zero exit status 1
My OS is Windows. When I manually download the model and run it with a local path:
huggingface-cli download HF1BitLLM/Llama3-8B-1.58-100B-tokens --local-dir models/Llama3-8B-1.58-100B-tokens
python setup_env.py -md models/Llama3-8B-1.58-100B-tokens -q i2_s
PowerShell shows me:
ERROR:root:Error occurred while running command: Command '['D:\developTools\Anaconda\python.exe', 'utils/convert-hf-to-gguf-bitnet.py', 'models/Llama3-8B-1.58-100B-tokens', '--outtype', 'f32']' returned non-zero exit status 1., check details in logs\convert_to_f32_gguf.log
(base) PS D:\AI\bitnet\BitNet>
I open convert_to_f32_gguf.log, and it shows:
Traceback (most recent call last):
File "D:\AI\bitnet\BitNet\utils\convert-hf-to-gguf-bitnet.py", line 20, in
My laptop: CPU AMD Ryzen 9 5900HX, OS Windows 11, dev tools Visual Studio Community 2022 17.11.5.
I had the same error with gguf.GGMLQuantizationType.TL1 here; TL1 is not getting imported. Does GGMLQuantizationType have it?
Facing the same issue:
INFO:root:Converting HF model to GGUF format...
ERROR:root:Error occurred while running command: Command '['C:\Users\user\anaconda3\envs\bitnet-cpp\python.exe', 'utils/convert-hf-to-gguf-bitnet.py', 'models/Llama3-8B-1.58-100B-tokens', '--outtype', 'f32']' returned non-zero exit status 3221225477., check details in logs\convert_to_f32_gguf.log
The log file is not showing any error, and when I run the convert-hf-to-gguf-bitnet.py file alone it executes without any error, but it still causes a problem at the quantization step.
I commented out the lines and it's working now; I guess gguf.GGMLQuantizationType does not have TL1/TL2 right now.
I think TL1/TL2 are there: ['BF16', 'F16', 'F32', 'F64', 'I16', 'I32', 'I64', 'I8', 'IQ1_M', 'IQ1_S', 'IQ2_S', 'IQ2_XS', 'IQ2_XXS', 'IQ3_S', 'IQ3_XXS', 'IQ4_NL', 'IQ4_XS', 'Q2_K', 'Q3_K', 'Q4_0', 'Q4_0_4_4', 'Q4_0_4_8', 'Q4_0_8_8', 'Q4_1', 'Q4_K', 'Q5_0', 'Q5_1', 'Q5_K', 'Q6_K', 'Q8_0', 'Q8_1', 'Q8_K', 'TL1', 'TL2', 'TQ1_0', 'TQ2_0']
gguf version?
0.10.0
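A quick way to check which gguf copy is actually being imported and whether TL1/TL2 are present (a minimal sketch; assumes the package imports as gguf):

import importlib.metadata
import gguf

print(gguf.__file__)                                # shows which installed copy is being picked up
print(importlib.metadata.version("gguf"))           # e.g. 0.10.0
print([t.name for t in gguf.GGMLQuantizationType])  # 'TL1' and 'TL2' should appear with the vendored gguf-py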
"llama.cpp" in "3rdparty" should be at https://github.com/Eddie-Wang1120/llama.cpp/tree/406a5036f9a8aaee9ec5e96e652f61691340fe95 The "setup_env.py" installs "gguf" from "3rdparty/llama.cpp/gguf-py"
yes fixed it thanks
Do I need to make any changes to the structure? I am still getting the same error.
Did you clone the repo recursively?
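If not, the submodules can be fetched afterwards without re-cloning (plain git, nothing BitNet-specific):

git submodule update --init --recursive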
I faced maybe the same error on Windows.
ERROR:root:Error occurred while running command: Command '['D:\Users\gorn\.conda\envs\bitnet-cpp\python.exe', 'utils/convert-hf-to-gguf-bitnet.py', 'models/Llama3-8B-1.58-100B-tokens', '--outtype', 'f32']' returned non-zero exit status 3221225477., check details in logs\convert_to_f32_gguf.log
INFO:hf-to-gguf:Loading model: Llama3-8B-1.58-100B-tokens
INFO:gguf.gguf_writer:gguf: This GGUF file is for Little Endian only
INFO:hf-to-gguf:Set model parameters
INFO:hf-to-gguf:gguf: context length = 8192
INFO:hf-to-gguf:gguf: embedding length = 4096
INFO:hf-to-gguf:gguf: feed forward length = 14336
INFO:hf-to-gguf:gguf: head count = 32
INFO:hf-to-gguf:gguf: key-value head count = 8
INFO:hf-to-gguf:gguf: rope theta = 500000.0
INFO:hf-to-gguf:gguf: rms norm epsilon = 1e-05
INFO:hf-to-gguf:gguf: file type = 0
INFO:hf-to-gguf:Set model tokenizer
INFO:gguf.vocab:Adding 280147 merge(s).
INFO:gguf.vocab:Setting special token type bos to 128000
INFO:gguf.vocab:Setting special token type eos to 128009
INFO:gguf.vocab:Setting chat_template to {% set loop_messages = messages %}{% for message in loop_messages %}{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>
'+ message['content'] | trim + '<|eot_id|>' %}{% if loop.index0 == 0 %}{% set content = bos_token + content %}{% endif %}{{ content }}{% endfor %}{% if add_generation_prompt %}{{ '<|start_header_id|>assistant<|end_header_id|>
' }}{% endif %}
INFO:hf-to-gguf:Exporting model to 'models\Llama3-8B-1.58-100B-tokens\ggml-model-f32.gguf'
INFO:hf-to-gguf:gguf: loading model part 'model.safetensors'
INFO:hf-to-gguf:gguf: loading model part 'model.safetensors'
INFO:hf-to-gguf:output.weight, torch.bfloat16 --> F32, shape = {4096, 128256}
INFO:hf-to-gguf:token_embd.weight, torch.bfloat16 --> F32, shape = {4096, 128256}
INFO:hf-to-gguf:blk.0.attn_norm.weight, torch.bfloat16 --> F32, shape = {4096}
git clone --recursive https://github.com/microsoft/BitNet.git
cd BitNet
conda create -n bitnet-cpp python=3.9
conda activate bitnet-cpp
python setup_env.py --hf-repo HF1BitLLM/Llama3-8B-1.58-100B-tokens -q i2_s
huggingface-cli download HF1BitLLM/Llama3-8B-1.58-100B-tokens --local-dir models/Llama3-8B-1.58-100B-tokens
python setup_env.py -md models/Llama3-8B-1.58-100B-tokens -q i2_s
Hi, I had the same error and managed to solve it like this:
When the error occurs, the command is returned as an array, where each index represents an argument. For example:
ERROR:root:Error occurred while running command:
Command '['python', 'utils/convert-hf-to-gguf-bitnet.py', 'models/Llama3-8B-1.58-100B-tokens', '--outtype', 'f32']'
returned non-zero exit status 3221225477., check details in logs\convert_to_f32_gguf.log.
At the end, it says you can check the log file, but the actual error isn’t in the log.
To figure out what’s causing the error, you need to run the command manually instead of using the BitNet tool. For example, execute this command directly in your terminal:
python utils/convert-hf-to-gguf-bitnet.py models/Llama3-8B-1.58-100B-tokens --outtype f32
or whatever command is causing the error in your case. Notice how the command corresponds to the array elements I mentioned earlier.
By running it yourself, you'll be able to see the full error. In my case, the problem was that I didn’t have clang installed.
So, to fix this, just run the command manually in the terminal. After doing so, you’ll see the complete error message, since BitNet only tells you that it failed but doesn’t provide details.
@GamerLegion, the issue reporter got the following error:
Traceback (most recent call last):
File "D:\AI\bitnet\BitNet\utils\convert-hf-to-gguf-bitnet.py", line 20, in
import torch
ModuleNotFoundError: No module named 'torch'
In the repository’s README.md, there are step-by-step instructions to run BitNet. Step 2 involves installing dependencies:
conda create -n bitnet-cpp python=3.9
conda activate bitnet-cpp
pip install -r requirements.txt
The BitNet team recommends using conda, but in my case I just used virtualenv.
virtualenv venv
source venv/bin/activate
pip install -r requirements.txt
Once you activate your environment, you can install the dependencies. The error you’re seeing is related to a missing torch module. If this doesn’t resolve it, try running:
pip install torch
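A quick sanity check that the environment you run setup_env.py from actually sees torch:

python -c "import torch; print(torch.__version__)"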
...
INFO:hf-to-gguf:blk.11.attn_k.weight, torch.uint8 --> F32, shape = {4096, 1024}
INFO:hf-to-gguf:blk.11.attn_output.weight, torch.uint8 --> F32, shape = {4096, 4096}
INFO:hf-to-gguf:blk.11.attn_q.weight, torch.uint8 --> F32, shape = {4096, 4096}
INFO:hf-to-gguf:blk.11.attn_v.weight, torch.uint8 --> F32, shape = {4096, 1024}
INFO:hf-to-gguf:blk.12.attn_norm.weight, torch.bfloat16 --> F32, shape = {4096}
INFO:hf-to-gguf:blk.12.ffn_down.weight, torch.uint8 --> F32, shape = {14336, 4096}
INFO:hf-to-gguf:blk.12.ffn_gate.weight, torch.uint8 --> F32, shape = {4096, 14336}
INFO:hf-to-gguf:blk.12.ffn_up.weight, torch.uint8 --> F32, shape = {4096, 14336}
INFO:hf-to-gguf:blk.12.ffn_norm.weight, torch.bfloat16 --> F32, shape = {4096}
INFO:hf-to-gguf:blk.12.attn_k.weight, torch.uint8 --> F32, shape = {4096, 1024}
INFO:hf-to-gguf:blk.12.attn_output.weight, torch.uint8 --> F32, shape = {4096, 4096}
INFO:hf-to-gguf:blk.12.attn_q.weight, torch.uint8 --> F32, shape = {4096, 4096}
INFO:hf-to-gguf:blk.12.attn_v.weight, torch.uint8 --> F32, shape = {4096, 1024}
INFO:hf-to-gguf:blk.13.attn_norm.weight, torch.bfloat16 --> F32, shape = {4096}
Killed
Model conversion takes much more memory than inference, so we recommend downloading the already converted GGUF model directly. Thanks. https://huggingface.co/microsoft/bitnet-b1.58-2B-4T-gguf
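Following the same pattern as the commands earlier in the thread (the local directory name below is just an example, and this assumes setup_env.py accepts an already-converted model directory via -md):

huggingface-cli download microsoft/bitnet-b1.58-2B-4T-gguf --local-dir models/bitnet-b1.58-2B-4T
python setup_env.py -md models/bitnet-b1.58-2B-4T -q i2_s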