pyllama
pyllama/downloads returns empty folders
Hello, when running:
python3 -m llama.download
the command runs almost instantly but only creates empty folders named 7B, 13B, etc.
I also tried specifying --model_size and --folder, with the same result.
Just updated the code base. Can you reinstall pyllama and try?
I just reinstalled and I still got the same behavior
Same problem on Mac M1
Same in Fedora
Same problem for me here (Intel Mac)
Do you have wget installed? You need to install wget, otherwise you will only get empty folders.
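A quick way to check whether wget is actually visible from Python (a minimal sketch; shutil.which consults the PATH that Python's subprocesses will see, which can differ from your interactive shell's):

import shutil

# A path here means the downloader's subprocess can find wget;
# None would explain the empty folders.
print(shutil.which("wget"))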
I have wget installed and in my path. It doesn't work
I cannot reproduce the error. It always works for me. Try pip install hiq-python -U?
I tried after installing wget and hiq-python, but it still does not work. I do not get any warnings.
❯ python3 -m llama.download --model_size 7B
❤️ Resume download is supported. You can ctrl-c and rerun the program to resume the downloading
Downloading tokenizer...
✅ pyllama_data/tokenizer.model
✅ pyllama_data/tokenizer_checklist.chk
Downloading 7B
✅ pyllama_data/7B/params.json
✅ pyllama_data/7B/checklist.chk
Checking checksums
Same on Ubuntu, I can only get the tokenizer but not the model.
I am using ubuntu and it works well for me though.
Works well on Ubuntu; I pulled the latest code from the repo and reinstalled pyllama (pip install pyllama -U).
Works for Mac (Intel), empty folder for Mac (M1).
wget is on my path, and hiq-python is up to date. M1 Mac, Python 3.10, pyllama v0.0.18:
$ python -m llama.download --model_size 7B --folder llama
❤️ Resume download is supported. You can ctrl-c and rerun the program to resume the downloading
Downloading tokenizer...
✅ llama/tokenizer.model
✅ llama/tokenizer_checklist.chk
Downloading 7B
✅ llama/7B/params.json
✅ llama/7B/checklist.chk
Checking checksums
$ ls llama/
7B/
$ du -sh llama/
0B llama/
Clearing pyllama out from my site-packages folder, cloning this repo, and running the same command as the previous comment in the root directory of this repository works as it is supposed to.
Can confirm what @llimllib said. I had to do:
pip uninstall pyllama
git clone https://github.com/juncongmoo/pyllama
pip install -e pyllama
After that it works for me.
I also had another issue with py_itree, as reported here. I think this is happening on Mac M1 machines.
This was fixed by uninstalling py_itree first, then installing it from source:
pip uninstall py_itree
pip install https://github.com/juncongmoo/itree/archive/refs/tags/v0.0.18.tar.gz
If you are on macOS there is a problem with the download_community.sh script that is called from download.py:
- it uses declare -A, which needs bash v4+, but macOS only comes with bash 3.x
- the default shell on recent macOS is zsh, so we can install a newer bash without breaking things: just brew install bash
- but the download_community.sh script has #!/bin/bash at the top, which points to the system bash instead of the Homebrew bash (which has been installed as the default; if I run bash --version I get version 5.2.15 now, but in a different location; a quick check is sketched below)
- we can fix this by changing the line at the top of download_community.sh to #!/usr/bin/env bash (should work for everyone with that change, I think?)
- now we just need to brew install wget

After these steps I am now in the process of downloading the 7B checkpoints 🎉
I'm on an M1, Ventura 13.3
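For anyone wanting to verify which bash each shebang resolves to, a minimal Python sketch (the version numbers in the comments are examples from this thread, not guaranteed output):

import subprocess

def first_line(cmd):
    # Return the first stdout line of a command, e.g. the bash version banner.
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    return out.splitlines()[0] if out else ""

# What a #!/bin/bash shebang always runs (macOS system bash, stuck at 3.2):
print(first_line(["/bin/bash", "--version"]))
# What #!/usr/bin/env bash resolves via PATH (Homebrew bash 5.x, if installed):
print(first_line(["/usr/bin/env", "bash", "--version"]))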
Hopefully this is fixed following #70 and #71, let me know if you see any problems with the updated script!
@llimllib I'm getting the same issue on Windows
On macOS running ARM/Apple Silicon (M1), the empty folder persists no matter what steps are listed here or in the linked commits.
Still encounter this problem after upgrading bash to v5.
@Genie-Liu @AstroWa3l if you clone this repository and run llama/download_community.sh 7B /tmp/llama-models, hopefully you'll get the 7B model in /tmp/llama-models. Looks like there hasn't been a new release yet, so this issue will persist at least until then.
@llimllib Unfortunately it did not fix the issue
@AstroWa3l can you elaborate? Were there errors? What was the output?
The script might benefit from having set -e at the top so it exits early instead of continuing after errors.
@anentropic I do that with all my scripts, but I've never worked on this project before and wasn't sure how they were calling the script so I didn't add it
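As a minimal sketch of the set -e suggestion (assuming the script path used earlier in this thread), errexit can also be enabled from the caller without editing the file by running the script under bash -e:

import subprocess

# bash -e turns on errexit for the script, so the first failing step
# (e.g. a missing wget or md5sum) aborts with a nonzero exit code instead
# of continuing and leaving empty folders behind.
proc = subprocess.run(
    ["bash", "-e", "llama/download_community.sh", "7B", "/tmp/llama-models"]
)
print("exit code:", proc.returncode)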
@llimllib I found one error after following your instructions: it was due to a missing md5sum package, which the script needs to check hashes. I installed it and now it is downloading. Thank you!!!
@llimllib
✅ Worked on Mac (2012), macOS Catalina, x86_64 architecture (Intel chip):
- Clone the repo:
pip uninstall pyllama
git clone https://github.com/juncongmoo/pyllama
pip install -e pyllama
- cd pyllama
- Run llama/download_community.sh 7B /tmp/llama-models

🔴 However, using the python3 -m llama.download --model_size 7B --folder llama/ command, it fails with a recursion error:
% pipenv run python3 -m llama.download --model_size 7B --folder llama/
Traceback (most recent call last):
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/Users/my_user/pyllama/llama/download.py", line 87, in <module>
download(args)
File "/Users/my_user/pyllama/llama/download.py", line 20, in download
download(args)
File "/Users/my_user/pyllama/llama/download.py", line 20, in download
download(args)
File "/Users/my_user/pyllama/llama/download.py", line 20, in download
download(args)
[Previous line repeated 985 more times]
File "/Users/my_user/pyllama/llama/download.py", line 17, in download
retcode = hiq.execute_cmd(cmd, verbose=False, shell=True, runtime_output=True, env=os.environ)
File "/Users/my_user/.local/share/virtualenvs/my-env-cIchWPfI/lib/python3.9/site-packages/hiq/utils.py", line 101, in execute_cmd
proc = subprocess.Popen(
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/subprocess.py", line 951, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/subprocess.py", line 1737, in _execute_child
for k, v in env.items():
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/_collections_abc.py", line 851, in __iter__
for key in self._mapping:
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/os.py", line 701, in __iter__
yield self.decodekey(key)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/os.py", line 759, in decode
return value.decode(encoding, 'surrogateescape')
RecursionError: maximum recursion depth exceeded while calling a Python object
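The traceback shows download() in download.py calling itself to retry (line 20 calls download(args) again), so each failed attempt adds a stack frame until Python's default recursion limit is hit, which matches the "repeated 985 more times" above. A minimal sketch of that failure mode and an iterative alternative; run_download_once is a hypothetical stub standing in for the real wget-plus-checksum step:

def run_download_once() -> bool:
    # Hypothetical stub for one wget + checksum pass; returns True on success.
    return False  # simulate a step that keeps failing (e.g. md5sum missing)

def download_recursive():
    # Recursive retry: each failure adds a stack frame, so a persistent
    # failure eventually raises RecursionError.
    if not run_download_once():
        download_recursive()

def download_iterative(max_attempts: int = 10) -> bool:
    # Iterative retry with an attempt cap: fails loudly instead of
    # exhausting the stack.
    for _ in range(max_attempts):
        if run_download_once():
            return True
    return False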
Ah! I already had md5sum via brew install coreutils