
huggingface models pending

Open merrychrishna opened this issue 1 year ago • 15 comments

Why, when I put in my Hugging Face API key and select a GPT-4 model, does it just say "pending"? What am I supposed to do?

merrychrishna avatar Apr 08 '23 01:04 merrychrishna

Do you have any terminal output or browser console output? There should be no GPT-4 model available through Hugging Face.

zainhuda avatar Apr 08 '23 10:04 zainhuda

(screenshot)

(screenshot)

merrychrishna avatar Apr 08 '23 13:04 merrychrishna

(screenshot)

merrychrishna avatar Apr 08 '23 13:04 merrychrishna

I have the exact same issue with the exact same logs, aside from the IP address. I watched the network traffic while it sat at "pending" and there is no pull from the internet for a download. I used the Docker image, `openplayground run`, and `server.app`, all to no avail. I can confirm the presence of the bug across all installs.

System: Debian 11 (virtualized), Xeon(R) CPU E5-2670 v3 (24 cores, virtualized), 64 GB RAM, 1 TB HDD

theman23290 avatar Apr 09 '23 16:04 theman23290

I will be looking into this today

AlexanderLourenco avatar Apr 10 '23 16:04 AlexanderLourenco

Can you try pulling the latest code changes, or if you're using pip: `pip install --force -i https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple/ openplayground==0.1.13a3`

I have also added a logging flag, `python -m server.app -l INFO`, which should help diagnose the problem.

AlexanderLourenco avatar Apr 11 '23 19:04 AlexanderLourenco

After uninstalling openplayground with `pip uninstall openplayground`, then reinstalling with `pip install openplayground`, then upgrading with `pip install --upgrade openplayground`, openplayground is not updating and I still get the same thing in Windows 10 cmd:

```
C:\Users\user2>openplayground run
Initializing download manager...
Should download ykilcher/gpt-4chan from huggingface
Download loop started...
About to start downloading
```

merrychrishna avatar Apr 11 '23 19:04 merrychrishna

Nuked the old openplayground and pip packages and reinstalled the new 0.1.12a5 from source. Unfortunately, the issue still persists. Here are the updated logs from this version using the `-l INFO` flag (I tried to download gpt4-x-alpaca as the model in question, and llama-7b-hf):

```
senpai@debian:/media/senpai/Storage/openplayground$ python3 -m server.app --host 192.168.X.X -l INFO
INFO:server.lib.sseserver:SUBSCRIBING TO: inferences
INFO:server.lib.sseserver:SUBSCRIBING TO: notifications
INFO:server.lib.sseserver:GETTING TOPIC: notifications
INFO:server.lib.sseserver:GETTING TOPIC: inferences
INFO:main:Initializing download manager...
INFO:main:Download loop started...
 * Serving Flask app 'app'
 * Debug mode: off
INFO:werkzeug:WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on http://192.168.X.X:5432
INFO:werkzeug:Press CTRL+C to quit
--- Logging error ---
Traceback (most recent call last):
  File "/media/senpai/Storage/openplayground/server/app.py", line 252, in download_loop
    _ = AutoTokenizer.from_pretrained(model.name)
  File "/home/senpai/.local/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 676, in from_pretrained
    raise ValueError(
ValueError: Tokenizer class LlamaTokenizer does not exist or is not currently imported.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.9/logging/__init__.py", line 1079, in emit
    msg = self.format(record)
  File "/usr/lib/python3.9/logging/__init__.py", line 923, in format
    return fmt.format(record)
  File "/usr/lib/python3.9/logging/__init__.py", line 659, in format
    record.message = record.getMessage()
  File "/usr/lib/python3.9/logging/__init__.py", line 363, in getMessage
    msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
  File "/usr/lib/python3.9/threading.py", line 912, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.9/threading.py", line 954, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.9/threading.py", line 892, in run
    self._target(*self._args, **self._kwargs)
  File "/media/senpai/Storage/openplayground/server/app.py", line 266, in download_loop
    logger.error("error", e)
Message: 'error'
Arguments: (ValueError('Tokenizer class LlamaTokenizer does not exist or is not currently imported.'),)
ERROR:main:Failed to download anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g from huggingface
--- Logging error ---
Traceback (most recent call last):
  File "/media/senpai/Storage/openplayground/server/app.py", line 252, in download_loop
    _ = AutoTokenizer.from_pretrained(model.name)
  File "/home/senpai/.local/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 676, in from_pretrained
    raise ValueError(
ValueError: Tokenizer class LlamaTokenizer does not exist or is not currently imported.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.9/logging/__init__.py", line 1079, in emit
    msg = self.format(record)
  File "/usr/lib/python3.9/logging/__init__.py", line 923, in format
    return fmt.format(record)
  File "/usr/lib/python3.9/logging/__init__.py", line 659, in format
    record.message = record.getMessage()
  File "/usr/lib/python3.9/logging/__init__.py", line 363, in getMessage
    msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
  File "/usr/lib/python3.9/threading.py", line 912, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.9/threading.py", line 954, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.9/threading.py", line 892, in run
    self._target(*self._args, **self._kwargs)
  File "/media/senpai/Storage/openplayground/server/app.py", line 266, in download_loop
    logger.error("error", e)
Message: 'error'
Arguments: (ValueError('Tokenizer class LlamaTokenizer does not exist or is not currently imported.'),)
ERROR:main:Failed to download anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g from huggingface-local
--- Logging error ---
Traceback (most recent call last):
  File "/media/senpai/Storage/openplayground/server/app.py", line 252, in download_loop
    _ = AutoTokenizer.from_pretrained(model.name)
  File "/home/senpai/.local/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 676, in from_pretrained
    raise ValueError(
ValueError: Tokenizer class LlamaTokenizer does not exist or is not currently imported.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.9/logging/__init__.py", line 1079, in emit
    msg = self.format(record)
  File "/usr/lib/python3.9/logging/__init__.py", line 923, in format
    return fmt.format(record)
  File "/usr/lib/python3.9/logging/__init__.py", line 659, in format
    record.message = record.getMessage()
  File "/usr/lib/python3.9/logging/__init__.py", line 363, in getMessage
    msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
  File "/usr/lib/python3.9/threading.py", line 912, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.9/threading.py", line 954, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.9/threading.py", line 892, in run
    self._target(*self._args, **self._kwargs)
  File "/media/senpai/Storage/openplayground/server/app.py", line 266, in download_loop
    logger.error("error", e)
Message: 'error'
Arguments: (ValueError('Tokenizer class LlamaTokenizer does not exist or is not currently imported.'),)
ERROR:main:Failed to download anon8231489123/vicuna-13b-GPTQ-4bit-128g from huggingface-local
--- Logging error ---
Traceback (most recent call last):
  File "/media/senpai/Storage/openplayground/server/app.py", line 252, in download_loop
    _ = AutoTokenizer.from_pretrained(model.name)
  File "/home/senpai/.local/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 676, in from_pretrained
    raise ValueError(
ValueError: Tokenizer class LLaMATokenizer does not exist or is not currently imported.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.9/logging/__init__.py", line 1079, in emit
    msg = self.format(record)
  File "/usr/lib/python3.9/logging/__init__.py", line 923, in format
    return fmt.format(record)
  File "/usr/lib/python3.9/logging/__init__.py", line 659, in format
    record.message = record.getMessage()
  File "/usr/lib/python3.9/logging/__init__.py", line 363, in getMessage
    msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
  File "/usr/lib/python3.9/threading.py", line 912, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.9/threading.py", line 954, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.9/threading.py", line 892, in run
    self._target(*self._args, **self._kwargs)
  File "/media/senpai/Storage/openplayground/server/app.py", line 266, in download_loop
    logger.error("error", e)
Message: 'error'
Arguments: (ValueError('Tokenizer class LLaMATokenizer does not exist or is not currently imported.'),)
ERROR:main:Failed to download decapoda-research/llama-7b-hf from huggingface-local
--- Logging error ---
Traceback (most recent call last):
  File "/media/senpai/Storage/openplayground/server/app.py", line 252, in download_loop
    _ = AutoTokenizer.from_pretrained(model.name)
  File "/home/senpai/.local/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 676, in from_pretrained
    raise ValueError(
ValueError: Tokenizer class LlamaTokenizer does not exist or is not currently imported.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.9/logging/__init__.py", line 1079, in emit
    msg = self.format(record)
  File "/usr/lib/python3.9/logging/__init__.py", line 923, in format
    return fmt.format(record)
  File "/usr/lib/python3.9/logging/__init__.py", line 659, in format
    record.message = record.getMessage()
  File "/usr/lib/python3.9/logging/__init__.py", line 363, in getMessage
    msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
  File "/usr/lib/python3.9/threading.py", line 912, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.9/threading.py", line 954, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.9/threading.py", line 892, in run
    self._target(*self._args, **self._kwargs)
  File "/media/senpai/Storage/openplayground/server/app.py", line 266, in download_loop
    logger.error("error", e)
Message: 'error'
Arguments: (ValueError('Tokenizer class LlamaTokenizer does not exist or is not currently imported.'),)
ERROR:main:Failed to download mongolian-basket-weaving/koala-13b-fp16-safetensors from huggingface-local
INFO:werkzeug:172.18.0.2 - - [11/Apr/2023 16:55:26] "GET /settings HTTP/1.1" 200 -
INFO:server.lib.api:Received notification request
INFO:server.lib.sseserver:LISTENING TO: notifications
INFO:server.lib.sseserver:LISTENING
INFO:server.lib.api:Getting providers with models
INFO:werkzeug:172.18.0.2 - - [11/Apr/2023 16:55:26] "GET /api/providers-with-key-and-models HTTP/1.1" 200 -
INFO:server.lib.api.provider:Enabling Provider Model huggingface-local anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g
INFO:server.lib.storage:Saving models.json
INFO:werkzeug:172.18.0.2 - - [11/Apr/2023 16:55:32] "GET /api/provider/huggingface-local/model/anon8231489123%2Fgpt4-x-alpaca-13b-native-4bit-128g/toggle-status HTTP/1.1" 200 -
INFO:server.lib.api.provider:Enabling Provider Model huggingface-local anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g
INFO:server.lib.storage:Saving models.json
INFO:werkzeug:172.18.0.2 - - [11/Apr/2023 16:55:34] "GET /api/provider/huggingface-local/model/anon8231489123%2Fgpt4-x-alpaca-13b-native-4bit-128g/toggle-status HTTP/1.1" 200 -
INFO:werkzeug:172.18.0.2 - - [11/Apr/2023 16:57:22] "GET /settings HTTP/1.1" 304 -
INFO:server.lib.api:Received notification request
INFO:server.lib.sseserver:LISTENING TO: notifications
INFO:server.lib.sseserver:LISTENING
INFO:server.lib.api:Getting providers with models
INFO:werkzeug:172.18.0.2 - - [11/Apr/2023 16:57:23] "GET /api/providers-with-key-and-models HTTP/1.1" 200 -
INFO:werkzeug:172.18.0.2 - - [11/Apr/2023 16:58:57] "GET /settings HTTP/1.1" 304 -
INFO:server.lib.api:Getting providers with models
INFO:werkzeug:172.18.0.2 - - [11/Apr/2023 16:58:57] "GET /api/providers-with-key-and-models HTTP/1.1" 200 -
INFO:server.lib.api:Received notification request
INFO:server.lib.sseserver:LISTENING TO: notifications
INFO:server.lib.sseserver:LISTENING
INFO:server.lib.api.provider:Enabling Provider Model huggingface-local anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g
INFO:server.lib.storage:Saving models.json
INFO:werkzeug:172.18.0.2 - - [11/Apr/2023 17:01:51] "GET /api/provider/huggingface-local/model/anon8231489123%2Fgpt4-x-alpaca-13b-native-4bit-128g/toggle-status HTTP/1.1" 200 -
INFO:server.lib.api.provider:Enabling Provider Model huggingface-local decapoda-research/llama-7b-hf
INFO:server.lib.storage:Saving models.json
INFO:werkzeug:172.18.0.2 - - [11/Apr/2023 17:01:52] "GET /api/provider/huggingface-local/model/decapoda-research%2Fllama-7b-hf/toggle-status HTTP/1.1" 200 -
INFO:werkzeug:172.18.0.2 - - [11/Apr/2023 17:02:02] "GET /settings HTTP/1.1" 304 -
INFO:server.lib.api:Received notification request
INFO:server.lib.sseserver:LISTENING TO: notifications
INFO:server.lib.sseserver:LISTENING
INFO:server.lib.api:Getting providers with models
INFO:werkzeug:172.18.0.2 - - [11/Apr/2023 17:02:02] "GET /api/providers-with-key-and-models HTTP/1.1" 200 -
```

theman23290 avatar Apr 11 '23 21:04 theman23290
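(Editor's note: the secondary `--- Logging error ---` tracebacks above come from how the download loop reports failures, not from the downloads themselves. `logger.error("error", e)` in `server/app.py` passes the exception as a %-format argument to a message with no placeholder. A minimal sketch of the bug and two standard-library fixes; the logger name here is illustrative, not openplayground's actual code:)

```python
import logging

logging.basicConfig(level=logging.ERROR)
logger = logging.getLogger("demo")

err = ValueError("Tokenizer class LlamaTokenizer does not exist or is not currently imported.")

# Buggy pattern from download_loop: the message "error" has no %s placeholder,
# so formatting it against (err,) raises
# "TypeError: not all arguments converted during string formatting".
record = logging.LogRecord("demo", logging.ERROR, "app.py", 266, "error", (err,), None)
try:
    record.getMessage()
except TypeError as exc:
    print(exc)  # not all arguments converted during string formatting

# Fix 1: give the argument a placeholder.
logger.error("error: %s", err)

# Fix 2: log inside an except block and capture the full traceback.
try:
    raise err
except ValueError:
    logger.exception("failed to download model")
```

With either fix the real `ValueError` would appear directly in the log instead of being buried under a logging `TypeError`.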

I installed transformers using `pip3 install git+https://github.com/huggingface/transformers` and now I don't get the "LlamaTokenizer does not exist or is not currently imported" error anymore. Now there is another error preventing the download.

```
senpai@debian:/media/senpai/Storage/openplayground$ python3 -m server.app --host 192.168.X.X -l INFO
INFO:server.lib.sseserver:SUBSCRIBING TO: inferences
INFO:server.lib.sseserver:SUBSCRIBING TO: notifications
INFO:server.lib.sseserver:GETTING TOPIC: notifications
INFO:server.lib.sseserver:GETTING TOPIC: inferences
INFO:main:Initializing download manager...
INFO:main:Download loop started...
 * Serving Flask app 'app'
 * Debug mode: off
INFO:werkzeug:WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on http://192.168.X.X:5432
INFO:werkzeug:Press CTRL+C to quit
INFO:main:Downloading tokenizer.model: 0%| | 0.00/500k [00:00<?, ?B/s]
INFO:main:Downloading tokenizer.model: 100%|##########| 500k/500k [00:00<00:00, 4.91MB/s]
INFO:main:Downloading tokenizer.model: 100%|##########| 500k/500k [00:00<00:00, 4.87MB/s]
--- Logging error ---
Traceback (most recent call last):
  File "/media/senpai/Storage/openplayground/server/app.py", line 252, in download_loop
    _ = AutoTokenizer.from_pretrained(model.name)
  File "/home/senpai/.local/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 701, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/home/senpai/.local/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1811, in from_pretrained
    return cls._from_pretrained(
  File "/home/senpai/.local/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1965, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/home/senpai/.local/lib/python3.9/site-packages/transformers/models/llama/tokenization_llama_fast.py", line 89, in __init__
    super().__init__(
  File "/home/senpai/.local/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py", line 114, in __init__
    fast_tokenizer = convert_slow_tokenizer(slow_tokenizer)
  File "/home/senpai/.local/lib/python3.9/site-packages/transformers/convert_slow_tokenizer.py", line 1288, in convert_slow_tokenizer
    return converter_class(transformer_tokenizer).converted()
  File "/home/senpai/.local/lib/python3.9/site-packages/transformers/convert_slow_tokenizer.py", line 441, in __init__
    requires_backends(self, "protobuf")
  File "/home/senpai/.local/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1053, in requires_backends
    raise ImportError("".join(failed))
ImportError:
LlamaConverter requires the protobuf library but it was not found in your environment. Checkout the instructions on the
installation page of its repo: https://github.com/protocolbuffers/protobuf/tree/master/python#installation and follow the ones
that match your environment. Please note that you may need to restart your runtime after installation.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.9/logging/__init__.py", line 1079, in emit
    msg = self.format(record)
  File "/usr/lib/python3.9/logging/__init__.py", line 923, in format
    return fmt.format(record)
  File "/usr/lib/python3.9/logging/__init__.py", line 659, in format
    record.message = record.getMessage()
  File "/usr/lib/python3.9/logging/__init__.py", line 363, in getMessage
    msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
  File "/usr/lib/python3.9/threading.py", line 912, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.9/threading.py", line 954, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.9/threading.py", line 892, in run
    self._target(*self._args, **self._kwargs)
  File "/media/senpai/Storage/openplayground/server/app.py", line 266, in download_loop
    logger.error("error", e)
Message: 'error'
Arguments: (ImportError('\nLlamaConverter requires the protobuf library but it was not found in your environment. Checkout the instructions on the\ninstallation page of its repo: https://github.com/protocolbuffers/protobuf/tree/master/python#installation and follow the ones\nthat match your environment. Please note that you may need to restart your runtime after installation.\n'),)
ERROR:main:Failed to download anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g from huggingface
INFO:main:Downloading (…)cial_tokens_map.json: 0%| | 0.00/96.0 [00:00<?, ?B/s]
INFO:main:Downloading (…)cial_tokens_map.json: 100%|##########| 96.0/96.0 [00:00<00:00, 22.0kB/s]
--- Logging error ---
Traceback (most recent call last):
  File "/media/senpai/Storage/openplayground/server/app.py", line 252, in download_loop
    _ = AutoTokenizer.from_pretrained(model.name)
  File "/home/senpai/.local/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 701, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/home/senpai/.local/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1811, in from_pretrained
    return cls._from_pretrained(
  File "/home/senpai/.local/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1965, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/home/senpai/.local/lib/python3.9/site-packages/transformers/models/llama/tokenization_llama_fast.py", line 89, in __init__
    super().__init__(
  File "/home/senpai/.local/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py", line 114, in __init__
    fast_tokenizer = convert_slow_tokenizer(slow_tokenizer)
  File "/home/senpai/.local/lib/python3.9/site-packages/transformers/convert_slow_tokenizer.py", line 1288, in convert_slow_tokenizer
    return converter_class(transformer_tokenizer).converted()
  File "/home/senpai/.local/lib/python3.9/site-packages/transformers/convert_slow_tokenizer.py", line 441, in __init__
    requires_backends(self, "protobuf")
  File "/home/senpai/.local/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1053, in requires_backends
    raise ImportError("".join(failed))
ImportError:
LlamaConverter requires the protobuf library but it was not found in your environment. Checkout the instructions on the
installation page of its repo: https://github.com/protocolbuffers/protobuf/tree/master/python#installation and follow the ones
that match your environment. Please note that you may need to restart your runtime after installation.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.9/logging/__init__.py", line 1079, in emit
    msg = self.format(record)
  File "/usr/lib/python3.9/logging/__init__.py", line 923, in format
    return fmt.format(record)
  File "/usr/lib/python3.9/logging/__init__.py", line 659, in format
    record.message = record.getMessage()
  File "/usr/lib/python3.9/logging/__init__.py", line 363, in getMessage
    msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
  File "/usr/lib/python3.9/threading.py", line 912, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.9/threading.py", line 954, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.9/threading.py", line 892, in run
    self._target(*self._args, **self._kwargs)
  File "/media/senpai/Storage/openplayground/server/app.py", line 266, in download_loop
    logger.error("error", e)
Message: 'error'
Arguments: (ImportError('\nLlamaConverter requires the protobuf library but it was not found in your environment. Checkout the instructions on the\ninstallation page of its repo: https://github.com/protocolbuffers/protobuf/tree/master/python#installation and follow the ones\nthat match your environment. Please note that you may need to restart your runtime after installation.\n'),)
ERROR:main:Failed to download anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g from huggingface-local
INFO:main:Downloading tokenizer.model: 0%| | 0.00/500k [00:00<?, ?B/s]
INFO:main:Downloading tokenizer.model: 100%|##########| 500k/500k [00:00<00:00, 4.51MB/s]
INFO:main:Downloading tokenizer.model: 100%|##########| 500k/500k [00:00<00:00, 4.47MB/s]
--- Logging error ---
Traceback (most recent call last):
  File "/media/senpai/Storage/openplayground/server/app.py", line 252, in download_loop
    _ = AutoTokenizer.from_pretrained(model.name)
  File "/home/senpai/.local/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 701, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/home/senpai/.local/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1811, in from_pretrained
    return cls._from_pretrained(
  File "/home/senpai/.local/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1965, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/home/senpai/.local/lib/python3.9/site-packages/transformers/models/llama/tokenization_llama_fast.py", line 89, in __init__
    super().__init__(
  File "/home/senpai/.local/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py", line 114, in __init__
    fast_tokenizer = convert_slow_tokenizer(slow_tokenizer)
  File "/home/senpai/.local/lib/python3.9/site-packages/transformers/convert_slow_tokenizer.py", line 1288, in convert_slow_tokenizer
    return converter_class(transformer_tokenizer).converted()
  File "/home/senpai/.local/lib/python3.9/site-packages/transformers/convert_slow_tokenizer.py", line 441, in __init__
    requires_backends(self, "protobuf")
  File "/home/senpai/.local/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1053, in requires_backends
    raise ImportError("".join(failed))
ImportError:
LlamaConverter requires the protobuf library but it was not found in your environment. Checkout the instructions on the
installation page of its repo: https://github.com/protocolbuffers/protobuf/tree/master/python#installation and follow the ones
that match your environment. Please note that you may need to restart your runtime after installation.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.9/logging/__init__.py", line 1079, in emit
    msg = self.format(record)
  File "/usr/lib/python3.9/logging/__init__.py", line 923, in format
    return fmt.format(record)
  File "/usr/lib/python3.9/logging/__init__.py", line 659, in format
    record.message = record.getMessage()
  File "/usr/lib/python3.9/logging/__init__.py", line 363, in getMessage
    msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
  File "/usr/lib/python3.9/threading.py", line 912, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.9/threading.py", line 954, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.9/threading.py", line 892, in run
    self._target(*self._args, **self._kwargs)
  File "/media/senpai/Storage/openplayground/server/app.py", line 266, in download_loop
    logger.error("error", e)
Message: 'error'
Arguments: (ImportError('\nLlamaConverter requires the protobuf library but it was not found in your environment. Checkout the instructions on the\ninstallation page of its repo: https://github.com/protocolbuffers/protobuf/tree/master/python#installation and follow the ones\nthat match your environment. Please note that you may need to restart your runtime after installation.\n'),)
ERROR:main:Failed to download anon8231489123/vicuna-13b-GPTQ-4bit-128g from huggingface-local
INFO:main:Downloading (…)cial_tokens_map.json: 0%| | 0.00/411 [00:00<?, ?B/s]
INFO:main:Downloading (…)cial_tokens_map.json: 100%|##########| 411/411 [00:00<00:00, 200kB/s]
--- Logging error ---
Traceback (most recent call last):
  File "/media/senpai/Storage/openplayground/server/app.py", line 252, in download_loop
    _ = AutoTokenizer.from_pretrained(model.name)
  File "/home/senpai/.local/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 698, in from_pretrained
    raise ValueError(
ValueError: Tokenizer class LLaMATokenizer does not exist or is not currently imported.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.9/logging/__init__.py", line 1079, in emit
    msg = self.format(record)
  File "/usr/lib/python3.9/logging/__init__.py", line 923, in format
    return fmt.format(record)
  File "/usr/lib/python3.9/logging/__init__.py", line 659, in format
    record.message = record.getMessage()
  File "/usr/lib/python3.9/logging/__init__.py", line 363, in getMessage
    msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
  File "/usr/lib/python3.9/threading.py", line 912, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.9/threading.py", line 954, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.9/threading.py", line 892, in run
    self._target(*self._args, **self._kwargs)
  File "/media/senpai/Storage/openplayground/server/app.py", line 266, in download_loop
    logger.error("error", e)
Message: 'error'
Arguments: (ValueError('Tokenizer class LLaMATokenizer does not exist or is not currently imported.'),)
ERROR:main:Failed to download decapoda-research/llama-7b-hf from huggingface-local
INFO:main:Downloading tokenizer.model: 0%| | 0.00/500k [00:00<?, ?B/s]
INFO:main:Downloading tokenizer.model: 100%|##########| 500k/500k [00:00<00:00, 4.00MB/s]
INFO:main:Downloading tokenizer.model: 100%|##########| 500k/500k [00:00<00:00, 3.95MB/s]
INFO:main:Downloading (…)/main/tokenizer.json: 0%| | 0.00/1.84M [00:00<?, ?B/s]
INFO:main:Downloading (…)/main/tokenizer.json: 100%|##########| 1.84M/1.84M [00:00<00:00, 8.26MB/s]
INFO:main:Downloading (…)/main/tokenizer.json: 100%|##########| 1.84M/1.84M [00:00<00:00, 8.05MB/s]
INFO:main:Downloading (…)cial_tokens_map.json: 0%| | 0.00/411 [00:00<?, ?B/s]
INFO:main:Downloading (…)cial_tokens_map.json: 100%|##########| 411/411 [00:00<00:00, 202kB/s]
INFO:main:Downloading (…)lve/main/config.json: 0%| | 0.00/507 [00:00<?, ?B/s]
INFO:main:Downloading (…)lve/main/config.json: 100%|##########| 507/507 [00:00<00:00, 40.6kB/s]
--- Logging error ---
Traceback (most recent call last):
  File "/media/senpai/Storage/openplayground/server/app.py", line 253, in download_loop
    _ = AutoModel.from_pretrained(model.name)
  File "/home/senpai/.local/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 471, in from_pretrained
    return model_class.from_pretrained(
  File "/home/senpai/.local/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2455, in from_pretrained
    raise EnvironmentError(
OSError: mongolian-basket-weaving/koala-13b-fp16-safetensors does not appear to have a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.9/logging/__init__.py", line 1079, in emit
    msg = self.format(record)
  File "/usr/lib/python3.9/logging/__init__.py", line 923, in format
    return fmt.format(record)
  File "/usr/lib/python3.9/logging/__init__.py", line 659, in format
    record.message = record.getMessage()
  File "/usr/lib/python3.9/logging/__init__.py", line 363, in getMessage
    msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
  File "/usr/lib/python3.9/threading.py", line 912, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.9/threading.py", line 954, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.9/threading.py", line 892, in run
    self._target(*self._args, **self._kwargs)
  File "/media/senpai/Storage/openplayground/server/app.py", line 266, in download_loop
    logger.error("error", e)
Message: 'error'
Arguments: (OSError('mongolian-basket-weaving/koala-13b-fp16-safetensors does not appear to have a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack.'),)
ERROR:main:Failed to download mongolian-basket-weaving/koala-13b-fp16-safetensors from huggingface-local
INFO:werkzeug:172.18.0.2 - - [11/Apr/2023 19:36:22] "GET /settings HTTP/1.1" 304 -
INFO:server.lib.api:Getting providers with models
INFO:werkzeug:172.18.0.2 - - [11/Apr/2023 19:36:22] "GET /api/providers-with-key-and-models HTTP/1.1" 200 -
INFO:server.lib.api:Received notification request
INFO:server.lib.sseserver:LISTENING TO: notifications
INFO:server.lib.sseserver:LISTENING
INFO:server.lib.api.provider:Enabling Provider Model huggingface-local anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g
INFO:server.lib.storage:Saving models.json
INFO:werkzeug:172.18.0.2 - - [11/Apr/2023 19:37:44] "GET /api/provider/huggingface-local/model/anon8231489123%2Fgpt4-x-alpaca-13b-native-4bit-128g/toggle-status HTTP/1.1" 200 -
INFO:server.lib.api:Getting enabled models
INFO:werkzeug:172.18.0.2 - - [11/Apr/2023 19:38:13] "GET /api/models-enabled HTTP/1.1" 200 -
INFO:server.lib.api:Getting providers with models
INFO:werkzeug:172.18.0.2 - - [11/Apr/2023 19:38:14] "GET /api/providers-with-key-and-models HTTP/1.1" 200 -
INFO:werkzeug:172.18.0.2 - - [11/Apr/2023 19:38:16] "GET /settings HTTP/1.1" 304 -
INFO:server.lib.api:Getting providers with models
INFO:werkzeug:172.18.0.2 - - [11/Apr/2023 19:38:16] "GET /api/providers-with-key-and-models HTTP/1.1" 200 -
INFO:server.lib.api:Received notification request
INFO:server.lib.sseserver:LISTENING TO: notifications
INFO:server.lib.sseserver:LISTENING
^C^CException ignored in: <module 'threading' from '/usr/lib/python3.9/threading.py'>
Traceback (most recent call last):
  File "/usr/lib/python3.9/threading.py", line 1428, in _shutdown
    lock.acquire()
KeyboardInterrupt:
```
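(Editor's note: the `LLaMATokenizer` (capital "LLaMA") failure for decapoda-research/llama-7b-hf appears to be a casing mismatch: that repo's `tokenizer_config.json` predates the class being renamed to `LlamaTokenizer` in transformers. A hedged sketch of patching a locally downloaded copy of the config; the function name and path handling are illustrative, not part of openplayground:)

```python
import json
from pathlib import Path

def patch_tokenizer_class(config_path: Path) -> bool:
    """Rewrite the legacy 'LLaMATokenizer' class name to 'LlamaTokenizer'.

    Returns True if the file was modified, False if no change was needed.
    """
    config = json.loads(config_path.read_text())
    if config.get("tokenizer_class") == "LLaMATokenizer":
        config["tokenizer_class"] = "LlamaTokenizer"
        config_path.write_text(json.dumps(config, indent=2))
        return True
    return False
```

Pointing this at the `tokenizer_config.json` inside the model's download/cache directory before retrying should avoid the casing error, assuming the installed transformers version actually provides `LlamaTokenizer`.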

I tried running `pip3 install protobuf==3.19.0` and now I don't see any errors, but I still have the pending problem. I don't know what the issue is after this.

```
senpai@debian:/media/senpai/Storage/openplayground$ python3 -m server.app --host 192.168.X.X -l INFO
INFO:server.lib.sseserver:SUBSCRIBING TO: inferences
INFO:server.lib.sseserver:SUBSCRIBING TO: notifications
INFO:server.lib.sseserver:GETTING TOPIC: notifications
INFO:server.lib.sseserver:GETTING TOPIC: inferences
INFO:__main__:Initializing download manager...
INFO:__main__:Download loop started...
 * Serving Flask app 'app'
 * Debug mode: off
INFO:werkzeug:WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on http://192.168.X.X:5432
INFO:werkzeug:Press CTRL+C to quit
INFO:werkzeug:172.18.0.2 - - [11/Apr/2023 19:48:09] "GET /settings HTTP/1.1" 304 -
INFO:server.lib.api:Received notification request
INFO:server.lib.sseserver:LISTENING TO: notifications
INFO:server.lib.sseserver:LISTENING
INFO:server.lib.api:Getting providers with models
INFO:werkzeug:172.18.0.2 - - [11/Apr/2023 19:48:09] "GET /api/providers-with-key-and-models HTTP/1.1" 200 -
INFO:server.lib.api.provider:Enabling Provider Model huggingface-local anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g
INFO:server.lib.storage:Saving models.json
INFO:werkzeug:172.18.0.2 - - [11/Apr/2023 19:48:38] "GET /api/provider/huggingface-local/model/anon8231489123%2Fgpt4-x-alpaca-13b-native-4bit-128g/toggle-status HTTP/1.1" 200 -
INFO:server.lib.api.provider:Enabling Provider Model huggingface-local decapoda-research/llama-7b-hf
INFO:server.lib.storage:Saving models.json
INFO:werkzeug:172.18.0.2 - - [11/Apr/2023 19:48:41] "GET /api/provider/huggingface-local/model/decapoda-research%2Fllama-7b-hf/toggle-status HTTP/1.1" 200 -
INFO:server.lib.api.provider:Enabling Provider Model huggingface-local anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g
INFO:server.lib.storage:Saving models.json
INFO:werkzeug:172.18.0.2 - - [11/Apr/2023 19:48:43] "GET /api/provider/huggingface-local/model/anon8231489123%2Fgpt4-x-alpaca-13b-native-4bit-128g/toggle-status HTTP/1.1" 200 -
INFO:werkzeug:172.18.0.2 - - [11/Apr/2023 19:48:47] "GET /settings HTTP/1.1" 304 -
INFO:server.lib.api:Getting providers with models
INFO:werkzeug:172.18.0.2 - - [11/Apr/2023 19:48:47] "GET /api/providers-with-key-and-models HTTP/1.1" 200 -
INFO:server.lib.api:Received notification request
INFO:server.lib.sseserver:LISTENING TO: notifications
INFO:server.lib.sseserver:LISTENING
INFO:werkzeug:172.18.0.2 - - [11/Apr/2023 19:49:26] "GET /settings HTTP/1.1" 304 -
INFO:server.lib.api:Getting providers with models
INFO:werkzeug:172.18.0.2 - - [11/Apr/2023 19:49:26] "GET /api/providers-with-key-and-models HTTP/1.1" 200 -
INFO:server.lib.api:Received notification request
INFO:server.lib.sseserver:LISTENING TO: notifications
INFO:server.lib.sseserver:LISTENING
```

theman23290 avatar Apr 11 '23 23:04 theman23290

> after uninstalling openplayground with `pip uninstall openplayground`, then installing openplayground with `pip install openplayground`, then upgrading with `pip install --upgrade openplayground`, openplayground is not updating and I still get the same thing in Windows 10 cmd:
>
> ```
> C:\Users\user2>openplayground run
> Initializing download manager...
> Should download ykilcher/gpt-4chan from huggingface
> Download loop started...
> About to start downloading
> ```

You're installing the production release from PyPI, but I haven't published those changes there yet. Please install the TestPyPI version as shown in my previous message.

AlexanderLourenco avatar Apr 12 '23 00:04 AlexanderLourenco


Thank you for sharing the logs! I will try to run the same model locally to see if I can replicate the problem

AlexanderLourenco avatar Apr 12 '23 00:04 AlexanderLourenco


oh ok, so installing with `pip install --force -i https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple/ openplayground==0.1.13a3` gives me these errors when I run openplayground:

```
C:\Users\user2>openplayground run
INFO:server.lib.sseserver:SUBSCRIBING TO: inferences
INFO:server.lib.sseserver:SUBSCRIBING TO: notifications
INFO:server.lib.sseserver:GETTING TOPIC: notifications
INFO:server.lib.sseserver:GETTING TOPIC: inferences
INFO:server.app:Initializing download manager...
INFO:server.app:Download loop started...
 * Serving Flask app 'server.app'
 * Debug mode: off
INFO:werkzeug:WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on http://localhost:5432
INFO:werkzeug:Press CTRL+C to quit
--- Logging error ---
Traceback (most recent call last):
  File "c:\users\user2\appdata\roaming\python395\lib\site-packages\huggingface_hub\utils\_errors.py", line 259, in hf_raise_for_status
    response.raise_for_status()
  File "c:\users\user2\appdata\roaming\python395\lib\site-packages\requests\models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/ykilcher/gpt-4chan/resolve/main/tokenizer_config.json

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "c:\users\user2\appdata\roaming\python395\lib\site-packages\transformers\utils\hub.py", line 409, in cached_file
    resolved_file = hf_hub_download(
  File "c:\users\user2\appdata\roaming\python395\lib\site-packages\huggingface_hub\utils\_validators.py", line 120, in _inner_fn
    return fn(*args, **kwargs)
  File "c:\users\user2\appdata\roaming\python395\lib\site-packages\huggingface_hub\file_download.py", line 1166, in hf_hub_download
    metadata = get_hf_file_metadata(
  File "c:\users\user2\appdata\roaming\python395\lib\site-packages\huggingface_hub\utils\_validators.py", line 120, in _inner_fn
    return fn(*args, **kwargs)
  File "c:\users\user2\appdata\roaming\python395\lib\site-packages\huggingface_hub\file_download.py", line 1507, in get_hf_file_metadata
    hf_raise_for_status(r)
  File "c:\users\user2\appdata\roaming\python395\lib\site-packages\huggingface_hub\utils\_errors.py", line 291, in hf_raise_for_status
    raise RepositoryNotFoundError(message, response) from e
huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-6435fa96-430b6366564f5cd87d2a1331)

Repository Not Found for url: https://huggingface.co/ykilcher/gpt-4chan/resolve/main/tokenizer_config.json.
Please make sure you specified the correct repo_id and repo_type.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid username or password.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "c:\users\user2\appdata\roaming\python395\lib\site-packages\server\app.py", line 252, in download_loop
    _ = AutoTokenizer.from_pretrained(model.name)
  File "c:\users\user2\appdata\roaming\python395\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 619, in from_pretrained
    tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
  File "c:\users\user2\appdata\roaming\python395\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 463, in get_tokenizer_config
    resolved_config_file = cached_file(
  File "c:\users\user2\appdata\roaming\python395\lib\site-packages\transformers\utils\hub.py", line 424, in cached_file
    raise EnvironmentError(
OSError: ykilcher/gpt-4chan is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "c:\users\user2\appdata\roaming\python395\lib\logging\__init__.py", line 1083, in emit
    msg = self.format(record)
  File "c:\users\user2\appdata\roaming\python395\lib\logging\__init__.py", line 927, in format
    return fmt.format(record)
  File "c:\users\user2\appdata\roaming\python395\lib\logging\__init__.py", line 663, in format
    record.message = record.getMessage()
  File "c:\users\user2\appdata\roaming\python395\lib\logging\__init__.py", line 367, in getMessage
    msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
  File "c:\users\user2\appdata\roaming\python395\lib\threading.py", line 912, in _bootstrap
    self._bootstrap_inner()
  File "c:\users\user2\appdata\roaming\python395\lib\threading.py", line 954, in _bootstrap_inner
    self.run()
  File "c:\users\user2\appdata\roaming\python395\lib\threading.py", line 892, in run
    self._target(*self._args, **self._kwargs)
  File "c:\users\user2\appdata\roaming\python395\lib\site-packages\server\app.py", line 266, in download_loop
    logger.error("error", e)
Message: 'error'
Arguments: (OSError("ykilcher/gpt-4chan is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'\nIf this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`."),)
ERROR:server.app:Failed to download ykilcher/gpt-4chan from huggingface
--- Logging error ---
Traceback (most recent call last):
  File "c:\users\user2\appdata\roaming\python395\lib\site-packages\server\app.py", line 252, in download_loop
    _ = AutoTokenizer.from_pretrained(model.name)
  File "c:\users\user2\appdata\roaming\python395\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 676, in from_pretrained
    raise ValueError(
ValueError: Tokenizer class LlamaTokenizer does not exist or is not currently imported.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "c:\users\user2\appdata\roaming\python395\lib\logging\__init__.py", line 1083, in emit
    msg = self.format(record)
  File "c:\users\user2\appdata\roaming\python395\lib\logging\__init__.py", line 927, in format
    return fmt.format(record)
  File "c:\users\user2\appdata\roaming\python395\lib\logging\__init__.py", line 663, in format
    record.message = record.getMessage()
  File "c:\users\user2\appdata\roaming\python395\lib\logging\__init__.py", line 367, in getMessage
    msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
  File "c:\users\user2\appdata\roaming\python395\lib\threading.py", line 912, in _bootstrap
    self._bootstrap_inner()
  File "c:\users\user2\appdata\roaming\python395\lib\threading.py", line 954, in _bootstrap_inner
    self.run()
  File "c:\users\user2\appdata\roaming\python395\lib\threading.py", line 892, in run
    self._target(*self._args, **self._kwargs)
  File "c:\users\user2\appdata\roaming\python395\lib\site-packages\server\app.py", line 266, in download_loop
    logger.error("error", e)
Message: 'error'
Arguments: (ValueError('Tokenizer class LlamaTokenizer does not exist or is not currently imported.'),)
ERROR:server.app:Failed to download chavinlo/gpt4-x-alpaca from huggingface
--- Logging error ---
Traceback (most recent call last):
  File "c:\users\user2\appdata\roaming\python395\lib\site-packages\huggingface_hub\utils\_errors.py", line 259, in hf_raise_for_status
    response.raise_for_status()
  File "c:\users\user2\appdata\roaming\python395\lib\site-packages\requests\models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/ykilcher/gpt-4chan/resolve/main/tokenizer_config.json

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "c:\users\user2\appdata\roaming\python395\lib\site-packages\transformers\utils\hub.py", line 409, in cached_file
    resolved_file = hf_hub_download(
  File "c:\users\user2\appdata\roaming\python395\lib\site-packages\huggingface_hub\utils\_validators.py", line 120, in _inner_fn
    return fn(*args, **kwargs)
  File "c:\users\user2\appdata\roaming\python395\lib\site-packages\huggingface_hub\file_download.py", line 1166, in hf_hub_download
    metadata = get_hf_file_metadata(
  File "c:\users\user2\appdata\roaming\python395\lib\site-packages\huggingface_hub\utils\_validators.py", line 120, in _inner_fn
    return fn(*args, **kwargs)
  File "c:\users\user2\appdata\roaming\python395\lib\site-packages\huggingface_hub\file_download.py", line 1507, in get_hf_file_metadata
    hf_raise_for_status(r)
  File "c:\users\user2\appdata\roaming\python395\lib\site-packages\huggingface_hub\utils\_errors.py", line 291, in hf_raise_for_status
    raise RepositoryNotFoundError(message, response) from e
huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-6435fa98-4f286eab2944e2b62f9e1ae0)

Repository Not Found for url: https://huggingface.co/ykilcher/gpt-4chan/resolve/main/tokenizer_config.json.
Please make sure you specified the correct repo_id and repo_type.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid username or password.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "c:\users\user2\appdata\roaming\python395\lib\site-packages\server\app.py", line 252, in download_loop
    _ = AutoTokenizer.from_pretrained(model.name)
  File "c:\users\user2\appdata\roaming\python395\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 619, in from_pretrained
    tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
  File "c:\users\user2\appdata\roaming\python395\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 463, in get_tokenizer_config
    resolved_config_file = cached_file(
  File "c:\users\user2\appdata\roaming\python395\lib\site-packages\transformers\utils\hub.py", line 424, in cached_file
    raise EnvironmentError(
OSError: ykilcher/gpt-4chan is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.
```

During handling of the above exception, another exception occurred:

Traceback (most recent call last): File "c:\users\user2\appdata\roaming\python395\lib\logging_init_.py", line 1083, in emit msg = self.format(record) File "c:\users\user2\appdata\roaming\python395\lib\logging_init_.py", line 927, in format return fmt.format(record) File "c:\users\user2\appdata\roaming\python395\lib\logging_init_.py", line 663, in format record.message = record.getMessage() File "c:\users\user2\appdata\roaming\python395\lib\logging_init_.py", line 367, in getMessage msg = msg % self.args TypeError: not all arguments converted during string formatting Call stack: File "c:\users\user2\appdata\roaming\python395\lib\threading.py", line 912, in _bootstrap self._bootstrap_inner() File "c:\users\user2\appdata\roaming\python395\lib\threading.py", line 954, in _bootstrap_inner self.run() File "c:\users\user2\appdata\roaming\python395\lib\threading.py", line 892, in run self._target(*self._args, **self._kwargs) File "c:\users\user2\appdata\roaming\python395\lib\site-packages\server\app.py", line 266, in download_loop logger.error("error", e) Message: 'error' Arguments: (OSError("ykilcher/gpt-4chan is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'\nIf this is a private repository, make sure to pass a token having permission to this repo with use_auth_token or log in with huggingface-cli login and pass use_auth_token=True."),) ERROR:server.app:Failed to download ykilcher/gpt-4chan from huggingface-local --- Logging error --- Traceback (most recent call last): File "c:\users\user2\appdata\roaming\python395\lib\site-packages\server\app.py", line 252, in download_loop _ = AutoTokenizer.from_pretrained(model.name) File "c:\users\user2\appdata\roaming\python395\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 676, in from_pretrained raise ValueError( ValueError: Tokenizer class LlamaTokenizer does not exist or is not currently imported.

During handling of the above exception, another exception occurred:

Traceback (most recent call last): File "c:\users\user2\appdata\roaming\python395\lib\logging_init_.py", line 1083, in emit msg = self.format(record) File "c:\users\user2\appdata\roaming\python395\lib\logging_init_.py", line 927, in format return fmt.format(record) File "c:\users\user2\appdata\roaming\python395\lib\logging_init_.py", line 663, in format record.message = record.getMessage() File "c:\users\user2\appdata\roaming\python395\lib\logging_init_.py", line 367, in getMessage msg = msg % self.args TypeError: not all arguments converted during string formatting Call stack: File "c:\users\user2\appdata\roaming\python395\lib\threading.py", line 912, in _bootstrap self._bootstrap_inner() File "c:\users\user2\appdata\roaming\python395\lib\threading.py", line 954, in _bootstrap_inner self.run() File "c:\users\user2\appdata\roaming\python395\lib\threading.py", line 892, in run self._target(*self._args, **self._kwargs) File "c:\users\user2\appdata\roaming\python395\lib\site-packages\server\app.py", line 266, in download_loop logger.error("error", e) Message: 'error' Arguments: (ValueError('Tokenizer class LlamaTokenizer does not exist or is not currently imported.'),) ERROR:server.app:Failed to download chavinlo/gpt4-x-alpaca from huggingface-local
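Side note: the repeated `--- Logging error ---` blocks in this log are a secondary bug, not the download failure itself. `logger.error("error", e)` at `server/app.py` line 266 passes the exception as a `%`-format argument with no placeholder in the format string. A minimal sketch of the fix (the helper function is illustrative, not the project's actual code):

```python
import logging

logger = logging.getLogger("server.app")


def report_download_failure(model_name: str, provider: str, err: Exception) -> str:
    # Give logging a format string whose placeholders match the arguments,
    # instead of logger.error("error", e), which raises
    # "TypeError: not all arguments converted during string formatting".
    logger.error("Failed to download %s from %s: %s", model_name, provider, err)
    return "Failed to download %s from %s: %s" % (model_name, provider, err)


# Inside an `except` block, logger.exception("Failed to download %s", model_name)
# would additionally record the full traceback.
```

With that change, the log would show the real cause (here, the 401 and the missing `LlamaTokenizer` class) instead of a formatting traceback.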

merrychrishna avatar Apr 12 '23 00:04 merrychrishna

@merrychrishna That looks to be a private or gated repo (the 401 in your log points the same way). We'd need to add the ability to pass your HF API key through to the download for that to work, but it should remain optional since most models are publicly available.
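A minimal sketch of how the download path could forward an optional key (the helper name is hypothetical; `use_auth_token` was the `transformers` parameter name at the time, newer releases call it `token`):

```python
from typing import Optional


def hf_download_kwargs(api_key: Optional[str]) -> dict:
    """Build extra kwargs for AutoTokenizer/AutoModel.from_pretrained.

    With a key, private/gated repos can resolve; without one,
    public repos still download normally.
    """
    return {"use_auth_token": api_key} if api_key else {}


# Usage (assuming `model.name` and a stored `api_key`, as in server/app.py):
# tokenizer = AutoTokenizer.from_pretrained(model.name, **hf_download_kwargs(api_key))
```

Running `huggingface-cli login` on the host is an alternative for local setups, but passing the key explicitly keeps the behavior self-contained in openplayground.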

AlexanderLourenco avatar Apr 12 '23 17:04 AlexanderLourenco

I'm not sure. I only put my Hugging Face API key into openplayground, then used the in-app search, and those models popped up, so I ticked them. Unticking them doesn't cancel the download, either.

merrychrishna avatar Apr 12 '23 18:04 merrychrishna

I have the same issue.

shism2 avatar Jun 23 '23 21:06 shism2