code-llama-for-vscode
Import Error with 'jinja2' Package
I followed your instructions and managed to fulfill the prerequisites of downloading and running CodeLlama using Meta's repo. Trying to run the command you provided:
[my userpath]/codellama$ torchrun --nproc_per_node 1 llamacpp_mock_api.py \
--ckpt_dir CodeLlama-7b-Instruct/ \
--tokenizer_path CodeLlama-7b-Instruct/tokenizer.model \
--max_seq_len 512 --max_batch_size 4
yields the following error for me:
  File "/home/fabian/Desktop/AI/Domains/NLP/CodeLlama_vsc/codellama/llamacpp_mock_api.py", line 4, in <module>
    from flask import Flask, jsonify, request
  File "/home/fabian/anaconda3/lib/python3.9/site-packages/flask/__init__.py", line 14, in <module>
    from jinja2 import escape
ImportError: cannot import name 'escape' from 'jinja2' (/home/fabian/anaconda3/lib/python3.9/site-packages/jinja2/__init__.py)
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 9086) of binary: /home/fabian/anaconda3/bin/python
Traceback (most recent call last):
  File "/home/fabian/anaconda3/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/home/fabian/anaconda3/lib/python3.9/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
    return f(*args, **kwargs)
  File "/home/fabian/anaconda3/lib/python3.9/site-packages/torch/distributed/run.py", line 794, in main
    run(args)
  File "/home/fabian/anaconda3/lib/python3.9/site-packages/torch/distributed/run.py", line 785, in run
    elastic_launch(
  File "/home/fabian/anaconda3/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/fabian/anaconda3/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
llamacpp_mock_api.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2023-08-27_08:18:29
host : lenovo-legion-7.lan
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 9086)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
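The import error above is a known incompatibility rather than a torchrun problem: the flask/__init__.py shown in the traceback still runs from jinja2 import escape (Flask 1.x), while Jinja2 removed escape in version 3.1. A quick way to confirm the installed versions (a diagnostic sketch, not part of the original report):

pip show flask jinja2
# Flask 1.x combined with Jinja2 >= 3.1 reproduces this ImportError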
See a solution here (using python -m torch.distributed.launch instead of torchrun): https://github.com/pytorch/pytorch/issues/92132
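Applied to the command above, that workaround would look roughly like this (a sketch only; depending on the PyTorch version you may also need --use_env so the launcher does not pass an extra --local_rank argument to the script):

python -m torch.distributed.launch --nproc_per_node 1 llamacpp_mock_api.py \
--ckpt_dir CodeLlama-7b-Instruct/ \
--tokenizer_path CodeLlama-7b-Instruct/tokenizer.model \
--max_seq_len 512 --max_batch_size 4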
It looks like something went wrong with your installation of Flask. Try creating a new environment and starting from scratch. I use Python 3.10, but I'm not sure whether that makes a difference.
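A minimal sketch of that, assuming conda and the codellama checkout from Meta's instructions (the exact install steps for the repo are an assumption here); alternatively, upgrading Flask or pinning Jinja2 below 3.1 in the existing environment avoids the removed escape import:

conda create -n codellama python=3.10
conda activate codellama
pip install -e .          # run inside the codellama checkout
pip install flask         # needed by llamacpp_mock_api.py

# In-place alternative: Flask 2.x no longer imports escape from jinja2,
# and Jinja2 < 3.1 still provides it
pip install --upgrade flask
# or: pip install "jinja2<3.1"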
Closing for now since this is so old and likely resolved. Let me know if you'd like to reopen it.