pip install -r requirements.txt
ERROR: Invalid requirement: 'llama.egg==info': Expected end or semicolon (after name and no valid version specifier)
    llama.egg==info
    ^ (from line 4 of requirements.txt)
Also, where should the mappingbased-objects_lang=en.ttl.bzip2 file be placed?
I apologize for the issue with the requirements.txt. I'll look into that. In the meantime, you can try removing that line to see if it resolves the problem. Please ensure that the kg file is unzipped first and placed in a directory, then pass that path to the --kg_path argument.
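A minimal sketch of that workaround (hypothetical demo: it recreates a small requirements.txt with the malformed entry from the error above and comments it out, rather than editing your real file; GNU sed's -i is assumed):

```shell
# Recreate a requirements.txt containing the malformed 'llama.egg==info'
# entry, then comment it out so pip skips that line.
printf '%s\n' 'datasets==2.18.0' 'flash_attn==2.5.3' 'llama.egg==info' > requirements.txt
sed -i 's/^llama\.egg==info/# llama.egg==info/' requirements.txt
cat requirements.txt
# Afterwards, rerun: pip install -r requirements.txt
```

Deleting the line outright works just as well; commenting it out simply keeps a record of what was removed.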
Let me know if this works for you!
After commenting out llama.egg, I still got an error:
(grapheval) k1@k1b550v2:~/GraphEval$ pip install -r requirements.txt
Collecting datasets==2.18.0 (from -r requirements.txt (line 1))
Using cached datasets-2.18.0-py3-none-any.whl.metadata (20 kB)
Collecting flash_attn==2.5.3 (from -r requirements.txt (line 2))
Using cached flash_attn-2.5.3.tar.gz (2.5 MB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [6 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-scvla9d9/flash-attn_bb2318d3e667421da8d773449c5f9fdf/setup.py", line 19, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
I assume this is because the flash_attn package requires torch (PyTorch) to be installed before its setup process can run. Installing torch first may solve this.
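That matches the traceback: flash_attn's setup.py runs `import torch` at build time, so torch must be importable in the environment that builds it. A hedged sketch of the usual sequence (version pin taken from the log above; `--no-build-isolation` tells pip to build in the current environment, where torch is now importable, instead of an isolated build environment):

```shell
# Install torch first, then build flash_attn against it:
#   pip install torch
#   pip install flash_attn==2.5.3 --no-build-isolation
# Sanity check that torch is importable before retrying the install
# (python3 is assumed to be on PATH):
python3 -c 'import importlib.util as u; print("torch present" if u.find_spec("torch") else "torch missing")'
```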
Hello author, when running python train.py, the triples passed to get_training_triples is None, which makes the program raise an error:
if triples is None or kg is None:
    raise ValueError("triples and kg must be provided", 'Model name:', model_name, 'Test case name:',
                     test_case_name, 'Model type:', model_type)
I had already generated training_triples_standard_2k_judge.pt by running python collect.py, and on inspection train.py does not pass triples through. Could you tell me where I went wrong?
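One thing worth ruling out before digging into train.py (a hedged sketch; the thread does not say which directory collect.py writes to, so the search location is an assumption): confirm the generated .pt file actually exists somewhere train.py could find it.

```shell
# Look for the triples file produced by collect.py anywhere under the
# current directory; if train.py sees triples=None, the file may simply
# not be on the path train.py reads from.
f=training_triples_standard_2k_judge.pt
found=$(find . -name "$f" 2>/dev/null)
if [ -n "$found" ]; then
  echo "found: $found"
else
  echo "missing $f: rerun python collect.py or check its output directory"
fi
```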
(GraphEval) jupyter@task_3080_yangyuwei:~/HJ/GraphEval-master$ python train.py --model_name Llama-2-7b-chat-hf --kg_path data/mappingbased-objects_lang=en.ttl
loading KG from cache...
/home/jupyter/HJ/GraphEval-master/train.py:24: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  f = torch.load(path)
File loaded successfully from cache/dbpedia-en/questions.pt, time taken: 0.003487217240035534 seconds
File found at cache/dbpedia-en/answers.pt, loading...
File loaded successfully from cache/dbpedia-en/answers.pt, time taken: 0.0014036959037184715 seconds
Calling get_training_triples with kg: <utils.kg_utils.KG object at 0x7f1f10267a90>, triples: None, model_name: Llama-2-7b-chat-hf, model_type: llama
Traceback (most recent call last):
File "/home/jupyter/HJ/GraphEval-master/train.py", line 24, in