GraphEval

pip install -r requirements.txt ERROR llama.egg==info

Open jli113 opened this issue 1 year ago • 4 comments

pip install -r requirements.txt ERROR: Invalid requirement: 'llama.egg==info': Expected end or semicolon (after name and no valid version specifier) llama.egg==info ^ (from line 4 of requirements.txt)

Also, where should the mappingbased-objects_lang=en.ttl.bzip2 file be placed?
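For context, pip rejects that line because `info` is not a version specifier. A rough stdlib-only sketch of the `name==version` rule pip enforces (a simplification of the real PEP 508 grammar; `valid_pin` is a hypothetical helper):

```python
import re

def valid_pin(requirement: str) -> bool:
    """Very rough check of a pinned `name==version` requirement line.

    pip expects an actual version after `==` (e.g. 2.18.0); a bare word
    like `info` is not a valid version specifier.
    """
    name, sep, spec = requirement.partition("==")
    # A version must start with a digit (simplified from PEP 440/508).
    return bool(sep) and bool(re.fullmatch(r"[0-9][A-Za-z0-9.*+!]*", spec.strip()))

print(valid_pin("datasets==2.18.0"))  # True
print(valid_pin("llama.egg==info"))   # False
```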

jli113 avatar Aug 29 '24 09:08 jli113

I apologize for the issue with the requirements.txt. I'll look into that. In the meantime, you can try removing that line to see if it resolves the problem. Please ensure that the kg file is unzipped first and placed in a directory, then pass that path to the --kg_path argument.

Let me know if this works for you!
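A sketch of that workflow, using a toy file to stand in for the real DBpedia dump (the `--kg_path` flag comes from this thread; the `data/` target directory is an assumption):

```shell
set -e
mkdir -p data
# Create and compress a one-line .ttl file to stand in for the real dump.
printf '<s> <p> <o> .\n' > mappingbased-objects_lang=en.ttl
bzip2 -f mappingbased-objects_lang=en.ttl
mv mappingbased-objects_lang=en.ttl.bz2 mappingbased-objects_lang=en.ttl.bzip2
# bzcat decompresses regardless of the suffix (bunzip2 insists on .bz2):
bzcat mappingbased-objects_lang=en.ttl.bzip2 > data/mappingbased-objects_lang=en.ttl
# Then point the script at the decompressed file:
# python train.py --model_name Llama-2-7b-chat-hf --kg_path data/mappingbased-objects_lang=en.ttl
```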

xz-liu avatar Aug 30 '24 23:08 xz-liu

After commenting out llama.egg, I still get an error:

(grapheval) k1@k1b550v2:~/GraphEval$ pip install -r requirements.txt 
Collecting datasets==2.18.0 (from -r requirements.txt (line 1))
  Using cached datasets-2.18.0-py3-none-any.whl.metadata (20 kB)
Collecting flash_attn==2.5.3 (from -r requirements.txt (line 2))
  Using cached flash_attn-2.5.3.tar.gz (2.5 MB)
  Preparing metadata (setup.py) ... error
  error: subprocess-exited-with-error
  
  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [6 lines of output]
      Traceback (most recent call last):
        File "<string>", line 2, in <module>
        File "<pip-setuptools-caller>", line 34, in <module>
        File "/tmp/pip-install-scvla9d9/flash-attn_bb2318d3e667421da8d773449c5f9fdf/setup.py", line 19, in <module>
          import torch
      ModuleNotFoundError: No module named 'torch'
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

jli113 avatar Aug 31 '24 08:08 jli113

I assume this is because the flash_attn package requires torch (PyTorch) to be installed before its setup process can run. Installing torch first may solve this.
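A minimal stdlib check of that failure mode (flash_attn's setup.py imports torch at pip's metadata-generation step, as the traceback shows); the helper name is hypothetical:

```python
import importlib.util

def missing_build_deps(modules=("torch",)):
    """Return the subset of `modules` not importable in this environment.

    flash_attn's setup.py does `import torch` at build time, so pip's
    metadata step fails with ModuleNotFoundError if torch is absent.
    """
    return [m for m in modules if importlib.util.find_spec(m) is None]

if missing_build_deps():
    print("Install torch first, e.g.: pip install torch")
```

Once torch is installed, the flash-attn README also suggests `pip install flash-attn --no-build-isolation` so the build can see the already-installed torch rather than an isolated build environment.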

xz-liu avatar Sep 03 '24 18:09 xz-liu

Hello, when I run `python train.py`, the `triples` argument of `get_training_triples` is None, which makes the program raise an error ( if triples is None or kg is None: raise ValueError("triples and kg must be provided", 'Model name:', model_name, 'Test case name:', test_case_name, 'Model type:', model_type) ). I already generated training_triples_standard_2k_judge.pt by running `python collect.py`, and on inspection train.py does not pass `triples` through. Could you tell me where I went wrong?

(GraphEval) jupyter@task_3080_yangyuwei:~/HJ/GraphEval-master$ python train.py --model_name Llama-2-7b-chat-hf --kg_path data/mappingbased-objects_lang=en.ttl
loading KG from cache...
Traceback (most recent call last):
  File "/home/jupyter/HJ/GraphEval-master/train.py", line 24, in <module>
    train_prompt_encoder(kg, model_name=model_name, test_case_name=test_case_name, model_type=model_type)
TypeError: train_prompt_encoder() missing 1 required positional argument: 'triples'

(GraphEval) jupyter@task_3080_yangyuwei:~/HJ/GraphEval-master$ python train.py --model_name Llama-2-7b-chat-hf --kg_path data/mappingbased-objects_lang=en.ttl
loading KG from cache...
model will be saved at: cache/prompt_encoders/dbpedia-en-filtered/Llama-2-7b-chat-hf/trained_prompt_encoder_standard_2k_judge.pt
File found at cache/dbpedia-en/questions.pt, loading...
/home/jupyter/HJ/GraphEval-master/utils/general_utils.py:231: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  f = torch.load(path)
File loaded successfully from cache/dbpedia-en/questions.pt, time taken: 0.003487217240035534 seconds
File found at cache/dbpedia-en/answers.pt, loading...
File loaded successfully from cache/dbpedia-en/answers.pt, time taken: 0.0014036959037184715 seconds
Calling get_training_triples with kg: <utils.kg_utils.KG object at 0x7f1f10267a90>, triples: None, model_name: Llama-2-7b-chat-hf, model_type: llama
Traceback (most recent call last):
  File "/home/jupyter/HJ/GraphEval-master/train.py", line 24, in <module>
    train_prompt_encoder(kg, model_name=model_name, test_case_name=test_case_name, model_type=model_type)
  File "/home/jupyter/HJ/GraphEval-master/trainer/train_prompt_encoder.py", line 28, in train_prompt_encoder
    data = data_gen.get_training_triples(kg, test_case_name=test_case_name, model_name=model_name, model_type=model_type)
  File "/home/jupyter/HJ/GraphEval-master/data_gen/triple_sampler.py", line 84, in get_training_triples
    raise ValueError("triples and kg must be provided", 'Model name:', model_name, 'Test case name:',
ValueError: ('triples and kg must be provided', 'Model name:', 'Llama-2-7b-chat-hf', 'Test case name:', 'standard_2k_judge', 'Model type:', 'llama')
(GraphEval) jupyter@task_3080_yangyuwei:~/HJ/GraphEval-master$
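A minimal reproduction of the failure mode in the traceback above (the names mirror the log; the simplified signature is an assumption, the real one lives in the GraphEval repo):

```python
# Sketch: train_prompt_encoder requires a positional `triples` argument,
# but train.py line 24 calls it without one, hence the TypeError.
def train_prompt_encoder(kg, triples, model_name=None, test_case_name=None, model_type=None):
    if triples is None or kg is None:
        raise ValueError("triples and kg must be provided")
    return len(triples)

kg = object()
try:
    train_prompt_encoder(kg, model_name="Llama-2-7b-chat-hf")  # mirrors train.py line 24
except TypeError as e:
    print(e)  # missing 1 required positional argument: 'triples'

# A likely fix: load the triples collect.py produced and pass them through,
# e.g. triples = torch.load(".../training_triples_standard_2k_judge.pt")
# (the exact path is not given in the thread).
n = train_prompt_encoder(kg, triples=[("s", "p", "o")], model_name="Llama-2-7b-chat-hf")
```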

yywhsgnd avatar Nov 02 '24 12:11 yywhsgnd