
AttributeError: module 'evaluate' has no attribute 'utils'

Open mike-duran-mitchell opened this issue 2 years ago • 5 comments

Trying to load seqeval as per the huggingface docs https://huggingface.co/docs/transformers/v4.26.0/en/tasks/token_classification#evaluate

I am just running

import evaluate
seqeval = evaluate.load("seqeval")

The error I get seems to be coming from this part of the script:

--> 102 @evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
    103 class Seqeval(evaluate.Metric):
    104     def _info(self):

AttributeError: module 'evaluate' has no attribute 'utils'

I've tried reinstalling most of the libraries but to no avail. Any idea what's going on? Thanks.
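
For reference, when the metric does load correctly, the usage described in the linked docs boils down to roughly the following (the label sequences here are made-up placeholders, just to show the expected input shape):

import evaluate

seqeval = evaluate.load("seqeval")
predictions = [["O", "B-PER", "I-PER", "O"]]  # one list of tag strings per sentence
references = [["O", "B-PER", "I-PER", "O"]]
results = seqeval.compute(predictions=predictions, references=references)
print(results)  # overall_precision, overall_recall, overall_f1, overall_accuracy, per-entity scores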

mike-duran-mitchell avatar Feb 07 '23 17:02 mike-duran-mitchell

I couldn't replicate this error in colab. Can you share the versions of python, seqeval, evaluate, datasets, and huggingface_hub?
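
If it helps, a quick way to collect all of those versions in one go is something like this (a minimal sketch; importlib.metadata needs Python 3.8+):

import sys
from importlib import metadata

print("python:", sys.version)
for pkg in ("evaluate", "datasets", "transformers", "seqeval", "huggingface_hub"):
    try:
        print(pkg, metadata.version(pkg))
    except metadata.PackageNotFoundError:
        print(pkg, "not installed")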

osbm avatar Feb 27 '23 21:02 osbm

I think I have a similar problem and came here looking for a solution. I have/had existing code which worked for at least a couple of months:

datasets.utils.logging.set_verbosity(log_level)
datasets.utils.logging.disable_progress_bar()  # type: ignore
evaluate.utils.logging.set_verbosity(log_level)
transformers.utils.logging.set_verbosity(log_level)
transformers.utils.logging.enable_default_handler()
transformers.utils.logging.enable_explicit_format()

This week, running experiments started to fail, and when looking into it I saw at the terminal:

File "/home/stephan/code/molreactgen/src/molreactgen/train.py", line 557, in main
    evaluate.utils.logging.set_verbosity(log_level)
AttributeError: module 'evaluate' has no attribute 'utils'

Line 557 refers to the code above.

When I do this in the REPL, same virtual environment, it works, like so:

Python 3.9.16 (main, Mar  8 2023, 14:00:05)
>>> import evaluate
>>> evaluate.utils.logging.set_verbosity(10)

Version info

  • evaluate: 0.4.0
  • datasets: 2.10.1
  • transformers: 4.27.3

Any ideas? For the moment I have commented the line out to continue with the experiments.

hogru avatar Apr 06 '23 10:04 hogru

Well, that hack lasted only until my metric = evaluate.load("accuracy") call, which resulted in AttributeError: module 'evaluate' has no attribute 'load'.

With this it started to dawn on me... it's embarrassing, but I'll keep it here in the hope that it might be useful to somebody in the future. The last git pull added - you guessed it - an evaluate.py to my src directory, and Python was importing that file instead of the installed package. Oh boy...
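
For anyone who hits the same symptom: a quick sanity check is to print where Python actually resolves the module from, e.g.:

import evaluate
print(evaluate.__file__)  # should point into site-packages, not into your own project

If this prints a path like .../src/evaluate.py, a local file is shadowing the installed package, and renaming or removing it resolves the AttributeError.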

hogru avatar Apr 06 '23 12:04 hogru

Hello @mike-duran-mitchell, I was facing a similar problem, and I resolved it using the old function from the datasets package. I also add to remove '--upgrade' from my pip install commands. However, there is a chance that the function does not behave exactly like the one in the evaluate package.

from datasets import load_metric
metric = load_metric('rouge')

Hope it helps if you haven't resolved the issue already!
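
Note that datasets.load_metric is deprecated in favor of evaluate.load, so once the import problem is resolved, the equivalent call would look roughly like this (the example strings are placeholders):

import evaluate

rouge = evaluate.load("rouge")
results = rouge.compute(
    predictions=["the cat sat on the mat"],
    references=["the cat sat on the mat"],
)
print(results)  # rouge1, rouge2, rougeL, rougeLsum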

LargeWaffle avatar May 26 '23 21:05 LargeWaffle

I am still seeing this issue when trying to run the official HF examples. Specifically: sequence_classification.ipynb from the Classification tutorial.

Hello @mike-duran-mitchell, I was facing a similar problem, and I resolved it using the old function from the datasets package. I also add to remove '--upgrade' from my pip install commands. However, there is a chance that the function does not behave exactly like the one in the evaluate package.

from datasets import load_metric
metric = load_metric('rouge')

Hope it helps if you haven't resolved the issue already!

This did not work for me. What did you mean by "I also add to remove '--upgrade' from my pip install commands"?

Did you add "--upgrade" to your commands, or did you remove it?

ebegoli avatar Nov 12 '23 20:11 ebegoli