transformers
# Training Evaluation Display on VSCode

### System Info

- macOS Ventura 13.2
- VSCode 1.77.1
- Chromium 102.0.5005.196
- Jupyter extension v2023.3.1000892223
- Transformers 4.26.1
### Who can help?

Not sure. Please let me know if this is a VSCode issue.
### Information

- [ ] The official example scripts
- [x] My own modified scripts

### Tasks

- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction

https://github.com/huggingface/notebooks/blob/main/examples/text_classification.ipynb

Run the notebook (I commented out the parts pushing to the Hub).
### Expected behavior

The table of metrics during the evaluation phase of training fails to show up as an HTML object in VSCode. There seems to be no similar issue on Colab or AWS.

Currently, the output looks like this (repeated each time evaluation runs during training):
```
0.3564084804084804
{'eval_loss': 1.6524937152862549, 'eval_f1': 0.3564084804084804, 'eval_accuracy': 0.36, 'eval_runtime': 4.6151, 'eval_samples_per_second': 10.834, 'eval_steps_per_second': 1.517, 'epoch': 0.26}
***** Running Evaluation *****
  Num examples = 50
  Batch size = 8
{'loss': 1.6389, 'learning_rate': 3.611111111111111e-05, 'epoch': 0.28}
```
We had specifically excluded VSCode in the past, as the widgets were not working properly there. Could you try installing from source and see if commenting out those two lines results in a nice training display?
What do you mean by install from source?
I installed the package from source. I can see the table formatted correctly now, but it stops updating after the first evaluation.

I guess that is the widget problem you're referring to. Is there a workaround for people on VSCode so it doesn't print out a thousand lines of evaluation? Like hiding the printout and retrieving the evaluation stats after training is done?
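For retrieving the stats after the fact: assuming the standard `Trainer` API, the accumulated metrics are available after training in `trainer.state.log_history`, a plain list of dicts (one per logged step or evaluation). A minimal sketch of filtering it; `sample_history` below just mimics the shape of the real attribute:

```python
# Post-training workaround sketch: instead of relying on the live display,
# read the metrics accumulated in trainer.state.log_history after
# trainer.train() returns. Each entry is a dict like the ones printed above.

def eval_entries(log_history):
    """Keep only the evaluation records (those with keys starting with 'eval_')."""
    return [d for d in log_history if any(k.startswith("eval_") for k in d)]

# Mimics the shape of trainer.state.log_history for illustration.
sample_history = [
    {"loss": 1.6389, "learning_rate": 3.6e-05, "epoch": 0.28},
    {"eval_loss": 1.6525, "eval_f1": 0.3564, "eval_accuracy": 0.36, "epoch": 0.26},
]

for row in eval_entries(sample_history):
    print(row["epoch"], row["eval_loss"])
```

In real use you would call `eval_entries(trainer.state.log_history)` once training finishes, rather than watching the per-step printout.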
You can filter the log level of the printed information with `transformers.utils.set_verbosity_warning()` (to suppress all INFO messages, like the logs of the evaluation results).
I have also encountered this problem, and for procedural reasons, I cannot install from source. It would be very helpful if this issue could be addressed, please :)
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
```python
args = TrainingArguments(
    "pokemon-habitat",
    evaluation_strategy="epoch",
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    num_train_epochs=num_epochs,
    use_mps_device=True,
)

# Trainer
trainer = Trainer(
    model,
    args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    compute_metrics=compute_metrics,
)
trainer.train()
```
transformers: 4.30.2
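For context, the `compute_metrics` callback passed to the `Trainer` above receives an `EvalPrediction` (which unpacks to predictions and labels) and returns a dict of named metrics. A minimal dependency-free sketch; the pure-Python accuracy here is illustrative, not the notebook's actual metric:

```python
def compute_metrics(eval_pred):
    # Trainer passes an EvalPrediction; it unpacks to (predictions, labels).
    logits, labels = eval_pred
    # argmax over the class dimension, in pure Python for the sketch
    preds = [max(range(len(row)), key=row.__getitem__) for row in logits]
    correct = sum(int(p == l) for p, l in zip(preds, labels))
    return {"accuracy": correct / len(labels)}

# toy check with two 2-class predictions
print(compute_metrics(([[0.1, 0.9], [0.8, 0.2]], [1, 1])))  # → {'accuracy': 0.5}
```

The returned dict is what shows up as the `eval_*` columns in the metrics table (the `eval_` prefix is added by the Trainer), so a malformed return value is worth ruling out when the display misbehaves.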
I am having the exact same issues as @lainisourgod
I've just started having this issue in a VSCode environment. It was working fine, and then it suddenly stopped working and started printing out raw dicts again. It may have started after I silenced some warnings using

```python
from transformers import logging as transformers_logging
transformers_logging.set_verbosity_error()
```
I am also having the exact same issue as @lainisourgod, it looks terrible
cc @muellerzr
Gentle ping @muellerzr