Susan Xueqing Liu
Hi @patrickvonplaten, I have a basic conceptual NLP question about NER evaluation. According to [run_ner.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/token-classification/run_ner.py), the ground-truth labels are truncated to max_seq_length during prediction. However, this means...
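The concern above can be made concrete with a toy sketch (this is not the actual run_ner.py logic; the helper name is made up for illustration): if gold labels are cut at max_seq_length, any entity tokens past the cutoff vanish from both the prediction and the reference, so a model that misses them is never penalized.

```python
# Toy illustration of why truncating gold labels to max_seq_length can
# skew NER evaluation: tokens past the cutoff are dropped from both the
# prediction and the ground truth, hiding errors on late entities.

def truncate_eval(gold, pred, max_seq_length):
    """Return token-level accuracy before and after truncation."""
    full_acc = sum(g == p for g, p in zip(gold, pred)) / len(gold)
    g_t, p_t = gold[:max_seq_length], pred[:max_seq_length]
    trunc_acc = sum(g == p for g, p in zip(g_t, p_t)) / len(g_t)
    return full_acc, trunc_acc

# A sentence whose only entity appears after the truncation point:
gold = ["O"] * 6 + ["B-PER", "I-PER"]
pred = ["O"] * 8  # the model misses the entity entirely
full, trunc = truncate_eval(gold, pred, max_seq_length=6)
print(full, trunc)  # 0.75 vs 1.0: truncation hides the missed entity
```

The same effect applies to entity-level F1: the truncated view reports a perfect score even though the entity was never predicted.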
Update Research.md
Update the NLP notebook with a heads-up on the time budget and the small example using FLAML Tune.
Find the optimal search space for NLP, potentially replacing the current continuous search space.
Ray Tune's keep_checkpoints_num is ineffective; debug it to make it effective.
Update the text summarization example in the notebook to include the prediction.
Update TransformersEstimator._compute_metrics_by_dataset_name so that it merges the two cases into one function: (1) the metric is a string; (2) the metric is a custom function. Use self._metric(eval_pred, self)...
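One way the merge could look, as a hypothetical sketch (the helper name, registry, and fake estimator below are made up for illustration and are not FLAML's actual internals): normalize both cases into a single callable up front, so the rest of the evaluation path never branches on the metric's type.

```python
# Hypothetical sketch: unify "metric is a string" and "metric is a custom
# function" by returning one compute_metrics callable in both cases.

# Toy registry standing in for the built-in (string-named) metrics.
BUILTIN_METRICS = {
    "accuracy": lambda ep: {
        "accuracy": sum(p == l for p, l in zip(*ep)) / len(ep[0])
    },
}

def make_compute_metrics(metric, estimator):
    """Return a single compute_metrics callable regardless of metric type."""
    if callable(metric):
        # Custom-function case: invoked as self._metric(eval_pred, self).
        return lambda eval_pred: metric(eval_pred, estimator)
    # String case: look the metric up in the registry of built-in scorers.
    builtin = BUILTIN_METRICS[metric]
    return lambda eval_pred: builtin(eval_pred)

class FakeEstimator:
    """Stand-in for the estimator passed to custom metric functions."""

compute = make_compute_metrics("accuracy", FakeEstimator())
print(compute(([1, 0, 1], [1, 1, 1])))  # two of three predictions match
```

A custom metric plugs into the same path: `make_compute_metrics(my_fn, self)` simply closes over the estimator so `my_fn(eval_pred, self)` is called, matching the signature mentioned above.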
Add a model_path search space to TransformersEstimatorForModelSelection based on memory cost.
PR #368 has some issues; when testing test_distilling_emo(), it raises the following error:

```
Traceback (most recent call last):
  File "test_autohf_distilling.py", line 180, in test_distilling_emo()
  File "test_autohf_distilling.py", line 157, in test_distilling_emo
...
```