transformers
Name Error: "Partial State" is not defind
Parital State is not defined
-
Your recent release, 4.29.0.dev0, has an issue: the function/class `PartialState` is not defined. Today I am not able to train my model. I just downgraded to 4.28.0 to resolve this issue. Can you kindly check ASAP?
-
I am getting this error from the `TrainingArguments` method.
-
The training arguments script does not define or import `PartialState`.
Solution:
- For now, install the previous stable version of transformers: `pip install transformers==4.28.0`
cc @muellerzr
@RAravindDS Thanks for reporting. I suspect the issue is coming from the version of accelerate in your environment. Could you:
- Share the running environment info: copy-paste the output from running `transformers-cli env` in your terminal
- Upgrade accelerate: `pip install --upgrade accelerate`
- Retry
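For a quick sanity check, here is a minimal Python sketch (my own, not the actual `transformers-cli env` output format) that prints the versions relevant to this error from inside the running environment:

```python
# Print the versions that matter for this error; my own sketch, not the
# official transformers-cli env report.
import platform

import accelerate
import torch
import transformers

print("Python:", platform.python_version())
print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
print("accelerate:", accelerate.__version__)  # PartialState needs >= 0.17.0
```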
As @amyeroberts mentions, please try following those steps. I'll also look at changing the minimum Accelerate version needed and adding a check.
@amyeroberts I ran the code on Colab, and while training the LLM (LMv3), I got the error. Then I installed the previous version of transformers, and it worked fine.
@RAravindDS Yes, this is because the `PartialState` import was added as a dependency on the transformers development branch yesterday. `PartialState` was added in the 0.17.0 release of accelerate, and so for the development branch of transformers, accelerate >= 0.17.0 is required. Downgrading the transformers version removes the code which imports `PartialState`.
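A minimal sketch of that constraint (my own guard, not the check transformers itself performs): verify that the installed accelerate is new enough before training.

```python
# Confirm accelerate provides PartialState (added in 0.17.0); a sketch, not
# transformers' internal dependency check.
import importlib.metadata

from packaging import version

acc_version = importlib.metadata.version("accelerate")
if version.parse(acc_version) < version.parse("0.17.0"):
    raise RuntimeError(
        f"accelerate {acc_version} predates PartialState (added in 0.17.0); "
        "run `pip install --upgrade accelerate` and restart the kernel."
    )

from accelerate import PartialState  # imports cleanly on accelerate >= 0.17.0
```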
I am using the following versions of transformers, datasets, and huggingface_hub.
I am running into the following error:
NameError: name 'PartialState' is not defined.
How can I resolve this issue while keeping my versions of transformers, datasets, and huggingface_hub?
@gli-mrunal please do `pip install git+https://github.com/huggingface/accelerate` to install the dev version, or `pip install accelerate -U` if you are not using multiple GPUs (such as in Colab).
@gli-mrunal sorry for the typo, there are two c's for accelerate :)
Bro, you don't need to worry too much. Please downgrade the version; there is a stable version available. Don't stress too much, the previous version works as usual. We changed all our requirements today. Hectic process :(
True. `!pip install transformers==4.28.0` for the previous version is the easier solution. The newer version runs into dependency issues.
I tried to run using the following training arguments in Colab.
```python
training_args = TrainingArguments(
    output_dir=*,
    num_train_epochs=num_train_epochs,
    learning_rate=learning_rate,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    weight_decay=weight_decay,
    evaluation_strategy="epoch",
    disable_tqdm=False,
    logging_steps=logging_steps,
    push_to_hub=False,
    log_level="error",
    save_strategy="epoch",
    load_best_model_at_end=True,
)
```
Then the following error occurred.
NameError: name 'PartialState' is not defined
I attempted all of the above advice, but the error wasn't resolved. Please tell me how to fix it.
Hi @creek411, install version 4.28.0 of transformers by running `!pip install transformers==4.28.0`. Then restart and run all the code (if you're using Colab).
Thank you for your reply.
I tried to install 4.28.0 and run the code. However, this error recurred.
In this code, I install and use `transformers` and `datasets`. So should I install previous versions of those as well?
@creek411 the solution would be to do `pip install accelerate` (and as we now have a release, it works OOTB with the normal PyPI install). However, the fact that you have the error means you probably are still installing from dev and there's some cache at work in there. You can try `pip uninstall transformers -y`, run your code, make sure it fails because `transformers` isn't installed, then install `transformers` again, either 4.28.0 or 4.29.0, and do `pip install accelerate` as well.
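To confirm the stale-cache theory, a small sketch (my own, assuming a standard site-packages install) that shows which `transformers` the kernel actually imported, and from where:

```python
# Check the version and location of the transformers module the kernel loaded;
# a leftover dev install would show a .dev0 version or a source-checkout path.
import transformers

print(transformers.__version__)  # expect 4.28.0 or 4.29.0 after the reinstall
print(transformers.__file__)     # should point into site-packages
```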
I attempted your solution and the error went away. I appreciate your advice.
> @creek411 the solution would be to do `pip install accelerate` [...] then install `transformers` again, either 4.28.0 or 4.29.0, and do `pip install accelerate` as well
I get the same error with

```
Requirement already satisfied: accelerate in /usr/local/lib/python3.10/dist-packages (0.19.0)
Requirement already satisfied: transformers in /usr/local/lib/python3.10/dist-packages (4.29.1)
```

on Colab.
I had to install accelerate manually.

```
!pip install torch "argilla" datasets accelerate transformers setfit
```
I'm getting the same error while using the Transformers4Rec library from NVIDIA. None of the solutions offered here worked for me. I am constructing the training arguments as `train_args = T4RecTrainingArguments(local_rank=-1, ...`
This worked for me in Colab, but it is important to restart the runtime:

```
!pip uninstall -y transformers accelerate
!pip install transformers==4.29.0
!pip install git+https://github.com/huggingface/accelerate
```
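If you'd rather trigger that restart from code, one common Colab trick (this assumes you are on Colab; it is equivalent to Runtime > Restart runtime) is to kill the kernel process and let Colab bring it back:

```python
# Force the Colab runtime to restart so the freshly installed packages are
# loaded on the next import; Colab respawns the killed kernel automatically.
import os

os.kill(os.getpid(), 9)
```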
Thanks, friend
I ran into the same error, but the traceback before it looks like the following. Does this mean the device is not set to "cuda"? (I run my code with a GPU.)
```python
File ~/miniconda3/lib/python3.8/site-packages/transformers/training_args.py:1333, in TrainingArguments.__post_init__(self)
   1327 if version.parse(version.parse(torch.__version__).base_version) == version.parse("2.0.0") and self.fp16:
   1328     raise ValueError("--optim adamw_torch_fused with --fp16 requires PyTorch>2.0")
   1330 if (
   1331     self.framework == "pt"
   1332     and is_torch_available()
-> 1333     and (self.device.type != "cuda")
   1334     and (get_xla_device_type(self.device) != "GPU")
   1335     and (self.fp16 or self.fp16_full_eval)
   1336 ):
   1337     raise ValueError(
   1338         "FP16 Mixed precision training with AMP or APEX (`--fp16`) and FP16 half precision evaluation"
   1339         " (`--fp16_full_eval`) can only be used on CUDA devices."
   1340     )
   1342 if (
   1343     self.framework == "pt"
   1344     and is_torch_available()
   (...)
   1349     and (self.bf16 or self.bf16_full_eval)
   1350 ):

File ~/miniconda3/lib/python3.8/site-packages/transformers/training_args.py:1697, in TrainingArguments.device(self)
   1693 """
   1694 The device used by this process.
   1695 """
   1696 requires_backends(self, ["torch"])
-> 1697 return self._setup_devices

File ~/miniconda3/lib/python3.8/site-packages/transformers/utils/generic.py:54, in cached_property.__get__(self, obj, objtype)
    52 cached = getattr(obj, attr, None)
    53 if cached is None:
---> 54     cached = self.fget(obj)
    55 setattr(obj, attr, cached)
    56 return cached

File ~/miniconda3/lib/python3.8/site-packages/transformers/training_args.py:1631, in TrainingArguments._setup_devices(self)
   1629     self._n_gpu = 1
   1630 else:
-> 1631     self.distributed_state = PartialState(backend=self.ddp_backend)
   1632     self._n_gpu = 1
   1633 if not is_sagemaker_mp_enabled():

NameError: name 'PartialState' is not defined
```
For those having issues, can you tell me more about whether you are working in Jupyter, Colab, or in regular Python? Again, the solution hasn't changed: in the correct environment you need to make sure that `accelerate` is installed and visible. To test this in your environment you can try importing it: `import accelerate`. If it fails, it's not installed correctly.
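Following that suggestion, a small diagnostic sketch (mine, not part of transformers) that also reveals which interpreter the kernel is using, which is often the real mismatch in notebook setups:

```python
# Report the interpreter and the accelerate install, if any, visible to it.
import sys

print("interpreter:", sys.executable)
try:
    import accelerate
    print("accelerate", accelerate.__version__, "at", accelerate.__file__)
except ImportError:
    print("accelerate is NOT importable from this kernel")
```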
I'm using Jupyter (as well as the VS Code notebooks extension, which is essentially the same) on Python 3.11 with no venv and the interpreter provided by `asdf`.
On re-test, `accelerate` 0.19 did work with `transformers` 4.29, as it turned out; I'm just not accustomed to notebooks and forgot that I needed to restart the kernel to freshen the dependencies. Classic n00b mistake.

I'm still a bit mystified as to why I had an older `accelerate`, as I had created my entire Python environment on the same day I commented. Possibly it was a transitive dependency of something else I'd already installed.
Please also remember to restart the kernel (given you are using Colab/Jupyter). I know it is silly, but yes.
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
What do I do? After installing all the packages I am still getting this error. I have been working on Kaggle and this is what I am getting after running the code. How do I solve this?
```python
NameError                                 Traceback (most recent call last)
Cell In[39], line 1
----> 1 training_args = Seq2SeqTrainingArguments(
      2     output_dir="M2M101",
      3     evaluation_strategy="epoch",
      4     learning_rate=2e-5,
      5     per_device_train_batch_size=16,
      6     per_device_eval_batch_size=16,
      7     weight_decay=0.01,
      8     save_total_limit=3,
      9     num_train_epochs=5,
     10     predict_with_generate=True,
     11     fp16=True,
     12     push_to_hub=True,
     13 )
     14 trainer = Seq2SeqTrainer(
     15     model=model,
     16     args=training_args,
   (...)
     21     compute_metrics=compute_metrics,
     22 )
     23 # for starting the training of model

NameError: name 'Seq2SeqTrainingArguments' is not defined
```
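For reference, this last NameError is a different problem from the `PartialState` one: it usually just means the class was never imported in the session. A minimal sketch, assuming the standard transformers API:

```python
# Import the Seq2Seq training classes before constructing them.
from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments
```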