ProFusion

'CLIPTextTransformer' object has no attribute '_build_causal_attention_mask'

adhikjoshi opened this issue 1 year ago • 5 comments • Status: Open

[Screenshot: 2023-06-09 at 1:15:21 PM]

While running the test.ipynb file, we are running into this error.

'CLIPTextTransformer' object has no attribute '_build_causal_attention_mask'

We followed the same process as in the installation instructions.
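
For context, a quick way to check whether the installed transformers still has this attribute (a minimal sketch, not taken from the notebook):

# if this prints False, the installed transformers release no longer ships the old private helper
from transformers.models.clip.modeling_clip import CLIPTextTransformer
print(hasattr(CLIPTextTransformer, "_build_causal_attention_mask"))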

adhikjoshi · Jun 09 '23 07:06

Subscribing to this issue, I'm encountering the same problem

owoshch · Jun 09 '23 12:06

I'm also encountering the same problem

smko77 · Jun 12 '23 08:06

pip install --upgrade transformers==4.25.1 did the job for me.

By default, the repo's install line pip install transformers >= 4.25.1 resolves to transformers 4.30.2. By pinning it to 4.25.1, I managed to run the cell below without the error.

# build the sampling arguments for the ProFusion pipeline
kwargs = sampling_kwargs(prompt=prompt,
                         step=50,
                         cfg=5.0,
                         fusion=False,
                         )
# generate with the pretrained (not yet fine-tuned) model, conditioned on the reference image
image = pipe(ref_image_latent=gt_latents, ref_image_embed=vision_hidden_states, **kwargs).images[0]
# concatenate the input image and the generated image for comparison
image_pretrained_model = get_concat_h(input_img_, image)
print("Results before fine-tuning")
image_pretrained_model.show()
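
In case it helps, a quick way to confirm that the pinned version is the one the notebook actually imports (a minimal check; restart the kernel after reinstalling):

# confirm which transformers release the notebook is using
import transformers
print(transformers.__version__)  # should print 4.25.1 after the pinned install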

owoshch · Jun 14 '23 12:06

pip install --upgrade transformers==4.25.1

This worked for me as well.

jadechip · Jun 27 '23 08:06

The Transformers API has changed. This should work:

# _make_causal_mask replaces the removed CLIPTextTransformer._build_causal_attention_mask helper
from transformers.models.clip.modeling_clip import _make_causal_mask

causal_attention_mask = _make_causal_mask(
    (pseudo_hidden_states.shape[0], pseudo_hidden_states.shape[1]),
    pseudo_hidden_states.dtype,
    device=pseudo_hidden_states.device,
)
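
A self-contained sketch of the same call, assuming a transformers release around 4.30.x (where _make_causal_mask is still defined in modeling_clip) and a placeholder tensor standing in for pseudo_hidden_states:

# standalone sketch of the replacement call; the tensor shape (batch, seq_len, hidden) is a placeholder
import torch
from transformers.models.clip.modeling_clip import _make_causal_mask

pseudo_hidden_states = torch.randn(2, 77, 768)
causal_attention_mask = _make_causal_mask(
    (pseudo_hidden_states.shape[0], pseudo_hidden_states.shape[1]),
    pseudo_hidden_states.dtype,
    device=pseudo_hidden_states.device,
)
print(causal_attention_mask.shape)  # torch.Size([2, 1, 77, 77])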

syguan96 · Nov 05 '23 12:11