
Structured Prompting: GPT_neo_modeling.py

Open amurtadha opened this issue 1 year ago • 2 comments

In GPT_neo_modeling.py, the causal mask is sliced from the fixed bias buffer:

    causal_mask = self.bias[:, :, key_length - query_length : key_length, :key_length]

However, in Structured Prompting the key_length exceeds max_positions.

How can this issue be addressed? Thank you.

amurtadha avatar Sep 04 '23 04:09 amurtadha
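
For context: in the Hugging Face GPT-Neo implementation, self.bias is a causal buffer of fixed shape (1, 1, max_positions, max_positions) registered at init, and PyTorch silently clamps out-of-range slice bounds instead of raising, so the slice comes back with the wrong shape once key_length > max_positions. A minimal sketch of the failure, with hypothetical lengths (the buffer construction is paraphrased from modeling_gpt_neo.py):

    import torch

    max_positions = 2048  # config.max_position_embeddings for GPT-Neo

    # Lower-triangular causal buffer, registered once in __init__
    bias = torch.tril(
        torch.ones((max_positions, max_positions), dtype=torch.bool)
    ).view(1, 1, max_positions, max_positions)

    # With structured prompting, key_length covers all parallel prefixes plus
    # the query and can exceed max_positions. PyTorch clamps the slice bounds,
    # so the result is simply the wrong shape rather than an error.
    key_length, query_length = 4096, 16  # hypothetical lengths
    causal_mask = bias[:, :, key_length - query_length : key_length, :key_length]
    print(causal_mask.shape)  # torch.Size([1, 1, 0, 2048]), not (1, 1, 16, 4096)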

I tried the following code, but it doesn't work:

        # Attempted workaround: slice the bias using the per-group length,
        # then prepend an all-ones mask so the query can attend to the
        # remaining parallel-prefix positions.
        if prefix_parallel and prefix_parallel > 1:
            key_length_ = ((key_length - query_length) // prefix_parallel) + query_length
            causal_mask = self.bias[:, :, key_length_ - query_length : key_length_, :key_length_]
            context_mask = torch.ones(
                1, 1, query_length, key_length - causal_mask.shape[-1],
                dtype=torch.bool, device=attn_weights.device,
            )
            causal_mask = torch.cat([context_mask, causal_mask], dim=-1)
        else:
            causal_mask = self.bias[:, :, key_length - query_length : key_length, :key_length]

amurtadha avatar Sep 05 '23 04:09 amurtadha
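
One possible direction (a sketch, not a confirmed fix from the repo): once key_length can exceed max_positions, stop slicing the fixed buffer and build the mask on the fly instead. The build_structured_mask helper below is hypothetical; it assumes every parallel-prefix position should be visible to the query while the query block itself stays causal, which matches the intent of the context_mask above.

    import torch

    def build_structured_mask(query_length, key_length, device):
        # All prefix/context positions are visible to every query token
        context_length = key_length - query_length
        context_mask = torch.ones(
            query_length, context_length, dtype=torch.bool, device=device
        )
        # The trailing query block keeps the usual lower-triangular causality
        query_mask = torch.tril(
            torch.ones(query_length, query_length, dtype=torch.bool, device=device)
        )
        return torch.cat([context_mask, query_mask], dim=-1).view(
            1, 1, query_length, key_length
        )

    mask = build_structured_mask(query_length=4, key_length=10, device="cpu")
    print(mask[0, 0].int())  # 4 x 10: six all-ones context columns + a causal 4 x 4 block

This removes the max_positions constraint from the mask itself, though position embeddings and the HF-vs-fairseq differences raised below would still need to be handled separately.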

Hi, are you using HF (Hugging Face Transformers) or fairseq? Could you please provide the source file path of "GPT_neo_modeling.py"?

YRdddream avatar Sep 28 '23 01:09 YRdddream