Structured Prompting: GPT_neo_modeling.py
In `GPT_neo_modeling.py`, the causal mask is sliced from the registered bias buffer:

```python
causal_mask = self.bias[:, :, key_length - query_length : key_length, :key_length]
```

But in Structured Prompting, `key_length` exceeds `max_positions`. How should this be addressed? Thank you.
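For concreteness, here is a minimal repro of the shape problem, assuming the usual GPT-Neo setup where `self.bias` is a `(1, 1, max_positions, max_positions)` lower-triangular boolean buffer (the toy sizes are mine):

```python
import torch

max_positions = 8   # size of the registered causal-mask buffer
bias = torch.tril(torch.ones(max_positions, max_positions, dtype=torch.bool)).view(
    1, 1, max_positions, max_positions
)

query_length = 2
key_length = 12     # > max_positions, as happens with a long structured prompt

# Python slicing silently truncates instead of raising, so the sliced mask no
# longer matches the key dimension of attn_weights.
causal_mask = bias[:, :, key_length - query_length : key_length, :key_length]
print(causal_mask.shape)  # torch.Size([1, 1, 0, 8]) instead of the needed [1, 1, 2, 12]
```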
I tried the code below, but it doesn't work:

```python
if prefix_parallel and prefix_parallel > 1:
    # Length of one prefix chunk plus the query, so the slice stays within max_positions.
    key_length_ = (key_length - query_length) // prefix_parallel + query_length
    # Causal block for the query positions over a single prefix chunk.
    causal_mask = self.bias[:, :, key_length_ - query_length : key_length_, :key_length_]
    # Pad on the left with all-True so the query attends to every prefix position.
    context_mask = torch.ones(1, 1, query_length, key_length - causal_mask.shape[-1]).to(torch.bool).to(attn_weights.device)
    causal_mask = torch.cat([context_mask, causal_mask], dim=-1)
else:
    causal_mask = self.bias[:, :, key_length - query_length : key_length, :key_length]
```
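Run standalone with toy sizes (hypothetical numbers of my own, not from the model), the branch above does at least produce a mask of the expected `(1, 1, query_length, key_length)` shape:

```python
import torch

max_positions = 16
bias = torch.tril(torch.ones(max_positions, max_positions, dtype=torch.bool)).view(
    1, 1, max_positions, max_positions
)

prefix_parallel = 3
query_length = 2
key_length = 14     # 12 prefix positions split into 3 parallel chunks, plus the query

key_length_ = (key_length - query_length) // prefix_parallel + query_length  # -> 6
causal_mask = bias[:, :, key_length_ - query_length : key_length_, :key_length_]
context_mask = torch.ones(1, 1, query_length, key_length - causal_mask.shape[-1],
                          dtype=torch.bool)
causal_mask = torch.cat([context_mask, causal_mask], dim=-1)
print(causal_mask.shape)  # torch.Size([1, 1, 2, 14]) -- matches key_length
```

So the mask shape itself checks out with these numbers; the failure presumably comes from somewhere else.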
Hi, are you using HF (Hugging Face Transformers) or fairseq? Could you please provide the source file path of `GPT_neo_modeling.py`?