prompt-optimizer
Entropy optim uses BERT - how to handle prompts longer than 512 tokens?
max_position_embeddings = 512 for most encoder models I see; how can we optimize prompts that are longer than 512 tokens?
Hmm, I should've seen this coming. Breaking the input into chunks of max_len, running entropy optim on each chunk separately, and then combining the results is the way to go.
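A minimal sketch of that chunking idea, assuming a Hugging Face BERT tokenizer; `entropy_optimize` below is a hypothetical stand-in for the package's per-chunk entropy optimization step, not its actual API:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
MAX_LEN = tokenizer.model_max_length  # 512 for BERT-style encoders


def entropy_optimize(text: str) -> str:
    """Hypothetical placeholder for the per-chunk entropy optimization."""
    return text  # replace with the actual optimizer call


def optimize_long_prompt(prompt: str, max_len: int = MAX_LEN) -> str:
    # Tokenize without special tokens so each chunk re-encodes cleanly later
    token_ids = tokenizer.encode(prompt, add_special_tokens=False)

    # Reserve 2 slots per chunk for [CLS] and [SEP] added at encode time
    chunk_size = max_len - 2
    chunks = [token_ids[i:i + chunk_size] for i in range(0, len(token_ids), chunk_size)]

    # Decode each chunk back to text, optimize it independently, then recombine
    optimized_parts = [entropy_optimize(tokenizer.decode(chunk)) for chunk in chunks]
    return " ".join(optimized_parts)
```

One caveat: splitting on raw token boundaries can cut a sentence in half, so chunking at sentence boundaries (while keeping each chunk under max_len) may preserve more context for the optimizer.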