
An Open-Source Framework for Prompt-Learning.

Results: 108 OpenPrompt issues (sorted by recently updated)

I am using OpenPrompt on UnifiedQA (https://github.com/allenai/unifiedqa), which generates the answer using T5 without a mask token. I tried a template without {"mask"}, but this is not allowed in OpenPrompt....

Hello, when I use the configuration "plm, tokenizer, model_config, WrapperClass = load_plm('t5', 't5-3b')" with PrefixTuningTemplate, it raises the error below. But if I run the program using model...

It happened in tutorial 2.1. Details are as follows: Traceback (most recent call last): File "condional_prompt.py", line 112, in loss = prompt_model(inputs) File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result =...

Hi, how can I save my model so that I can load it in a transformers way? For example:
```py
from transformers import AutoModelForMaskedLM
model = AutoModelForMaskedLM.from_pretrained(the model fine-tuned with...
```

I'm trying to use the [T5TemplateGenerator](https://thunlp.github.io/OpenPrompt/modules/prompt_generator.html#t5templategenerator) to generate prompt templates for text classification tasks. However, after I initialized the T5TemplateGenerator and called the generate() function, [it requires a config.template object](https://github.com/thunlp/OpenPrompt/blob/52245d523dbdfccd240e531768d0b738bbeae446/openprompt/prompt_base.py#L316)....

For example, when using AutomaticVerbalizer, probs_buffer and labels_buffer are None when verbalizer.optimizer_to_initialize() is executed in trainer.py, even though both had already been assigned values via verbalizer.process_outputs() before that call. The problem does not occur on a single GPU, so I suspect this bug is caused by DataParallel (DP).

According to the SoftVerbalizer script and my general understanding of what is desired in a frozen PLM training setting, the grouped_parameters_1 of the SoftVerbalizer should be frozen. However, in the...


https://github.com/thunlp/OpenPrompt/blob/52245d523dbdfccd240e531768d0b738bbeae446/openprompt/prompts/prototypical_verbalizer.py#L333 It seems this error happens when some label class has 0 samples in "learning_setting: full" mode.
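A toy sketch (hypothetical, not OpenPrompt's actual code) of why a label class with zero samples can break prototype computation: averaging an empty group fails.

```python
# Toy sketch: compute one "prototype" vector per class by averaging
# that class's embeddings. Class 2 has no samples, so averaging its
# (empty) group would divide by zero without the guard below.
embeddings = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
labels = [0, 0, 1]          # class 2 has zero samples
num_classes = 3

prototypes = {}
for c in range(num_classes):
    group = [e for e, l in zip(embeddings, labels) if l == c]
    if not group:
        # Without this guard, len(group) == 0 causes ZeroDivisionError,
        # analogous to the error when a class has no samples.
        continue
    prototypes[c] = [sum(dim) / len(group) for dim in zip(*group)]

print(sorted(prototypes))  # [0, 1] -- class 2 is skipped
```

Whether OpenPrompt should skip such classes or raise a clearer error is a separate design question; the sketch only illustrates the failure mode.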

wrapped_tokenizer is a "WrapperClass" object I defined. When I run "wrapped_tokenizer.tokenize_one_example", I get an error at line 48 in mlm.py (if piece['loss_ids']==1:). I find my wrapped sample is like...

In the following two scenarios,
```py
for step, inputs in enumerate(cycle(train_dataloader)):
    inputs = inputs.to(device)
    ...
```
or
```py
dataloader = cycle(iter(train_dataloader))
for step in range(max_steps):
    input = next(dataloader)
    inputs =...
```
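A minimal, framework-free sketch of the two cycling patterns above, using a plain list in place of `train_dataloader` (the real issue involves a PyTorch DataLoader, so this only illustrates the iteration behavior, not the original bug):

```python
from itertools import cycle

train_dataloader = [1, 2, 3]   # stand-in for a DataLoader yielding batches
max_steps = 5

# Scenario 1: iterate directly over cycle(...), breaking at max_steps.
seen_a = []
for step, inputs in enumerate(cycle(train_dataloader)):
    if step >= max_steps:
        break
    seen_a.append(inputs)

# Scenario 2: pull batches with next() from a cycled iterator.
dataloader = cycle(iter(train_dataloader))
seen_b = [next(dataloader) for _ in range(max_steps)]

print(seen_a)  # [1, 2, 3, 1, 2]
print(seen_b)  # [1, 2, 3, 1, 2]
```

One caveat worth noting: `itertools.cycle` caches the items from its first pass, so with a shuffling DataLoader every "epoch" repeats the first epoch's batch order; whether that is related to this issue depends on the truncated details.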