pet
This repository contains the code for "Exploiting Cloze Questions for Few-Shot Text Classification and Natural Language Inference"
Hi, I am trying to run the generative model on CNN/DailyMail; however, I find that this file is missing: datasets.py, and in the tasks.py file we need the **load_dataset()** function # from...
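Since datasets.py is absent from the repository, a minimal stand-in loader can be sketched. This assumes CNN/DailyMail-style examples stored as JSON Lines with `"article"` and `"highlights"` fields; both the function name `load_dataset` and the record format are assumptions here, not the repository's actual interface.

```python
import json

def load_dataset(path):
    """Hypothetical stand-in for the missing datasets.load_dataset():
    reads CNN/DailyMail-style examples from a JSON Lines file where
    each line holds an "article" and its reference "highlights"."""
    examples = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines
            record = json.loads(line)
            examples.append((record["article"], record["highlights"]))
    return examples
```

A loader along these lines could then feed (article, summary) pairs into the generative task's processor.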
Hello. I am not clear about the loss calculation of PET during training (as described in Section 3.1 of the paper: Exploiting Cloze Questions for Few-Shot Text Classification and Natural...
@timoschick The PET results are very impressive for few-shot learning. In our scenario, we have a big collection of data in classes that are related to our target few-shot classes, so...
I am trying to reproduce your work. From your paper, I only know that you use three seeds and take the average of them. But could you tell me what...
Hi @timoschick, thanks for generously providing us with a good code reproduction environment. I sometimes have to stop my experiments for various reasons, only to find there is no way to...
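As the issue suggests, pet does not expose a resume mechanism itself, but the general pattern for making an interrupted run restartable can be sketched with a small checkpoint helper. The file format and state keys below are purely illustrative, not part of pet's API.

```python
import json
import os

def save_checkpoint(path, state):
    """Write training state (e.g. the current step) atomically, so an
    interrupted run can later pick up where it left off."""
    tmp = path + ".tmp"
    with open(tmp, "w", encoding="utf-8") as f:
        json.dump(state, f)
    os.replace(tmp, path)  # atomic on POSIX: no half-written checkpoint

def load_checkpoint(path, default=None):
    """Return the last saved state, or `default` on a fresh run."""
    if not os.path.exists(path):
        return default
    with open(path, encoding="utf-8") as f:
        return json.load(f)
```

A training loop would call `load_checkpoint` once at startup to decide the starting step, and `save_checkpoint` periodically (model weights would be saved separately, e.g. with the usual transformers `save_pretrained` call).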
Hi, thanks for publishing PET's excellent results on the RAFT benchmark in your recent ["True Few-Shot Learning with Prompts – A Real-World Perspective"](https://arxiv.org/abs/2111.13440) paper. The best practices in the paper are...

> If you want to reproduce our exact results and none of the above helps, you can check out the `v1.1.0` branch that contains [the script](https://github.com/timoschick/pet/blob/v1.1.0/scripts/ipet.sh)

But this script only...
For MNLI, the blog https://huggingface.co/blog/how_many_data_points/ reports an accuracy of 0.83 for 1000 data samples. In the paper (https://arxiv.org/pdf/2001.07676.pdf), Table 1 reports an accuracy of 0.85 for MNLI with 1000 data...
Hi, as I train the model I see outputs like the ones below. Could you kindly comment on how I can visualize the eval accuracies and remove these outputs? Thanks.

```
2021-08-21...
```
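One way to quiet such progress lines is to raise the logging level, since transformers-based training scripts typically emit them via Python's standard `logging` module. The logger names pet uses internally are an assumption here; raising the root level silences all of them at once. The `report_accuracy` helper and its `"acc"` key are illustrative, not pet's API.

```python
import logging

# Suppress INFO-level progress lines; only warnings and errors remain.
logging.getLogger().setLevel(logging.WARNING)

def report_accuracy(results):
    """Illustrative helper: format an eval accuracy from a results dict
    (the "acc" key is an assumption about the results file's layout)."""
    return "eval acc: {:.4f}".format(results["acc"])
```

For visualization, the formatted values could be collected per step and plotted with any standard tool rather than scraped from the log output.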