
This repository contains the code for "Exploiting Cloze Questions for Few-Shot Text Classification and Natural Language Inference"

Results: 31 pet issues

Hi, I am trying to run the generative model on CNN/DailyMail, but I find that the file `datasets.py` is missing, and in the `tasks.py` file we need the **load_dataset()** function # from...
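For what it's worth, a minimal sketch of loading CNN/DailyMail via the Hugging Face `datasets` library, which provides a `load_dataset()` function that the missing import likely refers to. The wrapper name `load_cnn_dailymail` and the `source`/`target` field handling below are my assumptions, not part of this repo:

```python
try:
    from datasets import load_dataset  # pip install datasets
except ImportError:
    load_dataset = None

def build_example(article, highlights):
    """Pack one CNN/DailyMail record into the (input, target) shape a
    seq2seq task file typically expects -- field names are assumed."""
    return {"source": article.strip(), "target": highlights.strip()}

def load_cnn_dailymail(split="train"):
    """Download CNN/DailyMail from the Hub and convert each record."""
    if load_dataset is None:
        raise RuntimeError("the `datasets` package is not installed")
    # "3.0.0" is the standard config of the cnn_dailymail dataset on the Hub.
    ds = load_dataset("cnn_dailymail", "3.0.0", split=split)
    return [build_example(r["article"], r["highlights"]) for r in ds]
```

Calling `load_cnn_dailymail("validation")` then yields a list of dicts that a task-specific processor could iterate over.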

Hello. I am not clear about the loss calculation of PET during training (as described in Section 3.1 of the paper: Exploiting Cloze Questions for Few-Shot Text Classification and Natural...
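For reference, my reading of Section 3.1 of the paper is that the fine-tuning objective combines a cross-entropy term over the verbalized labels with an auxiliary masked-language-modeling term, weighted by a small constant (set to $10^{-4}$ in the paper):

```latex
L = (1 - \alpha) \cdot L_{\mathrm{CE}} + \alpha \cdot L_{\mathrm{MLM}},
\qquad \alpha = 10^{-4}
```

where $L_{\mathrm{CE}}$ is the cross-entropy between the true label and the scores the pattern-verbalizer pair assigns, and $L_{\mathrm{MLM}}$ is the standard MLM loss used to prevent catastrophic forgetting.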

@timoschick The PET results are very impressive for few-shot learning. For our scenario, we have a large collection of data in classes that are related to our target few-shot classes, so...

I am trying to reproduce your work. From your paper, I only know that you use three seeds and average over them. But could you tell me what...

Hi @timoschick, thanks for generously providing us with a good code reproduction environment. I sometimes have to stop my experiments for various reasons, but I find there is no way to...

Hi, thanks for publishing PET's excellent results on the RAFT benchmark in your recent ["True Few-Shot Learning with Prompts – A Real-World Perspective"](https://arxiv.org/abs/2111.13440) paper. The best practices in the paper are...

![1637408331(1)](https://user-images.githubusercontent.com/80802796/142724954-ccb44584-fa25-4bc1-a21e-4b2c7088d5ae.png)

> If you want to reproduce our exact results and none of the above helps, you can check out the `v1.1.0` branch that contains [the script](https://github.com/timoschick/pet/blob/v1.1.0/scripts/ipet.sh)

But this script only...

For MNLI, the blog https://huggingface.co/blog/how_many_data_points/ reports an accuracy of 0.83 for 1000 data samples, while the paper (https://arxiv.org/pdf/2001.07676.pdf, Table 1) reports an accuracy of 0.85 for MNLI with 1000 data...

Hi, as I train the model I see some outputs like the ones below; could you kindly comment on how I can visualize the eval accuracies and remove these outputs? Thanks ``` 2021-08-21...
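A minimal sketch of one way to pull accuracy values out of training logs for plotting. The log-line format and the `acc` key below are assumptions, not PET's actual logger output; the regex would need adjusting to match the real lines:

```python
import re

# Hypothetical log lines; PET's real logger output may differ.
SAMPLE_LOG = """\
2021-08-21 10:00:01 - INFO - eval step 100: acc = 0.6125
2021-08-21 10:05:17 - INFO - eval step 200: acc = 0.6750
2021-08-21 10:10:42 - INFO - eval step 300: acc = 0.7010
"""

# Capture the step number and the accuracy from each eval line.
EVAL_RE = re.compile(r"eval step (\d+): acc = ([0-9.]+)")

def parse_eval_accuracies(text):
    """Return a list of (step, accuracy) pairs found in the log text."""
    return [(int(step), float(acc)) for step, acc in EVAL_RE.findall(text)]

points = parse_eval_accuracies(SAMPLE_LOG)
print(points)
```

From there the pairs can be plotted with matplotlib, and the console noise can be silenced by raising the log level, e.g. `logging.getLogger().setLevel(logging.WARNING)`.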