How to access the unlisted datasets in PWC?

Open zhimin-z opened this issue 1 year ago • 12 comments

I discovered that the main dataset page mentions up to 9,753 machine-learning datasets (screenshot). However, when navigating from page 1 to page 100, I found no way to access the datasets not listed within the first 100 pages. Even when I manually requested pages beyond 100, the website returned the same dataset list as page 100 (screenshot). Could you please advise whether there is a method to retrieve the datasets beyond the first 100 pages? Your assistance in this matter would be greatly appreciated. @alefnula @lambdaofgod @rstojnic @mkardas

zhimin-z avatar Apr 23 '24 03:04 zhimin-z

We temporarily disabled dataset browsing because someone was DDoS-ing the website with a bot. It looks like they are running a broken bot that tries all kinds of nonsensical dataset filters, which is why we've disabled browsing for now. It should be back shortly, once we fully identify and block them.

rstojnic avatar Apr 23 '24 07:04 rstojnic

Dear @rstojnic,

I hope this message finds you well. After reading your comment, I wanted to reach out and clarify that the activity you've observed may be related to my research efforts, although I cannot be completely sure. I have been collecting dataset information for research on dataset evolution, which involves gathering data from various sources, including your platform. Here is my code:

import pickle

from paperswithcode import PapersWithCodeClient

client = PapersWithCodeClient(token="XXXX")  # personal API token (placeholder)

page = 1
scrape = True
dataset_full = {}

# Walk the paginated dataset listing until the API stops returning pages.
while scrape:
    try:
        dataset_page = client.dataset_list(page=page)
        for dataset in dataset_page.results:
            dataset_full[dataset.id] = {
                'name': dataset.name,
                'url': dataset.url,
            }
    except Exception:  # the client raises once the page number runs past the last page
        scrape = False
    page += 1

# path_meta points to a local metadata directory defined elsewhere in my script
with open(f'{path_meta}/dataset_full.pkl', 'wb') as f:
    pickle.dump(dataset_full, f)

Please note that my intentions are purely academic, and I sincerely apologize for any unintended strain my actions may have placed on your website. I can assure you that I am not engaged in any malicious activity, such as DDOS-ing.
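
In the meantime, a gentler variant of the loop above would stop on the client's pagination metadata instead of an exception and pause between requests. This is only a sketch: it assumes the paginated response exposes a next_page attribute that becomes None after the last page.

import time

from paperswithcode import PapersWithCodeClient

client = PapersWithCodeClient(token="XXXX")  # placeholder token

dataset_full = {}
page = 1
while page is not None:
    dataset_page = client.dataset_list(page=page)
    for dataset in dataset_page.results:
        dataset_full[dataset.id] = {'name': dataset.name, 'url': dataset.url}
    page = dataset_page.next_page  # assumed to be None once the last page is reached
    time.sleep(1)                  # pause between requests to keep the load low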

Would there be a more appropriate method for me to collect this dataset information for research purposes without causing any issues to your platform? Your guidance and support in this matter would be greatly appreciated.

Thank you for your understanding, and I look forward to hearing from you.

Best regards, Jimmy

zhimin-z avatar Apr 23 '24 07:04 zhimin-z

I did write an email clarifying this a few days ago, but since there has been no reply yet, I tried collecting the data through this API instead.

zhimin-z avatar Apr 23 '24 07:04 zhimin-z

Hi @zhimin-z, there is no need to scrape the website; all the data is available at: https://github.com/paperswithcode/paperswithcode-data

rstojnic avatar Apr 23 '24 07:04 rstojnic

The repo itself looks old because it is just a README. The links point to our S3 bucket, which should be updated every day.
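
For anyone following along, the datasets dump linked from that README can be pulled and loaded directly. The snippet below is only a sketch: it assumes the file decompresses to a single JSON array with one record per dataset.

import gzip
import json

import requests

URL = "https://production-media.paperswithcode.com/about/datasets.json.gz"

resp = requests.get(URL, timeout=60)
resp.raise_for_status()
datasets = json.loads(gzip.decompress(resp.content))  # assumed: one JSON array of dataset records
print(f"{len(datasets)} datasets in today's dump")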

rstojnic avatar Apr 23 '24 07:04 rstojnic

Thanks, but I found that some datasets are not available in the downloadable JSON files. For example, HELM and HEIM are not in the Datasets dump, which is why I initially thought these files might be obsolete. Could you also explain the criteria you use when generating the Datasets and Evaluation tables files?
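
One way to verify this is to search the downloaded dump for the dataset names in question. This is a sketch that assumes each record in datasets.json.gz carries a name field.

import gzip
import json

import requests

URL = "https://production-media.paperswithcode.com/about/datasets.json.gz"
dump = json.loads(gzip.decompress(requests.get(URL, timeout=60).content))

names = {record.get("name", "").lower() for record in dump}  # "name" is an assumed field
for wanted in ("HELM", "HEIM"):
    print(wanted, "present" if wanted.lower() in names else "missing from the dump")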

zhimin-z avatar Apr 23 '24 08:04 zhimin-z

They should all be there. If they are not, the export might be stuck. @alefnula @andrewkuanop

rstojnic avatar Apr 23 '24 08:04 rstojnic

There seem to be a lot of the website's leaderboards missing from the Evaluation tables dump.

Here is what the Evaluation tables dump gives (9,238 datasets in total): (screenshot)

Here is what I collected (within the 100 displayable pages of PWC datasets, 4,800 datasets in total):

  1. number of evaluation records from paper mining: (screenshot)
  2. number of evaluation records from model cards: (screenshot)

Overall, on the order of at least ten thousand records are missing from your online archive, and this does not even account for evaluations from the datasets beyond the first 100 pages of the PWC website. @rstojnic @alefnula @andrewkuanop
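
A rough way to reproduce the dump-side total is to walk the evaluation tables file and count leaderboards. The sketch below assumes each top-level record holds a datasets list plus nested subtasks; the actual schema may differ.

import gzip
import json

import requests

URL = "https://production-media.paperswithcode.com/about/evaluation-tables.json.gz"
tables = json.loads(gzip.decompress(requests.get(URL, timeout=60).content))

def count_leaderboards(tasks):
    # Count dataset leaderboards, descending into nested subtasks (assumed field names).
    total = 0
    for task in tasks:
        total += len(task.get("datasets", []))
        total += count_leaderboards(task.get("subtasks", []))
    return total

print(count_leaderboards(tables), "leaderboards in the dump")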

zhimin-z avatar Apr 23 '24 09:04 zhimin-z

Hi Jimmy,

It should be fixed by now. Can you check if it works? Thanks.

andrewkuanop avatar Apr 25 '24 02:04 andrewkuanop

Thanks for your reply, @andrewkuanop

For evaluation tables, I found that https://paperswithcode.com/sota/text-classification-on-glue is available in the Evaluation tables dump, but https://paperswithcode.com/sota/abstractive-dialogue-summarization-on-samsum is still not.

For datasets, I found that both HELM and HEIM are still missing from the Datasets dump.

I think the issue still persists...

zhimin-z avatar Apr 25 '24 04:04 zhimin-z

Hi Jimmy,

I believe it should be solved. Can you give it a try?

Thanks, Andrew

andrewkuanop avatar Apr 26 '24 14:04 andrewkuanop

Thanks, @andrewkuanop

After checking, I found that the dataset issue is solved: both HELM and HEIM are in the Datasets dump now.

However, https://paperswithcode.com/sota/text-classification-on-glue is available in the Evaluation tables dump, while https://paperswithcode.com/sota/abstractive-dialogue-summarization-on-samsum is still not.

I think the issue still persists for specific evaluation tables.

zhimin-z avatar May 26 '24 04:05 zhimin-z

Hmm... still not available. Any further update?

zhimin-z avatar Nov 21 '24 23:11 zhimin-z