
Eval for named entities in social media text in JSON format

Open · napsternxg opened this issue 2 years ago • 1 comment

Thank you for contributing an eval! ♥️

🚨 Please make sure your PR follows these guidelines; failure to follow them will result in the PR being closed automatically. Note that even if the criteria are met, that does not guarantee the PR will be merged nor GPT-4 access granted. 🚨

PLEASE READ THIS:

In order for a PR to be merged, it must fail on GPT-4. We are aware that right now, users do not have access, so you will not be able to tell if the eval fails or not. Please run your eval with GPT-3.5-Turbo, but keep in mind that when we run the eval, if GPT-4 scores higher than 90%, we will likely reject it, since GPT-4 is already capable of completing the task.

We plan to roll out a way for users submitting evals to see the eval performance on GPT-4 soon. Stay tuned! Until then, you will not be able to see the eval performance on GPT-4. We encourage partial PRs with ~5-10 examples that we can then run the evals on and share the results with you, so you know how your eval does with GPT-4 before writing all 100 examples.

Eval details 📑

Eval name

wnut17_ner

Eval description


  • An evaluation built from a hard sample of 100 items from the WNUT 2017 dataset for social media named entity recognition.
  • The model must accept a valid JSON token list as input and return a valid JSON tag list, making this a stricter eval (a sketch of this contract follows this list).
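
As a sketch of what this stricter contract means in practice (illustrative only, not the eval's actual grading code; the tag set is the one listed in the system prompt of the samples below):

```python
import json

# The IOB tag set from the eval's system prompt.
ALLOWED_TAGS = {
    "O",
    "B-corporation", "I-corporation",
    "B-creative-work", "I-creative-work",
    "B-group", "I-group",
    "B-location", "I-location",
    "B-person", "I-person",
    "B-product", "I-product",
}

def is_valid_response(tokens_json: str, completion: str) -> bool:
    """True iff the completion is a valid JSON list of one allowed tag per input token."""
    try:
        tags = json.loads(completion)
    except json.JSONDecodeError:
        # Anything that is not valid JSON fails the strict contract outright.
        return False
    tokens = json.loads(tokens_json)
    return (
        isinstance(tags, list)
        and len(tags) == len(tokens)
        and all(tag in ALLOWED_TAGS for tag in tags)
    )
```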

What makes this a useful eval?


  • This eval tests the capabilities of ChatGPT models on information extraction tasks for noisy social media text.
  • General performance on this dataset is quite low; I have taken a harder sample from the WNUT 2017 NER test set, with stratified sampling over the NER token labels, to make the evaluation more difficult (see the sampling sketch after this list).
  • Furthermore, requiring both the input and the output to be valid JSON objects makes the evaluation harder and assesses the GPT model's ability to generate a structured response.
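
As a rough illustration of the stratified sampling described above (a sketch, not the exact script used; the input format and helper name are assumptions):

```python
import random
from collections import defaultdict

def stratified_hard_sample(examples, n=100, seed=42):
    """Pick a hard sample in which every entity tag type is represented.

    `examples` is assumed to be the parsed WNUT 2017 test split:
    a list of dicts with "tokens" and "tags" keys.
    """
    rng = random.Random(seed)
    by_tag = defaultdict(list)
    for ex in examples:
        # Index each example under every entity type it contains;
        # sentences with no entities at all are left out as "easy" cases.
        for tag in set(ex["tags"]) - {"O"}:
            by_tag[tag].append(ex)

    per_tag = max(1, n // max(1, len(by_tag)))
    sample, seen = [], set()
    for bucket in by_tag.values():
        for ex in rng.sample(bucket, min(per_tag, len(bucket))):
            if id(ex) not in seen:
                seen.add(id(ex))
                sample.append(ex)
    return sample[:n]
```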

Criteria for a good eval ✅

Below are some of the criteria we look for in a good eval. In general, we are seeking cases where the model does not do a good job despite being capable of generating a good response (note that there are some things large language models cannot do, so those would not make good evals).

Your eval should be:

  • [x] Thematically consistent: The eval should be thematically consistent. We'd like to see a number of prompts all demonstrating some particular failure mode. For example, we can create an eval on cases where the model fails to reason about the physical world.
  • [x] Contains failures where a human can do the task, but either GPT-4 or GPT-3.5-Turbo cannot.
  • [x] Includes good signal around what is the right behavior. This means either a correct answer for Basic evals or the Fact Model-graded eval, or an exhaustive rubric for evaluating answers for the Criteria Model-graded eval.
  • [x] Includes at least 100 high-quality examples (it is okay to only contribute 5-10 meaningful examples and have us test them with GPT-4 before adding all 100)

If there is anything else that makes your eval worth including, please document it below.

Unique eval value

Insert what makes your eval high quality that was not mentioned above. (Not required)

  • As noted in the description above, the combination of noisy social-media text, a deliberately hard stratified sample, and the strict JSON input/output format makes this eval both challenging and a useful test of structured generation.

Eval structure 🏗️

Your eval should

  • [x] Check that your data is in evals/registry/data/{name}
  • [x] Check that your yaml is registered at evals/registry/evals/{name}.yaml (a sketch of such an entry follows this list)
  • [x] Ensure you have the right to use the data you submit via this eval
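
For reference, a registry entry for an eval like this one would look roughly as follows. This is a sketch modeled on the existing Match-based evals in the repository; the exact id, version suffix, and file paths here are assumptions:

```yaml
# evals/registry/evals/wnut17_ner.yaml (names and paths assumed for illustration)
wnut17_ner:
  id: wnut17_ner.test.v1
  metrics: [accuracy]

wnut17_ner.test.v1:
  class: evals.elsuite.basic.match:Match
  args:
    samples_jsonl: wnut17_ner/samples.jsonl
```

With the data at evals/registry/data/wnut17_ner/samples.jsonl, the eval could then be run with the repository's runner, e.g. oaieval gpt-3.5-turbo wnut17_ner.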

(For now, we will only be approving evals that use one of the existing eval classes. You may still write custom eval classes for your own cases, and we may consider merging them in the future.)

Final checklist 👀

Submission agreement

By contributing to Evals, you are agreeing to make your evaluation logic and data under the same MIT license as this repository. You must have adequate rights to upload any data used in an Eval. OpenAI reserves the right to use this data in future service improvements to our product. Contributions to OpenAI Evals will be subject to our usual Usage Policies (https://platform.openai.com/docs/usage-policies).

  • [x] I agree that my submission will be made available under an MIT license and complies with OpenAI's usage policies.

Email address validation

If your submission is accepted, we will be granting GPT-4 access to a limited number of contributors. Access will be given to the email address associated with the merged pull request.

  • [x] I acknowledge that GPT-4 access will only be granted, if applicable, to the email address used for my merged pull request.

Limited availability acknowledgement

We know that you might be excited to contribute to OpenAI's mission, help improve our models, and gain access to GPT-4. However, due to the requirements mentioned above and the high volume of submissions, we will not be able to accept all submissions, and thus cannot grant GPT-4 access to everyone who opens a PR. We know this is disappointing, but we hope to set the right expectation before you open this PR.

  • [x] I understand that opening a PR, even if it meets the requirements above, does not guarantee the PR will be merged nor GPT-4 access granted.

Submit eval

  • [x] I have filled out all required fields in the evals PR form
  • [x] (Ignore if not submitting code) I have run pip install pre-commit; pre-commit install and have verified that black, isort, and autoflake are running when I commit and push

Failure to fill out all required fields will result in the PR being closed.

Eval JSON data

Since we are using Git LFS, we ask eval submitters to include a number of eval samples (at least 5) from their contribution here (a small sanity-check sketch follows the samples):


Eval

{"input": [{"role": "system", "content": "You are a Social Media Information Extraction Assistant whose goal is to idenitify named entities in a social media text.\nA user will provide you with a valid JSON containing list of tokens from a social media text.\nYou response should only contain a valid JSON containing a list of tags, corresponding to each token from the user input.\nYour tags are IOB style tags for classifying tokens for named entity recognition. \nYou should only use the following tags in generating your response: O, B-corporation, I-corporation, B-creative-work, I-creative-work, B-group, I-group, B-location, I-location, B-person, I-person, B-product, I-product\n"}, {"role": "user", "content": "[\"RT\", \"@\", \"SouthernHomo\", \":\", \"It\", \"'\", \"s\", \"like\", \"y\", \"'\", \"all\", \"elected\", \"that\", \"damn\", \"Cloyd\", \"Rivers\", \"account\", \"to\", \"be\", \"president\", \".\", \"I\", \"swear\", \"to\", \"GOD\", \"I\", \"'\", \"d\", \"rather\", \"have\", \"Dory\", \".\", \"https://t.co\\u2026\"]"}], "ideal": "[\"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"B-person\", \"I-person\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"B-person\", \"O\", \"O\"]"}
{"input": [{"role": "system", "content": "You are a Social Media Information Extraction Assistant whose goal is to idenitify named entities in a social media text.\nA user will provide you with a valid JSON containing list of tokens from a social media text.\nYou response should only contain a valid JSON containing a list of tags, corresponding to each token from the user input.\nYour tags are IOB style tags for classifying tokens for named entity recognition. \nYou should only use the following tags in generating your response: O, B-corporation, I-corporation, B-creative-work, I-creative-work, B-group, I-group, B-location, I-location, B-person, I-person, B-product, I-product\n"}, {"role": "user", "content": "[\"Want\", \"to\", \"pretend\", \"you\", \"\\u2019\", \"re\", \"performing\", \"in\", \"front\", \"of\", \"10\", \",\", \"000\", \"people\", \"?\", \"Then\", \"decide\", \"#\", \"whatshouldplaynext\", \"on\", \"Power\", \"95\", \".\", \"3\", \"https://t.co/ojpWdx1gIT\"]"}], "ideal": "[\"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"B-product\", \"I-product\", \"I-product\", \"I-product\", \"O\"]"}
{"input": [{"role": "system", "content": "You are a Social Media Information Extraction Assistant whose goal is to idenitify named entities in a social media text.\nA user will provide you with a valid JSON containing list of tokens from a social media text.\nYou response should only contain a valid JSON containing a list of tags, corresponding to each token from the user input.\nYour tags are IOB style tags for classifying tokens for named entity recognition. \nYou should only use the following tags in generating your response: O, B-corporation, I-corporation, B-creative-work, I-creative-work, B-group, I-group, B-location, I-location, B-person, I-person, B-product, I-product\n"}, {"role": "user", "content": "[\"The\", \"bodies\", \"of\", \"the\", \"soldiers\", \"were\", \"recovered\", \"after\", \"the\", \"concerted\", \"efforts\", \"of\", \"the\", \"Avalanche\", \"Rescue\", \"Teams\", \"(\", \"ART\", \")\", \",\", \"which\", \"is\", \"equipped\", \"to\", \"work\", \"in\", \"inhospitable\", \"terrain\", \"and\", \"weather\", \"conditions\", \".\"]"}], "ideal": "[\"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"B-group\", \"I-group\", \"I-group\", \"O\", \"B-group\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\"]"}
{"input": [{"role": "system", "content": "You are a Social Media Information Extraction Assistant whose goal is to idenitify named entities in a social media text.\nA user will provide you with a valid JSON containing list of tokens from a social media text.\nYou response should only contain a valid JSON containing a list of tags, corresponding to each token from the user input.\nYour tags are IOB style tags for classifying tokens for named entity recognition. \nYou should only use the following tags in generating your response: O, B-corporation, I-corporation, B-creative-work, I-creative-work, B-group, I-group, B-location, I-location, B-person, I-person, B-product, I-product\n"}, {"role": "user", "content": "[\"&\", \"gt\", \";\", \"*\", \"Police\", \"last\", \"week\", \"evacuated\", \"80\", \"villagers\", \"from\", \"Waltengoo\", \"Nar\", \"where\", \"dozens\", \"were\", \"killed\", \"after\", \"a\", \"series\", \"of\", \"avalanches\", \"hit\", \"the\", \"area\", \"in\", \"2005\", \"in\", \"the\", \"south\", \"of\", \"the\", \"territory\", \".\"]"}], "ideal": "[\"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"B-location\", \"I-location\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\"]"}
{"input": [{"role": "system", "content": "You are a Social Media Information Extraction Assistant whose goal is to idenitify named entities in a social media text.\nA user will provide you with a valid JSON containing list of tokens from a social media text.\nYou response should only contain a valid JSON containing a list of tags, corresponding to each token from the user input.\nYour tags are IOB style tags for classifying tokens for named entity recognition. \nYou should only use the following tags in generating your response: O, B-corporation, I-corporation, B-creative-work, I-creative-work, B-group, I-group, B-location, I-location, B-person, I-person, B-product, I-product\n"}, {"role": "user", "content": "[\"Visuals\", \"of\", \"the\", \"avalanche\", \"site\", \"in\", \"Gurez\", \"sector\", \".\"]"}], "ideal": "[\"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"B-location\", \"I-location\", \"O\"]"}

napsternxg · Apr 03 '23 03:04

CC: @andrew-openai

napsternxg · Apr 03 '23 16:04

Closing the PR due to inactivity; please reopen if you get a chance to address comments.

usama-openai · Jun 13 '23 21:06