FinQA
Data and code for EMNLP 2021 paper "FinQA: A Dataset of Numerical Reasoning over Financial Data"
Is the ‘model_input’ field in train.json generated from the RoBERTa-large or the BERT-base results?
Hi @czyssrs, thanks for your excellent work. I only found these descriptions in the README file:
```markdown
"pre_text": the texts before the table;
"post_text": the text after the table;
"table": ...
```
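As a minimal sketch for inspecting those fields, assuming train.json is a JSON list of example dicts (key names beyond the ones quoted above are illustrative and should be checked against the actual file):
```python
# Minimal sketch: load train.json and inspect the fields described in the README.
# Assumes the file is a JSON list of example dicts; key access is illustrative.
import json

with open("train.json", encoding="utf-8") as f:
    examples = json.load(f)

first = examples[0]
print(sorted(first.keys()))      # which fields each example actually carries
print(first["pre_text"][:2])     # "pre_text": the texts before the table
print(first["post_text"][:2])    # "post_text": the text after the table
print(first["table"][:2])        # "table": first two rows of the table
```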
```python
def convert_single_mathqa_example(example, is_training, tokenizer, max_seq_length,
                                  max_program_length, op_list, op_list_size,
                                  const_list, const_list_size, cls_token, sep_token):
    """Converts a single MathQAExample into an InputFeature."""
    features = []
    question_tokens = example.question_tokens
    if len(question_tokens) > max_seq_length...
```
I looked through the evaluate.py code; it appears to compute accuracy by directly matching the model's inference output against the correct answer.
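For reference, a minimal sketch of that kind of exact-match accuracy, assuming predictions and correct answers are aligned lists of strings; this is an illustration, not the repository's actual evaluate.py logic:
```python
# Minimal sketch of exact-match accuracy; not the repo's evaluate.py.
def exact_match_accuracy(predictions, answers):
    """Fraction of predictions that exactly match the correct answer string."""
    assert len(predictions) == len(answers)
    if not answers:
        return 0.0
    correct = sum(p.strip() == a.strip() for p, a in zip(predictions, answers))
    return correct / len(answers)

# e.g. exact_match_accuracy(["35.2%", "1.2"], ["35.2%", "1.3"]) -> 0.5
```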
I used `zip -r predictions.json submission.zip` to zip the prediction file and uploaded it to the leaderboard, but I always get this error:
```
File "/tmp/codalab/tmpfj2omU/run/program/evaluation.py", line 13
    print os.listdir(submit_dir)
        ^
SyntaxError: invalid...
```
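As an aside, `zip`'s argument order is `zip <archive> <files>`, so the command above would create an archive named predictions.json. A minimal sketch that builds the archive in Python instead, assuming the scorer expects predictions.json at the root of submission.zip:
```python
# Minimal sketch: pack predictions.json into submission.zip at the archive root.
# Assumes the scorer expects the prediction file at the top level of the zip.
import zipfile

with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("predictions.json", arcname="predictions.json")
```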
Hello, I want to convert these files into question-answer pairs in a CSV and then use them to train GPT models. Could you please suggest how to do this?
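A minimal sketch of such a conversion, assuming each entry in train.json carries a "qa" dict with "question" and "answer" keys; the field names should be verified against the actual files before use:
```python
# Minimal sketch: flatten FinQA-style JSON into a question/answer CSV.
# Assumes each entry has a "qa" dict with "question" and "answer" keys;
# verify the key names against the actual train.json before relying on this.
import csv
import json

with open("train.json", encoding="utf-8") as f:
    examples = json.load(f)

with open("train_qa.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["question", "answer"])
    for ex in examples:
        qa = ex.get("qa", {})
        writer.writerow([qa.get("question", ""), qa.get("answer", "")])
```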