AMRBART
Input for Hugging Face AMRBART - AMR2Text
Hi,
I am very happy to see the pre-trained model on Hugging Face. I have a small question about AMRBART (AMR2Text): what is the input for this model? Does that mean we still need to follow the AMR preprocessing pipeline?
thanks,
Yes, you need to follow the AMR preprocessing to linearize AMRs before feeding them into AMRBART-AMR2Text.
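For illustration, here is a minimal sketch of what such a linearization might produce, assuming a simplified nested representation of an AMR graph. The `linearize` helper and its input structure are hypothetical and not part of the AMRBART codebase; the repo's real preprocessing also handles reentrancies, edge normalization, and special tokens.

```python
# Hypothetical sketch: turn a tiny nested AMR into the pointer-style
# linearization seen in this thread, e.g.
# "( <pointer:0> date-entity :month 9 :day 11 :year 2010 )"
# This is NOT the official AMRBART preprocessing; it only illustrates the format.

def linearize(node, counter=None):
    """node = (concept, [(relation, child_or_literal), ...])"""
    if counter is None:
        counter = [0]
    concept, edges = node
    idx = counter[0]          # assign the next pointer index to this node
    counter[0] += 1
    parts = [f"( <pointer:{idx}>", concept]
    for rel, child in edges:
        parts.append(rel)
        if isinstance(child, tuple):   # nested AMR node
            parts.append(linearize(child, counter))
        else:                          # constant / literal
            parts.append(str(child))
    parts.append(")")
    return " ".join(parts)

amr = ("date-entity", [(":month", 9), (":day", 11), (":year", 2010)])
print(linearize(amr))
# -> ( <pointer:0> date-entity :month 9 :day 11 :year 2010 )
```

The pointer tokens let the linearization refer back to a node that occurs more than once in the graph, which a plain bracketed string could not express.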
OK, thanks!
I want to use the Hugging Face AMR2Text pre-trained model. This is my code:
```python
# AMRBartTokenizer comes from the AMRBART repository (model_interface/)
from model_interface.tokenization_bart import AMRBartTokenizer
from transformers import pipeline

pipe = pipeline(
    "text2text-generation",
    model="xfbai/AMRBART-large-finetuned-AMR2.0-AMR2Text-v2",
    tokenizer=AMRBartTokenizer.from_pretrained(
        "xfbai/AMRBART-large-finetuned-AMR2.0-AMR2Text-v2"
    ),
)
text = "( <pointer:0> date-entity :month 9 :day 11 :year 2010 )"
ans = pipe(text)
print(ans)
```
The `text` above is your example input data. However, the results look weird.
The result is `[{'generated_text': ' ( <pointer:0> date-entity :month 9 :day 11 : year 2010'}]`.
What is the problem here?
Hi, @ting-chih
Our code does not support the transformers pipeline; you can try inference-text.sh to generate text from AMR graphs.
Maybe it's related to this issue: with the Hugging Face inference API, the following error comes up.
Link: https://huggingface.co/xfbai/AMRBART-base?text=test
Would it be possible to fix this, to allow access through the API?
Hi, @flipz357
AMRBART does not support the Hugging Face inference API, as we have modified the tokenizer and input format. You can follow the instructions here to run inference on your own data, so that the quality can be ensured.
I see, thank you @goodbai-nlp!