UniversalPetrarch
Make UniversalPetrarch pipeline compatible
Now that UP is a little more stable, we need to start thinking about making it usable in production pipelines. In order for people (specifically the Spanish and Arabic teams) to be able to produce event data, UP needs to fit into our existing pipelines. This requires a few things:
1. making UDPipe consume the JSON/Mongo format that the OEDA pipelines use, rather than XML
2. writing custom code to fit UDPipe into, e.g., the stanford_pipeline; it should output OEDA-formatted JSON to store back in Mongo
3. greatly simplifying the UDPipe installation process and ensuring that the correct versions are used
4. making sure UniversalPetrarch can take in JSON. I added code to do this (see test code here), but this should be tested with the actual output from steps 1 and 2.
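As a rough sketch of the JSON-in/JSON-out glue described in steps 1 and 2 (the field names `doc_id`, `sentences`, `text`, and `parsed` are placeholders, not the actual OEDA schema):

```python
# Sketch of pipeline glue: pull sentence text out of an OEDA-style Mongo
# record for UDPipe, then attach the parses back onto the record so it can
# be written to Mongo as JSON. Field names are assumptions for illustration.

def extract_sentences(record):
    # Return the raw sentence strings that UDPipe should parse.
    return [s["text"] for s in record.get("sentences", [])]

def attach_parses(record, parses):
    # Store one CoNLL-U parse string per sentence back on the record.
    for sent, parse in zip(record["sentences"], parses):
        sent["parsed"] = parse
    return record

record = {"doc_id": "abc123",
          "sentences": [{"text": "Rebels attacked the city."}]}
parses = ["1\tRebels\t..."]  # placeholder for real CoNLL-U output
updated = attach_parses(record, parses)
```

The point is that UDPipe itself only needs plain sentence strings; everything Mongo-specific can live in thin wrappers like these.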
@JingL1014, didn't we already do this with Sayeed? Did that code get pushed back here yet?
Following up on this. Are any of these (1, 2, 3, 4) complete? This is necessary for us to produce Arabic event data.
-
We have updated UniversalPetrarch to consume JSON-formatted data. The UD-Petrarch coder for English is running side-by-side with the Petrarch2 event coder in the SPEC pipeline.
-
We are not using the stanford_pipeline project to generate event codings from raw text. Instead, we are running our own distributed framework, so we haven't tested adding UD-Petrarch to that pipeline.
-
We currently use the ufal.udpipe package for Python to do the parsing: https://pypi.org/project/ufal.udpipe. So to use the parser, we need to install the package and download the language-specific model files, which can be automated (e.g., using requirements.txt and some scripting).
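For reference, a minimal sketch of that setup, using the `Model`/`Pipeline` classes from the ufal.udpipe bindings (the model path is an assumption; models must be downloaded separately), plus a small helper that turns the CoNLL-U output into JSON-ready dicts:

```python
# Sketch: parse text with ufal.udpipe and convert the CoNLL-U output to
# JSON-ready dicts. The model path below is a placeholder; the import is
# guarded so the pure helper works even without the package installed.
try:
    from ufal.udpipe import Model, Pipeline, ProcessingError  # pip install ufal.udpipe
except ImportError:
    Model = Pipeline = ProcessingError = None

def load_pipeline(model_path):
    # Load a downloaded language-specific model and build a pipeline that
    # tokenizes, tags, and parses, emitting CoNLL-U.
    model = Model.load(model_path)
    if model is None:
        raise RuntimeError("cannot load model: %s" % model_path)
    return Pipeline(model, "tokenize", Pipeline.DEFAULT, Pipeline.DEFAULT, "conllu")

def conllu_to_dicts(conllu):
    # Keep the commonly used CoNLL-U columns as a list of dicts.
    tokens = []
    for line in conllu.splitlines():
        if not line or line.startswith("#"):
            continue
        cols = line.split("\t")
        tokens.append({"id": cols[0], "form": cols[1], "lemma": cols[2],
                       "upos": cols[3], "head": cols[6], "deprel": cols[7]})
    return tokens

# Usage (requires a downloaded model file):
#   pipeline = load_pipeline("spanish-ancora-ud.udpipe")  # hypothetical filename
#   conllu = pipeline.process("Hola mundo.", ProcessingError())
demo = conllu_to_dicts("# sent_id = 1\n1\tHi\thi\tINTJ\tUH\t_\t0\troot\t_\t_\n")
```

Pinning the ufal.udpipe version in requirements.txt and scripting the model download would cover the installation-simplification item above.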
-
Yes, we are already using UD-Petrarch to code the English sentences and both input and output are in JSON format. Any incompatibility with OEDA format can be addressed. We are using MongoDB to store that data.
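If an incompatibility does turn up, the fix is likely a small adapter between the two JSON shapes. A sketch, where every field name (`doc_id`, `events`, `source`, `target`, `code`) is an assumption for illustration rather than the real OEDA or UD-Petrarch schema:

```python
# Sketch of an adapter from UD-Petrarch's coded events to an OEDA-shaped
# Mongo document. All field names here are assumptions, for illustration.

def to_oeda_doc(doc_id, coded_events):
    # coded_events: list of (source_actor, target_actor, cameo_code) triples.
    return {
        "doc_id": doc_id,
        "events": [{"source": s, "target": t, "code": c}
                   for (s, t, c) in coded_events],
    }

doc = to_oeda_doc("abc123", [("USAGOV", "SYR", "190")])
# With pymongo, a dict like this can be stored directly, e.g.:
#   client.event_db.events.insert_one(doc)
```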