sec-parser
Make HighlightedTextClassifier work with `<b>` tags
Discussed in https://github.com/orgs/alphanome-ai/discussions/56
Originally posted by Elijas November 24, 2023
Example document
https://www.sec.gov/Archives/edgar/data/1675149/000119312518236766/d828236d10q.htm
<p style="margin-top:9pt; margin-bottom:0pt; text-indent:4%; font-size:10pt; font-family:Times New Roman">
Options to purchase 1 million shares of common stock at a weighted average exercise price of $36.28 were
outstanding as of June 30, 2017, but were not included in the computation of diluted EPS because they were anti-dilutive, as the exercise prices of the options were greater than the average market price of Alcoa Corporation's common stock.
</p>
<p style="margin-top:13pt; margin-bottom:0pt; font-size:10pt; font-family:Times New Roman">
<b>
G. Accumulated Other Comprehensive Loss
</b>
</p>
<p style="margin-top:6pt; margin-bottom:0pt; text-indent:4%; font-size:10pt; font-family:Times New Roman">
The following table details the activity of the three components that comprise Accumulated other comprehensive loss for both Alcoa
Corporation's shareholders and Noncontrolling interest:
</p>
Goal
The heading "G. Accumulated Other Comprehensive Loss" should be recognized as a HighlightedTextElement (and therefore a TitleElement).
Most likely, you will have to compute the percentage of the text that is covered by the `<b>`
tag, reusing the parts already implemented for HighlightedTextElement. This will help you avoid situations where `text text text <b>bold</b> text text`
is recognized as highlighted.
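To illustrate the idea, here is a minimal sketch of such a bold-coverage heuristic using only the standard library's `html.parser`. This is not sec-parser's actual implementation; the function name, the structure, and the example threshold are all assumptions for illustration:

```python
from html.parser import HTMLParser


class _BoldCounter(HTMLParser):
    """Counts total visible characters and characters inside <b> tags."""

    def __init__(self) -> None:
        super().__init__()
        self.depth = 0   # current nesting depth inside <b> tags
        self.total = 0   # all visible characters seen
        self.bold = 0    # visible characters inside at least one <b>

    def handle_starttag(self, tag, attrs):
        if tag == "b":
            self.depth += 1

    def handle_endtag(self, tag):
        if tag == "b" and self.depth > 0:
            self.depth -= 1

    def handle_data(self, data):
        self.total += len(data)
        if self.depth > 0:
            self.bold += len(data)


def bold_text_ratio(html: str) -> float:
    """Return the fraction of visible text wrapped in <b> tags (0.0-1.0)."""
    counter = _BoldCounter()
    counter.feed(html)
    return counter.bold / counter.total if counter.total else 0.0
```

With a threshold (say, requiring most of the text to be bold), `bold_text_ratio("<b>G. Accumulated Other Comprehensive Loss</b>")` yields `1.0` and would qualify, while a paragraph with a single inline `<b>bold</b>` word stays well below the cutoff.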
I would like to work on this issue.
I noticed that there is a need for a middle ground between synthetic unit tests and entire-document end-to-end tests. Let's call them integration tests.
So I propose to have a special type of unit test, where the input is an HTML snippet and the expected output is stored in a JSON file.
This will make unit test creation very easy: just paste the snippet from a document of interest, automatically generate a JSON file, then manually edit it into the expected output.
As for the fully-annotated documents (used in the "accuracy tests"): having a few of these integration tests and fixing them will get us to a point where creating a fully annotated document becomes much easier, since all of the major issues will already be fixed.
Otherwise, it takes a lot of time to manually annotate all the different issues in a full document, so we annotate them in these small integration tests instead.
Let me know if this makes sense!
TL;DR
- In the folder `tests/integration`, create an `.html` file with the problematic HTML source code snippet
- Run `task unit-tests -- --create-missing` to generate the `.json` with the current (problematic) output from sec-parser
- Modify the `.json` manually to the desired state (running `task unit-tests` will now start failing)
- Improve sec-parser until `task unit-tests` succeeds
This avoids wasting time annotating entire documents when a single bug recurs hundreds of times in one document:
we just take one instance (or a few instances) of it and put it in these little tests.
The file-oriented structure also makes the inputs and outputs much easier to manage than keeping them in the source code itself (as would be the case in regular unit tests).
Sorry, I was too busy to notify you that I will no longer be able to work on this issue due to my obligations.
No worries, thanks for letting us know!
I'd like to work on this