[Bug] Token Classification emojis cause overlapping spans error & wrong annotations
Describe the bug
If there is a prediction/annotation mismatch and the text contains an emoji 💚 (I haven't tested other emojis), the UI shows an error. I was told that clearing all annotations and then annotating and saving sometimes works!
On the server side it is caused by ValueError: IOB tags cannot handle overlapping spans!:
argilla_1 | ERROR: Exception in ASGI application
argilla_1 | Traceback (most recent call last):
argilla_1 | File "/usr/local/lib/python3.9/site-packages/uvicorn/protocols/http/httptools_impl.py", line 419, in run_asgi
argilla_1 | result = await app( # type: ignore[func-returns-value]
argilla_1 | File "/usr/local/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
argilla_1 | return await self.app(scope, receive, send)
argilla_1 | File "/usr/local/lib/python3.9/site-packages/fastapi/applications.py", line 270, in __call__
argilla_1 | await super().__call__(scope, receive, send)
argilla_1 | File "/usr/local/lib/python3.9/site-packages/starlette/applications.py", line 124, in __call__
argilla_1 | await self.middleware_stack(scope, receive, send)
argilla_1 | File "/usr/local/lib/python3.9/site-packages/starlette/middleware/errors.py", line 184, in __call__
argilla_1 | raise exc
argilla_1 | File "/usr/local/lib/python3.9/site-packages/starlette/middleware/errors.py", line 162, in __call__
argilla_1 | await self.app(scope, receive, _send)
argilla_1 | File "/usr/local/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
argilla_1 | raise exc
argilla_1 | File "/usr/local/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
argilla_1 | await self.app(scope, receive, sender)
argilla_1 | File "/usr/local/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
argilla_1 | raise e
argilla_1 | File "/usr/local/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
argilla_1 | await self.app(scope, receive, send)
argilla_1 | File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 706, in __call__
argilla_1 | await route.handle(scope, receive, send)
argilla_1 | File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 443, in handle
argilla_1 | await self.app(scope, receive, send)
argilla_1 | File "/usr/local/lib/python3.9/site-packages/fastapi/applications.py", line 270, in __call__
argilla_1 | await super().__call__(scope, receive, send)
argilla_1 | File "/usr/local/lib/python3.9/site-packages/starlette/applications.py", line 124, in __call__
argilla_1 | await self.middleware_stack(scope, receive, send)
argilla_1 | File "/usr/local/lib/python3.9/site-packages/starlette/middleware/errors.py", line 184, in __call__
argilla_1 | raise exc
argilla_1 | File "/usr/local/lib/python3.9/site-packages/starlette/middleware/errors.py", line 162, in __call__
argilla_1 | await self.app(scope, receive, _send)
argilla_1 | File "/usr/local/lib/python3.9/site-packages/brotli_asgi/__init__.py", line 85, in __call__
argilla_1 | await responder(scope, receive, send)
argilla_1 | File "/usr/local/lib/python3.9/site-packages/starlette/middleware/gzip.py", line 44, in __call__
argilla_1 | await self.app(scope, receive, self.send_with_gzip)
argilla_1 | File "/usr/local/lib/python3.9/site-packages/starlette/middleware/cors.py", line 92, in __call__
argilla_1 | await self.simple_response(scope, receive, send, request_headers=headers)
argilla_1 | File "/usr/local/lib/python3.9/site-packages/starlette/middleware/cors.py", line 147, in simple_response
argilla_1 | await self.app(scope, receive, send)
argilla_1 | File "/usr/local/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
argilla_1 | raise exc
argilla_1 | File "/usr/local/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
argilla_1 | await self.app(scope, receive, sender)
argilla_1 | File "/usr/local/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
argilla_1 | raise e
argilla_1 | File "/usr/local/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
argilla_1 | await self.app(scope, receive, send)
argilla_1 | File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 706, in __call__
argilla_1 | await route.handle(scope, receive, send)
argilla_1 | File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 276, in handle
argilla_1 | await self.app(scope, receive, send)
argilla_1 | File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 66, in app
argilla_1 | response = await func(request)
argilla_1 | File "/usr/local/lib/python3.9/site-packages/fastapi/routing.py", line 235, in app
argilla_1 | raw_response = await run_endpoint_function(
argilla_1 | File "/usr/local/lib/python3.9/site-packages/fastapi/routing.py", line 161, in run_endpoint_function
argilla_1 | return await dependant.call(**values)
argilla_1 | File "/usr/local/lib/python3.9/site-packages/argilla/server/apis/v0/handlers/token_classification.py", line 125, in bulk_records
argilla_1 | result = await service.add_records(
argilla_1 | File "/usr/local/lib/python3.9/site-packages/argilla/server/services/tasks/token_classification/service.py", line 64, in add_records
argilla_1 | failed = await self.__storage__.store_records(
argilla_1 | File "/usr/local/lib/python3.9/site-packages/argilla/server/services/storage/service.py", line 66, in store_records
argilla_1 | record.metrics = metrics.record_metrics(record)
argilla_1 | File "/usr/local/lib/python3.9/site-packages/argilla/server/services/tasks/token_classification/metrics.py", line 296, in record_metrics
argilla_1 | annotated_tags = cls._compute_iob_tags(span_utils, record.annotation) or []
argilla_1 | File "/usr/local/lib/python3.9/site-packages/argilla/server/services/tasks/token_classification/metrics.py", line 340, in _compute_iob_tags
argilla_1 | return span_utils.to_tags(spans)
argilla_1 | File "/usr/local/lib/python3.9/site-packages/argilla/utils/span_utils.py", line 163, in to_tags
argilla_1 | raise ValueError("IOB tags cannot handle overlapping spans!")
argilla_1 | ValueError: IOB tags cannot handle overlapping spans!
Steps to reproduce
I'm using char tokens:

text = '💚abcd💚efgjklm.04 / abcde fgjklm (aaaaa)'
tokens = ['💚', 'a', 'b', 'c', 'd', '💚', 'e', 'f', 'g', 'j', 'k', 'l', 'm', '.', '0', '4', '/', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'j', 'k', 'l', 'm', '(', 'a', 'a', 'a', 'a', 'a', ')']
entities = [('A', 6, 13), ('B', 25, 31), ('C', 33, 38)]
record = rg.TokenClassificationRecord(
    text=text,
    tokens=tokens,
    prediction=entities,
    prediction_agent="annotator",
    # annotation=entities,
    # annotation_agent="old",
    metadata={},
    status='Default',
    id=0,
)
⚠️ This also causes the prediction alignment to be off by one char on the UI.
Environment (please complete the following information):
- OS: Linux
- Browser: Chrome
- Argilla Version: 1.2.1
- ElasticSearch Version: elasticsearch:8.5.3
- Docker Image (optional): argilla-server:v1.2.1
- Python: 3.9.12
Additional context
If you are validating multiple records at once and one of them fails, the others fail too. And if the annotator doesn't notice the toast error message and moves on to the next page, their annotations on the previous page are lost. Maybe we can show an alert() popup when there are unsaved annotations and the user tries to navigate away?
I've found a related issue.

import argilla as rg

records = [
    rg.TokenClassificationRecord(
        text="I ❤️ you", tokens=["I", "❤️", "you"], prediction=[("I", 0, 1), ("emoji", 2, 4), ("you", 5, 8)]
    ),
    rg.TokenClassificationRecord(
        text="I 💚 you", tokens=["I", "💚", "you"], prediction=[("I", 0, 1), ("emoji", 2, 3), ("you", 4, 7)]
    ),
    rg.TokenClassificationRecord(
        text="I h you", tokens=["I", "h", "you"], prediction=[("I", 0, 1), ("emoji", 2, 3), ("you", 4, 7)]
    ),
]
rg.delete("issue_2353_emoji")
rg.log(records, "issue_2353_emoji")
produces:
Note also the following awkward Python behaviour:

>>> len("h")
1
>>> len("💚")
1
>>> len("❤️")
2
Emoji, halfwidth chars, and double-width chars are never easy to work with 🥲
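To make the mismatch concrete, here is a small sketch computing what JS's "str".length would return, from Python (utf16_len is a hypothetical helper, not part of any library):

```python
def utf16_len(s: str) -> int:
    # JS's "str".length counts UTF-16 code units: code points above
    # U+FFFF are encoded as a surrogate pair, i.e. two units each.
    return len(s.encode("utf-16-le")) // 2

print(len("h"), utf16_len("h"))                        # 1 in both
print(len("\U0001F49A"), utf16_len("\U0001F49A"))      # 💚: 1 in Python, 2 in JS
print(len("\u2764\uFE0F"), utf16_len("\u2764\uFE0F"))  # ❤️: 2 in both
```

This is exactly why the green heart misbehaves while the red heart happens to line up.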
I've expanded on @cceyda's useful example with the red heart, to show that the issue only exists with the colored one:
import argilla as rg

rg.delete("issue_2353_emoji")

tokens = ["💚", "a", "b", "c", "d", "💚", "e", "f", "g", "j", "k", "l"]
text = "💚abcd💚efgjkl"
entities = [("A", 6, 9)]
assert text[6:9] == "efg"
record = rg.TokenClassificationRecord(
    text=text,
    tokens=tokens,
    prediction=entities,
)
rg.log(record, "issue_2353_emoji")

tokens = ["❤️", "a", "b", "c", "d", "❤️", "e", "f", "g", "j", "k", "l"]
text = "❤️abcd❤️efgjkl"
entities = [("A", 8, 11)]
assert text[8:11] == "efg"
record = rg.TokenClassificationRecord(
    text=text,
    tokens=tokens,
    prediction=entities,
)
rg.log(record, "issue_2353_emoji")
produces:
And this seems to be the issue: in Python, the green heart only has length 1, while on the JS side it has length 2. As a result, prediction start and end spans that are correct in Python simply slice differently in JS.
Note for example that in my last comment I had to use different prediction spans to select only "efg".
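One possible direction (just a sketch on my end, not the project's actual fix): convert Python code point offsets into UTF-16 offsets before the frontend slices. py_to_utf16_offset below is a hypothetical helper:

```python
def py_to_utf16_offset(text: str, index: int) -> int:
    # Add one extra UTF-16 unit for every astral (> U+FFFF) character
    # that occurs before the Python (code point) index.
    return index + sum(1 for ch in text[:index] if ord(ch) > 0xFFFF)

text = "\U0001F49Aabcd\U0001F49Aefgjkl"  # 💚abcd💚efgjkl
assert text[6:9] == "efg"                # the correct Python span
print(py_to_utf16_offset(text, 6), py_to_utf16_offset(text, 9))  # 8 11
```

The two green hearts before "efg" each cost one extra UTF-16 unit, which matches the off-by-two I observed in the second sentence.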
I saw there was a related PR a year ago: https://github.com/argilla-io/argilla/commit/8b570fbb514e2d823a9b8efb839f80d69d93ecac
Maybe @leiyre can give more context.
https://hsivonen.fi/string-length/ discusses tricky strings like "🤦🏼‍♂️", "🤦🏼", "❣️", "✨". I think converting between JS and Python lengths can solve this. There are too many emojis and strange-width chars when working with multiple languages, so this would be critical.
Hi, in https://github.com/argilla-io/argilla/commit/8b570fbb514e2d823a9b8efb839f80d69d93ecac we included stringz in the frontend to avoid the problem with Unicode and JavaScript. @tomaarsen, could I see your example in dev or pre to explore the behavior a bit more?
@leiyre I've pushed my latest example to dev under issue_2353_emoji. I think you should be able to access it there now.
Here is another thing:
https://user-images.githubusercontent.com/15624271/219418599-f0879a98-fa39-4fd7-8499-f9ddd58d54c2.mov
I mean, I understand why it happens, but not how to fix it 🤣. Just playing around in a notebook you can see why:

emojis = ["💚", "❣️", "✨", "🤦🏼‍♂️", "🤦🏼"]
for e in emojis:
    print(e, len(e), list(e))

I don't really care about the visual artifact on the UI as long as the annotation boundaries are passed correctly when used in Python, i.e. text[annotation.start:annotation.end].
label_map = {
    "A": "location", "B": "organization", "C": "building", "D": "vegetable", "E": "fruit", "F": "street", "G": "apt", "J": "no",
    "K": "food", "L": "drink", "M": "alcohol", "N": "name", "O": "phone", "P": "etc", "R": "blah", "S": "something", "T": "somethingelse",
}
labels = list(label_map.keys())
texts = [
    'ABCDEFG 🤦🏼‍♂️ jklmnop',
    'ABCDEFG 💚 jklmnop',
    'ABCDEFG 💚jklmnop',
    'ABCDEFG (❣️) jklmnop',
]
records = []
for i, text in enumerate(texts):
    record = rg.TokenClassificationRecord(
        text=text,
        tokens=list(text.replace(" ", "")),
        prediction=[(labels[i], i, i + 1)],
        prediction_agent="model",
        # annotation=entities,
        # annotation_agent="old",
        metadata={},
        status='Default',
        id=i,
    )
    records.append(record)
rg.log(records=records, name="test-emoji")
Also, a question: would tokens=list(text.replace(" ", "")) be the way to pass chars to do char-level token annotation? (Maybe I'm doing it wrong; no spaces, right?)
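For what it's worth, a small sketch of why dropping the spaces shifts the char offsets (this is just my reading, not an official recommendation):

```python
text = "ab cd"
# Removing spaces shifts every later token one position to the left,
# so token indices no longer line up with character offsets in `text`.
no_spaces = list(text.replace(" ", ""))  # ['a', 'b', 'c', 'd']
with_spaces = list(text)                 # ['a', 'b', ' ', 'c', 'd']
assert "".join(with_spaces) == text      # char offsets stay aligned
assert "".join(no_spaces) != text        # offsets drift after each space
```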
Hello!
When looking at the problem, @leiyre and I saw that some updates need to be done in the frontend. But, as pointed out by @tomaarsen, the green heart and the red heart don't have the same length in Python, which I guess is why we get an offset of 2 in the prediction of the second sentence in the image. So a correction in the backend is also needed to remove this offset.
Yes, JavaScript uses UTF-16 code units to calculate string lengths, while Python counts code points. The key concepts to understand are Unicode code points, graphemes, and UTF-16 encoding.
I meant to write a more detailed explanation... but I hope the code below does a better job than words at showing what the problem is. Hopefully this clarifies how chars are handled in Python when doing list("blahblah") or mystring[start_ind:end_ind], and the differences with JS:
import grapheme  # third-party: pip install grapheme

# Example problematic strings I gathered from around the internet. A grapheme
# just needs to be composed of more than one Unicode code point to cause problems.
string = "👩‍❤️‍💋‍👩한ABதமிழ்💚🤦🏼‍♂️"
chars = list(grapheme.graphemes(string))
for char in chars:
    print(f"original: {char}")
    print(f"codepoints: {list(char)}")
    for l in list(char):
        print(l, ord(l))
        for i, b in enumerate(bytes(l, encoding='utf-8')):
            print(i, b)
        print("-----")
    print("#############")
Also, I learned that in JS, if you use the spread operator, you can get the number of code points accurately (same as Python):

[..."🤦🏼‍♂️"].length
I think you mean the spread operator. I found this article in connection with your comment, but we need to be careful with this approach, since for example:

[..."👩‍💻"].length === 3
But that is how Python counts too! It counts code points. How humans perceive a single letter (A, B, C, etc.; think of this as the grapheme), how a single grapheme is represented (by one or more code points), and how those code points are encoded (UTF-16, UTF-8) are all different things. Also, the problem is not just emojis: it is everything that is represented by more than one code point (basically some non-English letters, accented chars, and emojis). But emojis are an easy way to debug this.

Also helpful: https://stackoverflow.com/a/51422499/3726119
On the UI side we want graphemes (using an Intl.Segmenter() polyfill can work for this); on the backend we want to count code points, like Python does. JS natively (just "str".length) counts UTF-16 code units.
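If the frontend ends up sending UTF-16 offsets to the backend, they could be normalized to code point offsets along these lines (a sketch; utf16_to_py_offset is a hypothetical helper, not an existing Argilla API):

```python
def utf16_to_py_offset(text: str, utf16_index: int) -> int:
    # Walk the string, accumulating UTF-16 units, and stop once we
    # reach the JS-side index; astral chars consume two units each.
    units = 0
    for i, ch in enumerate(text):
        if units >= utf16_index:
            return i
        units += 2 if ord(ch) > 0xFFFF else 1
    return len(text)

text = "\U0001F49Aabcd\U0001F49Aefgjkl"      # 💚abcd💚efgjkl
start, end = utf16_to_py_offset(text, 8), utf16_to_py_offset(text, 11)
assert text[start:end] == "efg"              # JS span 8..11 maps to Python 6..9
```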
@cceyda If that is indeed consistently equivalent to the Python lengths, then perhaps we can use that to place the spans correctly by only changing the frontend? Or would we strictly need to use e.g. the grapheme library to extract the lengths on the Python side rather than using len(...)?
I would strongly suggest not straying from the norm of using len() and list() on the Python side (i.e. counting code points), because that is basically how most tokenization libraries work (transformers, spaCy, etc.). Also, the grapheme library can become outdated if a new Unicode version gets released. I saw a very old PR for a native grapheme splitter in CPython, but who knows when that will be adopted.
We should use graphemes only on the UI side, because that is what makes sense to human eyes.
This issue is stale because it has been open for 90 days with no activity.
bump as still important
This issue is stale because it has been open for 90 days with no activity.
bump