label-studio-transformers
Not showing predictions after training
Describe the bug There are several problems:
- After training the ML backend model, I cannot find the model's predictions in the UI when labelling.
- Training often cannot complete all 100 epochs; the system crashes midway (around epoch 30-70), even though the dataset is small (50 samples) and a GPU is available.
- The error shows: "get latest job results from work dir doesn't exist".
- Sometimes, when the previous error does not occur, a different issue appears: "unable to load weights from pytorch checkpoint file".
To reproduce Steps to reproduce the behaviour:
- Import pre-annotated data.
- Manually label some of the tasks.
- Go to the ML section in Settings, connect the model (a BERT classifier), and start training.
- After training finishes, go back to the labelling UI. In the predictions tab, only the pre-annotated predictions are shown.
Expected behaviour ML training should complete and the new predictions should be shown in the UI.
Facing a similar issue. Note: I am a beginner with Django and Label Studio, so please forgive my naivety.
- Using BERT NER with the sample data from here.
- At the end of training, it logs "POST /train HTTP/1.1" 201, which, as I understand it, confirms successful training.
- On the UI, in the filters, I select prediction results and prediction score. [See screenshot]
Questions:
- Is this the correct way of viewing the predictions?
- If so, how do I get prediction scores?
Check your code: are the variables pred_labels and pred_scores actually set?
@kbv72 sorry for the late answer :slightly_frowning_face:
Is this the correct way of viewing the predictions?
Yes, it looks OK.
If so, how do I get prediction scores?
You have to put a float "score" field into the root of the prediction dict.
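For instance, a minimal sketch of what that could look like (this is not the exact label-studio-ml API; the field values and the "Positive" label are placeholders for illustration):

```python
# Sketch: a prediction dict whose root carries a float "score",
# in addition to the per-result scores.
result = [
    {
        "id": "res-1",                 # placeholder result id
        "from_name": "label",
        "to_name": "text",
        "type": "choices",
        "score": 0.92,                 # per-result score (shown in labeling view)
        "value": {"choices": ["Positive"]},
    }
]

prediction = {
    "result": result,
    # Root-level score, e.g. the mean of the per-result scores;
    # this is the value the task list's Prediction score column uses.
    "score": sum(r["score"] for r in result) / len(result),
}
```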
Hello there, I am facing the same issue with the prediction scores.
As you can see, I have converted my scores to float, but they are still not showing up in my frontend.
@aczy99 Try setting the model version under Project Settings => Machine Learning. It's also better to include a "model_version" field in the prediction root.
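A hedged sketch of that suggestion, keeping the version string consistent between the backend and every prediction it emits (the helper name and the "bert-v1" string are assumptions, not from the thread):

```python
MODEL_VERSION = "bert-v1"  # placeholder; should match the version selected
                           # under Project Settings => Machine Learning

def make_prediction(result, score):
    """Wrap a result list into a prediction dict carrying the model version."""
    return {
        "model_version": MODEL_VERSION,
        "score": float(score),
        "result": result,
    }

pred = make_prediction([], 0.75)
```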
Facing the same issue: scores are provided but not shown in the tasks list. Selecting the model version doesn't seem to do anything; it displays "Saved!" but is removed upon refreshing the page. It is also confusing where exactly the model version is retrieved from, as it is not the same as the one I provide together with the predictions.
is not the same as I provide together with the predictions.
what do you mean by this?
Please show your tasks with predictions.
what do you mean by this?
Please show your tasks with predictions.
I am using the format provided as an example in the docs: https://labelstud.io/guide/predictions.html#Example-JSON
Scores per result (individual label) are shown in the labelling view; however, the overall score of a task (predictions.score) is not displayed in the Prediction score column of the project's Tasks view (at least that is where I would expect it to appear).
As for the model version, I would also expect the string set in predictions.model_version to be listed under Project Settings => Machine Learning; instead there are some numeric combinations and 'INITIAL'. A new version (a numeric combination) appears any time I rebuild the backend Docker container.
Most likely you have a mistake in the prediction/task format.
Most likely you have a mistake in the prediction/task format.
So I pulled out the list of predictions made on my tasks, and this is one example stored in the Label Studio database:
{
"id": 2053,
"model_version": "INITIAL",
"created_ago": "1 day, 19 hours",
"result": [
{
"from_name": "label",
"image_rotation": 0,
"original_height": 720,
"original_width": 1280,
"score": 0.7231900691986084,
"to_name": "image",
"type": "rectanglelabels",
"value": {
"height": 9.309395684136284,
"rectanglelabels": [
"Visual defects"
],
"rotation": 0,
"width": 8.777930736541748,
"x": 35.5985426902771,
"y": 25.251723395453556
}
},
{
"from_name": "label",
"image_rotation": 0,
"original_height": 720,
"original_width": 1280,
"score": 0.6727777123451233,
"to_name": "image",
"type": "rectanglelabels",
"value": {
"height": 9.74336412217882,
"rectanglelabels": [
"Visual defects"
],
"rotation": 0,
"width": 5.079851150512695,
"x": 24.418318271636963,
"y": 33.53565639919705
}
},
...
],
"score": 0.5981751024723053,
"cluster": null,
"neighbors": null,
"mislabeling": 0,
"created_at": "2023-01-03T18:40:23.579377Z",
"updated_at": "2023-01-03T18:40:23.579422Z",
"task": 600
}
As we can see, the score is in fact stored in the database, so it is simply not displayed. This leads me to believe the issue is not caused by a formatting mistake in the backend output. Oddly, though, the model version string, which is defined right after the score in my backend output, isn't stored in the database.
@35grain I see you missed "id" in results:
"result": [
{
"id": "random123", <====
"from_name": "label",
"image_rotation": 0,
"original_height": 720,
"original_width": 1280,
"score": 0.7231900691986084,
"to_name": "image",
"type": "rectanglelabels",
"value": {
"height": 9.309395684136284,
"rectanglelabels": [
"Visual defects"
],
"rotation": 0,
"width": 8.777930736541748,
"x": 35.5985426902771,
"y": 25.251723395453556
}
},
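If the result items are built in code, a unique id per item can be generated, e.g. with Python's uuid module (a sketch only; the bounding-box values and label are placeholders):

```python
import uuid

def make_result_item(label, x, y, width, height, score):
    """Build one rectanglelabels result item with a unique string id.
    Coordinates are percentages of the image size."""
    return {
        "id": uuid.uuid4().hex[:10],  # unique id per result item
        "from_name": "label",
        "to_name": "image",
        "type": "rectanglelabels",
        "image_rotation": 0,
        "score": score,
        "value": {
            "x": x, "y": y,
            "width": width, "height": height,
            "rotation": 0,
            "rectanglelabels": [label],
        },
    }

item = make_result_item("Visual defects", 35.6, 25.3, 8.8, 9.3, 0.72)
```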
Fixed it for me using this code:
prediction.append({
'result': [{
"value": {
"start": 1593866042000,
"end": 1593966345000,
"instant": False,
"timeserieslabels": [
"test"
]
},
"id": "jT4DkFmczt",
"from_name": "label",
"to_name": "ts",
"type": "timeserieslabels",
}],
})
But the score is not working yet, or I haven't found out where to put it.
right
@35grain I see you missed "id" in results:
"result": [ { "id": "random123", <==== ... } ]
Finally had time to play around with this and unfortunately adding an ID did not solve it. Both the score and the model version are still missing or invalid.
{
"id":3353,
"model_version":"INITIAL",
"created_ago":"0 minutes",
"result":[
{
"from_name":"label",
"id":"0",
"image_rotation":0,
"original_height":720,
"original_width":1280,
"score":0.9651663303375244,
"to_name":"image",
"type":"rectanglelabels",
"value":{
"height":12.63888888888889,
"rectanglelabels":[
"Visual defects"
],
"rotation":0,
"width":12.65625,
"x":30.390625,
"y":24.444444444444443
}
},
{
"from_name":"label",
"id":"1",
"image_rotation":0,
"original_height":720,
"original_width":1280,
"score":0.9236611127853394,
"to_name":"image",
"type":"rectanglelabels",
"value":{
"height":10.555555555555555,
"rectanglelabels":[
"Visual defects"
],
"rotation":0,
"width":7.8125,
"x":36.796875,
"y":74.86111111111111
}
},
...
{
"from_name":"label",
"id":"23",
"image_rotation":0,
"original_height":720,
"original_width":1280,
"score":0.3958974778652191,
"to_name":"image",
"type":"rectanglelabels",
"value":{
"height":7.638888888888889,
"rectanglelabels":[
"Visual defects"
],
"rotation":0,
"width":4.84375,
"x":52.5,
"y":83.19444444444444
}
}
],
"score":0.8146782057980696,
"cluster":null,
"neighbors":null,
"mislabeling":0.0,
"created_at":"2023-01-24T15:30:13.252432Z",
"updated_at":"2023-01-24T15:30:13.252489Z",
"task":77
}
@35grain did you ever find a solution? I'm having the same issue of no prediction score despite the prediction result structure looking correct.
@35grain did you ever find a solution? I'm having the same issue of no prediction score despite the prediction result structure looking correct.
Unfortunately not. I didn't have time to dig any deeper either :/
I've found that if you attach the score both inside each result item and at the root of the prediction, alongside the result, the score will appear in the interface. I think there are some recently introduced bugs that we're working on sorting out, and hopefully they will be cleared up soon.
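Based on that workaround, a sketch that writes the score in both places (purely illustrative; the helper name and values are assumptions):

```python
def attach_scores(result_items, overall_score):
    """Copy the score onto each result item and onto the prediction root,
    per the workaround described above."""
    for item in result_items:
        item.setdefault("score", overall_score)
    return {
        "result": result_items,
        "score": float(overall_score),  # root-level score (task list column)
    }

pred = attach_scores(
    [{"from_name": "label", "to_name": "image",
      "type": "rectanglelabels", "value": {}}],
    0.81,
)
```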