
Create a model to detect hand-drawn "SOS"

Open krook opened this issue 4 years ago • 9 comments

Is your feature request related to a problem? Please describe.
We assume that the person in need will have a kit with the printed symbols available. We should improve the system to demonstrate how a person could hand-recreate the symbols and, in turn, make the recognition more sensitive to those symbols.

krook avatar Oct 07 '19 03:10 krook

Very timely: https://www.cnn.com/2019/10/17/world/missing-australian-woman-sos-rescued-trnd/

krook avatar Oct 18 '19 12:10 krook

I can work on this

anushkrishnav avatar Apr 23 '21 22:04 anushkrishnav

Thanks @anushkrishnav

krook avatar Apr 26 '21 13:04 krook

Is this issue resolved? I can see it is still open and unassigned. My proposal for resolving it: since we already have models for the printed symbols from the kit, and a hand-drawn SOS is not one of them, it could be a good idea to use a pretrained model for handwritten letter recognition and integrate it with our present models.
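For illustration, here is a rough sketch of how such a pretrained model could be queried if it is served the way Model Asset Exchange models usually are (a local Docker container exposing `POST /model/predict` with an `image` file field). The URL, field name, and response shape are assumptions to verify against the chosen model's docs:

```python
# Rough sketch: query a locally running OCR/handwriting-recognition container
# (e.g. a Model Asset Exchange image started with `docker run -p 5000:5000 ...`)
# and pull out the recognized text. Endpoint and response fields follow the
# usual MAX convention but should be checked against the model's Swagger docs.
import requests

MAX_OCR_URL = "http://localhost:5000/model/predict"  # assumed local deployment

def recognize_text(image_path: str) -> list[str]:
    with open(image_path, "rb") as f:
        resp = requests.post(MAX_OCR_URL, files={"image": f})
    resp.raise_for_status()
    payload = resp.json()
    # Assumed response shape: words nested per detected text line.
    return [" ".join(line) for line in payload.get("text", [])]

if __name__ == "__main__":
    print(recognize_text("aerial_frame.jpg"))  # e.g. ['SOS']
```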

sarrah-basta avatar Dec 19 '21 06:12 sarrah-basta

Hi @sarrah-basta. This hasn't been worked on yet, so please feel free to take a shot. Thank you! And I agree, if we can reuse a model, that would be ideal. Maybe the Model Asset Exchange has something to build upon.

krook avatar Dec 21 '21 14:12 krook

Hi @krook, hope you are doing well! Since the issue is still open and unassigned, I can work on this. I would propose recognizing the letters written on the ground rather than only "SOS", since someone in need might write other information as well, for example "INJURED". Based on the recognized text, we can then classify the message. For this we can use a pre-trained model, and as you mentioned, the Model Asset Exchange seems a good choice. We could also use MediaPipe to detect whether a person is present in the frame, and in what state.
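A minimal sketch of the MediaPipe idea, assuming the Python solutions API and a single aerial frame (the file name is illustrative). This only checks whether a person is visible, not their state:

```python
# Minimal sketch (assumed approach, not part of the current DroneAid codebase):
# run MediaPipe Pose on one frame and report whether a person was detected.
import cv2
import mediapipe as mp

def person_detected(image_path: str) -> bool:
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)
    rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    # static_image_mode=True runs detection on every call instead of tracking.
    with mp.solutions.pose.Pose(static_image_mode=True) as pose:
        results = pose.process(rgb)
    return results.pose_landmarks is not None

if __name__ == "__main__":
    print(person_detected("aerial_frame.jpg"))
```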

bhavyagoel avatar Jan 04 '22 19:01 bhavyagoel

@krook @bhavyagoel Yes, as both of you have mentioned, we can use the Optical Character Recognition model from the Model Asset Exchange to detect the text. The detected text can then be classified either manually (e.g. keyword matching) or with a classification model such as a Naive Bayes classifier. Adding the Face Detection model from MediaPipe, as mentioned, would also be a huge plus.
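For the "classify manually" option, a rough sketch could look like the snippet below, using fuzzy keyword matching so noisy OCR readings still resolve. The keyword-to-label mapping here is illustrative, not the official DroneAid symbol list:

```python
# Illustrative sketch: map OCR output to a small set of message categories with
# fuzzy matching so noisy readings such as "S0S" or "INJURD" still resolve.
import difflib

KEYWORD_TO_LABEL = {
    "SOS": "sos",
    "HELP": "sos",
    "INJURED": "medical-help-needed",
    "OK": "no-help-needed",
}

def classify_text(ocr_lines: list[str]) -> str:
    tokens = [tok.upper() for line in ocr_lines for tok in line.split()]
    for token in tokens:
        match = difflib.get_close_matches(token, list(KEYWORD_TO_LABEL), n=1, cutoff=0.6)
        if match:
            return KEYWORD_TO_LABEL[match[0]]
    return "unknown"

print(classify_text(["S0S", "INJURD"]))  # -> "sos"
```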

sarrah-basta avatar Jan 05 '22 10:01 sarrah-basta

Thanks for your interest, everyone. Since @sarrah-basta replied first to the latest request, why don't you take the first pass at it? If you need any feedback or review of the proposed approach, you can tag @bhavyagoel and me. Sound like a plan?

krook avatar Jan 06 '22 14:01 krook

Yep, sure. I'll start by finalising a choice among the models we proposed and look at our codebase to understand how it can be integrated. Thanks!


sarrah-basta avatar Jan 06 '22 15:01 sarrah-basta