
Designer - Articulate Storyboards for phase 1

JSegrave-IBM opened this issue • 2 comments

Articulate storyboards early on so that firefighters, designers, and software & hardware people can review them before development to identify conflicts, issues, and refinements (and so everyone has a shared sense of scope).

e.g. "Firefighter with a watch and smartphone receives a ‘status red’ alert" - the storyboards can state things like:

  • What the firefighter will see / hear / feel when alerted under field conditions (e.g. can take into account that ~8% of men are red-green color-blind).
  • Where the visible things are and what they're doing - visibility of the sensor LED / Smart Watch / Phone under field conditions.
  • What happens when 1 device (sensor) is in play - how the alert goes off, how a firefighter responds to or cancels it.
  • What happens when combinations of devices are in play (sensor / phone / watch). How a firefighter responds to or cancels these. Which device is primarily responsible for alerts? (would multiple devices alerting be OK, or confusing?) etc...

Likewise "Command Center leader receives a ‘status red’ alert for a Firefighter" - the storyboards can state things like:

  • Does the response depend on knowing what the conditions are? e.g. will the next actions be different if:
      - a toxic level of NO2 was seen for a few seconds, but stopped
      - the firefighter is experiencing a combination of {high temp + CO} right now
      - the firefighter has been exposed to too much NO2 cumulatively over the last 15 minutes
      - etc...

Additional value from storyboards:

  • Put them on github for onboarding new O/S contributors - help them get quickly up to speed and feel engaged
  • Initial outline for Communications to spec out a promo / video / demo

JSegrave-IBM avatar Jun 24 '20 16:06 JSegrave-IBM

Note: in order to work out the tech (algos / ML) behind the alerts, we're going to need to do some of this storyboarding.

e.g. (just one illustrative example) In the command center, an alert goes red for Alfonso. What happens next?

  • Does Alfonso's response depend on knowing what the underlying condition or conditions were? e.g. will he react differently if:
      - a sensor just saw a toxic level of NO2 for a few seconds and then stopped
      - a firefighter is experiencing a combination of {high temp + CO}
      - a firefighter has been exposed to too much NO2 cumulatively over the last 15 minutes
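
To make the storyboard question concrete, here is a minimal Python sketch of how those three alert causes could be told apart from a 15-minute window of readings. All thresholds, field names, and data shapes below are purely hypothetical placeholders for illustration, not Pyrrha's actual limits or APIs:

```python
# Hypothetical thresholds, for illustration only -- real limits would come
# from occupational exposure standards, not from this sketch.
NO2_SPIKE_PPM = 5.0            # instantaneous "toxic level" of NO2
NO2_CUMULATIVE_PPM_MIN = 20.0  # allowed NO2 dose summed over the window
WINDOW_MINUTES = 15
TEMP_LIMIT_C = 60.0
CO_LIMIT_PPM = 35.0

def classify_alert(readings):
    """Return human-readable alert causes for a window of per-minute
    readings, each a dict with 'no2', 'co', and 'temp' keys."""
    window = readings[-WINDOW_MINUTES:]
    latest = window[-1]
    causes = []
    # 1. Transient spike: toxic NO2 seen earlier in the window, but not now.
    if any(r["no2"] >= NO2_SPIKE_PPM for r in window[:-1]) and latest["no2"] < NO2_SPIKE_PPM:
        causes.append("toxic NO2 spike, now stopped")
    # 2. Combination condition happening right now.
    if latest["temp"] >= TEMP_LIMIT_C and latest["co"] >= CO_LIMIT_PPM:
        causes.append("high temp + CO right now")
    # 3. Cumulative dose: total NO2 over the rolling window.
    if sum(r["no2"] for r in window) >= NO2_CUMULATIVE_PPM_MIN:
        causes.append("cumulative NO2 over last 15 min")
    return causes
```

The point of the sketch: each cause implies a different follow-up action, so the alert (and any ML behind it) needs to surface the cause, not just "red".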

Labelling data is one of the big costs in machine learning, and the 'type' of explainability required determines how we do the labels (as well as how feedback is gathered at runtime). e.g. red / yellow / green is simple, but is it sufficiently explained to enable correct follow-up actions? When context/explanation is essential, we often choose to use machine learning to learn the explanations (like 'critical exposure to NO2 over 15 mins') rather than the decisions ('red: get the firefighter out'). We need to know this up-front, as labelling 100s/1000s of examples is expensive and usually not cost-effective to repeat / fix.
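
As a sketch of the "label explanations, not decisions" idea (all label names below are hypothetical): if annotators tag the cause and a fixed table maps causes to the traffic-light decision, a later change of decision policy only edits the table and never forces re-labelling the data:

```python
# Hypothetical explanation labels. Annotators label the *cause*; the
# red / yellow / green decision is derived by a fixed mapping, so a
# policy change is a one-line edit rather than a re-labelling effort.
EXPLANATION_TO_DECISION = {
    "no_hazard": "green",
    "no2_spike_stopped": "yellow",
    "cumulative_no2_15min": "red",
    "high_temp_plus_co": "red",
}

def decision_for(explanation_label: str) -> str:
    """Look up the traffic-light decision implied by an explanation label."""
    return EXPLANATION_TO_DECISION[explanation_label]
```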

JSegrave-IBM avatar Jun 24 '20 17:06 JSegrave-IBM

This should be updated for our October 1 MVP.

krook avatar Sep 08 '20 20:09 krook