
Revise Automation and HumanInvolvement concepts

Open coolharsh55 opened this issue 2 years ago • 14 comments

As per the paper "Sources of Risk of AI Systems" (https://doi.org/10.3390/ijerph19063641) by Steimers and Schneider, ISO/IEC 22989 defines 7 degrees or levels of automation. DPV should reflect these concepts in its Automation and Human Involvement taxonomies.

Instead of focusing solely on processing, these concepts should be moved to TECH to reflect automation of all technologies. All concepts are part of the taxonomy under the AutomationOfTechnology concept, and are associated using hasAutomation.

Automation concepts:

  1. Autonomous (Human out of the loop): The system is capable of modifying its operation domain or its goals without external intervention, control or oversight
  2. FullAutomation (Human in/out the loop): The system is capable of performing its entire mission without external intervention
  3. HighAutomation (Human in the loop): The system performs parts of its mission without external intervention
  4. ConditionalAutomation (Human in the loop): Sustained and specific performance by a system, with an external agent ready to take over when necessary
  5. PartialAutomation (Human in the loop): Some sub-functions of the system are fully automated while the system remains under the control of an external agent
  6. AssistiveAutomation (Human in the loop): The system assists an operator
  7. NonAutomated (Human in the loop): The operator fully controls the system
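
The proposed taxonomy could be sketched in Turtle as follows. This is a hypothetical illustration: the dpv: IRIs, the use of rdfs:subClassOf (DPV also publishes SKOS-based serialisations), and the ex:SmartVehicle resource are assumptions, not the published vocabulary.

```turtle
@prefix dpv:  <https://w3id.org/dpv#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <https://example.org/> .

# The seven levels as narrower concepts of AutomationOfTechnology
dpv:Autonomous            rdfs:subClassOf dpv:AutomationOfTechnology .
dpv:FullAutomation        rdfs:subClassOf dpv:AutomationOfTechnology .
dpv:HighAutomation        rdfs:subClassOf dpv:AutomationOfTechnology .
dpv:ConditionalAutomation rdfs:subClassOf dpv:AutomationOfTechnology .
dpv:PartialAutomation     rdfs:subClassOf dpv:AutomationOfTechnology .
dpv:AssistiveAutomation   rdfs:subClassOf dpv:AutomationOfTechnology .
dpv:NonAutomated          rdfs:subClassOf dpv:AutomationOfTechnology .

# Associating a technology with its automation via hasAutomation
ex:SmartVehicle dpv:hasAutomation dpv:ConditionalAutomation .
```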

HumanInvolvement concepts:

HumanInvolvement is a concept within the AutomationOfTechnology taxonomy, and is associated using hasHumanInvolvement. Where relevant, the Automation concepts will be subtypes of HumanInvolvement concepts, e.g. ConditionalAutomation will be a subtype of HumanInLoop as it always has a human in the loop.

  1. HumanInLoop - humans are involved
     a. HumanControlled - human has full control over the processing or system. This maps to None, Assistive, and Partial automation models.
     b. HumanIntervention - human has the ability to intervene in the processing or system operation. This maps to Conditional and High automation models.
     c. HumanOversight - human has the ability to oversee the processing or system operation. This does not by itself mean that the human has the ability to intervene.
     d. HumanInput - human has the ability to decide or provide inputs to the operations. This can be at any stage.
     e. HumanDecision - human has the ability to make decisions in the operation of the system. This can be at any stage. Note that decisions are about controlling the operation, and are distinct from input (data or parameters).
     f. HumanVerification - human has the ability to verify the decisions or outputs of a system, typically at the end. Verification means asserting that the decision or output is correct or acceptable.
  2. HumanOutOfLoop - humans are not involved
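
A minimal Turtle sketch of the subtyping described above. This is hypothetical: the exact IRIs and the use of rdfs:subClassOf are assumptions for illustration.

```turtle
@prefix dpv:  <https://w3id.org/dpv#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

dpv:HumanInLoop    rdfs:subClassOf dpv:HumanInvolvement .
dpv:HumanOutOfLoop rdfs:subClassOf dpv:HumanInvolvement .

# An automation level that always has a human in the loop is
# also modelled as a subtype of HumanInLoop
dpv:ConditionalAutomation rdfs:subClassOf dpv:HumanInLoop .
```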

coolharsh55 avatar Aug 20 '23 09:08 coolharsh55

Delaram has pointed out that Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self-assessment lists different definitions for human involvement as:

  1. Human-in-the-loop refers to the capability for human intervention in every decision cycle of the system.
  2. Human-on-the-loop refers to the capability for human intervention during the design cycle of the system and monitoring the system’s operation.
  3. Human-in-command refers to the capability to oversee the overall activity of the AI system (including its broader economic, societal, legal and ethical impact) and the ability to decide when and how to use the AI system in any particular situation. The latter can include the decision not to use an AI system in a particular situation to establish levels of human discretion during the use of the system, or to ensure the ability to override a decision made by an AI system.

These are not compatible with other uses of "human in the loop", which can refer to humans only providing interactions rather than having the capacity for "interventions" during "decisions". Personally, I prefer having explicit concepts detailing what the role of human involvement is, within a broader label of Human in/on loop, to encourage picking the most appropriate concept.

coolharsh55 avatar Aug 21 '23 08:08 coolharsh55

Discussed in meeting 11 OCT that this will be discussed in the next meeting on 18 OCT.

coolharsh55 avatar Oct 12 '23 21:10 coolharsh55

  • concept HumanInvolvement
  • relation hasHumanInvolvement
  • Human in the Loop - humans are involved in the process/operation
    • HumanControlled - humans can have control over functioning
    • HumanIntervention - humans can intervene in functioning
    • HumanOversight - humans have oversight of functioning
    • HumanInput - humans can provide input to operations
    • HumanDecision - humans can make decisions in operations
    • HumanVerification - humans can verify functioning or output
    • HumanChallenge - humans can challenge the functioning or output
    • HumanCorrection - humans can correct the functioning or output
    • HumanReversion - humans revert or reverse the output
    • HumanOptIn - humans can opt-in or decide to be subjected to the system
    • HumanOptOut - humans can opt-out or decide to not be subjected to the system
    • HumanObjection - humans can object to the system or being subjected to it
  • Human out of the Loop - humans are not involved

coolharsh55 avatar Apr 16 '24 11:04 coolharsh55

  • concept AutomationLevel - these are best understood with the application of automation in cars
  • relation hasAutomationLevel
  • levels (Level 0 represents No Automation):
    1. Assistive Automation: the automation is limited to parts of the system or a specific part of the system in a manner that does not change the control of the human in using/driving the system
    2. Partial Automation: the automation is present in multiple parts of the system or in a manner that does not require the human to control/use these parts while still retaining control over the system
    3. Conditional Automation: the level of automation is sufficient to perform most tasks of the system with the human present to take over where necessary
    4. High Automation: the system is capable of performing all its tasks within specific controlled conditions without human involvement
    5. Full Automation: the system is capable of performing all its tasks regardless of the conditions without human involvement
    6. Fully Autonomous: the system is capable of modifying its operation domain or its goals without external intervention, control or oversight
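
Using the car framing, the relation could be applied as in this hypothetical Turtle sketch (ex:HighwayPilot, ex:LaneKeepingAssist, and the dpv: IRIs are illustrative assumptions):

```turtle
@prefix dpv: <https://w3id.org/dpv#> .
@prefix ex:  <https://example.org/> .

# A driving feature that handles most tasks but expects the
# driver to take over when necessary (level 3)
ex:HighwayPilot dpv:hasAutomationLevel dpv:ConditionalAutomation .

# A lane-keeping assist that leaves the driver in control (level 1)
ex:LaneKeepingAssist dpv:hasAutomationLevel dpv:AssistiveAutomation .
```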

coolharsh55 avatar Apr 16 '24 12:04 coolharsh55

For HumanInvolvement, and Involvement in general, revised concepts:

Requirement: indicate what control the entity has over the activities/context in terms of being subjected to it, opting in or opting out from it, objecting to it, withdrawing from it, challenging it, correcting it, or reversing its effects. (HumanInvolvement and hasHumanInvolvement are specialised forms of this where the entity is a human; otherwise these controls can be for any entity, e.g. an organisation, or an agent, e.g. AI-operated systems.) See emails by Delaram https://lists.w3.org/Archives/Public/public-dpvcg/2023Oct/0020.html and Dave https://lists.w3.org/Archives/Public/public-dpvcg/2023Oct/0025.html

Base concepts:

  • Involvement: Ability of an entity to control its involvement
  • hasInvolvement with range Involvement

Specialisations as Permissive and NonPermissive involvements.

  • PermissiveInvolvement: Ability where entity can control its involvement
  • NonPermissiveInvolvement: Ability where entity cannot control its involvement

Specialisations of Permissive Involvement

  • OptIn: entity can opt in
  • OptOut: entity can opt out
  • ObjectToActivity: entity can object to activity
  • WithdrawFromActivity: entity can withdraw from activity (withdraw is for previously given assent, opt-out does not require prior assent)
  • ChallengeActivity: entity can challenge an activity (in terms of how it is being conducted or implemented - where challenge refers to raising questions about validity, necessity, correctness, or other similar 'trustworthiness' attributes)
  • ChallengeOutput: entity can challenge the output of an activity (instead of how the activity was conducted, this refers to the output of the activity e.g. where entity is not aware of activity implementation details but can only see the output)
  • CorrectActivity: entity can correct how an activity is conducted or implemented
  • CorrectOutput: entity can correct output of an activity
  • EntityReverseEffects: entity can reverse the effects of an activity, where effects can be outputs or the impact of those outputs

Specialisations of Non-Permissive Involvement as inability to have permissive involvement

  • CannotOptIn: entity cannot opt in
  • CannotOptOut: entity cannot opt out
  • CannotObjectToActivity: entity cannot object to activity
  • CannotWithdrawFromActivity: entity cannot withdraw from activity (withdraw is for previously given assent, opt-out does not require prior assent)
  • CannotChallengeActivity: entity cannot challenge an activity (in terms of how it is being conducted or implemented - where challenge refers to raising questions about validity, necessity, correctness, or other similar 'trustworthiness' attributes)
  • CannotChallengeOutput: entity cannot challenge the output of an activity (instead of how the activity was conducted, this refers to the output of the activity e.g. where entity is not aware of activity implementation details but can only see the output)
  • CannotCorrectActivity: entity cannot correct how an activity is conducted or implemented
  • CannotCorrectOutput: entity cannot correct output of an activity
  • CannotEntityReverseEffects: entity cannot reverse the effects of an activity, where effects can be outputs or the impact of those outputs
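
The base hierarchy and its specialisations could be sketched in Turtle as follows. This is a hypothetical illustration: the IRIs, the use of rdfs:subClassOf/rdfs:subPropertyOf, and the elided pairs are assumptions, not the published vocabulary.

```turtle
@prefix dpv:  <https://w3id.org/dpv#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

dpv:PermissiveInvolvement    rdfs:subClassOf dpv:Involvement .
dpv:NonPermissiveInvolvement rdfs:subClassOf dpv:Involvement .

dpv:OptIn        rdfs:subClassOf dpv:PermissiveInvolvement .
dpv:OptOut       rdfs:subClassOf dpv:PermissiveInvolvement .
dpv:CannotOptIn  rdfs:subClassOf dpv:NonPermissiveInvolvement .
dpv:CannotOptOut rdfs:subClassOf dpv:NonPermissiveInvolvement .
# ... remaining permissive/non-permissive pairs follow the same pattern

# hasHumanInvolvement as a specialised form of hasInvolvement
dpv:hasHumanInvolvement rdfs:subPropertyOf dpv:hasInvolvement .
```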

Update APR-23: fixed typo Invovlement -> Involvement, and correct NonPermissiveInvolvement to use 'cannot' instead of 'can' - from https://github.com/w3c/dpv/issues/108#issuecomment-2072867257

coolharsh55 avatar Apr 20 '24 09:04 coolharsh55

@delaramglp could you please check whether the above makes sense for your use-cases?

coolharsh55 avatar Apr 23 '24 11:04 coolharsh55

They make sense, just minor errors need to be fixed:

Typos: PermissiveInvovlement → PermissiveInvolvement; NonPermissiveInvovlement → NonPermissiveInvolvement

Definitions: NonPermissiveInvovlement: Ability where entity cannot control its involvement?

It seems to me that there is an element of time tied to the type of involvement, e.g. you can correct the output after it is generated, but I am not sure if there is any value in modelling it.

DelaramGlp avatar Apr 23 '24 16:04 DelaramGlp

Hi, thanks for fixing the typos. I'll update them. For Correct Output, I left it intentionally vague as to whether the output is already generated (ex-post) or the correction happens before the output is generated (ex-ante), so the concept can be useful for both. E.g. the output says 3 and I correct it to 2 (ex-post) to fix something, vs if the output will say 3 then correct it to 2 (ex-ante) as a precaution. Does this make sense?

coolharsh55 avatar Apr 23 '24 17:04 coolharsh55

Discussed with Delaram, and concluded the following:

  • change the wording of concepts so they are indicative of an action, e.g. OptIn becomes OptingIn
  • change permissive terms e.g. EntityReverseEffects to ReversingEffects and add separately ReversingOutputs
  • the non-permissive or restrictive terms stay the same e.g. CannotOptIn

Specialisations of Permissive Involvement

  • OptingIn: entity can opt in
  • OptingOut: entity can opt out
  • ObjectingToActivity: entity can object to activity
  • WithdrawingFromActivity: entity can withdraw from activity (withdraw is for previously given assent, opt-out does not require prior assent)
  • ChallengingActivity: entity can challenge an activity (in terms of how it is being conducted or implemented - where challenge refers to raising questions about validity, necessity, correctness, or other similar 'trustworthiness' attributes)
  • ChallengingOutput: entity can challenge the output of an activity (instead of how the activity was conducted, this refers to the output of the activity e.g. where entity is not aware of activity implementation details but can only see the output)
  • CorrectingActivity: entity can correct how an activity is conducted or implemented
  • CorrectingOutput: entity can correct output of an activity
  • ReversingEffects: entity can reverse the effects of an activity
  • ReversingOutput: entity can reverse the output of an activity

Specialisations of Non-Permissive Involvement as inability to have permissive involvement

  • CannotOptIn: entity cannot opt in
  • CannotOptOut: entity cannot opt out
  • CannotObjectToActivity: entity cannot object to activity
  • CannotWithdrawFromActivity: entity cannot withdraw from activity (withdraw is for previously given assent, opt-out does not require prior assent)
  • CannotChallengeActivity: entity cannot challenge an activity (in terms of how it is being conducted or implemented - where challenge refers to raising questions about validity, necessity, correctness, or other similar 'trustworthiness' attributes)
  • CannotChallengeOutput: entity cannot challenge the output of an activity (instead of how the activity was conducted, this refers to the output of the activity e.g. where entity is not aware of activity implementation details but can only see the output)
  • CannotCorrectActivity: entity cannot correct how an activity is conducted or implemented
  • CannotCorrectOutput: entity cannot correct output of an activity
  • CannotReverseEffects: entity cannot reverse the effects of an activity
  • CannotReverseOutput: entity cannot reverse the output of an activity
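
Applied to a concrete case, the revised gerund-form terms could be used as in this hypothetical Turtle sketch (ex:LoanDecisionSystem is an illustrative resource, not part of DPV):

```turtle
@prefix dpv: <https://w3id.org/dpv#> .
@prefix ex:  <https://example.org/> .

# Affected entities can challenge and correct the outputs,
# but cannot object to the activity itself
ex:LoanDecisionSystem dpv:hasInvolvement dpv:ChallengingOutput ,
                                         dpv:CorrectingOutput ,
                                         dpv:CannotObjectToActivity .
```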

coolharsh55 avatar Apr 24 '24 11:04 coolharsh55

How is HumanCorrection different from HumanReversion? It seems like reversion is just a form of correction.

steve-hickman-epistimis avatar Apr 24 '24 12:04 steve-hickman-epistimis

How is HumanCorrection different from HumanReversion? It seems like reversion is just a form of correction.

My read is this:

  • In HumanReversion, the human can only accept (pass/no revert) or reject (revert) what is suggested by the system.

  • Apart from that "accept" and "reject", the human cannot give other input to the system; the human cannot directly specify what the output should look like.

  • In HumanCorrection, the human can directly specify what the output should look like.

  • Given enough time, an unlimited quota for human rejections, and the ability of the system to learn, a set of reverts could be seen as a correction. But a single individual revert can hardly be a correction.

bact avatar Apr 24 '24 20:04 bact

Thanks Art, that's a much cleaner/nicer explanation than what I mentioned on the call. To add to that:

  • Correct Output means to provide the correction e.g. it should be 2 and not 1
  • Reversing Output means to undo the output to its previous state e.g. 2 is wrong, go back to what it was earlier (which was 1, but we don't need to provide it)
  • Reverse Effects means that the system has produced some (side-)effects which must be rolled back e.g. my bank account was frozen, so undo this.

coolharsh55 avatar Apr 24 '24 21:04 coolharsh55

But a single individual revert can hardly be a correction.

I fail to see why not. Human intervention, whether it changes a single value or many, whether it goes back to the just-previous value (the simplest revert), or to a long-previous value (still possible to be a revert) because some sensor has been discovered to have been misbehaving for many hours (or however long) — all of these seem to me to be HumanCorrection.

TallTed avatar Apr 24 '24 22:04 TallTed

Continuing discussion from https://w3id.org/dpv/meetings/meeting-2024-05-15

coolharsh55 avatar May 17 '24 07:05 coolharsh55

This issue will be automatically closed with the commit as per discussion in meeting MAY-22.

coolharsh55 avatar May 23 '24 11:05 coolharsh55