Revise Automation and HumanInvolvement concepts
As per the paper "Sources of Risk of AI Systems" https://doi.org/10.3390/ijerph19063641 by Steimers and Schneider, ISO/IEC 22989 has 7 degrees or levels of automation. DPV should reflect these concepts for Automation and Human Involvement taxonomies.
Instead of focusing solely on processing, these concepts should be moved to TECH to reflect automation of all technologies. All concepts are part of the taxonomy under AutomationOfTechnology concept, and are associated using hasAutomation.
Automation concepts:
- Autonomous (Human out of the loop): The system is capable of modifying its operation domain or its goals without external intervention, control or oversight
- FullAutomation (Human in/out of the loop): The system is capable of performing its entire mission without external intervention
- HighAutomation (Human in the loop): The system performs parts of its mission without external intervention
- ConditionalAutomation (Human in the loop): Sustained and specific performance by a system, with an external agent ready to take over when necessary
- PartialAutomation (Human in the loop): Some sub-functions of the system are fully automated while the system remains under the control of an external agent
- AssistiveAutomation (Human in the loop): The system assists an operator
- NonAutomated (Human in the loop): The operator fully controls the system
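The seven degrees form an ordered scale, which can be sketched as a minimal Python model. This is illustrative only, assuming the ordering above; the names are not normative DPV identifiers:

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """Seven degrees of automation (per ISO/IEC 22989), lowest first."""
    NON_AUTOMATED = 0           # operator fully controls the system
    ASSISTIVE_AUTOMATION = 1    # system assists an operator
    PARTIAL_AUTOMATION = 2      # some sub-functions fully automated
    CONDITIONAL_AUTOMATION = 3  # external agent ready to take over
    HIGH_AUTOMATION = 4         # parts of mission without intervention
    FULL_AUTOMATION = 5         # entire mission without intervention
    AUTONOMOUS = 6              # can modify its own operation domain or goals

def human_in_loop(level: AutomationLevel) -> bool:
    """Only Autonomous is strictly 'human out of the loop' in the list
    above; FullAutomation is labelled 'human in/out of the loop'."""
    return level < AutomationLevel.AUTONOMOUS
```

The integer ordering makes comparisons between degrees straightforward, e.g. checking whether a system exceeds a given level of automation.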
HumanInvolvement concepts:
HumanInvolvement is a concept within the AutomationOfTechnology taxonomy, and is associated using hasHumanInvolvement. Where relevant, the Automation concepts will be a subtype of HumanInvolvement concepts. E.g. ConditionalAutomation will be a subtype of HumanInLoop as it always has a human in the loop.
- HumanInLoop - humans are involved
  a. HumanControlled - human has full control over the processing or system. This maps to None, Assistive, and Partial automation models.
  b. HumanIntervention - human has ability to intervene in the processing or system operation. This maps to Conditional and High automation models.
  c. HumanOversight - human has ability to oversee the processing or system operation. This does not by itself mean that the human has ability to intervene.
  d. HumanInput - human has ability to decide or provide inputs to the operations. This can be at any stage.
  e. HumanDecision - human has the ability to make decisions in the operation of the system. This can be at any stage. Note that decisions are about controlling the operation, and are distinct from input (data or parameters).
  f. HumanVerification - human has the ability to verify the decisions or outputs of a system, typically at the end. Verification means asserting that the decision or output is correct or acceptable.
- HumanOutOfLoop - humans are not involved
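The mapping between involvement concepts and automation levels described above can be sketched as a lookup table. A minimal sketch, assuming the mappings stated for HumanControlled and HumanIntervention (other concepts are not tied to specific levels):

```python
# Hypothetical mapping from HumanInvolvement concepts to the automation
# levels they correspond to, per the descriptions above.
INVOLVEMENT_TO_AUTOMATION = {
    "HumanControlled":   ["NonAutomated", "AssistiveAutomation", "PartialAutomation"],
    "HumanIntervention": ["ConditionalAutomation", "HighAutomation"],
}

def involvements_for(automation: str) -> list:
    """Reverse lookup: which HumanInvolvement concepts apply to a level."""
    return [inv for inv, levels in INVOLVEMENT_TO_AUTOMATION.items()
            if automation in levels]
```

For example, `involvements_for("HighAutomation")` would yield `HumanIntervention`, reflecting that a human can take over but does not fully control the system at that level.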
Delaram has pointed out that Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self-assessment lists different definitions for human involvement as:
- Human-in-the-loop refers to the capability for human intervention in every decision cycle of the system.
- Human-on-the-loop refers to the capability for human intervention during the design cycle of the system and monitoring the system's operation.
- Human-in-command refers to the capability to oversee the overall activity of the AI system (including its broader economic, societal, legal and ethical impact) and the ability to decide when and how to use the AI system in any particular situation. The latter can include the decision not to use an AI system in a particular situation, to establish levels of human discretion during the use of the system, or to ensure the ability to override a decision made by an AI system.
These are not compatible with other uses of "human in the loop" which can refer to humans only providing interactions rather than having capacity for "interventions" during "decisions". Personally I prefer having explicit concepts detailing what the role of human involvement is, within a broader label of Human in/on loop to encourage picking the most appropriate concept.
Discussed in meeting 11 OCT that this will be discussed in the next meeting on 18 OCT.
- concept HumanInvolvement
- relation hasHumanInvolvement
- Human in the Loop - humans are involved in the process/operation
- HumanControlled - humans can have control over functioning
- HumanIntervention - humans can intervene in functioning
- HumanOversight - humans have oversight of functioning
- HumanInput - humans can provide input to operations
- HumanDecision - humans can make decisions in operations
- HumanVerification - humans can verify functioning or output
- HumanChallenge - humans can challenge the functioning or output
- HumanCorrection - humans can correct the functioning or output
  - HumanReversion - humans can revert or reverse the output
- HumanOptIn - humans can opt-in or decide to be subjected to the system
- HumanOptOut - humans can opt-out or decide to not be subjected to the system
- HumanObjection - humans can object to the system or being subjected to it
- Human out of the Loop - humans are not involved
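The proposed taxonomy above is a simple tree, so subtype relationships (e.g. the ConditionalAutomation-is-a-HumanInLoop case mentioned earlier) can be sketched as parent links with a transitive check. The concept names are taken from the list above; the code is illustrative, not DPV tooling:

```python
# Parent links for the proposed HumanInvolvement taxonomy.
PARENT = {
    "HumanControlled": "HumanInLoop", "HumanIntervention": "HumanInLoop",
    "HumanOversight": "HumanInLoop", "HumanInput": "HumanInLoop",
    "HumanDecision": "HumanInLoop", "HumanVerification": "HumanInLoop",
    "HumanChallenge": "HumanInLoop", "HumanCorrection": "HumanInLoop",
    "HumanReversion": "HumanInLoop", "HumanOptIn": "HumanInLoop",
    "HumanOptOut": "HumanInLoop", "HumanObjection": "HumanInLoop",
    "HumanInLoop": "HumanInvolvement", "HumanOutOfLoop": "HumanInvolvement",
}

def is_a(concept: str, ancestor: str) -> bool:
    """True if `ancestor` is `concept` itself or any transitive parent."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = PARENT.get(concept)
    return False
```

This mirrors what an rdfs:subClassOf or skos:broader chain would express in the actual vocabulary.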
- concept AutomationLevel - these are best understood with the application of automation in cars
- relation hasAutomationLevel
- levels: Level 0 represents /No Automation/
- Assistive Automation: the automation is limited to parts of the system or a specific part of the system in a manner that does not change the control of the human in using/driving the system
  - Partial Automation: the automation is present in multiple parts of the system or in a manner that does not require the human to control/use these parts while still retaining control over the system
- Conditional Automation: the level of automation is sufficient to perform most tasks of the system with the human present to take over where necessary
- High Automation: the system is capable of performing all its tasks within specific controlled conditions without human involvement
- Full Automation: the system is capable of performing all its tasks regardless of the conditions without human involvement
- Fully Autonomous: the system is capable of modifying its operation domain or its goals without external intervention, control or oversight
For HumanInvolvement, and Involvement in general, revised concepts:
Requirement: indicate what control over the activities/context the entity has in terms of being subjected to it, opting in or opting out from it, objecting to it, withdrawing from it, challenging it, correcting it, or reversing its effects. (HumanInvolvement and hasHumanInvolvement are specialised forms of this where the entity is a human. Otherwise these controls can be for any entity, e.g. an organisation, or an agent, e.g. AI-operated systems.) See emails by Delaram https://lists.w3.org/Archives/Public/public-dpvcg/2023Oct/0020.html and Dave https://lists.w3.org/Archives/Public/public-dpvcg/2023Oct/0025.html
Base concepts:
Involvement: Ability of an entity to control its involvement; hasInvolvement with range Involvement
Specialisations as Permissive and NonPermissive involvements.
- PermissiveInvolvement: Ability where entity can control its involvement
- NonPermissiveInvolvement: Ability where entity cannot control its involvement
Specialisations of Permissive Involvement
- OptIn: entity can opt in
- OptOut: entity can opt out
- ObjectToActivity: entity can object to activity
- WithdrawFromActivity: entity can withdraw from activity (withdraw is for previously given assent, opt-out does not require prior assent)
- ChallengeActivity: entity can challenge an activity (in terms of how it is being conducted or implemented - where challenge refers to raising questions about validity, necessity, correctness, or other similar 'trustworthiness' attributes)
- ChallengeOutput: entity can challenge the output of an activity (instead of how the activity was conducted, this refers to the output of the activity, e.g. where entity is not aware of activity implementation details but can only see the output)
- CorrectActivity: entity can correct how an activity is conducted or implemented
- CorrectOutput: entity can correct output of an activity
- EntityReverseEffects: entity can reverse the effects of an activity, where effects can be outputs or the impact of those outputs
Specialisations of Non-Permissive Involvement as inability to have permissive involvement
- CannotOptIn: entity cannot opt in
- CannotOptOut: entity cannot opt out
- CannotObjectToActivity: entity cannot object to activity
- CannotWithdrawFromActivity: entity cannot withdraw from activity (withdraw is for previously given assent, opt-out does not require prior assent)
- CannotChallengeActivity: entity cannot challenge an activity (in terms of how it is being conducted or implemented - where challenge refers to raising questions about validity, necessity, correctness, or other similar 'trustworthiness' attributes)
- CannotChallengeOutput: entity cannot challenge the output of an activity (instead of how the activity was conducted, this refers to the output of the activity, e.g. where entity is not aware of activity implementation details but can only see the output)
- CannotCorrectActivity: entity cannot correct how an activity is conducted or implemented
- CannotCorrectOutput: entity cannot correct output of an activity
- CannotEntityReverseEffects: entity cannot reverse the effects of an activity, where effects can be outputs or the impact of those outputs
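Since each non-permissive concept mirrors a permissive one with a 'Cannot' prefix, the two lists stay in lockstep. A minimal sketch of that symmetry, using the concept names from the lists above:

```python
# Permissive involvement concepts, in the order listed above.
PERMISSIVE = ["OptIn", "OptOut", "ObjectToActivity", "WithdrawFromActivity",
              "ChallengeActivity", "ChallengeOutput", "CorrectActivity",
              "CorrectOutput", "EntityReverseEffects"]

# Each non-permissive concept is its permissive counterpart prefixed
# with 'Cannot', expressing the inability to have that involvement.
NON_PERMISSIVE = ["Cannot" + name for name in PERMISSIVE]
```

Deriving one list from the other avoids the two taxonomies drifting apart when concepts are added or renamed.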
Update APR-23: fixed typo Invovlement -> Involvement, and correct NonPermissiveInvolvement to use 'cannot' instead of 'can' - from https://github.com/w3c/dpv/issues/108#issuecomment-2072867257
@delaramglp could you please check whether the above makes sense for your use-cases?
They make sense, just minor errors need to be fixed:
Typos
PermissiveInvovlement --> PermissiveInvolvement
NonPermissiveInvovlement --> NonPermissiveInvolvement
Definitions
NonPermissiveInvovlement: Ability where entity cannot control its involvement ?
It seems to me that there is an element of time tied to the type of involvement, e.g. you can correct the output after it is generated, but not sure if there is any value in modelling it.
Hi, thanks for fixing the typos. I'll update them. For Correct Output, I left it intentionally vague on whether the output is already generated (ex-post) or the correction is before the output is generated (ex-ante) so the concept can be useful for both. E.g. the output says 3 and I correct it to 2 (ex-post) to fix something - vs - if the output will say 3 then correct it to 2 (ex-ante) as a precaution. Does this make sense?
Discussed with Delaram, and concluded the following:
- change the wording of concepts so they are indicative of an action, e.g. OptIn becomes OptingIn
- change permissive terms, e.g. EntityReverseEffects to ReversingEffects, and add separately ReversingOutputs
- the non-permissive or restrictive terms stay the same, e.g. CannotOptIn
Specialisations of Permissive Involvement
- OptingIn: entity can opt in
- OptingOut: entity can opt out
- ObjectingToActivity: entity can object to activity
- WithdrawingFromActivity: entity can withdraw from activity (withdraw is for previously given assent, opt-out does not require prior assent)
- ChallengingActivity: entity can challenge an activity (in terms of how it is being conducted or implemented - where challenge refers to raising questions about validity, necessity, correctness, or other similar 'trustworthiness' attributes)
- ChallengingOutput: entity can challenge the output of an activity (instead of how the activity was conducted, this refers to the output of the activity, e.g. where entity is not aware of activity implementation details but can only see the output)
- CorrectingActivity: entity can correct how an activity is conducted or implemented
- CorrectingOutput: entity can correct output of an activity
- ReversingEffects: entity can reverse the effects of an activity
- ReversingOutput: entity can reverse the output of an activity
Specialisations of Non-Permissive Involvement as inability to have permissive involvement
- CannotOptIn: entity cannot opt in
- CannotOptOut: entity cannot opt out
- CannotObjectToActivity: entity cannot object to activity
- CannotWithdrawFromActivity: entity cannot withdraw from activity (withdraw is for previously given assent, opt-out does not require prior assent)
- CannotChallengeActivity: entity cannot challenge an activity (in terms of how it is being conducted or implemented - where challenge refers to raising questions about validity, necessity, correctness, or other similar 'trustworthiness' attributes)
- CannotChallengeOutput: entity cannot challenge the output of an activity (instead of how the activity was conducted, this refers to the output of the activity, e.g. where entity is not aware of activity implementation details but can only see the output)
- CannotCorrectActivity: entity cannot correct how an activity is conducted or implemented
- CannotCorrectOutput: entity cannot correct output of an activity
- CannotReverseEffects: entity cannot reverse the effects of an activity
- CannotReverseOutput: entity cannot reverse the output of an activity
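The agreed renaming from noun forms to action-indicative (gerund) forms can be captured as a migration table. A sketch under the assumption that existing data uses the old terms and Cannot* terms are unchanged; the `migrate` helper is hypothetical, not part of DPV:

```python
# Old permissive term -> new action-indicative term, per the revision above.
RENAMES = {
    "OptIn": "OptingIn", "OptOut": "OptingOut",
    "ObjectToActivity": "ObjectingToActivity",
    "WithdrawFromActivity": "WithdrawingFromActivity",
    "ChallengeActivity": "ChallengingActivity",
    "ChallengeOutput": "ChallengingOutput",
    "CorrectActivity": "CorrectingActivity",
    "CorrectOutput": "CorrectingOutput",
    "EntityReverseEffects": "ReversingEffects",  # ReversingOutput added separately
}

def migrate(term: str) -> str:
    """Map an old term to its new name; Cannot* terms pass through unchanged."""
    return RENAMES.get(term, term)
```

Such a table could help anyone updating existing annotations that use the pre-revision concept names.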
How is HumanCorrection different from HumanReversion? It seems like reversion is just a form of correction.
> How is HumanCorrection different from HumanReversion? It seems like reversion is just a form of correction.
My read is this:

- In HumanReversion, the human can only accept (pass/no revert) or reject (revert) what is suggested by the system. Apart from that "accept" and "reject", the human cannot give other input to the system; the human cannot directly specify what the output should look like.
- In HumanCorrection, the human can directly specify what the output should look like.
- Given enough time, an unlimited quota for human rejections, and the ability of the system to learn, a set of reverts could be seen as a correction. But a single individual revert can hardly be a correction.
Thanks Art, that's a much cleaner/nicer explanation than what I mentioned on the call. To add to that:
- Correct Output means to provide the correction e.g. it should be 2 and not 1
- Reversing Output means to undo the output to its previous state e.g. 2 is wrong, go back to what it was earlier (which was 1, but we don't need to provide it)
- Reverse Effects means that the system has produced some (side-)effects which must be rolled back e.g. my bank account was frozen, so undo this.
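The three distinctions above can be made concrete with a toy model (not DPV code; the class and method names are illustrative): correcting supplies a new value, reverting restores the previous state without supplying it, and reversing effects rolls back a side-effect of the output.

```python
class Output:
    """Toy output with a value history and recorded side-effects."""

    def __init__(self, value):
        self.history = [value]
        self.side_effects = []

    @property
    def value(self):
        return self.history[-1]

    def correct(self, new_value):
        """CorrectingOutput: human specifies what the output should be."""
        self.history.append(new_value)

    def revert(self):
        """ReversingOutput: undo to the previous state; the human does not
        need to provide the earlier value."""
        if len(self.history) > 1:
            self.history.pop()

    def reverse_effect(self, effect):
        """ReversingEffects: roll back a side-effect of the output,
        e.g. unfreezing a bank account."""
        self.side_effects.remove(effect)
```

With the examples above: `correct(2)` fixes 1 to 2, `revert()` goes back to what it was without stating it, and `reverse_effect("account_frozen")` undoes the frozen-account consequence.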
> But a single individual revert can hardly be a correction.
I fail to see why not. Human intervention, whether it changes a single value or many, whether it goes back to the just-previous value (the simplest revert), or to a long-previous value (still possible to be a revert) because some sensor has been discovered to have been misbehaving for many hours (or however long) — all of these seem to me to be HumanCorrection.
Continuing discussion from https://w3id.org/dpv/meetings/meeting-2024-05-15
This issue will be automatically closed with the commit as per discussion in meeting MAY-22.