df: memory: Locking must be dynamic
The current implementation of `MemoryLockNetworkContext` is such that it locks all parent inputs as well as the input itself.
What it should do is look at the parents and let descendants of a parent all operate on descendant `Input`
items at the same time, but not let anyone operate on that parent item until all the operations working on
descendant `Input`s have completed. Effectively, all operations working on the descendant `Input`s "share"
the lock on that parent they descend from.
https://github.com/intel/dffml/blob/2c671985d3aef0886554397c729aa2b667884be1/dffml/df/memory.py#L466-L492
- We could have overlays which work in tandem with `Union`/`Lock`/`Export`/`SystemContext` style type annotation extras on definitions.
- We could then inspect on apply by having the Python `@op` decorator use the extras to add some metadata which says, at a minimum:
  - Modify the dataflow having overlays applied to it purely based on `dataflow.flow` inspection at time of overlay apply (still TODO on best practice methodologies around ordering overlay applications).
  - Add additional metadata which the locking network within the orchestrator config will understand (overlays for different locking networks, with a generic default which works with `dffml.df.memory`).
- Implement support within the locking network to understand this new metadata.
  - Have it use the metadata to establish an `Input`'s relationship to its links.
  - If the `Input` has a relationship (relationships and their types could be stored within the metadata) with a link where the link is its parent, the current locking implementation will lock the parent.
- We need to be able to say to the locking network: the metadata we added via the overlay is asking you to check the sequence of operations, or definitions.
  - Ideally we implement this check via an operation itself; perhaps this is a deployment option (just an operation, an `ActiveSystemContext` method) of the dataflow for the context of the operation. The check returns, or maybe even better implements, some locking policy (it could take the lock network as an argument from the caller).
  - This would enable custom locking policies; the first one we could implement could be the shared one.

Okay, this is the plan, see below:
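The annotation-extras idea above could look roughly like the following: a marker carried in `typing.Annotated` extras, which an `@op`-style decorator inspects to attach locking metadata for the locking network. This is a hedged sketch under assumptions; the `Lock` marker, `scan_for_lock_metadata` decorator, and `__lock_metadata__` attribute are all hypothetical, not existing dffml APIs:

```python
from typing import Annotated, get_type_hints


class Lock:
    """Hypothetical annotation extra marking a definition whose Inputs
    the locking network must lock before operations use them."""


# A definition-style alias carrying the Lock extra (assumed name).
LockedRepo = Annotated[str, Lock]


def scan_for_lock_metadata(func):
    """Sketch of what the @op decorator could do on apply: inspect
    annotation extras and record metadata for the locking network."""
    metadata = {}
    for name, hint in get_type_hints(func, include_extras=True).items():
        extras = getattr(hint, "__metadata__", ())
        if any(extra is Lock for extra in extras):
            metadata[name] = "lock"
    # Stash the metadata where a locking network could find it.
    func.__lock_metadata__ = metadata
    return func


@scan_for_lock_metadata
def clone_repo(repo_url: LockedRepo) -> str:
    # Body irrelevant to the sketch; only the annotations matter here.
    return repo_url
```

The locking network would then consult `__lock_metadata__` (or its real equivalent) when deciding which parents to lock and which policy to apply.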
- We could eventually extend this to patching new operations into the flows; this probably overlaps with, if it isn't the same as, the pattern/methodology we'll need to develop for #1400.
- Make the existing taking of locks an operation
- Make that operation part of a dataflow / system context
- Call a system context which just runs the old lock-taking code
- Put that system context within the config of the lock network
- Now users can just change the `upstream` to replace the operation entirely with a full-fledged decision tree on what to do with different metadata. (Test this.)
- Flip mode ⛓
- @pdxjohnny changed the title from "df: memory: Locking must be hierarchical" to "df: memory: Locking must be dynamic"
- We are going to use operations now, and dataflows / system contexts, to take locks. This means they can interact with `lctx` through `octx`, or `lctx` should probably be given as a definition passed as an `Input` to the flow.
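Making lock taking an operation could be sketched as a pluggable coroutine held in the lock network's config, so replacing the `upstream` swaps the whole policy. Everything below is a hypothetical illustration (the `LockNetwork` class, `take_locks` config field, and `default_take_locks` policy are assumed names, not dffml APIs):

```python
import asyncio
from typing import Awaitable, Callable


async def default_take_locks(locks: dict, keys: list):
    """Default policy: the old behavior, lock every key (parents and
    the input itself). Sorted acquisition order avoids deadlocks when
    two callers want overlapping key sets."""
    for key in sorted(keys):
        await locks.setdefault(key, asyncio.Lock()).acquire()


class LockNetwork:
    """Sketch of a lock network whose lock taking is an operation held
    in its config, and therefore replaceable by users."""

    def __init__(
        self,
        take_locks: Callable[[dict, list], Awaitable[None]] = default_take_locks,
    ):
        self.locks = {}
        # Swappable policy, analogous to replacing the upstream with a
        # decision tree driven by metadata.
        self.take_locks = take_locks

    async def acquire(self, keys):
        await self.take_locks(self.locks, keys)

    def release(self, keys):
        for key in keys:
            self.locks[key].release()
```

A user-supplied `take_locks` coroutine could inspect per-`Input` metadata and skip, share, or escalate locks instead of taking all of them.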