The Argo agent requests both plugin interfaces at the same time for the same task
If I have two plugins set up at the same time, on ports 4355 and 5678, the Argo agent pod requests the interface of both containers for every task. Is there any way to avoid this?
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  name: test-wf-argo-plugin
  namespace: argo
spec:
  arguments: {}
  entrypoint: hello
  podGC:
    strategy: OnPodCompletion
  serviceAccountName: argo
  templates:
    - dag:
        tasks:
          - arguments:
              parameters:
                - name: taskName
                  value: "aaa"
            name: task-packing
            template: test-plugin-one
          - arguments:
              parameters:
                - name: args
                  value: "aaa"
            name: async-job-a
            template: test-plugin-two
      inputs: {}
      metadata: {}
      name: hello
      outputs: {}
    - inputs:
        parameters:
          - name: taskName
      metadata: {}
      name: test-plugin-one
      outputs: {}
      plugin:
        test-plugin-one:
          taskName: '{{inputs.parameters.taskName}}'
    - inputs:
        parameters:
          - name: args
      metadata: {}
      name: test-plugin-two
      outputs: {}
      plugin:
        test-plugin-two:
          args: '{{inputs.parameters.args}}'
```
test-plugin-two was listening on port 6789, but the agent requested port 4355 instead.

@GhangZh Did you configure your plugin port as 6789 in the ExecutorPlugin?
https://github.com/argoproj/argo-workflows/blob/master/docs/executor_plugins.md
> @GhangZh Did you configure your plugin port as 6789 in the ExecutorPlugin? https://github.com/argoproj/argo-workflows/blob/master/docs/executor_plugins.md

@sarabala1979 Yes, I'm sure, and the plugin works.
```yaml
apiVersion: v1
data:
  sidecar.automountServiceAccountToken: "false"
  sidecar.container: |
    args:
      - test_plugin_two.py
    image: docker.xxx.com/argo-plugins:2023-01-03-16-18
    name: test-plugin-two-plugin
    ports:
      - containerPort: 6789
    resources:
      limits:
        cpu: 500m
        memory: 128Mi
      requests:
        cpu: 250m
        memory: 64Mi
    securityContext:
      runAsNonRoot: true
      runAsUser: 65534
kind: ConfigMap
metadata:
  labels:
    workflows.argoproj.io/configmap-type: ExecutorPlugin
  name: test-plugin-two-plugin
  namespace: argo
```
argo-agent pod

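For context, an executor plugin is just an HTTP server that answers `POST /api/v1/template.execute`, as described in the executor plugins docs. Below is a hedged sketch of what a plugin like `test_plugin_two.py` might look like; `PLUGIN_KEY` and the reply contents are illustrative assumptions, not the actual plugin from this thread. Replying with an empty body `{}` is how a plugin signals "no node", i.e. that it did not execute the task.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

PLUGIN_KEY = "test-plugin-two"  # the key under `plugin:` in the workflow template (assumed)

def execute_template(args: dict) -> dict:
    """Build the reply dict for one template.execute call."""
    plugin_spec = args.get("template", {}).get("plugin", {})
    if PLUGIN_KEY not in plugin_spec:
        # Not our template: reply with an empty body (a "nil node"),
        # telling the agent this plugin did not execute the task.
        return {}
    return {"node": {"phase": "Succeeded", "message": "done"}}

class Plugin(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/api/v1/template.execute":
            self.send_response(404)
            self.end_headers()
            return
        length = int(self.headers.get("Content-Length", 0))
        args = json.loads(self.rfile.read(length))
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(execute_template(args)).encode())

# To run as the sidecar, serve on the containerPort from the ConfigMap:
# HTTPServer(("", 6789), Plugin).serve_forever()
```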
Look at the code here: it is executed for each plugin.
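The agent-side loop being discussed behaves roughly as follows. This is a simplified Python sketch of the logic, not the actual Go code in agent.go, and the names are hypothetical: every loaded plugin is called in turn, and the first non-empty node wins, which is why every sidecar sees a request for every task.

```python
def execute_via_plugins(plugins, args):
    """Call template.execute on each plugin until one returns a node.

    `plugins` maps plugin name -> a callable standing in for the HTTP
    round-trip to that sidecar (a hypothetical stand-in for the real RPC).
    A plugin that cannot run the task returns {} (a "nil node").
    """
    for name, call in plugins.items():
        reply = call(args)
        if reply.get("node"):
            return name, reply["node"]
    return None, None  # no plugin claimed the task
```

Note that even when the second plugin is the right one, the first plugin still receives (and must gracefully reject) the request, which is exactly the behavior complained about above.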

If a plugin cannot execute a task, it should return nil node.
> If a plugin cannot execute a task, it should return nil node.

I don't think this is very friendly either; the task should be executed only by the matching plugin.
> If a plugin cannot execute a task, it should return nil node.

I agree. Why should we even consider the chained case? These are Argo plugins; the agent should match the plugin according to what people define in the template.
> If a plugin cannot execute a task, it should return nil node.

This is not very friendly either. Plugins are developed by different teams, and I don't think every team will consider whether they need to handle requests meant for other plugins.
Is it possible to execute only the plugin whose name matches the plugin name in the template?
how about not creating unnecessary sidecar containers for unused plugins? https://github.com/argoproj/argo-workflows/blob/fec39fad72ed22031fac305a208fd67fd011fa3b/workflow/controller/agent.go#L271-L290
@alexec Can I filter unnecessary ExecutorPlugins by plugin value key? https://github.com/luyang93/argo-workflows/blob/51154bfed2110525f4046c2bcfc938d8ccc051f4/workflow/controller/agent.go#L148-L153
> how about not creating unnecessary sidecar containers for unused plugins?
> https://github.com/argoproj/argo-workflows/blob/fec39fad72ed22031fac305a208fd67fd011fa3b/workflow/controller/agent.go#L271-L290

Yes, this would be really helpful. Right now all executor plugins in both the workflow and controller namespaces are loaded into the agent pod as sidecars, which includes many unused plugins.
https://github.com/luyang93/argo-workflows/blob/51154bfed2110525f4046c2bcfc938d8ccc051f4/workflow/controller/agent.go#L148-L153
Some executor plugins may be missed if the workflow spec uses templateRef to import a plugin template.
> https://github.com/luyang93/argo-workflows/blob/51154bfed2110525f4046c2bcfc938d8ccc051f4/workflow/controller/agent.go#L148-L153
> Some executor plugins may be missed if the workflow spec uses templateRef to import a plugin template.

Yep, I'm stuck on how to recursively list the plugins that a workflow uses via templateRef.
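One way to approach the recursion might look like the sketch below. This is hypothetical and simplified, not the agent.go implementation: walk every template, collect the keys under `plugin:`, and whenever a DAG task or step uses `templateRef`, fetch the referenced WorkflowTemplate's templates and recurse. `get_templates` is an assumed lookup callback (e.g. backed by the Kubernetes API).

```python
def collect_plugin_names(templates, get_templates):
    """Collect all plugin names reachable from a list of template dicts.

    `get_templates(name)` is an assumed callback returning the template
    list of the WorkflowTemplate with that name.
    """
    names, seen_refs = set(), set()

    def visit(tmpls):
        for t in tmpls:
            names.update(t.get("plugin", {}).keys())
            tasks = list(t.get("dag", {}).get("tasks", []))
            for group in t.get("steps", []):  # steps are nested in groups
                tasks.extend(group)
            for task in tasks:
                ref = task.get("templateRef")
                if ref and ref["name"] not in seen_refs:
                    seen_refs.add(ref["name"])  # guard against repeats/cycles
                    visit(get_templates(ref["name"]))

    visit(templates)
    return names
```

The `seen_refs` set keeps the walk from re-fetching the same WorkflowTemplate and from looping forever on circular references.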
This seems reasonable. It does not make sense for two plugins to execute the same task.
> > https://github.com/luyang93/argo-workflows/blob/51154bfed2110525f4046c2bcfc938d8ccc051f4/workflow/controller/agent.go#L148-L153
> > Some executor plugins may be missed if the workflow spec uses templateRef to import a plugin template.
>
> Yep, I'm stuck on how to recursively list the plugins that a workflow uses via templateRef.

As a workaround: although the agent pod loads many unused plugins, at execution time it is easy to select the corresponding plugin based on the plugin name specified in the template, instead of iterating through all of them. But it would be best to avoid loading unnecessary plugins at all, to save scheduling and resource overhead.
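The execution-time workaround described above could be sketched like this (hypothetical names, and it assumes the template carries exactly one key under `plugin:`): rather than fanning out to every sidecar, dispatch only to the one whose name matches the template's plugin key.

```python
def select_plugin(template, plugins):
    """Pick the single plugin named by the template's `plugin:` key.

    `plugins` maps plugin name -> sidecar address; only the matching
    entry would be contacted, instead of every loaded sidecar.
    """
    keys = list(template.get("plugin", {}).keys())
    if len(keys) != 1:
        raise ValueError("template must name exactly one plugin")
    name = keys[0]
    if name not in plugins:
        raise KeyError(f"no sidecar loaded for plugin {name!r}")
    return name, plugins[name]
```

With the workflow from this issue, a template whose plugin key is `test-plugin-two` would be routed only to the sidecar on port 6789, never to the one on 4355.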
I came across the same issue while working on a solution for https://github.com/argoproj/argo-workflows/issues/13026, which makes this problem even more relevant, because you may end up with a lot of unnecessary volumes.
My worry is that some users may already rely on this undocumented behaviour. For example with the current implementation one custom plugin can serve multiple plugin RPC calls. If we start filtering which plugin will be loaded in the agent pod based on the name alone some workflows may break.
> My worry is that some users may already rely on this undocumented behaviour. For example with the current implementation one custom plugin can serve multiple plugin RPC calls. If we start filtering which plugin will be loaded in the agent pod based on the name alone some workflows may break.

What's worse is that a plugin actually has two names: one is the name of the executor plugin ConfigMap (without the '-executor-plugin' suffix), and the other is the name of the sidecar container.
Moreover, both are in use: the former is used to retrieve the service account, while the latter is written into the agent pod's EnvVarPluginNames environment variable, which is extremely confusing.