
Ansible operator-sdk: allow customization of proxy-server port - Issues persist after #5642

vittico opened this issue 2 years ago • 3 comments

Incoming requests still point to the old default port 8888

When we change the proxy port to e.g. 8889, the proxy starts serving on the newly defined port as expected, see:

{"level":"info","ts":1656584774.865534,"logger":"proxy","msg":"Starting to serve","Address":"127.0.0.1:8889"}

So far so good, but a few lines further down in the log we get a lot of errors on incoming requests, which are still tied to the old default port 8888, see:

 TASK [get current cr information] ******************************** 
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Failed to get client due to HTTPConnectionPool(host='localhost', port=8888): Max retries exceeded with url: /version (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fd687981ee0>: Failed to establish a new connection: [Errno 111] Connection refused'))"}

So incoming requests still go to the previous default port 8888 instead of the newly defined 8889. Do you know how and where I can also change this setting so that incoming requests use the newly defined port?
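
A possible task-level workaround until the client side is fixed: kubernetes.core modules accept a host option as part of their standard auth parameters, so a task could target the reconfigured proxy explicitly instead of relying on the operator-generated kubeconfig. An untested sketch:

    # untested workaround sketch: override the client endpoint per task
    # so it hits the reconfigured proxy instead of the kubeconfig default (8888)
    - name: get current cr information
      kubernetes.core.k8s_info:
        api_version: pic2.XXXX.com/v1alpha1
        kind: ProjectDetails
        name: "{{ ansible_operator_meta.name }}"
        namespace: "{{ ansible_operator_meta.namespace }}"
        host: http://localhost:8889
      register: current_cr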

Full log:

[ansible_deployer@openshift-de1qua-jump ~]$ oc logs -c manager -f project-manager-7b88db6df4-tbrnd
Error from server (BadRequest): container "manager" in pod "project-manager-7b88db6df4-tbrnd" is waiting to start: ContainerCreating
[ansible_deployer@openshift-de1qua-jump ~]$ oc logs -c manager -f project-manager-7b88db6df4-tbrnd
Flag --metrics-addr has been deprecated, use --metrics-bind-address instead
Flag --enable-leader-election has been deprecated, use --leader-elect instead
{"level":"info","ts":1656584772.0595105,"logger":"cmd","msg":"Version","Go Version":"go1.18.3","GOOS":"linux","GOARCH":"amd64","ansible-operator":"v1.22.0","commit":"9e95050a94577d1f4ecbaeb6c2755a9d2c231289"}
{"level":"info","ts":1656584772.0600734,"logger":"cmd","msg":"Watch namespaces not configured by environment variable WATCH_NAMESPACE or file. Watching all namespaces.","Namespace":""}
I0630 10:26:13.110431       7 request.go:601] Waited for 1.039674158s due to client-side throttling, not priority and fairness, request: GET:https://192.168.128.1:443/apis/machine.openshift.io/v1beta1?timeout=32s
{"level":"info","ts":1656584774.863734,"logger":"controller-runtime.metrics","msg":"Metrics server is starting to listen","addr":"127.0.0.1:23456"}
{"level":"info","ts":1656584774.864434,"logger":"watches","msg":"Environment variable not set; using default value","envVar":"ANSIBLE_VERBOSITY_PROJECTDETAILS_PIC2_XXXX_COM","default":2}
{"level":"info","ts":1656584774.8645031,"logger":"cmd","msg":"Environment variable not set; using default value","Namespace":"","envVar":"ANSIBLE_DEBUG_LOGS","ANSIBLE_DEBUG_LOGS":false}
{"level":"info","ts":1656584774.8645232,"logger":"ansible-controller","msg":"Watching resource","Options.Group":"pic2.XXXX.com","Options.Version":"v1alpha1","Options.Kind":"ProjectDetails"}
{"level":"info","ts":1656584774.865534,"logger":"proxy","msg":"Starting to serve","Address":"127.0.0.1:8889"}
{"level":"info","ts":1656584774.8655586,"logger":"apiserver","msg":"Starting to serve metrics listener","Address":"localhost:5050"}
{"level":"info","ts":1656584774.8657513,"msg":"Starting server","path":"/metrics","kind":"metrics","addr":"127.0.0.1:23456"}
{"level":"info","ts":1656584774.865769,"msg":"Starting server","kind":"health probe","addr":"[::]:12348"}
I0630 10:26:14.865930       7 leaderelection.go:248] attempting to acquire leader lease ci-project-operator/project...
I0630 10:26:44.934831       7 leaderelection.go:258] successfully acquired lease ci-project-operator/project
{"level":"info","ts":1656584804.9350224,"msg":"Starting EventSource","controller":"projectdetails-controller","source":"kind source: *unstructured.Unstructured"}
{"level":"info","ts":1656584804.9350774,"msg":"Starting Controller","controller":"projectdetails-controller"}
{"level":"info","ts":1656584805.035514,"msg":"Starting workers","controller":"projectdetails-controller","worker count":1}

TASK [projectdetails : check if project size has been provided and add if not] ***
task path: /opt/ansible/roles/projectdetails/tasks/main.yml:5
-------------------------------------------------------------------------------
{"level":"info","ts":1656584806.621747,"logger":"logging_event_handler","msg":"[playbook task start]","name":"ci-network-testing","namespace":"","gvk":"pic2.XXXX.com/v1alpha1, Kind=ProjectDetails","event_type":"playbook_on_task_start","job":"8383812812602931106","EventData.Name":"projectdetails : check if project size has been provided and add if not"}

TASK [projectdetails : get current cr information] *****************************
task path: /opt/ansible/roles/projectdetails/tasks/add_spec_project_size.yml:2

-------------------------------------------------------------------------------
{"level":"info","ts":1656584806.641586,"logger":"logging_event_handler","msg":"[playbook task start]","name":"ci-network-testing","namespace":"","gvk":"pic2.XXXX.com/v1alpha1, Kind=ProjectDetails","event_type":"playbook_on_task_start","job":"8383812812602931106","EventData.Name":"projectdetails : get current cr information"}

--------------------------- Ansible Task StdOut -------------------------------

 TASK [get current cr information] ******************************** 
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Failed to get client due to HTTPConnectionPool(host='localhost', port=8888): Max retries exceeded with url: /version (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fd687981ee0>: Failed to establish a new connection: [Errno 111] Connection refused'))"}

-------------------------------------------------------------------------------

vittico avatar Jul 25 '22 09:07 vittico

Looks like we are using the proxy port when starting the proxy, but not when connecting to it...

https://github.com/operator-framework/operator-sdk/blob/87cdc50247832a53e26b713a2ecab6e2215bdb52/internal/ansible/controller/reconcile.go#L147
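
For reference, the call at that line hardcodes the proxy URL in the kubeconfig handed to the Ansible runner (quoted roughly from the linked file); a fix would need to thread the configured port through to that call. The ProxyPort field below is a hypothetical illustration, not the merged change:

    // current behavior (roughly, from internal/ansible/controller/reconcile.go):
    // the generated kubeconfig always points the Ansible client at port 8888
    kc, err := kubeconfig.Create(ownerRef, "http://localhost:8888", u.GetNamespace())

    // hypothetical fix sketch: plumb the configured proxy port into the
    // reconciler (r.ProxyPort is an illustrative name, not actual SDK API)
    proxyURL := fmt.Sprintf("http://localhost:%d", r.ProxyPort)
    kc, err := kubeconfig.Create(ownerRef, proxyURL, u.GetNamespace())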

asmacdo avatar Aug 01 '22 17:08 asmacdo

https://xkcd.com/2652/

asmacdo avatar Aug 17 '22 14:08 asmacdo

Is this still open? Would like to take a crack at it.

jcho02 avatar Sep 09 '22 16:09 jcho02

hey @jcho02, AFAIK this is not closed and is labeled "help wanted", so it'd be awesome if you want to give it a shot!

tlwu2013 avatar Oct 12 '22 19:10 tlwu2013

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-bot avatar Jan 11 '23 01:01 openshift-bot

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten /remove-lifecycle stale

openshift-bot avatar Feb 10 '23 08:02 openshift-bot

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen. Mark the issue as fresh by commenting /remove-lifecycle rotten. Exclude this issue from closing again by commenting /lifecycle frozen.

/close

openshift-bot avatar Mar 13 '23 00:03 openshift-bot

@openshift-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen. Mark the issue as fresh by commenting /remove-lifecycle rotten. Exclude this issue from closing again by commenting /lifecycle frozen.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

openshift-ci[bot] avatar Mar 13 '23 00:03 openshift-ci[bot]