
The library always uses port 80 when using the K8S_AUTH_KUBECONFIG env var

odra opened this issue 5 years ago • 30 comments

What happened (please include outputs or screenshots):

Created a kubeconfig file stored in a different folder than the default one and I get the following error:

fatal: [localhost]: FAILED! => {"changed": false, "msg": "Failed to get client due to HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /version (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f6f87772f10>: Failed to establish a new connection: [Errno 111] Connection refused'))"}

What you expected to happen:

A successful request.

How to reproduce it (as minimally and precisely as possible):

  • pip install kubernetes==12.0.1
  • create a kubeconfig in another folder (changing the port if possible)
  • set K8S_AUTH_KUBECONFIG to the new kubeconfig file path
  • try to run a simple integration, such as creating a namespace
  • the error should show up
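For reference, here is a stdlib-only sketch (not the library's own code) of the behaviour the reporter expects: the server URL should come from the file named by `K8S_AUTH_KUBECONFIG`, not default to `localhost:80`. The kubeconfig is written as JSON (a subset of YAML, so both kubectl and the client accept it), and `server_for_current_context` is an illustrative helper, not a library API:

```python
import json
import os
import tempfile

# A minimal kubeconfig. The port is deliberately non-default so a
# silent fallback to localhost:80 would be obvious.
KUBECONFIG = {
    "apiVersion": "v1",
    "kind": "Config",
    "clusters": [
        {"name": "demo", "cluster": {"server": "https://127.0.0.1:16443"}}
    ],
    "contexts": [
        {"name": "demo", "context": {"cluster": "demo", "user": "demo"}}
    ],
    "current-context": "demo",
    "users": [{"name": "demo", "user": {}}],
}

def server_for_current_context(path):
    """Return the API server URL the kubeconfig at `path` points at."""
    with open(path) as fh:
        cfg = json.load(fh)
    ctx_name = cfg["current-context"]
    context = next(c for c in cfg["contexts"] if c["name"] == ctx_name)
    cluster = next(c for c in cfg["clusters"]
                   if c["name"] == context["context"]["cluster"])
    return cluster["cluster"]["server"]

# Write the kubeconfig outside the default location, as in the report.
path = os.path.join(tempfile.mkdtemp(), "kubeconfig")
with open(path, "w") as fh:
    json.dump(KUBECONFIG, fh)

os.environ["K8S_AUTH_KUBECONFIG"] = path
print(server_for_current_context(path))  # https://127.0.0.1:16443
```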

Anything else we need to know?

I am using the k8s ansible module but it works if I use an older version of the library (11.0.0).

Environment:

  • Kubernetes version (kubectl version):
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-11T13:17:17Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}

Kind Container: latest-1.16

  • OS (e.g., MacOS 10.13.6): Linux
  • Python version (python --version): Python 3.7.7
  • Python client version (pip list | grep kubernetes): -12.0.1

odra avatar Nov 24 '20 15:11 odra

Hi @odra. Are you sure it worked in a previous version of the package? Does kubectl work in your environment? Try setting KUBECONFIG (instead of K8S_AUTH_KUBECONFIG) to load your kube-config.
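To make the two-variables situation concrete, a tiny hypothetical resolver (a debugging sketch, not the Ansible module's actual code) showing the precedence one would expect between them:

```python
import os

def resolve_kubeconfig(environ=None):
    """Pick a kubeconfig path: K8S_AUTH_KUBECONFIG first (the Ansible
    module's variable), then KUBECONFIG (the client's), then the
    default location. Illustrative helper only."""
    environ = os.environ if environ is None else environ
    for var in ("K8S_AUTH_KUBECONFIG", "KUBECONFIG"):
        path = environ.get(var)
        if path:
            return var, path
    return "default", os.path.expanduser("~/.kube/config")

print(resolve_kubeconfig({"KUBECONFIG": "/tmp/kc"}))
```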

tomplus avatar Nov 24 '20 21:11 tomplus

I tried to set both vars (KUBECONFIG and K8S_AUTH_KUBECONFIG) and it did not work.

Yes, kubectl does work in my environment, and the module works with version 11.x of the library.

I was actually wondering whether there were any changes to the expected kubeconfig file format in 12.x, such that it tries to retrieve the server URL from another property.

odra avatar Nov 24 '20 22:11 odra

I'm seeing the same issue in my environments; one is Fedora 33, the other CentOS 8. Both received package updates recently and the python3-kubernetes client got bumped to 12.0.1.

Setting the kubeconfig via the env var, or directly on the module, did not work.

jeichler avatar Nov 27 '20 11:11 jeichler

I am also seeing the same issue in our environment; we are using CentOS 7.9. The Kubernetes client got upgraded to 12.0.1 and that's breaking our pipeline. We use the Ansible Kubernetes collection to deploy applications into Kubernetes; with this change, even when passing the kubeconfig directly, it always fails complaining that the request is HTTP.

ankur-gupta-guavus avatar Nov 27 '20 12:11 ankur-gupta-guavus

I am seeing the same issue with our molecule tests using python-kubernetes.

jmontleon avatar Dec 01 '20 00:12 jmontleon

I'm experiencing the same problem. I'm using Ansible's k8s module and have v12.0.1 of the Kubernetes Python client installed.

AuditeMarlow avatar Feb 04 '21 14:02 AuditeMarlow

Experienced the same issue. Pinned the kubernetes package to v11.0 and it's back to working. Also had to pin the openshift package (which depends on kubernetes v12.0) to v0.11.

genevieve avatar Feb 26 '21 01:02 genevieve

yep, same error for me as well. Downgraded to v11 to make it work

sownak avatar Feb 26 '21 12:02 sownak

Credits to @genevieve for the find. For all those having the same issue: pip3 install -Iv kubernetes==11.0.0

jeroentorrekens avatar Feb 26 '21 13:02 jeroentorrekens

Thank you for the workaround @jeroentorrekens

typ-ex avatar Feb 27 '21 17:02 typ-ex

Faced the same issue. Downgrading kubernetes to v11 helped. Thanks! Now my packages are kubernetes==11.0.0 and openshift==0.11.0.

Note: if you install openshift 0.12, it automatically reinstalls kubernetes at v12, so keep that in mind.

arjunkrishnasb avatar Mar 29 '21 06:03 arjunkrishnasb

Adding another voice to the list of impacted folks. We had to roll back to 11.0.0. The lack of a fix is concerning, as we are about to migrate to k8s 1.18 and soon 1.19, and 11.0.0 is not guaranteed to work well beyond k8s 1.17.

@tomplus there are several reports for this regression. Is there a fix planned in the near future?

sodul avatar Apr 08 '21 05:04 sodul

plus one on the impact of this bug

cjreyn avatar May 06 '21 12:05 cjreyn

Just tested with kubernetes==17.17.0 and openshift==0.12.0.

We are still getting Failed to get client due to HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /version (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f08f3c410a0>: Failed to establish a new connection: [Errno 111] Connection refused')

sodul avatar May 19 '21 08:05 sodul

We were able to move forward with the latest versions of the client (17.17.0), with the OpenShift library (0.12.0), by upgrading Ansible from 2.x to the latest 4.x (4.0.0). The rest of our code had no issue.

sodul avatar May 28 '21 02:05 sodul

We were able to move forward with the latest versions of the client (17.17.0), with the OpenShift library (0.12.0), by upgrading Ansible from 2.x to the latest 4.x (4.0.0). The rest of our code had no issue.

You da real MVP for continuous testing.

typ-ex avatar May 28 '21 02:05 typ-ex

Hi,

Today I faced the same issue. As suggested above, I ran pip3 install -Iv kubernetes==11.0.0 on my master node. Once that was installed, I could create the Pod using my Ansible playbook.

vinshika avatar Jun 20 '21 19:06 vinshika

We were able to move forward with the latest versions of the client (17.17.0), with the OpenShift library (0.12.0), by upgrading Ansible from 2.x to the latest 4.x (4.0.0). The rest of our code had no issue.

Those versions don't seem to work for me:

jtorreke@jtorreke-laptop:~$ pip3 freeze | egrep "(ansible|openshift|kubernetes)"
ansible==4.1.0
ansible-core==2.11.2
kubernetes==17.17.0
openshift==0.12.0

And still getting the same error message.

jeroentorrekens avatar Jul 02 '21 15:07 jeroentorrekens

@jeroentorrekens This is what I have on my machine and we do not have issues:

> pip3 freeze | egrep "(ansible|openshift|kubernetes)"
ansible==4.1.0
ansible-core==2.11.2
azure-mgmt-redhatopenshift==0.1.0
kubernetes==17.17.0
openshift==0.12.1

I'm not saying it will solve your problem in your case, just that it works for us. Good luck.

sodul avatar Jul 04 '21 09:07 sodul

Problem disappeared after I installed the following versions:

ansible==4.5.0
ansible-core==2.11.4
ansible-runner==1.4.7
ansible-runner-http==1.0.0
kubernetes==12.0.1
openshift==0.12.1

adalziso avatar Sep 13 '21 16:09 adalziso

Problem disappeared after I installed the following versions:

ansible==4.5.0
ansible-core==2.11.4
ansible-runner==1.4.7
ansible-runner-http==1.0.0
kubernetes==12.0.1
openshift==0.12.1

Are you sure? With the following, it still happens here

ansible-4.5.0.tar.gz
ansible-core-2.11.4.tar.gz
ansible_runner-1.4.7-py3-none-any.whl
ansible_runner_http-1.0.0-py2.py3-none-any.whl
kubernetes-12.0.1-py2.py3-none-any.whl
openshift-0.12.1.tar.gz

origliante avatar Sep 24 '21 20:09 origliante

@origliante as an updated datapoint the problem is now gone for us and we have these versions:

> pip3 freeze | egrep "(ansible|openshift|kubernetes)"
ansible==4.4.0
ansible-core==2.11.3
azure-mgmt-redhatopenshift==1.0.0
kubernetes==18.20.0
openshift==0.12.1

sodul avatar Sep 24 '21 20:09 sodul

@origliante as an updated datapoint the problem is now gone for us and we have these versions:

> pip3 freeze | egrep "(ansible|openshift|kubernetes)"
ansible==4.4.0
ansible-core==2.11.3
azure-mgmt-redhatopenshift==1.0.0
kubernetes==18.20.0
openshift==0.12.1

How can you have that?

$ poetry add kubernetes@18.20.0 openshift@0.12.1

Updating dependencies
Resolving dependencies... (0.2s)

  SolverProblemError

  Because openshift (0.12.1) depends on kubernetes (>=12.0,<13.0)
   and pltlib depends on kubernetes (18.20.0), openshift is forbidden.
  So, because pltlib depends on openshift (0.12.1), version solving failed.

pip, same story:

openshift 0.12.1 requires kubernetes~=12.0, but you have kubernetes 18.20.0 which is incompatible.
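The conflict comes from openshift 0.12.1 pinning kubernetes with the compatible-release specifier `~=12.0`, which expands to `>=12.0, ==12.*`. A hand-rolled check (pip actually uses the `packaging` library for this; the function below is only to make the rule visible) shows why 18.20.0 is rejected:

```python
def compatible_release(installed, pinned="12.0"):
    """True if `installed` satisfies `~=pinned`, i.e. >=pinned with the
    same leading segment (==12.* here). Illustrative re-implementation
    of PEP 440 compatible-release matching, not pip's own code."""
    inst = tuple(int(p) for p in installed.split("."))
    pin = tuple(int(p) for p in pinned.split("."))
    # Same prefix up to the last pinned segment, and at least the pin.
    return inst[: len(pin) - 1] == pin[: len(pin) - 1] and inst >= pin

print(compatible_release("12.0.1"))   # True
print(compatible_release("18.20.0"))  # False -- hence the solver error
```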

origliante avatar Sep 24 '21 20:09 origliante

We consider the new pip resolver to be broken: it takes several minutes just to decide how to resolve the dependency tree, and we have to use poorly managed packages such as Azure's. We do not actually use openshift, but azure insists on pulling it in.

Try --use-deprecated=legacy-resolver the next time you pip install. You'll notice that it is much, much faster and results in fewer installation errors. I understand that the new resolver is more 'correct', but until package maintainers get saner dependencies (Azure again) and pip fixes the horrendous performance of the new resolver, we will stay clear of it.

sodul avatar Sep 24 '21 20:09 sodul

There's a workaround (using older version) but what about a proper fix?

zdzichu avatar Dec 12 '21 14:12 zdzichu

@zdzichu try the following out:

From: https://docs.ansible.com/ansible/latest/collections/community/kubernetes/k8s_module.html

Please update your tasks to use the new name kubernetes.core.k8s instead. It will be removed in version 3.0.0 of community.kubernetes.

After updating the tasks to use kubernetes.core.k8s, the underlying module works.

ansible-core-2.12.0
ansible_runner-2.0.3
kubernetes-18.20.0
(no openshift)

origliante avatar Dec 13 '21 11:12 origliante

any updates on this?

I'm using a Python venv to set up the environment to run Ansible in, with the following specs, but it just doesn't work. I tested the K8S_AUTH_KUBECONFIG and KUBECONFIG env vars, with and without each other, and my kubeconfig file does work when used with the kubectl CLI.

ansible-galaxy collections:

Collection           Version
-------------------- -------
amazon.aws           3.0.0  
community.general    4.3.0  
community.kubernetes 2.0.1  
kubernetes.core      2.2.3 

pip packages:

ansible==4.10.0
ansible-compat==1.0.0
ansible-core==2.11.8
ansible-lint==5.4.0
arrow==1.2.2
bcrypt==3.2.0
binaryornot==0.4.4
boto3==1.20.54
botocore==1.23.54
bracex==2.2.1
cachetools==5.0.0
Cerberus==1.3.2
certifi==2021.10.8
cffi==1.15.0
chardet==4.0.0
charset-normalizer==2.0.12
click==8.0.3
click-help-colors==0.9.1
colorama==0.4.4
commonmark==0.9.1
cookiecutter==1.7.3
cryptography==36.0.1
enrich==1.2.7
google-auth==2.6.0
idna==3.3
Jinja2==3.0.3
jinja2-time==0.2.0
jmespath==0.10.0
kubernetes==11.0.0
MarkupSafe==2.0.1
molecule==3.6.0
oauthlib==3.2.0
packaging==21.3
paramiko==2.9.2
pathspec==0.9.0
pkg_resources==0.0.0
pluggy==1.0.0
poyo==0.5.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.21
Pygments==2.11.2
PyNaCl==1.5.0
pyparsing==3.0.7
python-dateutil==2.8.2
python-slugify==5.0.2
PyYAML==6.0
requests==2.27.1
requests-oauthlib==1.3.1
resolvelib==0.5.4
rich==11.2.0
rsa==4.8
ruamel.yaml==0.17.21
ruamel.yaml.clib==0.2.6
s3transfer==0.5.1
six==1.16.0
subprocess-tee==0.3.5
tenacity==8.0.1
text-unidecode==1.3
urllib3==1.26.8
wcmatch==8.3
websocket-client==1.2.3
yamllint==1.26.3

OlGe404 avatar Feb 14 '22 12:02 OlGe404

Have you looked at the .kube/config file? There may be an exec command line used to acquire a token:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: [...]
    server: https://[...].gr7.us-east-1.eks.amazonaws.com
  name: arn:aws:eks:us-east-1:[...]:cluster/my-cluster
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:[...]:cluster/my-cluster
    user: arn:aws:eks:us-east-1:[...]:cluster/my-cluster
  name: arn:aws:eks:us-east-1:[...]:cluster/my-cluster
current-context: arn:aws:eks:us-east-1:[...]:cluster/my-cluster
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:[...]:cluster/my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-1
      - eks
      - get-token
      - --cluster-name
      - my-cluster
      command: aws

This was causing issues when using a wait condition with kubernetes.core.k8s_info. You should change the exec part and use a static token.
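As a sketch of that suggestion (field names follow the kubeconfig v1 schema shown above; the helper name and token value are placeholders, not part of any library):

```python
def use_static_token(user_entry, token):
    """Return a copy of a kubeconfig `users` entry with the exec-based
    credential replaced by a static bearer token. Illustrative only."""
    patched = dict(user_entry)          # shallow copy; original kept intact
    patched["user"] = {"token": token}  # drops the `exec` block entirely
    return patched

user = {"name": "my-cluster", "user": {"exec": {"command": "aws"}}}
patched = use_static_token(user, "sha256~example")
print(patched["user"])  # {'token': 'sha256~example'}
```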

Mionsz avatar Apr 28 '22 15:04 Mionsz

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jul 27 '22 16:07 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Aug 26 '22 16:08 k8s-triage-robot