
flask app multithread using kubernetes-client: ApiException(0) Handshake status 200 OK

seuponder opened this issue 5 years ago · 11 comments

Traceback:

```
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 2309, in __call__
    return self.wsgi_app(environ, start_response)
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 2295, in wsgi_app
    response = self.handle_exception(e)
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1741, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 2292, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1815, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1718, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1813, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1799, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/usr/local/lib/python2.7/dist-packages/flask_login/utils.py", line 261, in decorated_view
    return func(*args, **kwargs)
  File "/home/h37/X_Server/ServerController/app/views/user_view.py", line 78, in user_center
    navigations=view_util.get_navi_path(proj_name),
  File "/home/h37/X_Server/ServerController/app/views/view_util.py", line 98, in get_navi_path
    branches = KubeCtl().get_project_hold_server_branches(project_name)
  File "/home/h37/X_Server/ServerController/app/kubectl/KubeCtl.py", line 381, in get_project_hold_server_branches
    pod_lst = self.v1_api.list_namespaced_pod(project_name)
  File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/apis/core_v1_api.py", line 12372, in list_namespaced_pod
    (data) = self.list_namespaced_pod_with_http_info(namespace, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/apis/core_v1_api.py", line 12472, in list_namespaced_pod_with_http_info
    collection_formats=collection_formats)
  File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/api_client.py", line 334, in call_api
    _return_http_data_only, collection_formats, _preload_content, _request_timeout)
  File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/api_client.py", line 168, in __call_api
    _request_timeout=_request_timeout)
  File "/usr/local/lib/python2.7/dist-packages/kubernetes/stream/stream.py", line 31, in _intercept_request_call
    return ws_client.websocket_call(config, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/kubernetes/stream/ws_client.py", line 260, in websocket_call
    raise ApiException(status=0, reason=str(e))
ApiException: (0)
Reason: Handshake status 200 OK
```

My Flask application runs in multithreaded mode.

One request executes connect_post_namespaced_pod_exec (which I invoke through stream()), and that call can take minutes to finish. While it is still running, other ordinary requests come in, for example list_namespaced_pod.

Those requests throw the traceback shown above. I found that stream() is implemented by replacing api_client.request with a ws_client interceptor:

```python
def stream(func, *args, **kwargs):
    """Stream given API call using websocket"""

    def _intercept_request_call(*args, **kwargs):
        # old generated code's api client has config. new ones has
        # configuration
        try:
            config = func.__self__.api_client.configuration
        except AttributeError:
            config = func.__self__.api_client.config
        return ws_client.websocket_call(config, *args, **kwargs)

    prev_request = func.__self__.api_client.request
    try:
        func.__self__.api_client.request = _intercept_request_call
        return func(*args, **kwargs)
    finally:
        func.__self__.api_client.request = prev_request
```

So while connect_post_namespaced_pod_exec has not finished, every other request on the same client is routed through the websocket interceptor and raises the traceback above.
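The race does not depend on Kubernetes at all; this pure-Python sketch (all class and path names here are hypothetical) imitates the request swap that stream() performs on a shared client:

```python
import threading
import time

class FakeApiClient:
    """Stand-in for a shared API client (hypothetical)."""
    def request(self, path):
        return "http:" + path  # normal HTTP code path

api_client = FakeApiClient()  # shared by all threads, as in the issue
results = []

def slow_stream_call():
    # Mimics stream(): swap .request for a websocket interceptor,
    # run a long exec, then restore the original method.
    prev = api_client.request
    api_client.request = lambda path: "ws:" + path
    try:
        time.sleep(0.2)  # the minutes-long exec, scaled down
        results.append(api_client.request("/exec"))
    finally:
        api_client.request = prev

def normal_call():
    time.sleep(0.05)  # arrives while the stream call is in flight
    results.append(api_client.request("/pods"))

t1 = threading.Thread(target=slow_stream_call)
t2 = threading.Thread(target=normal_call)
t1.start(); t2.start()
t1.join(); t2.join()

# The ordinary /pods request was routed through the websocket
# interceptor instead of plain HTTP.
print(results)
```

In the real client the misrouted /pods request attempts a websocket handshake against a plain HTTP endpoint, which is exactly the ApiException(0) "Handshake status 200 OK" above.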

My question: does the Kubernetes Python client not support multithreaded use? Or is there a way to solve this problem?

My fallback plan is to execute commands via os.system('cmd'), but that would force my Flask application to be deployed on the cluster master node.

seuponder avatar Aug 13 '19 13:08 seuponder

/assign @roycaihw

yliaog avatar Aug 13 '19 17:08 yliaog

I agree that the "replacing api_client.request" behavior doesn't play well when you run a stream request and non-stream requests concurrently on the same client.

@tomplus Does the asyncio client solve this problem?

Also, this is a general problem whenever you reuse a client for non-stream requests after a stream request.
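One possible mitigation, sketched here as an illustration rather than an official recommendation: give each stream() call its own throwaway ApiClient, so the temporary replacement of .request never touches the client shared by ordinary requests. The helper name and the exec arguments below are assumptions:

```python
def exec_in_pod(name, namespace, command):
    # Imports kept local so the sketch is self-contained.
    from kubernetes import client, stream

    # Dedicated client: stream() swaps out only this client's
    # .request for the duration of the call, so a CoreV1Api
    # shared by other threads is never affected.
    api = client.CoreV1Api(api_client=client.ApiClient())
    return stream.stream(
        api.connect_get_namespaced_pod_exec,
        name,
        namespace,
        command=command,
        stderr=True, stdin=False, stdout=True, tty=False,
    )
```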

/kind bug

roycaihw avatar Aug 13 '19 20:08 roycaihw

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot avatar Nov 11 '19 21:11 fejta-bot

/remove-lifecycle stale

I may be running into a similar situation when attempting to use a proposed ansible k8s_exec module with a slow-running command via connect_get_namespaced_pod_exec.

Related issue: https://github.com/geerlingguy/tower-operator/issues/5#issuecomment-554507041

geerlingguy avatar Nov 15 '19 20:11 geerlingguy

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot avatar Feb 13 '20 20:02 fejta-bot

/remove-lifecycle stale

geerlingguy avatar Feb 13 '20 20:02 geerlingguy

/unassign /lifecycle frozen

roycaihw avatar Feb 13 '20 23:02 roycaihw

Hello, any updates here? Or is there maybe a workaround for the multithreading case?

anikin-aa avatar Mar 18 '20 12:03 anikin-aa

This is horrible and I hate that this works for my purposes:

passing `command=['longscript.sh', '&']` to the exec call.

The `'&'` lets my long-running process fork to a background task in the shell rather than waiting for a return. I would prefer an async return, but when Flask gives you lemons...

Tokugero avatar Mar 31 '20 23:03 Tokugero

I've been having the same issue and believe it's related to #1158: because the connection never closes, the socket becomes unusable by parallel applications.

Instead of creating a CoreV1Api (or whichever API class you use) once up front, keep a reference to the class and instantiate the object every time you need it. Still not a pretty solution, but I think it's better than restarting your application from time to time.

Here is a script with a never-ending connection:

```python
import os
import subprocess
import time

from kubernetes import client as k8s_client
from kubernetes import config as k8s_config

# This is how you would normally do it
k8s_config.load_kube_config()
core_api = k8s_client.CoreV1Api()

core_api.list_namespace()

# Change this to whatever you like, just to prove that the connection never closes
time.sleep(5)

r = subprocess.check_output(["lsof", "-n", "-p", str(os.getpid()), "-a", "-i4"])

print(r.decode())
```

Here is a script that closes the connection:

```python
import os
import subprocess
import time

from kubernetes import client as k8s_client
from kubernetes import config as k8s_config

# Reference the class instead of creating the object
k8s_config.load_kube_config()
core_api = k8s_client.CoreV1Api

# Create the object every time you need to use it
core_api().list_namespace()

r = subprocess.check_output(["lsof", "-n", "-p", str(os.getpid()), "-a", "-i4"])

print(r.decode())
```

Edits were just formatting (new to this, sorry)

g-crocker avatar Aug 10 '20 16:08 g-crocker

Unfortunately g-crocker's approach didn't work for me; I had to save the kube config and reconstruct the CoreV1Api and/or AppsV1Api when I wanted to make further API calls after making calls on background threads.
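A minimal sketch of that approach (the helper name is hypothetical, and Configuration.get_default_copy assumes a reasonably recent client release): load the configuration once, then rebuild the API objects on a fresh ApiClient whenever calls follow a background-thread stream call:

```python
def make_apis():
    # Imports kept local so the sketch is self-contained.
    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() in-cluster
    conf = client.Configuration.get_default_copy()

    # Fresh ApiClient every time; never reuse the one a stream
    # call may have left in a bad state.
    api_client = client.ApiClient(conf)
    return client.CoreV1Api(api_client), client.AppsV1Api(api_client)
```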

peterhorsley avatar Aug 29 '23 09:08 peterhorsley