support for asyncio
Hi guys.
Would you consider providing this client library with support for asyncio?
Regards, Tomasz Prus
(copy of https://github.com/kubernetes-client/python-base/issues/27)
I've prepared a pull request that shows how callbacks can be used with asyncio. Let me know what you think about it. Thanks.
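To illustrate the general idea (this is a rough sketch, not the PR's actual code, and it assumes the generated API methods accept a callback= keyword argument that fires on a worker thread, as the swagger-generated client did at the time):

import asyncio
from kubernetes import client, config

def call_with_future(fn, loop, *args, **kwargs):
    # Bridge a callback-style client method to an asyncio Future.
    fut = loop.create_future()

    def on_done(response):
        # The callback fires in the client's worker thread, so hand the
        # result back to the event loop thread safely.
        loop.call_soon_threadsafe(fut.set_result, response)

    fn(*args, callback=on_done, **kwargs)
    return fut

async def main():
    config.load_kube_config()
    v1 = client.CoreV1Api()
    loop = asyncio.get_event_loop()
    pods = await call_with_future(v1.list_pod_for_all_namespaces, loop, watch=False)
    for item in pods.items:
        print(item.metadata.namespace, item.metadata.name)

asyncio.get_event_loop().run_until_complete(main())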
I just experimented with swagger-codegen v2.3.1, which supports generating an asyncio+aiohttp based client library, so that we could use a "native" async interface instead of wrapping callbacks in futures.
What I did to make it work, starting from the kubernetes-client/gen repository:
- Set SWAGGER_CODEGEN_COMMIT in openapi/python.sh to "v2.3.1".
- Add <library>asyncio</library> under the <configuration> element of openapi/python.xml.
- Run the generator script as instructed. My config was: export KUBERNETES_BRANCH="release-1.8", export CLIENT_VERSION="1.0" (just a random value), export PACKAGE_NAME="kubernetes".
- After generation, restructure the generated client library to match the structure guide (UPDATE: these are the "small changes" in @tomplus's comment in #324):
  - Relocate the generated kubernetes Python package to kubernetes/client.
  - Add python-base as a kubernetes/base git submodule.
  - Create symbolic links kubernetes/stream, kubernetes/config, kubernetes/watch like this repo.
  - Create kubernetes/__init__.py like this repo.
  - Replace all absolute imports with relative imports (e.g., change from kubernetes.api_client.apps_v1beta1_deployment import AppsV1beta1Deployment in the generated code to from .apps_v1beta1_deployment import AppsV1beta1Deployment); a rough script for this step is sketched after this list.
  - Merge the requirements of python-base and the generated library, then pip install -e . inside a virtualenv.
  - Fix up circular import errors related to V1beta1JSONSchemaProps.
  - NOTE: I think the find and sed commands in the generator's python.sh script should make the above changes automatically, but it didn't work out of the box, so I did it manually.
- Comment out the ssl_verify argument in kubernetes/client/api_client.py, since aiohttp forbids passing both ssl_context and ssl_verify (the latter is included in the former).
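As a rough illustration of the import-rewrite step above (not part of the original procedure; the package location and import shape are assumptions), a throwaway script along these lines could do the mechanical part:

import re
from pathlib import Path

# Assumed location of the relocated, generated package.
PACKAGE_DIR = Path("kubernetes/client")

# Rewrite "from kubernetes.<subpackage>.<module> import Name" into a
# relative "from .<module> import Name" within the generated package.
ABS_IMPORT = re.compile(r"^from kubernetes\.[\w.]*?(\w+) import", re.MULTILINE)

for path in PACKAGE_DIR.rglob("*.py"):
    source = path.read_text()
    rewritten = ABS_IMPORT.sub(r"from .\1 import", source)
    if rewritten != source:
        path.write_text(rewritten)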
Then I could run the following example:
import asyncio
from kubernetes import client, config

loop = asyncio.get_event_loop()

async def do():
    config.load_kube_config()
    v1 = client.CoreV1Api()
    ret = await v1.list_pod_for_all_namespaces(watch=False)
    for i in ret.items:
        print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))

loop.run_until_complete(do())
loop.close()
So the majority of APIs can be converted to use asyncio + aiohttp without much effort, thanks to Swagger. I hope my "manual" fix-ups can be automated somehow! Still, the python-base streaming wrappers would need to be rewritten from scratch.
asyncio+aiohttp support would be really appreciated!
I use aiohttp as a server which accesses the Python client, so this would be very beneficial to me as well.
Here's how you can do an async watch using the existing client (minus error handling):
import asyncio
import json

import aiohttp

from kubernetes import config
from kubernetes.client import ApiClient, Configuration, CoreV1Api, ExtensionsV1beta1Api

class Connector(object):
    def __init__(self, q: asyncio.Queue, loop: asyncio.AbstractEventLoop, timeout: int = 30):
        # Build a private Configuration instead of mutating the shared default one.
        client_config = type.__call__(Configuration)
        config.load_kube_config(client_configuration=client_config, persist_config=False)
        client = ApiClient(configuration=client_config)
        self.__configuration = client_config
        self.__v1 = CoreV1Api(client)
        self.__extv1beta1 = ExtensionsV1beta1Api(client)
        self.__queue = q
        self.__loop = loop

    async def start(self):
        async with aiohttp.ClientSession() as session:
            await asyncio.gather(
                self.__watch(session=session, api=self.__v1, url='/api/v1/endpoints', kind='V1Endpoints'),
                self.__watch(session=session, api=self.__v1, url='/api/v1/services', kind='V1Service'),
                self.__watch(session=session, api=self.__extv1beta1, url='/apis/extensions/v1beta1/ingresses', kind='V1beta1Ingress'),
                loop=self.__loop)

    async def __watch(self, session: aiohttp.ClientSession, api, url: str, kind: str):
        params = {'watch': 'true'}
        async with session.get(self.__configuration.host + url, params=params,
                               headers=self.__configuration.api_key) as response:
            while True:
                # Each line of the chunked watch response is one JSON event.
                chunk = await response.content.readline()
                if not chunk:
                    break
                j = json.loads(chunk)
                # Reuse the generated client's deserializer to turn the raw dict
                # into the corresponding model class.
                j['object'] = api.api_client._ApiClient__deserialize(data=j['object'], klass=kind)
                await self.add_event(j)
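A minimal way to drive this class could look as follows (a sketch only: add_event is not shown in the snippet above, so wiring it straight to the queue is an assumption):

async def main():
    loop = asyncio.get_event_loop()
    queue = asyncio.Queue()
    connector = Connector(q=queue, loop=loop)
    # add_event is not defined above; assume it simply enqueues the event.
    connector.add_event = queue.put
    asyncio.ensure_future(connector.start())
    while True:
        event = await queue.get()
        print(event['type'], event['object'].metadata.name)

asyncio.get_event_loop().run_until_complete(main())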
At last I've prepared the new library kubernetes_asyncio, which is based on this Python client but uses the asyncio generator from swagger-codegen. I've added a script to create this client to the kubernetes-client/gen repo in this PR: https://github.com/kubernetes-client/gen/pull/60. I've also decided to incorporate the functionality from the kubernetes-client/python-base sub-repository because many asyncio-related changes were needed.
Please take a look, thanks.
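For anyone who wants to try it, a minimal usage sketch, assuming the library mirrors the sync client's API with awaitable calls (see the kubernetes_asyncio README for the authoritative example):

import asyncio
from kubernetes_asyncio import client, config

async def main():
    # In kubernetes_asyncio, loading the kubeconfig is itself a coroutine.
    await config.load_kube_config()
    v1 = client.CoreV1Api()
    ret = await v1.list_pod_for_all_namespaces()
    for i in ret.items:
        print(i.metadata.namespace, i.metadata.name)

asyncio.get_event_loop().run_until_complete(main())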
Inspired by @tomplus I have followed in his footsteps and created aiokubernetes.
Unlike kubernetes_asyncio, it is not backwards compatible. The user-visible changes are actually minor, but they allowed me to remove unused Python 2.x code paths and drop Python 3.5 in favor of Python 3.7 (perhaps relevant for #558).
I added documentation and usage examples and would be grateful for any feedback - thank you.
Please see https://github.com/kubernetes-client/python/pull/324#issuecomment-408554991 for the discussion about experimenting with an asyncio library in this repo.
@olitheolix we need to stay backwards compatible for this package. I haven't seen reasons for dropping Python 3.5 (or even a strong need to fully support 3.7) yet. What has been changed?
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
/remove-lifecycle stale
Just curious, @tomplus, what was your motivation for doing this?
Can't sync libraries be used as a drop-in inside an async application/context?
I'm not Tom, but I have used kubernetes_asyncio in the past to build a highly stateful NGINX Plus controller based on watches, which requires quite a bit of coordination between different parts of the system. Having a native async library makes this task a lot easier.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale