kubetest
Setup Kubernetes resource for running tests against
Hi everyone!
What I am trying to do is the following: set up MinIO once and run multiple tests against it. With a bit of googling I found out that there is a `setup_class`, which could be a good starting point for what I am trying to do. However, I need the `kube` fixture, which cannot be used in `setup_class`. So basically this:
```python
import pytest
from kubetest import client


class TestS3:
    @pytest.mark.applymanifests('../examples/minio')
    def setup_class(self, kube: client.TestClient):
        print("setting up and waiting for minio to be ready")
        minio_deployment = kube.get_deployments(labels={"app": "minio"})["minio"]
        minio_deployment.wait_until_ready(3 * 60)
        # minio_service = kube.get_services(labels={"app": "minio"})
        # print(minio_service)

    def test_one(self):
        print("one")
        assert True

    def test_two(self):
        print("two")
        assert True
```
complains about:

```
TypeError: setup_class() missing 1 required positional argument: 'kube'
```
Would really appreciate any suggestions about how I can work around that and setup a minio deployment before running the tests.
Thanks!
Hi, thanks for opening an issue. kubetest is a pytest plugin, so it's intended to be used with tests run under pytest. I don't believe pytest has a notion of `setup_class` -- I believe that is something which is part of the python standard library's `unittest`.
The equivalent in pytest would be to use a fixture. A fixture can be used as a class or function setup/teardown, depending on the scope of the fixture. For example, for a test class:
```python
class TestS3:
    def test_one(self):
        assert True
```
you could define a fixture for the class:
```python
@pytest.fixture(scope='class')
def setup_deployment(kube):
    minio_deployment = kube.load_deployment('../examples/minio/deploy.yaml')
    minio_deployment.wait_until_ready(3 * 60)
```
and then it can be consumed by the `usefixtures` marker on the test class:
```python
@pytest.mark.usefixtures('setup_deployment')
class TestS3:
    def test_one(self):
        assert True
```
So now, it will effectively be the same as a class-level setup. If you also want teardown capabilities in the same fixture, just yield, and add the teardown logic after the yield:
```python
@pytest.fixture(scope='class')
def setup_deployment(kube):
    # Setup logic goes before the yield
    minio_deployment = kube.load_deployment('../examples/minio/deploy.yaml')
    minio_deployment.wait_until_ready(3 * 60)
    yield
    # Teardown logic goes after the yield
```
Something I'm seeing now which is a bit unfortunate is that there is no equivalent to the `applymanifests` marker as a direct call off of the TestClient. It should be relatively easy to add, since the logic for it mostly already exists for the marker, but there is currently no function like

```python
kube.load('./all/manifests')
```

which will work. Instead, you'll have to target each manifest individually with the resource-based loaders, e.g.
```python
deploy = kube.load_deployment('deployment.yaml')
svc = kube.load_service('service.yaml')
```
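Until a directory loader exists, one possible stopgap is a small dispatcher that peeks at each manifest's `kind` and calls the matching loader. This is a hypothetical helper, not part of kubetest; only `load_deployment` and `load_service` from above are assumed, and the kind-to-loader mapping would need extending for other resource types:

```python
# Hypothetical helper, NOT part of kubetest: walk a manifest directory and
# hand each file to the matching resource-based loader based on its `kind`.
import os
import re

# Map Kubernetes kinds to kubetest loader method names (extend as needed).
LOADERS = {
    "Deployment": "load_deployment",
    "Service": "load_service",
}

def load_manifest_dir(kube, manifest_dir):
    loaded = []
    for name in sorted(os.listdir(manifest_dir)):
        if not name.endswith((".yaml", ".yml")):
            continue
        path = os.path.join(manifest_dir, name)
        with open(path) as f:
            # naive: find the first top-level `kind:` line in the file
            match = re.search(r"^kind:\s*(\S+)", f.read(), re.MULTILINE)
        if match and match.group(1) in LOADERS:
            loaded.append(getattr(kube, LOADERS[match.group(1)])(path))
    return loaded
```

It's naive (one document per file, no `---` multi-document handling), but it avoids hand-listing every manifest.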
Hope that was helpful -- let me know if you have any other questions!
Hi @edaniszewski, thanks for your prompt response! This version:
```python
import pytest


@pytest.fixture(scope='class')
def setup_deployment(kube):
    minio_deployment = kube.load_deployment('../examples/minio/deploy.yaml')
    minio_deployment.wait_until_ready(3 * 60)


@pytest.mark.usefixtures('setup_deployment')
class TestS3:
    def test_one(self):
        assert True

    def test_two(self):
        assert True
```
seems to be problematic as well, as it throws:
```
_____________________ ERROR at setup of TestS3.test_one _____________________
ScopeMismatch: You tried to access the 'function' scoped fixture 'kube' with a 'class' scoped request object, involved factories
Workspaces/hybrid-storage/dataset-lifecycle-framework/tests/test_s3.py:3: def setup_deployment(kube)
.local/share/virtualenvs/dataset-lifecycle-framework-aVqSm6wp/lib/python3.6/site-packages/kubetest/plugin.py:433: def kube(kubeconfig, request) -> kubetest.client.TestClient
```
Oh! Right. Sorry, I ran into this scoping issue before and just completely forgot about it.
The alternative to get things working would instead be to scope the fixture at the function level (which is the default scope), and then to apply the fixture at the function level:
```python
import pytest


@pytest.fixture()
def setup_deployment(kube):
    minio_deployment = kube.load_deployment('../examples/minio/deploy.yaml')
    minio_deployment.wait_until_ready(3 * 60)


class TestS3:
    @pytest.mark.usefixtures('setup_deployment')
    def test_one(self):
        assert True

    @pytest.mark.usefixtures('setup_deployment')
    def test_two(self):
        assert True
```
This is probably less than ideal, because it creates a new deployment for each test case instead of once per class of tests, so there is a good deal of overhead. I've been working on and off on figuring out how to resolve this scoping issue, but it gets a bit deep into pytest mechanics and is sadly not as easy a problem to solve as I would hope, so I don't see support for class-level scoping being available for a while.
Ah yes, somehow I was able to achieve what you are suggesting here, but setting up minio for every test case is exactly what I wanted to avoid. I think I would prefer this approach instead:
```python
from kubetest import client


class TestS3:
    def setup_class(self):
        print("setting up and waiting for minio to be ready")
        # OBTAIN THE NAMESPACE THE TEST IS RUN IN
        # USING THE KUBERNETES PYTHON CLIENT, CREATE THE MINIO DEPLOYMENT
        # WAIT FOR IT TO BE READY

    def test_one(self, kube: client.TestClient):
        print("one")
        assert True

    def test_two(self, kube: client.TestClient):
        print("two")
        assert True
```
So, follow-up question: is there a way to retrieve the namespace the test is run in, so I can pass it to the kubernetes python client?
Unfortunately, not within the scope of the `setup_class` function here. The namespace is autogenerated and uses a timestamp to help make it unique, so it's not something that could be easily determined without being passed the value. The TestClient has a `namespace` attribute, but the TestClient is only available within function scope, since the namespace and k8s objects are created just before the test is run.
The initial design of kubetest and having it be function-centric instead of class-centric is pretty limiting and something that I do want to remedy, but I just haven't had much time to rework all the pieces, so it'll take a while.
I agree, class-based fixtures would be the right way to go, and it's pretty compelling given the use case.
Yeah, I think it would be an extremely useful improvement. One final attempt at a workaround: would it be possible to disable the teardown hook on a function? I was thinking something like this:
```python
def setup_but_not_teardown():
    ...

def test_against_deployment():
    ...
```
I haven't tried it, but I think the answer is technically yes; however, I believe there are a lot of hoops to jump through to get that to work, and even then I'm not positive it would work, so I'm going to say probably not.
One thing I was thinking about: it seems like the sticking point here is doing the manifest apply before the tests run. While annoying, and possibly somewhat counter to the point of having a kubernetes testing plugin, I think you should be able to start the deployment/service/etc. external to kubetest (e.g. via a shell script which runs prior to executing tests, or via a python script which uses the kubernetes client to do the manifest apply). Then you should be able to run tests with kubetest against that namespace.
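As a sketch of the shell-script route, under the assumption that `kubectl` is pointed at the test cluster (the `test-ns` namespace name, the manifest path, and the `minio` deployment name are placeholders to adapt):

```shell
#!/usr/bin/env sh
# Hypothetical pre-test setup, run before invoking pytest: create a fixed
# namespace, apply the minio manifests into it, and wait for the rollout.
set -e

kubectl create namespace test-ns
kubectl apply --namespace test-ns -f ../examples/minio/
kubectl rollout status --namespace test-ns deployment/minio --timeout=180s
```

The tests can then target that fixed namespace instead of an autogenerated one.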
To do so, you could use the `@pytest.mark.namespace` marker to disable creating a new namespace and use an existing one instead, e.g.
```python
class TestS3:
    @pytest.mark.namespace(create=False, name='test-ns')
    def test_one(self):
        assert True
```
It's a bit of a headache and a workaround for some poor initial planning, but at this point it kinda seems like any other option would be as well.
Yes, I agree that this is probably the "cleanest" workaround. Thanks again!
I hope it works! Sorry for all the roundabout implementations of these "solutions". I'm hoping to do a bit of a revamp of the project to make it more functional/useful, so going through your use case and feeling its pain points does help inform the areas I need to focus on.