
Improve e2e tests to help nbd tests

Open pkalever opened this issue 3 years ago • 5 comments

Describe the bug

Improve e2e tests to help nbd tests:

Add the following intelligence to the e2e utilities (a sketch follows this list):

  • Check all the nodes for a kernel version that is stable enough for nbd
  • If any node has the expected kernel version, schedule the application pod on that specific node
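
For illustration, a minimal sketch of such a helper (hypothetical: findNBDCapableNode and the 5.4 kernel threshold are assumptions, not existing ceph-csi code). It relies on the kernel version that kubelet already reports in node status, so no per-node uname -r exec is needed:

```go
// Hypothetical sketch: pick a node whose kernel is assumed new enough
// for rbd-nbd, using the kernel version kubelet reports in node status.
package e2e

import (
	"context"
	"fmt"
	"strconv"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// Assumed minimum kernel for stable nbd; the real threshold would need
// to be confirmed against rbd-nbd requirements.
const minMajor, minMinor = 5, 4

func findNBDCapableNode(ctx context.Context, c kubernetes.Interface) (string, error) {
	nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return "", err
	}
	for _, n := range nodes.Items {
		// KernelVersion looks like "5.4.0-81-generic"; parse major.minor.
		parts := strings.SplitN(n.Status.NodeInfo.KernelVersion, ".", 3)
		if len(parts) < 2 {
			continue
		}
		major, errMaj := strconv.Atoi(parts[0])
		minor, errMin := strconv.Atoi(strings.SplitN(parts[1], "-", 2)[0])
		if errMaj != nil || errMin != nil {
			continue
		}
		if major > minMajor || (major == minMajor && minor >= minMinor) {
			return n.Name, nil
		}
	}
	return "", fmt.Errorf("no node with kernel >= %d.%d found", minMajor, minMinor)
}
```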

Depends on:

  • [ ] https://github.com/ceph/ceph-csi/issues/857

pkalever avatar Aug 24 '21 08:08 pkalever

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.

github-actions[bot] avatar Sep 23 '21 21:09 github-actions[bot]

@Madhu-1 I remember this was your suggestion (a comment on some PR review). If you still remember, was it meant for e2e only or for the actual workflow?

If it is for the actual workflow, how can ceph-csi check for the preferred kernel version on all the hosts? Is there anything currently in our framework that can easily run the uname -r command on all the nodes and return the output? Can you please provide more technical details on the implementation front?

Also, how do we set the affinity for the real application pods from ceph-csi?
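
(For illustration only, a minimal sketch of pinning a test pod to a chosen node via a node selector on the standard kubernetes.io/hostname label; scheduleAppOnNode is a hypothetical helper, not existing ceph-csi code:)

```go
// Hypothetical sketch: pin the e2e application pod to the selected node
// with a node selector on the standard kubernetes.io/hostname label.
package e2e

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func scheduleAppOnNode(ctx context.Context, c kubernetes.Interface, ns, nodeName string) error {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "csi-rbd-nbd-app"},
		Spec: v1.PodSpec{
			NodeSelector: map[string]string{
				// nodeName would come from a helper like findNBDCapableNode above.
				"kubernetes.io/hostname": nodeName,
			},
			Containers: []v1.Container{{
				Name:    "app",
				Image:   "quay.io/centos/centos:latest",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	_, err := c.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
```

A richer variant could use node affinity instead of a bare node selector, but for an e2e test, pinning by the hostname label is the simplest option.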

Maybe I'm thinking in the wrong direction or have forgotten the gist of raising this issue :-)

Thanks!

pkalever avatar Oct 06 '21 12:10 pkalever

@Madhu-1 Given that our e2e runs on a single-node minikube instance, I don't think we need this now?

pkalever avatar Oct 27 '21 12:10 pkalever

For now it's a single-node minikube; the plan is to run with a multi-node minikube cluster.

Madhu-1 avatar Oct 27 '21 12:10 Madhu-1

For now it's a single-node minikube; the plan is to run with a multi-node minikube cluster.

@Madhu-1

Yes, I understand this part. What I mean to say is that this might need some work in our e2e once we have a multi-node minikube cluster, and there is nothing to be done for the existing e2e runs.

There are two things that we can do about this issue:

  1. Wait until we have a multi-node minikube cluster and address this then, which is a long-term wait.
  2. Close this now, as we do not have a multi-node minikube cluster currently (to some extent we started worrying about something that doesn't exist :-) )

I will leave it up to you!

Thanks!

pkalever avatar Oct 27 '21 13:10 pkalever