Fix: ServiceBinding resources are not deployed with odo deploy
Signed-off-by: Parthvi Vala [email protected]
What type of PR is this: /kind bug
What does this PR do / why we need it: ServiceBinding resources are now allowed to be deployed on the cluster with odo deploy.
Which issue(s) this PR fixes:
Fixes #5883
PR acceptance criteria:
- [ ] Unit test
- [x] Integration test
- [ ] Documentation
How to test changes / Special notes to the reviewer:
1. Run odo init --devfile go --name my-go-app --starter go-starter
2. Copy the contents of the devfile.yaml as mentioned in #5883
3. Deploy the bindable kind instance as mentioned in #5883
4. Run odo deploy
5. Run PODMAN_CMD=echo odo deploy
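For illustration only (the actual devfile contents are in #5883), a ServiceBinding declared as a Kubernetes component in a devfile could look roughly like the sketch below. All names, the target Deployment, and the bound service here are hypothetical placeholders:

```yaml
# Hypothetical devfile snippet: a ServiceBinding carried as a Kubernetes
# component so that `odo deploy` applies it to the cluster.
components:
  - name: my-servicebinding
    kubernetes:
      inlined: |
        apiVersion: binding.operators.coreos.com/v1alpha1
        kind: ServiceBinding
        metadata:
          name: my-go-app-binding          # hypothetical name
        spec:
          application:
            group: apps
            version: v1
            resource: deployments
            name: my-go-app-app            # hypothetical Deployment name
          services:
            - group: postgres-operator.crunchydata.com
              version: v1beta1
              kind: PostgresCluster
              name: my-cluster             # hypothetical bindable service
```
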
Deploy Preview for odo-docusaurus-preview canceled.
Name | Link |
---|---|
Latest commit | 8fdafc5579b9db593357cd60da324f2c34eded2d |
Latest deploy log | https://app.netlify.com/sites/odo-docusaurus-preview/deploys/63063c974ce490000aecc929 |
Unit Tests on commit a66b4eab6bde4607fd3de45c03e414ea0e97b748 finished successfully. View logs: TXT HTML
Validate Tests on commit a66b4eab6bde4607fd3de45c03e414ea0e97b748 finished successfully. View logs: TXT HTML
Kubernetes Tests on commit a66b4eab6bde4607fd3de45c03e414ea0e97b748 finished successfully. View logs: TXT HTML
OpenShift Tests on commit a66b4eab6bde4607fd3de45c03e414ea0e97b748 finished successfully. View logs: TXT HTML
Windows Tests (OCP) on commit ee0dae9e249080ec2790a7dd40402466ffa647ad finished with errors. View logs: TXT HTML
The change is small, but complicates the logic of the code:
- For odo dev, we are using PushLinks to create the bindings with or without operators
- For odo deploy, we are directly applying the ServiceBinding resource

I think that some refactoring around the functions PushLinks, pushDevfileKubernetesComponents, PushKubernetesResources and PushKubernetesResource would be helpful, so the code would be usable for odo dev and odo deploy in a nice way.
@feloy There were a few complications with the refactoring when SBO is not installed. Since we're using the library, it is required that the deployment in question be present before creating the secret. Because of this, we'll create the link after the component deployment is created if SBO is not present; otherwise it'll be created normally like any other K8s component.
With odo deploy, we don't really need to deal with the without-SBO scenario.
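The ordering described above can be sketched as follows. This is a minimal illustration, not odo's actual implementation; the function and type names (pushResources, resource) are hypothetical stand-ins for the real PushKubernetesResources code path:

```go
package main

import "fmt"

// resource is a hypothetical stand-in for a Kubernetes resource to apply.
type resource struct {
	kind string
	name string
}

// pushResources applies ordinary resources first and the component
// Deployment afterwards. ServiceBindings are deferred until after the
// Deployment only when the Service Binding Operator (SBO) is not
// installed, because the binding library needs the target Deployment to
// exist before it can create the secret. With SBO installed, the
// ServiceBinding is applied like any other K8s component.
func pushResources(resources []resource, sboInstalled bool) []string {
	var order []string
	var deferred []resource
	for _, r := range resources {
		if r.kind == "ServiceBinding" && !sboInstalled {
			deferred = append(deferred, r)
			continue
		}
		order = append(order, fmt.Sprintf("apply %s/%s", r.kind, r.name))
	}
	order = append(order, "apply Deployment/my-go-app")
	for _, r := range deferred {
		order = append(order, fmt.Sprintf("apply %s/%s", r.kind, r.name))
	}
	return order
}

func main() {
	res := []resource{
		{"ConfigMap", "cfg"},
		{"ServiceBinding", "binding"},
	}
	// SBO present: the ServiceBinding is applied like any other component.
	fmt.Println(pushResources(res, true))
	// SBO absent: the ServiceBinding is applied after the Deployment.
	fmt.Println(pushResources(res, false))
}
```
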
@valaparthvi Did you try creating all Kubernetes resources (servicebindings and all other resources) after the deployment?
Theoretically, I think that would work, but then we also talked about creating all the resources beforehand to avoid restarting the pod and to speed up the process. So I didn't go that way.
Thinking again about this, it's a matter of milliseconds. We would create Deployment and SBs very quickly, we are not 100% sure in which order they will be handled by controllers on the cluster.
And creating the SB before the Deployment can raise an error for the SBO, because the Deployment does not exist when the SB is created, and I'm not sure how quickly that error is recovered from.
It seems to be working fine; creating the SB before the Deployment has not been a problem so far.
Alright, let's go with creating K8s resources after the core deployment is created; it seems logical.
I'm trying to explore this direction, as it seems the most generic one, so we don't have to handle specific cases at high levels, and we can "hide" the fact that we are using the SBO library at the lowest possible level (in the kclient package).
I find it difficult to imagine handling this case in the kclient package.
Or in the pkg/service package?
Yes, I was thinking that too.
The failures are real, I'm debugging them right now.
The OpenShift test failure is probably a flake, or it failed because of a resource crunch. The test passes locally.
odo delete command tests
/go/odo_1/tests/integration/cmd_delete_test.go:13
when deleting a component containing preStop event that is deployed with DEV [BeforeEach]
/go/odo_1/tests/integration/cmd_delete_test.go:257
should contain preStop events list
/go/odo_1/tests/integration/cmd_delete_test.go:270
...
...
...
Timed out after 180.001s.
Expected
<string>: __
/ \__ Developing using the nodejs Devfile
\__/ \ Namespace: cmd-delete-test270afe
/ \__/ odo version: v3.0.0-beta3
\__/
↪ Deploying to the cluster in developer mode
• Waiting for Kubernetes resources ...
⚠ Pod is Pending
⚠ 0/3 nodes are available: 3 Insufficient memory.
⚠ 0/3 nodes are available: 3 Insufficient memory.
⚠ 0/3 nodes are available: 3 Insufficient memory.
to contain substring
<string>: Press Ctrl+c to exit
Insufficient memory
This error is probably due to old pods from old tests running on the cluster. Could you connect to the OpenShift cluster and check if there are old test namespaces? I have also seen test pods in the default namespace this week; I'm not sure where they were coming from.
Kudos, SonarCloud Quality Gate passed!
0 Bugs
0 Vulnerabilities
0 Security Hotspots
3 Code Smells
No Coverage information
0.3% Duplication
/lgtm The code looks good. I'll try to run more tests before approving.
/approve
/override ci/prow/unit-and-validate-test /override ci/prow/v4.10-integration-e2e Infra errors Tests pass on IBM Cloud
@feloy: Overrode contexts on behalf of feloy: ci/prow/unit-and-validate-test, ci/prow/v4.10-integration-e2e
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: feloy
The full list of commands accepted by this bot can be found here.
The pull request process is described here
- ~~OWNERS~~ [feloy]
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment