:bug: Fix possible leak (WIP)
Not closing an os.File does not necessarily, but may, lead to a leak. It is better to close it explicitly instead of waiting for the next GC run to trigger the finalizer.
Summary
Related issue(s)
Fixes #
Release Notes
NONE
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign mjudeikis for approval. For more information see the Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
Hi @cupakob. Thanks for your PR.
I'm waiting for a kcp-dev member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
Once the patch is verified, the new status will be reflected by the ok-to-test label.
I understand the commands that are listed here.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
/retest
/retest
Hi @cupakob, I'm not sure this is a flaky test. Have you verified that the same test works for you locally?
Nope, in the log output I don't really see a problem related to my changes.
I will run it locally, maybe I will see more.
/retest
@embik currently I don't have any changes in the branch (all changes are commented out), but the tests are still failing... it looks like flaky tests, right?
I think you're right, the test occasionally flakes. You can see in older runs that it used to fail sometimes before this PR. Apologies for the confusion caused by it.
I think the consistent failures we saw earlier in this PR had one difference, though: they ran into the 2h timeout, so those jobs were running for much longer. The flakes usually happen within the first 15-20 minutes.
all good, not a problem at all ;)
Okay, but I don't have an explanation for how the defer function where I close the file can cause a 2h timeout... any hints are welcome.
/retest
@cupakob is this blocked on anything? Anything we can help with?
@cupakob is this blocked on anything? Anything we can help with?
@sttts I have no idea what exactly is wrong and how some defer functions could cause a 2h timeout. I will rebase on main and try again to check if the problem still exists.
@cupakob: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
| Test name | Commit | Details | Required | Rerun command |
|---|---|---|---|---|
| pull-kcp-test-e2e-sharded | b65473bf1225cc5941508b111559150b4753aca0 | link | true | /test pull-kcp-test-e2e-sharded |
Issues go stale after 90d of inactivity.
After a further 30 days, they will turn rotten.
Mark the issue as fresh with /remove-lifecycle stale.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
@kcp-ci-bot: Closed this PR.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.
/close