Fix Confluence connector swallowing all errors
Closes https://github.com/elastic/connectors/issues/2394
The goal of this PR is to update the paginated_api_call function (linked below) to re-raise and/or handle specific exceptions instead of blanket-swallowing any exception that arises.
Current exception-handling code: https://github.com/elastic/connectors/blob/8c8d4a3f7775c6edbb598a81a2b67348a7244f3a/connectors/sources/confluence.py#L194-L198
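To illustrate the direction of the change, here is a minimal sketch of what "re-raise specific exceptions, tolerate the rest" could look like. This is not the connector's actual implementation: `ClientResponseError` is a simplified stand-in for aiohttp's exception type, and the synchronous generator shape is only for demonstration.

```python
# Sketch: re-raise auth errors (401/403) instead of swallowing everything,
# while skipping pages that fail with other, presumably transient, errors.

class ClientResponseError(Exception):
    """Stand-in for aiohttp.ClientResponseError (hypothetical, simplified)."""

    def __init__(self, status, message=""):
        super().__init__(message)
        self.status = status


def paginated_api_call(pages):
    """Yield documents page by page.

    `pages` is an iterable of zero-argument callables that either return
    a list of documents or raise ClientResponseError.
    """
    for fetch_page in pages:
        try:
            yield from fetch_page()
        except ClientResponseError as e:
            if e.status in (401, 403):
                # Auth problems won't fix themselves mid-sync: fail fast.
                raise
            # Anything else is treated as a "minor" error: log and move on.
            continue


# Usage: a 500 on one page skips that page; a 403 fails the whole sync.
ok = lambda: ["doc"]

def server_error():
    raise ClientResponseError(500)

def forbidden():
    raise ClientResponseError(403)

print(list(paginated_api_call([ok, server_error, ok])))  # ['doc', 'doc']
```

The open question in the discussion below is exactly which statuses belong on the fail-fast list.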
Checklists
Pre-Review Checklist
- [x] this PR does NOT contain credentials of any kind, such as API keys or username/passwords (double check config.yml.example)
- [x] this PR has a meaningful title
- [x] this PR links to all relevant github issues that it fixes or partially addresses
- [x] if there is no GH issue, please create it. Each PR should have a link to an issue
- [x] this PR has a thorough description
- [x] Covered the changes with automated tests
- [x] Tested the changes locally
- [x] Added a label for each target release version (example: v7.13.2, v7.14.0, v8.0.0)
- [ ] Considered corresponding documentation changes
- [ ] Contributed any configuration settings changes to the configuration reference
- [ ] if you added or changed Rich Configurable Fields for a Native Connector, you made a corresponding PR in Kibana
> Agreed, this looks like the problem. We try to balance between the ideologies of:
> - "don't just fail the sync if there's some minor error, move on"
> - "fail fast if there's an
I was reading through this GH issue and came across the above quote from @seanstory
I don't have as much context or hands-on experience with Connectors (yet) - what constitutes a 'minor' or 'major' error in the case of connector syncs (for Confluence, or any data source for that matter)?
For example, in this PR I've added exception catching for 401s and 403s - are these worth re-raising so they get treated as major errors, or are they minor enough that we don't want to fail a sync over them?
I noticed a lot of data sources don't have this fine-grained control over exceptions, so if this is treading new-ish ground I might take a beat to think harder about this and propose something we can apply to other data sources, too.
cc @artem-shelkovnikov @erikcurrin-elastic @jedrazb as well
Marking this as ready-to-review, with the asterisk that this will most likely require more work - see the conversation starter above!
💚 Backport PR(s) successfully created
| Status | Branch | Result |
|---|---|---|
| ✅ | 9.0 | https://github.com/elastic/connectors/pull/3402 |
This backport PR will be merged automatically after passing CI.
checking if automation will create a backport for me
💚 All backports created successfully
| Status | Branch | Result |
|---|---|---|
| ✅ | 8.19 | |
| ✅ | 8.18 | |
Questions?
Please refer to the Backport tool documentation
If the PR is already merged, it won't, but you can just run npx backport --pr <PR number> and it'll pick up the branches that are labeled but don't yet exist for you.
Thank you Sean for the manual backporting! It was late in the day and I was just checking whether the automation would do it for me :D
You can trigger it automatically: remove the auto-backport label first, then add it back along with the labels for the versions you need to backport to.