search_issues does not respect default_batch_sizes for Issues
Bug summary
search_issues(..., maxResults=False) sends a maxResults=100 parameter in the request instead of using the batch size configured for Issue in the default_batch_sizes option.
This causes searches to time out and return a 500 status.
It seems there is a mismatch in how the maxResults parameter is handled. enhanced_search_issues still carries the old documentation saying that maxResults limits the number of issues to return, but the _fetch_pages_searchToken it calls now treats that value as the batch size.
This breaks the usual contract where maxResults defines the total limit and the JIRA.__init__ parameter default_batch_sizes defines the batch sizes for the different resource types. It seems this was introduced in #2326 to fix #1940.
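To make the expected contract concrete, here is a minimal sketch of how the two options should interact. The helper name `resolve_batch_size` is hypothetical and illustrative only, not the library's actual internals:

```python
def resolve_batch_size(default_batch_sizes, resource_type, max_results):
    """Pick the page size for a paginated search request.

    maxResults=False (or 0) means "fetch everything", so the page size
    should come from default_batch_sizes, not from maxResults itself.
    """
    if not max_results:
        # Unlimited total: page through results using the configured
        # batch size, falling back to the library default of 100.
        return default_batch_sizes.get(resource_type, 100)
    # Bounded total: request at most max_results items.
    return max_results


# With default_batch_sizes={Issue: 10} (string key used here for the
# sketch), an unlimited search should page in chunks of 10, not 100.
print(resolve_batch_size({"Issue": 10}, "Issue", False))  # -> 10
print(resolve_batch_size({}, "Issue", False))             # -> 100
```

The bug described above is that the reported version skips this resolution step and always sends the constant 100.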
Is there an existing issue for this?
- [x] I have searched the existing issues
Jira Instance type
Jira Cloud (Hosted by Atlassian)
Jira instance version
No response
jira-python version
3.10.5
Python Interpreter version
3.11.13
Which operating systems have you used?
- [x] Linux
- [ ] macOS
- [ ] Windows
Reproduction steps
# 1. Given a Jira client instance
jira = JIRA(..., default_batch_sizes={Issue: 10})
# 2. When I call the function with argument x
jira.search_issues(jql, maxResults=False, fields=["issuelinks", "labels", "priority", "status"])
# 3. Then the request is sent with maxResults=100 and times out (see stack trace below)
Stack trace
File "***/update.py", line 12, in ***
for issue in self.jira.search_issues(jql, maxResults=False, fields=["issuelinks", "labels", "priority", "status"]):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.11.13/x64/lib/python3.11/site-packages/jira/client.py", line 3624, in search_issues
return self.enhanced_search_issues(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.11.13/x64/lib/python3.11/site-packages/jira/client.py", line 138, in check_if_cloud
return client_method(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.11.13/x64/lib/python3.11/site-packages/jira/client.py", line 3762, in enhanced_search_issues
issues = self._fetch_pages_searchToken(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.11.13/x64/lib/python3.11/site-packages/jira/client.py", line 138, in check_if_cloud
return client_method(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.11.13/x64/lib/python3.11/site-packages/jira/client.py", line 957, in _fetch_pages_searchToken
response = self._get_json(
^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.11.13/x64/lib/python3.11/site-packages/jira/client.py", line 4597, in _get_json
else self._session.get(url, params=params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.11.13/x64/lib/python3.11/site-packages/requests/sessions.py", line 602, in get
return self.request("GET", url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.11.13/x64/lib/python3.11/site-packages/jira/resilientsession.py", line 247, in request
elif raise_on_error(response, **processed_kwargs):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.11.13/x64/lib/python3.11/site-packages/jira/resilientsession.py", line 72, in raise_on_error
raise JIRAError(
jira.exceptions.JIRAError: JiraError HTTP 500 url: https://***.atlassian.net/rest/api/2/search/jql?jql=project%3D%22***%22+AND+component%3D%22***%22+AND+statusCategory+%21%3D+Done&fields=issuelinks&fields=labels&fields=priority&fields=status&maxResults=100
text: java.util.concurrent.CompletionException: java.util.concurrent.TimeoutException
response headers = {'Content-Type': 'application/json;charset=UTF-8', 'Transfer-Encoding': 'chunked', 'Connection': 'keep-alive', 'Date': 'Wed, 20 Aug 2025 10:35:12 GMT', 'Server': 'AtlassianEdge', 'Timing-Allow-Origin': '*', 'X-Arequestid': '7f9abdf97cfe16539a4ab14f80a43c08', 'X-Aaccountid': '712020%3A1037c631-6ec0-4691-9ca0-5d01f027fb82', 'Cache-Control': 'no-cache, no-store, no-transform', 'X-Content-Type-Options': 'nosniff', 'X-Xss-Protection': '1; mode=block', 'Atl-Traceid': 'c7b922419cfa4c09a9e01f145dd5096d', 'Atl-Request-Id': 'c7b92241-9cfa-4c09-a9e0-1f145dd5096d', 'Strict-Transport-Security': 'max-age=63072000; includeSubDomains; preload', 'Report-To': '{"endpoints": [{"url": "https://dz8aopenkvv6s.cloudfront.net"}], "group": "endpoint-1", "include_subdomains": true, "max_age": 600}', 'Nel': '{"failure_fraction": 0.001, "include_subdomains": true, "max_age": 600, "report_to": "endpoint-1"}', 'Server-Timing': 'atl-edge;dur=10547,atl-edge-internal;dur=14,atl-edge-upstream;dur=10532,atl-edge-pop;desc="aws-eu-west-1"', 'X-Cache': 'Error from cloudfront', 'Via': '1.1 ef81d2c0d5984a166a5467acd7c2d88a.cloudfront.net (CloudFront)', 'X-Amz-Cf-Pop': 'IAD55-P8', 'X-Amz-Cf-Id': 'vhqTMpZlkx445mCLKKVP6g4bAUZjUdxVgntJo1VohFyAOXbf2uzVow=='}
response text = {"message":"java.util.concurrent.CompletionException: java.util.concurrent.TimeoutException","status-code":500,"stack-trace":""}
Expected behaviour
Iterate all issues in batches of 10.
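A small self-contained simulation of that expectation (an in-memory stand-in for the search endpoint, not the library's code):

```python
def fetch_page(all_issues, start_at, max_results):
    """Stand-in for one paginated search request."""
    return all_issues[start_at:start_at + max_results]


def iterate_issues(all_issues, batch_size):
    """Yield every matching issue, requesting batch_size items per page."""
    start = 0
    while start < len(all_issues):
        yield from fetch_page(all_issues, start, batch_size)
        start += batch_size


# 25 matching issues with batch size 10 should produce three requests
# (pages of 10, 10, and 5) and yield all 25 issues.
issues = [f"PROJ-{i}" for i in range(25)]
fetched = list(iterate_issues(issues, 10))
print(len(fetched))  # -> 25
```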
Additional Context
No response
@skumar36-atlassian, could you take a look at this issue?
@kohtala could you have a look here? https://github.com/pycontribs/jira/issues/2369 and try enhanced_search_issues?
You can see enhanced_search_issues in the stack trace; the bug is in it. It ignores the default batch size and uses the constant 100 instead.
There is a pull request open, and I've been using that version successfully. Is there any chance of getting the fix merged, so I could switch from my own fork back to a release version?
I can have a look this weekend... We need CI in a better state, though. Could you have a look at my open PRs and provide some feedback?
Any feedback is useful https://github.com/pycontribs/jira/pull/2376
The PR is still open; will this fix be in an upcoming release?
I'm currently using 3.10.4 and getting some 500 timeout errors when calling search_issues with maxResults=0, intending to get all issues in batches of 50.