Sentry 20.11.0: redis.exceptions.ResponseError: invalid task id
Self-Hosted Version
20.11.0
CPU Architecture
x86_64
Docker Version
19.03.13
Docker Compose Version
3.0
Steps to Reproduce
Reload the issues page of a project. Sentry was upgraded from 9.2 to 20.11.0, with RDS PostgreSQL running version 12.8. Migration scripts ran successfully.
Expected Result
The issues of the project are listed.
Actual Result
An unknown API error is shown in the web UI.
Snuba API logs:
2022-07-14 05:07:41,765 Error running query: SELECT (nullIf(group_id, 0) AS group_id), (count() AS times_seen), (min(timestamp) AS first_seen), (max(timestamp) AS last_seen), (ifNull(uniq((sentry:user AS tags[sentry:user])), 0) AS count) FROM sentry_local PREWHERE in(group_id, tuple(776179, 797441, 797447, 797486, 797485, 797313, 797303, 797294, 792535, 795491, 797484, 797483, 797450, 797482, 797481, 797480, 797474, 797436, 797479, 797430, 797429, 797467, 797435, 792194, 797478)) WHERE equals(deleted, 0) AND greaterOrEquals(timestamp, toDateTime('2022-04-15T05:07:11', 'Universal')) AND less(timestamp, toDateTime('2022-07-14T05:07:12', 'Universal')) AND in(project_id, tuple(85)) AND in(project_id, tuple(85)) GROUP BY (group_id) LIMIT 1000 OFFSET 0
timed out waiting for value
Traceback (most recent call last):
  File "./snuba/state/cache/redis/backend.py", line 156, in get_readthrough
    value = self.__executor.submit(function).result(task_timeout)
  File "/usr/local/lib/python3.8/concurrent/futures/_base.py", line 441, in result
    raise TimeoutError()
concurrent.futures._base.TimeoutError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "./snuba/web/db_query.py", line 311, in raw_query
    result = execute_query_strategy(
  File "./snuba/util.py", line 268, in wrapper
    return func(*args, **kwargs)
  File "./snuba/web/db_query.py", line 245, in execute_query_with_readthrough_caching
    return cache.get_readthrough(
  File "./snuba/state/cache/redis/backend.py", line 161, in get_readthrough
    raise TimeoutError("timed out waiting for value") from error
TimeoutError: timed out waiting for value

2022-07-14 05:07:41,761 Error setting cache result!
Traceback (most recent call last):
  File "./snuba/state/cache/redis/backend.py", line 156, in get_readthrough
    value = self.__executor.submit(function).result(task_timeout)
  File "/usr/local/lib/python3.8/concurrent/futures/_base.py", line 441, in result
    raise TimeoutError()
concurrent.futures._base.TimeoutError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "./snuba/state/cache/redis/backend.py", line 161, in get_readthrough
    raise TimeoutError("timed out waiting for value") from error
TimeoutError: timed out waiting for value

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "./snuba/state/cache/redis/backend.py", line 168, in get_readthrough
    self.__script_set(
  File "/usr/local/lib/python3.8/site-packages/redis/client.py", line 2944, in __call__
    return client.evalsha(self.sha, len(keys), *args)
  File "/usr/local/lib/python3.8/site-packages/redis/client.py", line 2079, in evalsha
    return self.execute_command('EVALSHA', sha, numkeys, *keys_and_args)
  File "/usr/local/lib/python3.8/site-packages/redis/client.py", line 668, in execute_command
    return self.parse_response(connection, command_name, **options)
  File "/usr/local/lib/python3.8/site-packages/redis/client.py", line 680, in parse_response
    response = connection.read_response()
  File "/usr/local/lib/python3.8/site-packages/redis/connection.py", line 629, in read_response
    raise response
redis.exceptions.ResponseError: invalid task id
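For context on where the `invalid task id` comes from: the traceback shows Snuba serving the query through a Redis-backed read-through cache, the query outliving the cache's task timeout, and the later attempt to record the result being rejected. The snippet below is a minimal, self-contained sketch of that pattern only; `ToyReadthroughCache`, `_script_set`, and `write_back` are illustrative stand-ins (not Snuba's actual code from `snuba/state/cache/redis/backend.py`), and plain dicts stand in for the Redis Lua scripts.

```python
# Illustrative sketch of a read-through cache with a task timeout.
# NOT Snuba's implementation; it only reproduces the shape of the failure:
# first "timed out waiting for value", then a rejected write-back ("invalid task id").
import concurrent.futures
import time
import uuid


class ToyReadthroughCache:
    def __init__(self):
        self._values = {}   # cache key -> cached result
        self._tasks = {}    # cache key -> id of the in-flight task
        self._executor = concurrent.futures.ThreadPoolExecutor()

    def _script_set(self, key, task_id, value):
        # Stand-in for the Redis "set" Lua script: the write-back is only
        # accepted while the stored task id still matches.
        if self._tasks.get(key) != task_id:
            raise RuntimeError("invalid task id")
        self._values[key] = value
        del self._tasks[key]

    def get_readthrough(self, key, function, task_timeout):
        if key in self._values:
            return self._values[key]

        task_id = uuid.uuid4().hex
        self._tasks[key] = task_id
        future = self._executor.submit(function)

        def write_back(fut):
            # Runs whenever the query eventually finishes, even if the waiter
            # below has already given up.
            try:
                self._script_set(key, task_id, fut.result())
            except Exception as exc:
                print(f"Error setting cache result! {exc!r}")

        future.add_done_callback(write_back)

        try:
            return future.result(task_timeout)
        except concurrent.futures.TimeoutError as error:
            # The waiter gives up and discards the task entry, so the later
            # write-back from the still-running query is rejected.
            self._tasks.pop(key, None)
            raise TimeoutError("timed out waiting for value") from error


if __name__ == "__main__":
    cache = ToyReadthroughCache()
    try:
        # A "query" that takes longer than the allowed task timeout.
        cache.get_readthrough("issues-query", lambda: time.sleep(3) or 42, task_timeout=1)
    except TimeoutError as exc:
        print(f"query failed: {exc}")
    time.sleep(3)  # let the slow "query" finish and hit the rejected write-back
```

Running the sketch prints `query failed: timed out waiting for value` followed by `Error setting cache result! RuntimeError('invalid task id')`, which mirrors the sequence in the logs above and suggests the underlying query is simply answering slower than the cache's task timeout allows.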
You cannot just change the PostgreSQL version from 9.6 to 12.8; something will definitely break.
BTW, there is no docker-compose version "3.0" yet :)
I'm using an AWS ECS deployment to set up the Sentry services, and PostgreSQL (RDS) 12.8 works perfectly fine with Sentry 9.2. The problem started only when I took a backup of the currently running setup (Sentry 9.2 and PostgreSQL 12.8) and created a new RDS server (same version, 12.8) for Sentry 20.11.0. The migration scripts ran fine and I don't see any errors in the worker or web containers; only when I reload the issues page does Snuba hit timeout errors.
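One way to narrow this down is to time the query from the Snuba log directly against ClickHouse and see whether ClickHouse itself is the slow part. A minimal sketch, assuming the ClickHouse server is reachable at `clickhouse:9000` from wherever you run it and that the `clickhouse-driver` package is installed; the query below is a trimmed-down version of the one in the log, not the exact statement Snuba sends:

```python
# Hypothetical debugging helper: time a trimmed version of the logged query
# directly against ClickHouse. Host/port are assumptions for illustration.
import time

from clickhouse_driver import Client  # pip install clickhouse-driver

client = Client(host="clickhouse", port=9000)

query = """
SELECT nullIf(group_id, 0) AS gid, count() AS times_seen
FROM sentry_local
WHERE deleted = 0
  AND group_id IN (776179, 797441)
  AND timestamp >= toDateTime('2022-04-15 05:07:11')
  AND timestamp < toDateTime('2022-07-14 05:07:12')
  AND project_id IN (85)
GROUP BY gid
LIMIT 1000
"""

start = time.monotonic()
rows = client.execute(query)
print(f"{len(rows)} rows in {time.monotonic() - start:.2f}s")
```

If this already takes more than a few seconds, the timeouts in Snuba's read-through cache are a symptom of ClickHouse being slow for this dataset rather than a Redis problem.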
I'll leave this issue open in case others have insight into this situation, but AFAICS this is related to changing the Postgres version, and I don't have a resolution for that, sorry.
This issue has gone three weeks without activity. In another week, I will close it.
But! If you comment or otherwise update it, I will reset the clock, and if you label it Status: Backlog or Status: In Progress, I will leave it alone ... forever!
"A weed is but an unloved flower." ― Ella Wheeler Wilcox 🥀