starsessions
Session invalidation with Redis
Currently, the Redis backend has an issue during logout with concurrent requests. If the browser has two tabs open to the application, the user logs out on the first tab (let's call this request REQ1), while on the second tab the application does some background job and issues an HTTP request (this one will be REQ2).
In the worst-case scenario, here is what happens to the session object on the server side:
- REQ1 starts and loads the session with session_id=XXX
- REQ2 starts and loads the session with session_id=XXX
- REQ1 clears the session; starsessions removes the Redis key for session_id=XXX and generates a new session_id=YYY
- REQ1 finishes and saves session_id=YYY to the browser cookie
- REQ2 finishes; starsessions saves the session and re-creates the Redis key for session_id=XXX
- REQ2's response saves session_id=XXX to the browser cookie
Since the Redis key has been re-created and the last response the browser received carried session_id=XXX, the user is not really logged out.
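The interleaving above can be simulated with a naive in-memory store (the names here are illustrative stand-ins for the Redis backend, not the actual starsessions API) to show how the deleted key comes back:

```python
# Naive store: remove() deletes the key, write() unconditionally re-creates it.
store: dict[str, bytes] = {"XXX": b"user=1"}

# REQ1 and REQ2 both load the session for session_id=XXX
req1_data = store.get("XXX")
req2_data = store.get("XXX")

# REQ1 logs out: the key is deleted and a new id YYY is issued
del store["XXX"]
store["YYY"] = b""

# REQ2 finishes and saves what it loaded earlier, resurrecting the old key
store["XXX"] = req2_data

print("XXX" in store)  # True: the "deleted" session is back
```

This is exactly why deleting the key on logout is not enough when another request already holds the session data in memory.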
Here is a fix I use to work around this issue:
```python
import typing

import redis.asyncio
from starsessions import SessionStore

import config  # app-specific settings module providing SESSION_EXPIRATION_HOURS


class RedisStore(SessionStore):
    def __init__(self, connection: "redis.asyncio.client.Redis[bytes]") -> None:
        self._connection = connection

    def get_redis_key(
        self, session_id: str, purpose: typing.Literal["data", "invalidation"]
    ) -> str:
        return f"fastapi-session/{session_id}/{purpose}"

    async def read(self, session_id: str, lifetime: int) -> bytes:
        pipe = await self._connection.pipeline()
        await pipe.get(self.get_redis_key(session_id, purpose="data"))
        await pipe.exists(self.get_redis_key(session_id, purpose="invalidation"))
        value, invalid = typing.cast(tuple[bytes | None, bool], await pipe.execute())
        if value is None or invalid:
            return b""
        return value

    async def write(self, session_id: str, data: bytes, lifetime: int, ttl: int) -> str:
        if lifetime == 0:
            raise RuntimeError("lifetime must be greater than zero")
        ttl = max(1, ttl)
        await self._connection.set(self.get_redis_key(session_id, "data"), data, ex=ttl)
        return session_id

    async def remove(self, session_id: str) -> None:
        # NOTE(sileht): concurrent requests can run with the same cookie;
        # if one clears the session, another may still be adding data to it.
        # So we can't just remove the Redis key. Instead, we add a marker key
        # that flags this session as invalid, so future reads know that even
        # if session data is still attached to this id, it must not be used.
        # We also make sure this marker always expires after the "data" key.
        await self._connection.set(
            self.get_redis_key(session_id, "invalidation"),
            b"",
            ex=3600 * config.SESSION_EXPIRATION_HOURS * 2,
        )

    async def exists(self, session_id: str) -> bool:
        pipe = await self._connection.pipeline()
        await pipe.exists(self.get_redis_key(session_id, "data"))
        await pipe.exists(self.get_redis_key(session_id, "invalidation"))
        has_data, has_invalidation = await pipe.execute()
        return bool(has_data) and not has_invalidation
```
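The invalidation-marker idea can be sketched with a plain in-memory store (no Redis needed; all names here are illustrative, not the starsessions API) to show that a late write can no longer resurrect a removed session:

```python
class MarkerStore:
    """In-memory sketch of the invalidation-marker technique."""

    def __init__(self) -> None:
        self._data: dict[str, bytes] = {}
        self._invalid: set[str] = set()

    def read(self, session_id: str) -> bytes:
        # A session marked invalid reads as empty even if data still exists.
        if session_id in self._invalid:
            return b""
        return self._data.get(session_id, b"")

    def write(self, session_id: str, data: bytes) -> None:
        self._data[session_id] = data

    def remove(self, session_id: str) -> None:
        # Do not delete the data key; mark the id as invalid instead,
        # so a concurrent write cannot bring the session back to life.
        self._invalid.add(session_id)


store = MarkerStore()
store.write("XXX", b"user=1")  # REQ1 and REQ2 both load this session
store.remove("XXX")            # REQ1 logs out
store.write("XXX", b"user=1")  # REQ2 finishes and re-saves the stale data
print(store.read("XXX"))       # prints b'': the session stays dead
```

In the Redis version above, the marker's TTL is deliberately longer than the data key's TTL, so the marker can never expire while resurrectable session data is still present.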
While you log out in tab1, tab2 is processing an endpoint. Is that the pattern?
> While you log out in tab1, tab2 is processing an endpoint. Is that the pattern?
Yes
The case you described is valid. The first request, which destroys the session, wins. Any other will initiate a new session. Your workaround is actually not a workaround but a good custom app-specific backend.
The issue here is that the connections are concurrent: even though the first request destroys the session, the second request read the session before it was destroyed, so the session gets saved again when the second request finishes.
The semantics of the starsessions API look good to me (the first request must win); it's just a bug in the Redis backend.
As I understand it, the issue is not with the backend. The session_id is set in two concurrent connections. When one of them terminates the session and clears the cookie, the other connection still holds the session_id. And when that other connection ends, it flushes the old session_id back to the backend using the old value.
Can you attach a code snippet to reproduce locally?
I created a test case of the issue here: https://github.com/alex-oleshkevich/starsessions/pull/59
Hi - is there any intent to fix this, or should we use the workaround?