Async query deduplication and cancellation support
SIP-39 - Global Async Query Support mentions a future effort around deduplicating and cancelling asynchronous queries. I'm opening this issue to document our implementation plans for visibility + feedback.
While we explored a few options for implementation, this is our current preferred approach.
cc active participants in SIP-39: @robdiciuccio @etr2460 @DiggidyDave @willbarrett @williaster
Overview
Deduplication will require moving from the current flow, a 1:1 mapping of query to client, to a 1:N mapping, where a single query can serve N waiting clients.
State
Existing state
The application currently performs four writes to Redis to support sync and async queries (sketched after this list):
- [sync/async] Write the query results (and some other metadata) to cache
- [async] Write the query context (for validating user access when later requesting cached values)
- [async] Write to global event stream (for WebSocket server)
- [async] Write to user-specific event stream (for WebSocket server)
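For concreteness, here's a minimal redis-py sketch of those four writes. The key names and payload shapes are illustrative assumptions, not Superset's actual schema:

```python
import json

import redis

r = redis.Redis()

def write_query_state(cache_key: str, results: dict, query_context: dict,
                      user_id: str, ttl: int = 3600) -> None:
    # [sync/async] 1. Cache the query results (plus some metadata)
    r.set(f"results:{cache_key}", json.dumps(results), ex=ttl)
    # [async] 2. Store the query context for later access validation
    r.set(f"context:{cache_key}", json.dumps(query_context), ex=ttl)
    event = {"cache_key": cache_key, "user_id": user_id, "status": "done"}
    # [async] 3 and 4. Append to the global and user-specific event
    # streams consumed by the WebSocket server
    r.xadd("events:all", event)
    r.xadd(f"events:user:{user_id}", event)
```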
New state requirements
All of the above values are written only after the background worker has executed the query, but deduplication and cancellation need state before the query executes. To track in-flight requests, we'll add the following (see the sketch after this list):
- The set of all job ids waiting for the results of a given query
- Per-job metadata, such as the query key, channel id, and user id
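A possible layout for this tracking state (key prefixes and field names are assumptions for illustration):

```python
import json

import redis

r = redis.Redis()

def register_job(query_key: str, job_id: str, channel_id: str, user_id: str) -> None:
    # Set of all job ids waiting on this query's results
    r.sadd(f"query:{query_key}:jobs", job_id)
    # Per-job metadata blob: query key, channel id, user id
    r.set(f"job:{job_id}", json.dumps({
        "query_key": query_key,
        "channel_id": channel_id,
        "user_id": user_id,
    }))
```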
Architecture
- Each request generates a key from the generated SQL and any other relevant parameters (e.g., impersonating user), so identical requests produce identical keys. We refer to this as the query key.
- A set of job ids is associated with each query key; these jobs are waiting on query completion.
- Each job id has its own JSON blob in Redis, containing job metadata.
- Deduplication occurs when a query request generates a query key that maps to an existing key in Redis with a non-empty set of job ids. In this case, a new job is created and added to the set of jobs waiting for that query to complete; no new Celery tasks are enqueued (see the deduplication sketch after this list).
- When the Celery task running the query completes, it looks up all job ids awaiting completion of the query. For each one, it fetches the job metadata and adds events to the Redis streams, which are consumed by the WebSocket server and forwarded to connected clients (see the fan-out sketch below).
- If a request comes in for a query whose results are already cached, the server should return the cached results.
- Cancellation for a particular client occurs by removing its job id from the set. If the set of job ids awaiting query completion is then empty, the query can be safely cancelled (see the cancellation sketch below).
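Here's a rough sketch of the deduplication path under the key layout assumed above; `run_query` is a stand-in for the real Celery task. The MULTI/EXEC pipeline makes the size check and the `SADD` atomic, so exactly one concurrent request ends up enqueuing the task:

```python
import redis
from celery import Celery

r = redis.Redis()
app = Celery("superset", broker="redis://localhost")

@app.task
def run_query(query_key: str) -> None:
    ...  # execute the query, then fan out results (next sketch)

def submit_request(query_key: str, job_id: str) -> None:
    jobs_key = f"query:{query_key}:jobs"
    # Atomically read the prior set size and add the new job id;
    # redis-py pipelines wrap commands in MULTI/EXEC by default
    with r.pipeline() as pipe:
        pipe.scard(jobs_key)
        pipe.sadd(jobs_key, job_id)
        prior_size, _ = pipe.execute()
    if prior_size == 0:
        # First waiter for this query key: enqueue the Celery task
        run_query.delay(query_key)
    # Otherwise the request was deduplicated: the new job simply joins
    # the set of jobs waiting on the already-running query
```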
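When the task finishes, it fans results out to every waiting job. A sketch, using the same assumed key names:

```python
import json

import redis

r = redis.Redis()

def fan_out_results(query_key: str) -> None:
    jobs_key = f"query:{query_key}:jobs"
    # Notify every job still waiting on this query
    for raw_id in r.smembers(jobs_key):
        job_id = raw_id.decode()
        meta = json.loads(r.get(f"job:{job_id}"))
        event = {
            "job_id": job_id,
            "channel_id": meta["channel_id"],
            "query_key": query_key,
            "status": "done",
        }
        # Picked up by the WebSocket server and forwarded to clients
        r.xadd("events:all", event)
        r.xadd(f"events:user:{meta['user_id']}", event)
        r.delete(f"job:{job_id}")
    r.delete(jobs_key)
```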
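And a sketch of cancellation; `stop_query` is hypothetical and stands in for whatever mechanism actually halts the running query (e.g., a database-level KILL or revoking the Celery task):

```python
import redis

r = redis.Redis()

def stop_query(query_key: str) -> None:
    """Hypothetical: halt the running query for this query key."""

def cancel_job(query_key: str, job_id: str) -> None:
    jobs_key = f"query:{query_key}:jobs"
    # Atomically remove this client's job and read the remaining count
    with r.pipeline() as pipe:
        pipe.srem(jobs_key, job_id)
        pipe.scard(jobs_key)
        _, remaining = pipe.execute()
    r.delete(f"job:{job_id}")
    # If no other clients are waiting, the query itself is safe to cancel
    if remaining == 0:
        stop_query(query_key)
```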
Pros
- Existing state in Redis does not need to be modified
- All Redis operations are atomic, so there are no read-modify-write race conditions to consider
Cons
- Some cognitive overhead, given the number of Redis k/v pairs required
Does the query key you discuss here differ from the chart cache_key? Can we reuse the cache key logic here?
@etr2460 I agree that consistency here is desirable. I think we want to leave the existing cache values as they are, but the query key would be generated by the same hashing logic as the cache key, just with a different key prefix.
Does that make sense, or are you thinking something different?
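A hypothetical illustration of that idea; `md5` here is just a stand-in for whatever hashing the existing cache-key logic uses, and both prefixes are made up:

```python
import hashlib
import json

def _hash(query_obj: dict) -> str:
    # Stand-in for the existing cache-key hashing logic
    return hashlib.md5(json.dumps(query_obj, sort_keys=True).encode()).hexdigest()

def cache_key(query_obj: dict) -> str:
    return f"superset_cache_{_hash(query_obj)}"  # existing prefix (assumed)

def query_key(query_obj: dict) -> str:
    # Same hash, different prefix
    return f"async_query_{_hash(query_obj)}"
```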
Nope, that makes sense to me! I simply wanted to make sure you knew we already have logic to generate this key that could be reused for this use case.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. For admin, please label this issue `.pinned` to prevent stale bot from closing the issue.
Hey all, has there been any progress made on this?
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. For admin, please label this issue `.pinned` to prevent stale bot from closing the issue.
Looking forward to this feature
Love the proposal, but I'm closing this as stale since it's been silent for so long, and we're trying to steer toward a more actionable Issues backlog. If people are still encountering this in current versions (currently 3.x) please open a new Issue or a PR to address the problem.
CC @craig-rueda in case there's any interest to carry more of this forward in PRs or SIPs