next-shared-cache
Different Cache Key Per Server (Pod)
Brief Description of the Bug
I use the Redis cache handler because my app is deployed across multiple containers/pods/servers. The issue is that when we fetch data from an endpoint, we end up with a separate cache key for the same endpoint per running container. The entries stay on the Redis server until their expiresAt date, which is good, but a centralized cache is of little use when each server generates its own cache key.
I have configured a cache key prefix, and I also use a fixed buildId derived from the Git hash, so I'm not sure what the root cause is.
Severity Critical
Frequency of Occurrence Always
Steps to Reproduce
cache-handler.mjs:
import { CacheHandler } from '@neshca/cache-handler';
import createLruHandler from '@neshca/cache-handler/local-lru';
import createRedisHandler from '@neshca/cache-handler/redis-stack';
import { createClient } from 'redis';

CacheHandler.onCreation(async () => {
  let client;

  try {
    // Create a Redis client.
    client = createClient({
      url: process.env.REDIS_DSN ?? 'redis://redis:6379',
      socket: {
        connectTimeout: 1000,
      },
    });

    // Redis won't work without error handling.
    client.on('error', () => {});
  } catch (error) {
    console.warn('Failed to create Redis client:', error);
  }

  if (client) {
    try {
      if (process.env.REDIS_AVAILABLE) {
        console.info('Connecting Redis client...');

        // Wait for the client to connect.
        // Caveat: this blocks the server from starting until the client is connected,
        // and there is no timeout. Add your own timeout if needed.
        await client.connect();
        console.info('Redis client connected.');
      }
    } catch (error) {
      console.warn('Failed to connect Redis client:', error);
      console.warn('Disconnecting the Redis client...');

      // Try to disconnect the client to stop it from reconnecting.
      client
        .disconnect()
        .then(() => {
          console.info('Redis client disconnected.');
        })
        .catch(() => {
          console.warn('Failed to quit the Redis client after failing to connect.');
        });
    }
  }

  /** @type {import("@neshca/cache-handler").Handler | null} */
  let handler;

  if (client?.isReady) {
    // Create the `redis-stack` Handler if the client is available and connected.
    handler = await createRedisHandler({
      client,
      keyPrefix: process.env.REDIS_PREFIX ?? '__',
      timeoutMs: 1000,
    });
  } else {
    // Fall back to the LRU handler if the Redis client is not available.
    // The application will still work, but the cache will be in memory only and not shared.
    handler = createLruHandler();
    console.warn('Falling back to LRU handler because Redis client is not available.');
  }

  return {
    handlers: [handler],
  };
});

export default CacheHandler;
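The report does not include the next.config.js wiring. Assuming the standard Next.js 14 setup for a custom cache handler, it would look roughly like the sketch below; GIT_HASH is a hypothetical environment variable standing in for the Git-hash-based buildId mentioned in the description.

next.config.js (assumed):

/** @type {import('next').NextConfig} */
const nextConfig = {
  // Route the incremental cache through the custom handler above.
  cacheHandler: require.resolve('./cache-handler.mjs'),
  // Disable the default in-memory cache so Redis is the only shared store.
  cacheMaxMemorySize: 0,
  // Fixed buildId derived from the Git hash, as described in the report.
  generateBuildId: async () => process.env.GIT_HASH,
};

module.exports = nextConfig;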
Expected vs. Actual Behavior
Expected: all containers running the same build and sharing the same Redis instance reuse a single cache key for the same fetch request. Actual: each running container generates its own cache key for that request.
Screenshots/Logs
Environment:
- @neshca/cache-handler version: 1.3.1
- next version: 14.1.4
Hello @khal3d, I noticed that every key has the same prefix, nxt_. The remaining part of the key, for example 9bc6f6daaf28afbc730ff4d50a0ae355750654724dbe262553a4f2b061f386cc, is the fetch cache key created by the Next.js server. In this screenshot there are nine different fetch cache keys that share the same prefix, and everything looks okay to me.
You are on the right track as long as process.env.REDIS_PREFIX remains the same in every pod environment.
A short deep dive into the Next.js fetch cache: 9bc6f6daaf28afbc730ff4d50a0ae355750654724dbe262553a4f2b061f386cc is a SHA-256 hash of the combination of the resource and options parameters of the fetch(resource, options) call.
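To illustrate the point (a simplified sketch, not Next.js's actual serialization): because the key is a deterministic digest of the request, identical fetch(resource, options) calls should produce identical keys no matter which pod executes them.

import { createHash } from 'node:crypto';

// Simplified illustration of a deterministic fetch-cache key.
// Next.js serializes the arguments differently, but the principle is the same:
// the same resource and options must hash to the same key on every pod.
function illustrativeFetchCacheKey(resource, options = {}) {
  return createHash('sha256')
    .update(JSON.stringify({ resource, options }))
    .digest('hex');
}

// Two pods making the same call get the same 64-character hex key.
console.log(illustrativeFetchCacheKey('https://api.example.com/items', { method: 'GET' }));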
Hi @better-salmon. Yes, the cache key prefix is set to nxt_, and at that time nine containers were running the same build. Each container generates a unique cache key for the same fetch request, with no additional parameters in the request; I was expecting all containers to share a single cache key in this scenario.
The number of cache keys grows as we scale out the production containers, which means they are not reusing the same key.
To be honest, I'm not sure whether this is a Next.js issue (it generates a cache key per running container) or a problem with my @neshca/cache-handler setup.
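One way to verify this (an extra debugging step, not part of the original thread) is to list the keys behind the shared prefix directly on Redis and count how many entries exist for what should be a single fetch. A sketch assuming node-redis v4, where scanIterator yields one key at a time:

import { createClient } from 'redis';

const client = createClient({ url: process.env.REDIS_DSN ?? 'redis://redis:6379' });
await client.connect();

// List every fetch-cache key that shares the configured prefix.
const prefix = process.env.REDIS_PREFIX ?? 'nxt_';
let count = 0;

for await (const key of client.scanIterator({ MATCH: `${prefix}*`, COUNT: 100 })) {
  console.log(key);
  count += 1;
}

console.log(`${count} keys found under prefix "${prefix}"`);

await client.disconnect();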
Could you please check the values? Make sure that the url, headers, and body are the same for every key. You can use atob() to decode the base64 data in the body.
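For example, the entries behind two suspicious keys could be dumped and compared directly from Redis. This is only a sketch: it assumes node-redis v4 with the RedisJSON commands that the redis-stack handler relies on, and the field names (value.data.url, headers, body) are guesses that should be adjusted to whatever the dump actually contains.

import { createClient } from 'redis';

const client = createClient({ url: process.env.REDIS_DSN ?? 'redis://redis:6379' });
await client.connect();

// Dump one stored fetch-cache entry and decode its base64 body for comparison.
// The field names below are assumptions about the stored shape; adjust as needed.
async function describeEntry(key) {
  const entry = await client.json.get(key);
  const data = entry?.value?.data ?? {};
  return {
    key,
    url: data.url,
    headers: data.headers,
    // Equivalent to the atob() suggestion above, but safe for UTF-8 payloads.
    body: data.body ? Buffer.from(data.body, 'base64').toString('utf8') : undefined,
  };
}

console.log(await describeEntry('nxt_9bc6f6daaf28afbc730ff4d50a0ae355750654724dbe262553a4f2b061f386cc'));

await client.disconnect();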
@khal3d hello! Is this issue still relevant?
I'm closing this issue due to inactivity.