next-shared-cache
RSC requests fail
Brief Description of the Bug: `rscData` does not serialize/deserialize properly since it is a JS Buffer object.
Severity: Major
Frequency of Occurrence: Always
Steps to Reproduce:
- Implement SSG and ISR in Next.js 14+
- Implement `@neshca/cache-handler/redis-stack`
- Notice that RSC requests fail because the response streams never get created: `rscData` comes back from deserialization as `{ type: 'Buffer', data: [ /* byte array */ ] }` (see the demonstration below)
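For context, this is standard Node.js JSON behavior rather than anything Redis-specific: `JSON.stringify` calls `Buffer.prototype.toJSON()`, and `JSON.parse` has no reviver to turn the result back into a Buffer. A minimal standalone demonstration:

```js
// Round-tripping a Buffer through JSON loses the Buffer type.
const original = Buffer.from('hello');
const roundTripped = JSON.parse(JSON.stringify(original));

console.log(roundTripped);
// { type: 'Buffer', data: [ 104, 101, 108, 108, 111 ] }
console.log(Buffer.isBuffer(roundTripped)); // false
```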
Note: When I updated the implementation to set `value.rscData = Buffer.from(value.rscData.data)`, the problem was resolved. However, the RSS of my Node process started increasing over time, indicating a memory leak. I'm not sure why the Buffers don't get released.
Expected vs. Actual Behavior: I expect an out-of-the-box implementation to handle RSC requests and caching.
Screenshots/Logs: Let me know if this report is not clear enough and I can provide logs and whatnot.
Environment:
- OS: client is macOS using Chrome, server is Ubuntu 22.x LTS using Node 20.x LTS
- Node.js version: v20.17.0
- `@neshca/cache-handler` version: 1.6.1
- `next` version: 15.0.0-canary
Attempted Solutions or Workarounds:
Setting `value.rscData = Buffer.from(value.rscData.data)` works. However, there seems to be a memory leak caused by the Buffer objects not being released.
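For reference, here is roughly where that workaround would live, a hedged sketch assuming a custom handler whose `get` reads a JSON string back from Redis; `client` and the surrounding handler shape are illustrative placeholders, not the library's exact API:

```js
// Sketch: revive rscData after JSON.parse in a custom get handler.
// `client` is a placeholder for a Redis client; error handling omitted.
get: async (key) => {
  const raw = await client.get(key);
  if (!raw) return null;

  const cacheHandlerValue = JSON.parse(raw);
  const rscData = cacheHandlerValue?.value?.rscData;

  // JSON.parse yields { type: 'Buffer', data: [...] }; rebuild the Buffer.
  if (rscData && !Buffer.isBuffer(rscData) && Array.isArray(rscData.data)) {
    cacheHandlerValue.value.rscData = Buffer.from(rscData.data);
  }

  return cacheHandlerValue;
},
```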
Impact of the Bug: RSC requests fail with HTTP 500.
Attached is a real-world example. However, this particular example does not use a barebones `@neshca/cache-handler/redis-stack` handler.
The best option in my testing so far seems to be:
```js
/**
 * @param {string} key - The cache key.
 * @param {object} cacheHandlerValue - The value to cache.
 * @returns {Promise<void>}
 */
set: async (key, cacheHandlerValue) => {
  // Convert the rscData Buffer to a plain UTF-8 string before the value
  // is serialized, so it round-trips through JSON as a string instead of
  // { type: 'Buffer', data: [...] }.
  if (cacheHandlerValue?.value?.rscData) {
    cacheHandlerValue.value.rscData =
      cacheHandlerValue.value.rscData.toString('utf8');
  }
  // ...
}
```
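If `rscData` is written out as a UTF-8 string like this, the read path presumably has to convert it back before Next.js consumes it. A counterpart sketch of that symmetry (the `reviveRscData` helper is hypothetical, not part of the library):

```js
/**
 * Hypothetical counterpart to the set above: restore rscData to a Buffer
 * after the cached value has been read back and JSON-parsed.
 * @param {object | null} cacheHandlerValue - The value read from the store.
 * @returns {object | null}
 */
const reviveRscData = (cacheHandlerValue) => {
  if (typeof cacheHandlerValue?.value?.rscData === 'string') {
    // Undo the toString('utf8') performed in set.
    cacheHandlerValue.value.rscData = Buffer.from(
      cacheHandlerValue.value.rscData,
      'utf8',
    );
  }
  return cacheHandlerValue;
};
```

One caveat with this approach: if `rscData` ever contains bytes that are not valid UTF-8, the string round-trip can be lossy; encoding with `'base64'` on both sides would be a safer choice.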
Hey there, @SystemDisc! Next.js 15 isn't supported yet. Check out discussion #691 for a more reliable workaround.