Cm_Cache_Backend_Redis
fwrite(): send of 8192 bytes failed with errno=104 Connection reset by peer
Some apps may write too many keys into Redis and then try to clean them all at once, Magento 2 for example. Basically it's this issue, but it could probably be resolved in this library instead: https://github.com/magento/magento2/issues/27151
It just needs batch processing implemented here:
\Cm_Cache_Backend_Redis::_removeByMatchingTags
Would it be OK and safe to implement this method like this as a temporary workaround?
/**
 * @param array $tags
 */
protected function _removeByMatchingTags($tags)
{
    $maxCount = 10000;
    $ids = $this->getIdsMatchingTags($tags);
    if($ids)
    {
        $ids = array_chunk($ids, $maxCount);
        foreach ($ids as $batchedIds) {
            $this->_redis->pipeline()->multi();

            // Remove data
            $this->_redis->del( $this->_preprocessIds($batchedIds));

            // Remove ids from list of all ids
            if($this->_notMatchingTags) {
                $this->_redis->sRem( self::SET_IDS, $batchedIds);
            }

            $this->_redis->exec();
        }
    }
}
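For context, here is a minimal sketch of how this code path is reached. It assumes a Composer install of the backend plus Zend Framework 1 for the Zend_Cache constants, a local Redis on 127.0.0.1:6379, and a placeholder tag name 'SOME_TAG'. Cleaning by matching tag ends up in _removeByMatchingTags(), so a tag that maps to hundreds of thousands of ids is what produces the oversized DEL unless it is batched:

require 'vendor/autoload.php'; // assumes Composer autoloading for the backend and Zend Framework 1

$backend = new Cm_Cache_Backend_Redis(array(
    'server'   => '127.0.0.1', // assumed host/port/database, adjust as needed
    'port'     => '6379',
    'database' => 0,
));

// Removes every cache entry carrying SOME_TAG; with the batching above the
// resulting DEL/SREM calls are issued in chunks of $maxCount ids per MULTI/EXEC.
$backend->clean(Zend_Cache::CLEANING_MODE_MATCHING_TAG, array('SOME_TAG'));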
Yes. Although the use of array_chunk makes the operation less atomic, that seems preferable to the errors, and there is no simpler workaround that I can think of.
Hello @ilnytskyi, did this change resolve the issue for you? We are also experiencing this with Magento 2.3.3 during the reindexing processes.
@bmitchell-ldg yes. However, we checked what Magento writes to the cache and found a lot of swatches blocks cached per URL. Additionally, check the Redis config: my dev laptop had no problem cleaning 500K keys at once with a total request size > 40M, while the test instance barely cleaned 10K (< 1MB). So we used a 10K batch size in our case.
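To get a feel for which tags would trigger such huge cleanups before going live, one can scan the tag sets directly. The snippet below is only a rough diagnostic sketch, not part of the library: it assumes the phpredis extension, a local Redis on 127.0.0.1:6379 with the cache in database 0, and the default 'zc:ti:' prefix that Cm_Cache_Backend_Redis uses for its tag-to-ids sets; adjust all of these to your setup.

$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$redis->select(0);
$redis->setOption(Redis::OPT_SCAN, Redis::SCAN_RETRY);

$threshold = 10000; // anything above the batch size discussed here is worth a look
$it = null;
while ($keys = $redis->scan($it, 'zc:ti:*', 1000)) {
    foreach ($keys as $key) {
        $members = $redis->sCard($key); // number of cache ids stored under this tag
        if ($members > $threshold) {
            echo $key . ' => ' . $members . " ids\n";
        }
    }
}

Any tag set far above the chosen batch size is a candidate for producing the connection reset during a clean.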
@ilnytskyi thank you! We went live with the 10K batch size change and it resolved the issue.
Pushed a fix in 02eef64
Seems like this issue has been resolved. It can probably be closed now.