
Big keys stored in Redis for both the Default and Full Page Cache

Open dandrikop opened this issue 5 months ago • 2 comments

Preconditions and environment

I have integrated Magento 2.4.7-p5 with Redis 7.2.9. My Magento installation is a multi-store (multi-view) implementation, and I use Redis for both the Default and Full Page caches. I checked the biggest keys stored in Redis for the Default Cache by running the command below against database 2 of Redis, which I use for the Default Cache:

$ redis-cli -p 6379 -n 2 --bigkeys

# Scanning the entire keyspace to find biggest keys as well as
# average sizes per key type. You can use -i 0.1 to sleep 0.1 sec
# per 100 SCAN commands (not usually needed).

[00.00%] Biggest hash found so far '"zc:k:c48_STRUCTURE_LAYOUT_FRONTEND_STORE6_149F7D289E4A65B436366F61CE09E870F0"' with 4 fields
[00.01%] Biggest set found so far '"zc:ti:c48_CAT_P_10355"' with 5 members
[00.02%] Biggest set found so far '"zc:ti:c48_CATALOG_PRODUCT_VIEW_ID_5179"' with 8 members
[00.04%] Biggest set found so far '"zc:ti:c48_CAT_P_11902"' with 24 members
[00.07%] Biggest set found so far '"zc:ti:c48_CAT_P_6288"' with 50 members
[00.39%] Biggest set found so far '"zc:ti:c48_CAT_P_4493"' with 53 members
[01.70%] Biggest set found so far '"zc:ti:c48_CAT_P_3916"' with 94 members
[02.85%] Biggest set found so far '"zc:ti:c48_CAT_P_4081"' with 184 members
[03.02%] Biggest set found so far '"zc:ti:c48_CATALOG_CATEGORY_VIEW_DISPLAYMODE_PRODUCTS"' with 917 members
[19.44%] Biggest set found so far '"zc:ti:c48_MAGE"' with 216602 members

-------- summary -------

Sampled 243499 keys in the keyspace!
Total key length in bytes is 19627260 (avg len 80.61)

Biggest hash found '"zc:k:c48_STRUCTURE_LAYOUT_FRONTEND_STORE6_149F7D289E4A65B436366F61CE09E870F0"' has 4 fields
Biggest set found '"zc:ti:c48_MAGE"' has 216602 members

0 lists with 0 items (00.00% of keys, avg size 0.00)
215039 hashs with 860156 fields (88.31% of keys, avg size 4.00)
0 strings with 0 bytes (00.00% of keys, avg size 0.00)
0 streams with 0 entries (00.00% of keys, avg size 0.00)
28460 sets with 842226 members (11.69% of keys, avg size 29.59)
0 zsets with 0 members (00.00% of keys, avg size 0.00)

Afterwards, I ran the same command against database 3 of Redis, which I use for the Full Page Cache of Magento:

$ redis-cli -p 6379 -n 3 --bigkeys

# Scanning the entire keyspace to find biggest keys as well as
# average sizes per key type. You can use -i 0.1 to sleep 0.1 sec
# per 100 SCAN commands (not usually needed).

[00.00%] Biggest hash found so far '"zc:k:c48_23DE6CD17D9A532F2F6789DFAF53FC5F5747A71E"' with 4 fields
[00.01%] Biggest set found so far '"zc:ti:c48_CAT_P_10355"' with 43 members
[00.05%] Biggest set found so far '"zc:ti:c48_CAT_P_11902"' with 153 members
[00.08%] Biggest set found so far '"zc:ti:c48_CAT_P_5460"' with 938 members
[00.09%] Biggest set found so far '"zc:ti:c48_CAT_P_6288"' with 2438 members
[00.31%] Biggest set found so far '"zc:ti:c48_FPC"' with 209478 members

-------- summary -------

Sampled 223525 keys in the keyspace!
Total key length in bytes is 10556492 (avg len 47.23)

Biggest hash found '"zc:k:c48_23DE6CD17D9A532F2F6789DFAF53FC5F5747A71E"' has 4 fields
Biggest set found '"zc:ti:c48_FPC"' has 209478 members

0 lists with 0 items (00.00% of keys, avg size 0.00)
209478 hashs with 837912 fields (93.72% of keys, avg size 4.00)
0 strings with 0 bytes (00.00% of keys, avg size 0.00)
0 streams with 0 entries (00.00% of keys, avg size 0.00)
14047 sets with 12478679 members (06.28% of keys, avg size 888.35)
0 zsets with 0 members (00.00% of keys, avg size 0.00)
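
The numbers above line up: the Full Page Cache database holds 209478 hashes (cache entries) and the `zc:ti:c48_FPC` set has exactly 209478 members. That is what a tag-index backend produces: every saved entry's id is added to one `zc:ti:*` set per tag, and Magento attaches the catch-all MAGE (Default Cache) or FPC (Full Page Cache) tag to every entry. A minimal sketch of that bookkeeping in plain Python (not the actual backend code; the ids and tag names are illustrative):

```python
# Minimal sketch of tag-index bookkeeping as done by a Redis cache
# backend: one set per tag, holding the ids of all entries that
# carry that tag. Not the real Magento/Redis backend code.
from collections import defaultdict

tag_sets = defaultdict(set)  # models the zc:ti:* sets

def save(entry_id, tags):
    """Store an entry: register its id under every tag it carries."""
    for tag in tags:
        tag_sets[tag].add(entry_id)

# Every page-cache entry gets the catch-all FPC tag, plus
# entry-specific tags such as CAT_P_<id>.
for i in range(10):
    save(f"page_{i}", ["FPC", f"CAT_P_{i % 3}"])

# The catch-all tag set therefore grows to the total entry count.
print(len(tag_sets["FPC"]))      # 10 == number of entries
print(len(tag_sets["CAT_P_0"]))  # 4  (pages 0, 3, 6, 9)
```

Scaled up to a real catalog, the catch-all set tracks every key in the database, which is exactly the 200k+ member sets observed above.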

Steps to reproduce

At the command line, run the `redis-cli --bigkeys` commands shown above against the Redis databases used for caching.

Expected result

The tag sets named "MAGE" (Default Cache) and "FPC" (Full Page Cache) should not be stored in Redis.

Actual result

The "MAGE" set in the Default Cache and the "FPC" set in the Full Page Cache are huge: 216602 and 209478 members respectively, roughly one member per cache entry.
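
For scale, a back-of-envelope estimate of the raw member payload of the FPC tag set alone, assuming (this is an assumption, not a measured value) that each member id averages about 47 bytes, the average key length reported in the `--bigkeys` summary:

```python
# Back-of-envelope estimate of the raw member payload of the FPC
# tag set. ASSUMPTION: each member id averages ~47 bytes, matching
# the avg key length reported by --bigkeys. Redis per-element
# overhead is extra and not counted here.
members = 209_478   # zc:ti:c48_FPC size from the report
avg_id_len = 47     # avg key length (bytes) from the summary

payload_mb = members * avg_id_len / 1_000_000
print(f"{payload_mb:.1f} MB")  # ~9.8 MB of ids in a single set
```

So a single set key carries on the order of 10 MB of member data before Redis's own per-element overhead, which is what `--bigkeys` is flagging.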

Additional information

No response

Release note

No response

Triage and priority

  • [ ] Severity: S0 - Affects critical data or functionality and leaves users without workaround.
  • [ ] Severity: S1 - Affects critical data or functionality and forces users to employ a workaround.
  • [ ] Severity: S2 - Affects non-critical data or functionality and forces users to employ a workaround.
  • [ ] Severity: S3 - Affects non-critical data or functionality and does not force users to employ a workaround.
  • [ ] Severity: S4 - Affects aesthetics, professional look and feel, “quality” or “usability”.

dandrikop avatar Jun 14 '25 17:06 dandrikop

Hi @dandrikop. Thank you for your report. To speed up processing of this issue, make sure that the issue is reproducible on the vanilla Magento instance following Steps to reproduce.


Join Magento Community Engineering Slack and ask your questions in the #github channel.
:warning: According to the Magento Contribution requirements, all issues must go through the Community Contributions Triage process. Community Contributions Triage is a public meeting.
:clock10: You can find the schedule on the Magento Community Calendar page.
:telephone_receiver: The triage of issues happens in the queue order. If you want to speed up the delivery of your contribution, join the Community Contributions Triage session to discuss the appropriate ticket.

m2-assistant[bot] avatar Jun 14 '25 17:06 m2-assistant[bot]

Hi @engcom-November. Thank you for working on this issue. In order to make sure that issue has enough information and ready for development, please read and check the following instruction: :point_down:

  • [ ] 1. Verify that issue has all the required information. (Preconditions, Steps to reproduce, Expected result, Actual result).
  • [ ] 2. Verify that issue has a meaningful description and provides enough information to reproduce the issue.
  • [ ] 3. Add Area: XXXXX label to the ticket, indicating the functional areas it may be related to.
  • [ ] 4. Verify that the issue is reproducible on the 2.4-develop branch
    - If the issue is reproducible on the 2.4-develop branch, please add the label Reproduced on 2.4.x.
    - If the issue is not reproducible, add a comment that the issue is not reproducible, close the issue, and stop the verification process here!
  • [ ] 5. Add label Issue: Confirmed once verification is complete.
  • [ ] 6. Make sure that automatic system confirms that report has been added to the backlog.

m2-assistant[bot] avatar Jun 16 '25 04:06 m2-assistant[bot]

Hi @engcom-Hotel. Thank you for working on this issue. In order to make sure that issue has enough information and ready for development, please read and check the following instruction: :point_down:

  • [ ] 1. Verify that issue has all the required information. (Preconditions, Steps to reproduce, Expected result, Actual result).
  • [ ] 2. Verify that issue has a meaningful description and provides enough information to reproduce the issue.
  • [ ] 3. Add Area: XXXXX label to the ticket, indicating the functional areas it may be related to.
  • [ ] 4. Verify that the issue is reproducible on the 2.4-develop branch
    - If the issue is reproducible on the 2.4-develop branch, please add the label Reproduced on 2.4.x.
    - If the issue is not reproducible, add a comment that the issue is not reproducible, close the issue, and stop the verification process here!
  • [ ] 5. Add label Issue: Confirmed once verification is complete.
  • [ ] 6. Make sure that automatic system confirms that report has been added to the backlog.

m2-assistant[bot] avatar Jun 27 '25 09:06 m2-assistant[bot]

Hello @dandrikop,

Thank you for reporting this issue with Redis key sizes in Magento's caching system.

This appears to be a legitimate architectural concern. The current implementation of cache tag indexing creates very large Redis tag sets (200,000+ members each), which can lead to:

  • Performance degradation under load
  • Memory inefficiency
  • Potential blocking operations affecting other Redis commands
  • Network overhead during replication/failover
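
The blocking concern above comes from O(N) set commands: reading a tag set in one call (as `SMEMBERS` does) holds Redis's single thread for the whole 200k-member reply, whereas `SSCAN` walks the set in small batches. A sketch of the difference, modeled on a plain Python set rather than a live Redis connection (the batch size stands in for SSCAN's COUNT hint):

```python
# Sketch of why a 200k-member set hurts: SMEMBERS returns all
# members in one O(N) call that blocks Redis's single thread,
# while SSCAN walks the set in small batches. Modeled here on a
# plain Python set, not a live Redis call.
from itertools import islice

tag_set = {f"zc:k:c48_{i:06d}" for i in range(209_478)}

def sscan_like(members, count=1000):
    """Yield members in batches of ~count, as SSCAN would."""
    it = iter(members)
    while batch := list(islice(it, count)):
        yield batch

batches = sum(1 for _ in sscan_like(tag_set))
print(batches)  # 210 short round trips instead of one huge reply
```

Each batch is a short, interruptible operation, so other clients are served between batches; one monolithic read of the same set is not interruptible.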

Thanks again for reporting.

engcom-Hotel avatar Jun 27 '25 10:06 engcom-Hotel

:white_check_mark: Jira issue https://jira.corp.adobe.com/browse/AC-15038 is successfully created for this GitHub issue.

github-jira-sync-bot avatar Jun 27 '25 10:06 github-jira-sync-bot

:white_check_mark: Confirmed by @engcom-Hotel. Thank you for verifying the issue.
Issue Available: @engcom-Hotel, You will be automatically unassigned. Contributors/Maintainers can claim this issue to continue. To reclaim and continue work, reassign the ticket to yourself.

m2-assistant[bot] avatar Jun 27 '25 10:06 m2-assistant[bot]