Asynchronous indexing can break large servers
Large stores with extremely active product databases can suffer many related problems when indexers are not kept up to date: products can disappear from the website, product data on the website can be stale, and excessive cache flushing can cause huge server load spikes.
This is due to the manner in which triggers are created by the core Magento code in https://github.com/magento/magento2/blob/9544fb243d5848a497d4ea7b88e08609376ac39e/lib/internal/Magento/Framework/Mview/View/Subscription.php#L201
INSERT INTO will insert a new record regardless of whether one already exists. REPLACE INTO will, as the name says, replace an existing record - and in this case it will still generate a new version_id, since that is an auto_increment field.
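For illustration, the difference in changelog growth can be sketched like this (a Python/SQLite stand-in for the MySQL changelog tables; the table name mirrors Magento's `_cl` convention, but this is not Magento code):

```python
import sqlite3

# Minimal stand-in for a Magento changelog (_cl) table:
# version_id auto-increments, entity_id identifies the changed product.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE catalog_product_price_cl (
        version_id INTEGER PRIMARY KEY AUTOINCREMENT,
        entity_id  INTEGER NOT NULL
    )
""")

# Current core behavior: the trigger does a plain INSERT, so 20 updates
# to the same product leave 20 rows behind.
for _ in range(20):
    db.execute("INSERT INTO catalog_product_price_cl (entity_id) VALUES (1)")
rows = db.execute("SELECT COUNT(*) FROM catalog_product_price_cl").fetchone()[0]
print(rows)  # 20 rows for a single product

# With a UNIQUE constraint on entity_id, REPLACE keeps a single row per
# product while still bumping version_id (the auto_increment moves forward).
db.execute("""
    CREATE TABLE cl_deduped (
        version_id INTEGER PRIMARY KEY AUTOINCREMENT,
        entity_id  INTEGER NOT NULL UNIQUE
    )
""")
for _ in range(20):
    db.execute("REPLACE INTO cl_deduped (entity_id) VALUES (1)")
row = db.execute("SELECT version_id, entity_id FROM cl_deduped").fetchone()
print(row)  # (20, 1): one row, carrying the latest version_id
```

Note that a REPLACE-based approach would also require a unique key on entity_id in the changelog tables, which the current schema does not have.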
Preconditions (*)
- An extremely large database. Example 500,000 products
- An active update schedule
Steps to reproduce (*)
- Create a sample database with 500,000 products
- Enable Asynchronous indexing
- Reduce crons to every 10 minutes
- Run a mass script that changes the inventory for the products every 30 seconds. Within 10 minutes you will have made 10 million updates (500,000 products x 20 runs)
Expected result (*)
- When the indexer runs, it will index the latest product information once for each index
Actual result (*)
It will actually try to index each product 20 times, once for each update
Please provide Severity assessment for the Issue as Reporter. This information will help during Confirmation and Issue triage processes.
- [ ] Severity: S0 - Affects critical data or functionality and leaves users without workaround.
Hi @gamort. Thank you for your report. To help us process this issue please make sure that you provided the following information:
- Summary of the issue
- Information on your environment
- Steps to reproduce
- Expected and actual results
Please make sure that the issue is reproducible on the vanilla Magento instance following Steps to reproduce. To deploy vanilla Magento instance on our environment, please, add a comment to the issue:
@magento give me 2.4-develop instance - upcoming 2.4.x release
For more details, please, review the Magento Contributor Assistant documentation.
Please, add a comment to assign the issue: @magento I am working on this
- Join Magento Community Engineering Slack and ask your questions in #github channel.
:warning: According to the Magento Contribution requirements, all issues must go through the Community Contributions Triage process. Community Contributions Triage is a public meeting.
:clock10: You can find the schedule on the Magento Community Calendar page.
:telephone_receiver: The triage of issues happens in the queue order. If you want to speed up the delivery of your contribution, please join the Community Contributions Triage session to discuss the appropriate ticket.
:movie_camera: You can find the recording of the previous Community Contributions Triage on the Magento Youtube Channel
:pencil2: Feel free to post questions/proposals/feedback related to the Community Contributions Triage process to the corresponding Slack Channel
This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 14 days if no further activity occurs. Thank you for your contributions.
Issue makes sense I think, removing stale issue label...
@sidolov: would be great if somebody could verify if this is still the case in 2.4-develop (since the referenced code was refactored a couple of months ago) and then assign priorities.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 14 days if no further activity occurs. Is this issue still relevant? If so, what is blocking it? Is there anything you can do to help move it forward? Thank you for your contributions!
Sigh ...
@sidolov, @ihor-sviziev : can we please delete this stalebot crappy thing? What's the use of people opening issues if nobody looks at them and then they get auto closed? That's just depressing and might scare people away from opening more issues in the future ...
(not saying that this issue is valid, but it feels like it should at least be investigated instead of auto closed)
@sidolov @sivaschenko @gabrieldagama this should be p1 or p2 priority
BTW, it looks like the right solution will be not at the time of adding the record to the DB, but at the indexation phase, processing all duplicated records at once
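A minimal sketch of that idea (plain Python with SQLite, purely illustrative and not Magento's actual implementation): leave the writes alone and deduplicate at read time, by selecting distinct entity IDs above the last processed version.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE cl (version_id INTEGER PRIMARY KEY AUTOINCREMENT,"
           " entity_id INTEGER)")
# 10 updates each for products 1, 2, 3 -> 30 changelog rows
for entity_id in (1, 2, 3):
    for _ in range(10):
        db.execute("INSERT INTO cl (entity_id) VALUES (?)", (entity_id,))

last_processed_version = 0  # the mview_state pointer

# Collapse duplicates at read time: each entity is indexed once per run,
# no matter how many changelog rows it produced.
ids = [r[0] for r in db.execute(
    "SELECT DISTINCT entity_id FROM cl WHERE version_id > ? ORDER BY entity_id",
    (last_processed_version,))]
print(ids)  # [1, 2, 3] - three index operations instead of thirty
```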
@hostep , @ihor-sviziev we will take a look at it and prioritize in case it's still reproducible
Hi @engcom-Foxtrot. Thank you for working on this issue. In order to make sure that the issue has enough information and is ready for development, please read and check the following instruction: :point_down:
- [ ] 1. Verify that the issue has all the required information (Preconditions, Steps to reproduce, Expected result, Actual result). If the issue has a valid description, the label `Issue: Format is valid` will be added to the issue automatically. Please edit the issue description if needed, until the label `Issue: Format is valid` appears.
- [ ] 2. Verify that the issue has a meaningful description and provides enough information to reproduce it. If the report is valid, add the `Issue: Clear Description` label to the issue by yourself.
- [ ] 3. Add `Component: XXXXX` label(s) to the ticket, indicating the components it may be related to.
- [ ] 4. Verify that the issue is reproducible on the `2.4-develop` branch. Add the comment `@magento give me 2.4-develop instance` to deploy a test instance on Magento infrastructure. If the issue is reproducible on the `2.4-develop` branch, add the label `Reproduced on 2.4.x`. If the issue is not reproducible, add a comment that the issue is not reproducible, close the issue, and stop the verification process here!
- [ ] 5. Add the label `Issue: Confirmed` once verification is complete.
- [ ] 6. Make sure that the automatic system confirms that the report has been added to the backlog.
Hello, @gamort. I've checked this issue on the 2.4-develop branch and seems like it's already fixed. Could you please check it on the latest codebase?
I think we have the same problem with 2.4.2; we keep getting locks in different use cases.
An error occurred during creating the deployment package, error message: Unable to execute command "php7.4 /tmp/builds/d-3e55edc1/bin/magento setup:upgrade 2>&1" - Running schema recurring...SQLSTATE[HY000]: General error: 1205 Lock wait timeout exceeded; try restarting transaction, query was: `UPDATE cron_schedule SET status = ?, messages = ? WHERE (status = 'running')`
And maybe that is why this extension was made? https://github.com/magemojo/m2-ce-cron
Hi @engcom-Hotel. Thank you for working on this issue. In order to make sure that the issue has enough information and is ready for development, please read and check the following instruction: :point_down:
- [ ] 1. Verify that the issue has all the required information (Preconditions, Steps to reproduce, Expected result, Actual result). If the issue has a valid description, the label `Issue: Format is valid` will be added to the issue automatically. Please edit the issue description if needed, until the label `Issue: Format is valid` appears.
- [ ] 2. Verify that the issue has a meaningful description and provides enough information to reproduce it. If the report is valid, add the `Issue: Clear Description` label to the issue by yourself.
- [ ] 3. Add `Component: XXXXX` label(s) to the ticket, indicating the components it may be related to.
- [ ] 4. Verify that the issue is reproducible on the `2.4-develop` branch. Add the comment `@magento give me 2.4-develop instance` to deploy a test instance on Magento infrastructure. If the issue is reproducible on the `2.4-develop` branch, add the label `Reproduced on 2.4.x`. If the issue is not reproducible, add a comment that the issue is not reproducible, close the issue, and stop the verification process here!
- [ ] 5. Add the label `Issue: Confirmed` once verification is complete.
- [ ] 6. Make sure that the automatic system confirms that the report has been added to the backlog.
Hello @gamort,
Please try to reproduce this issue in the 2.4-develop branch and let us know if you are still able to reproduce the issue.
Thanks
Hello @gamort,
We have noticed that this issue has not been updated for 14 days. Hence we assume that this issue is fixed now, so we are closing it. Please raise a fresh ticket or reopen this ticket if you need more assistance on this.
Regards
I believe this issue is still valid, could you still try to reproduce this @engcom-Hotel?
What you should look for (if I understand the issue correctly) is duplicated `entity_id`'s in the changelog tables (like `catalogsearch_fulltext_cl`, `catalogrule_product_cl`, or `catalog_product_attribute_cl`, ...) and then figure out whether, when the partial reindexing system runs, the same `entity_id` is being processed once or more than once.
The expectation is that the same `entity_id` is only processed once in a single partial reindex run.
Thanks!
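That duplicate check boils down to a GROUP BY ... HAVING query on a changelog table. A sketch with Python/SQLite and sample data (on a real store you would run the equivalent SQL against the MySQL tables named above):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE catalogrule_product_cl ("
           "version_id INTEGER PRIMARY KEY AUTOINCREMENT, entity_id INTEGER)")
# Sample changelog: entity 1 changed three times, entity 3 twice, entity 2 once.
db.executemany("INSERT INTO catalogrule_product_cl (entity_id) VALUES (?)",
               [(1,), (1,), (1,), (2,), (3,), (3,)])

# Which entity_ids appear more than once, and how often?
dupes = db.execute("""
    SELECT entity_id, COUNT(*) AS occurrences
    FROM catalogrule_product_cl
    GROUP BY entity_id
    HAVING COUNT(*) > 1
    ORDER BY entity_id
""").fetchall()
print(dupes)  # [(1, 3), (3, 2)]
```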
@hostep,
We might have some potential deadlocks by replacing `insert` with `replace`.
Aren't we fetching the unique values before doing the reindex?
https://github.com/magento/magento2/blob/e4a2d3cdd2a1d427a35009c89b52fe570aba2847/lib/internal/Magento/Framework/Mview/View/Changelog.php#L214-L227
and
https://github.com/magento/magento2/blob/e4a2d3cdd2a1d427a35009c89b52fe570aba2847/lib/internal/Magento/Framework/Mview/View/ChangeLogBatchWalker.php#L44-L57
Yeah, I would indeed handle making them unique when reading and not when writing.
But it appears this already happens like you indicate, due to the `distinct` call, so maybe the issue is no longer reproducible at this time?
@hostep, yeah, that's my point - it shouldn't reproduce.
Hello @hostep and @ihor-sviziev
I'll share my final conclusion after some tests.
I migrated from 2.3.4 to 2.4.3-p1 recently. After a full product and stock import, I have some very long scheduled indexes, and some are even stuck forever. It creates 800k lines with only 30k distinct entries in `catalogrule_product_cl`, as an example.
(I have 0 rules in place, but the indexer takes more than 3 days and never finishes... that is another issue: a foreach over products, then a foreach over rules.)
But for me, from reading the source code, there is an issue when the mview is being updated if the same entity_id is present in two different chunks of the batch. On 2.3.4, they first retrieve at most 100k versions, make them unique, and batch on them:
2.3.4:
https://github.com/magento/magento2/blob/5f3b86ab4bd3e3b94e65429fed33f748d29c1bbe/lib/internal/Magento/Framework/Mview/View.php#L292-L311
https://github.com/magento/magento2/blob/5f3b86ab4bd3e3b94e65429fed33f748d29c1bbe/lib/internal/Magento/Framework/Mview/View/Changelog.php#L145-L166
2.4.3-p1:
https://github.com/magento/magento2/blob/8afdb9858d238392ecb5dbfe556068ec1af137bc/lib/internal/Magento/Framework/Mview/View.php#L299-L315
https://github.com/magento/magento2/blob/8afdb9858d238392ecb5dbfe556068ec1af137bc/lib/internal/Magento/Framework/Mview/View/ChangeLogBatchWalker.php#L35-L58
Franck.
Hi @fgarnier-hublot, Could you report it as a separate issue? Thank you!
Hi, sorry to respond on a closed issue, but I think this is still relevant and I didn't find any other related issue.
The problem clearly comes from the ChangeLogBatchWalker::walk() function.
It is missing an offset, so it actually loops multiple times over the same values. Here is a simple example:
- Fill your changelog table with 10,000 rows containing only 3 distinct entity_ids (1, 2, 3). Default batch size is 1000.
- The walk() function will perform a SELECT DISTINCT on rows 0 to 10000 with LIMIT 1000; it will return: 1, 2, 3.
- On the second call, it will perform a SELECT DISTINCT on rows 1000 to 10000 with LIMIT 1000, which also returns 1, 2, 3...
- The same for 8 more useless calls!

The easy fix would be to add an offset argument to the walk() function so that, on the second call, we could perform a SELECT DISTINCT on rows 0 to 10000 with LIMIT 1000 OFFSET 1000, which in our example would return no results and end the walk loop.
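The behavior described above can be reproduced in miniature. `walk_current` and `walk_fixed` below are simplified Python/SQLite stand-ins for `ChangeLogBatchWalker::walk()` without and with the proposed offset (illustrative only, not Magento's actual code):

```python
import sqlite3

BATCH_SIZE = 1000

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE cl (version_id INTEGER PRIMARY KEY AUTOINCREMENT,"
           " entity_id INTEGER)")
# 10,000 changelog rows built from only 3 distinct entity_ids
db.executemany("INSERT INTO cl (entity_id) VALUES (?)",
               [((i % 3) + 1,) for i in range(10_000)])

def walk_current(from_version, to_version, batch):
    # Mirrors the reported behavior: each call advances the lower version
    # bound by BATCH_SIZE but has no OFFSET, so every batch rereads the
    # same handful of distinct ids.
    start = from_version + batch * BATCH_SIZE
    return [r[0] for r in db.execute(
        "SELECT DISTINCT entity_id FROM cl"
        " WHERE version_id > ? AND version_id <= ?"
        " ORDER BY entity_id LIMIT ?",
        (start, to_version, BATCH_SIZE))]

def walk_fixed(from_version, to_version, batch):
    # Proposed fix: keep the full version range but page with OFFSET.
    return [r[0] for r in db.execute(
        "SELECT DISTINCT entity_id FROM cl"
        " WHERE version_id > ? AND version_id <= ?"
        " ORDER BY entity_id LIMIT ? OFFSET ?",
        (from_version, to_version, BATCH_SIZE, batch * BATCH_SIZE))]

print(walk_current(0, 10_000, 0))  # [1, 2, 3]
print(walk_current(0, 10_000, 1))  # [1, 2, 3] again - wasted work
print(walk_fixed(0, 10_000, 0))    # [1, 2, 3]
print(walk_fixed(0, 10_000, 1))    # [] - the walk ends after one pass
```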
Hope it helps, Regards
@ihor-sviziev: can this be re-opened and re-validated? Now that there is more info.
Hello @hostep,
Let me reopen this for further investigation.
Thanks
Having had this crushing our relatively small store (few thousand products, a dozen customer groups, a dozen catalog price rules) by hanging up the process for indexer_update_all_views for hours: I'm trying a different approach.
Rather than building out a massive batch walker update to deal with multiple version_ids per entity_id and get the max for each one, I'm updating the generic table definitions and triggers so that:
- The entity_id is inserted if it isn't present already
- If it already exists the version_id is updated to the newest auto_increment value since all we really care about is that the "latest" version_id in the changelog table for an entity_id is greater than the pointer for that table stored in mview_state.
Yes, there's additional overhead, but it's orders of magnitude less than churning the indexers constantly. This still required manual intervention listed below to deploy, but until the index is added the code continues to function similarly to Magento's current implementation.
- Setting indexers to "update on save"
- Flushing cache
- Deleting the existing _cl tables manually
- Resetting the mview_state.version_id to 0 for all indexers
- Setting the indexers back to "update by schedule" to recreate the _cl tables
- Invalidating existing indexes and reindexing to ensure all changes are captured
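The trigger change described above amounts to an upsert. A sketch of the idea in Python/SQLite using the ON CONFLICT clause (MySQL triggers would use INSERT ... ON DUPLICATE KEY UPDATE instead; this illustrates the semantics, it is not the actual patch):

```python
import sqlite3

db = sqlite3.connect(":memory:")
# One row per entity: entity_id is UNIQUE, version_id emulates auto_increment.
db.execute("""
    CREATE TABLE cl (
        version_id INTEGER PRIMARY KEY,
        entity_id  INTEGER NOT NULL UNIQUE
    )
""")

def record_change(entity_id):
    # Insert the entity if it isn't present already; if it already exists,
    # bump its version_id past the current maximum so it stays ahead of
    # the mview_state pointer (the two bullets above).
    db.execute("""
        INSERT INTO cl (entity_id) VALUES (?)
        ON CONFLICT(entity_id) DO UPDATE SET
            version_id = (SELECT MAX(version_id) FROM cl) + 1
    """, (entity_id,))

for e in (1, 2, 1, 3, 1):
    record_change(e)

rows = db.execute(
    "SELECT entity_id, version_id FROM cl ORDER BY entity_id").fetchall()
print(rows)  # one row per entity; entity 1 carries the newest version_id
```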
As the REPLACE statement is slower than INSERT, I think that approach could impact trigger performance and slow down large operations.
Also, what about a replacement occurring while the mview indexation is running? Some rows that would have been processed during this cron run would be delayed to the next one, right?
Hello @gamort @hostep @bigbangx,
Thanks for your input on this issue. I have some findings and would like to share them here. In order to try to reproduce the issue, I have followed the below steps:
- Change the price of the product with SKU `24-MB01` 20 times via the API `rest/all/V1/products/24-MB01`.
- Each price change inserts data into the table `catalog_product_price_cl`.
- Adjust the `DEFAULT_BATCH_SIZE` in the file `lib/internal/Magento/Framework/Mview/View.php` from 1000 down to 10.
- Enable the DB logs via the `bin/magento dev:query-log:enable` command.
- Grep the log to get the SQL query on table `catalog_product_price_cl`. Please find attached the screenshot of db.log:

The query always returns `entity_id` 1 each time.
In this case, it seems that entity_id 1 will be processed many times instead of only once, while it should only be processed once, against the latest version.
Thanks
@bigbangx You are correct that it does impact trigger performance, but we weighed the impact to doing a catalog-wide update as a couple of seconds increased processing time for our store. Impact to saving product changes and other real-time actions like stock updates was negligible. We had far more problems with the backlog reprocessing changes on the same entities than we are having with the version_id being replaced.
Regarding the version_id updating during indexing: the entity_id has already been selected at that point, so it's going to be indexed. The max version_id pointer for that batch should also already be saved, so what should happen is:
- The entity in question will be indexed in the current run.
- The mview_state.version_id pointer will be set to the highest version selected for processing at the time the entity_ids for indexing were collected (I would double-check this and confirm that they save that pointer's value at collection time rather than re-selecting the highest one after a batch completes; my understanding of the functions was that they process ranges set with fromVersion and toVersion parameters).
- The version_id for the entity in question will be incremented past the current pointer.
- The next reindex will capture that the entity needs to be reindexed again and process it during that pass.
This is essentially the same process as now, with the difference that if you have multiple iterations of the same entities being changed by multiple bulk actions, you won't get the oldest version ids for processing and work through the same set repeatedly.
The extreme edge case would be if you had so many products being updated that your indexer never caught up to the version_id of a particular entity because there were always more entities with older version_ids filling up your batch size. If you have that problem, it doesn't matter how this portion is coded: your catalog churn is faster than Magento's current indexer model can keep up with.
With all that being said: there are other issues that should be addressed with the current architecture like the sheer number of changelog entries that can be triggered: auto_increment int fields do run out of digits eventually. This was an expeditious band-aid that could be patched in to keep things running while an optimal long-term solution is explored.
We are confirming this issue as per our findings here: https://github.com/magento/magento2/issues/30012#issuecomment-1310391317
Also updated the main description to reproduce the issue.
Thanks
:white_check_mark: Jira issue https://jira.corp.adobe.com/browse/AC-7009 is successfully created for this GitHub issue.