PropelDataCacheBehavior
Possible issue with automatically purging the cache for foreign key related tables
Hey SNakano,
I am having the following issue when using your caching behavior for Propel.
Brief Introduction
The case is not a simple one. It does not look like an issue in your component (hopefully). I am writing these lines to get some feedback, and with the hope that you can tell me "hey, you just have to do it this way to fix it easily" ;-). I am using Propel for all data handling, reads as well as writes.
Given that we have the following database table structure:
consumer:
id <int>
item_collection_to_consumer:
id <int>
consumer_id <int>
item_collection_id <int>
item_collection:
id <int>
item_to_item_collection:
id <int>
item_collection_id <int>
item_id <int>
item:
id <int>
A "consumer" has a unique id. Via the table "item_collection_to_consumer", zero or more "item_collection"s can be referenced by a "consumer". An "item_collection" has a unique id. Via the table "item_to_item_collection", zero or more "item"s can be referenced by an "item_collection". An "item" has a unique id. Foreign keys are set between the tables and are also configured in the Propel schema.xml.
I added the caching behavior to each of these tables. I am using Redis as the caching backend.
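For reference, here is roughly what the relevant part of my schema.xml looks like (a trimmed sketch with only two of the five tables; the behavior's registration name — "data_cache" below — may differ from what is actually configured in my project):

```xml
<table name="consumer">
  <column name="id" type="INTEGER" primaryKey="true" autoIncrement="true" />
  <!-- "data_cache" is a placeholder for the behavior's registered name -->
  <behavior name="data_cache" />
</table>

<table name="item_collection_to_consumer">
  <column name="id" type="INTEGER" primaryKey="true" autoIncrement="true" />
  <column name="consumer_id" type="INTEGER" />
  <column name="item_collection_id" type="INTEGER" />
  <foreign-key foreignTable="consumer">
    <reference local="consumer_id" foreign="id" />
  </foreign-key>
  <foreign-key foreignTable="item_collection">
    <reference local="item_collection_id" foreign="id" />
  </foreign-key>
  <behavior name="data_cache" />
</table>

<!-- item_collection, item_to_item_collection and item follow the same pattern -->
```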
When I change the connected "item_collection" for a "consumer", the "item_collection" cache gets purged, but not the "item" cache.
This is pretty bad. If I have to purge the entire "item" cache each time I add/change/delete an entry in "item_collection_to_consumer", the whole caching might be useless.
Example
To illustrate it better, I will try to show it via the following example.
Example Data
First of all, I "output" the content of the tables described above.
Table consumer
| id |
|---|
| 1 |
| 2 |
Table item_collection_to_consumer
| id | consumer_id | item_collection_id |
|---|---|---|
| 1 | 1 | 1 |
| 2 | 2 | 1 |
Table item_collection
| id |
|---|
| 1 |
| 2 |
Table item_to_item_collection
| id | item_collection_id | item_id |
|---|---|---|
| 1 | 1 | 1 |
| 2 | 1 | 2 |
| 3 | 2 | 3 |
| 4 | 2 | 4 |
Table item
| id |
|---|
| 1 |
| 2 |
| 3 |
| 4 |
Scenario
Now I will write down the steps, the expected result, and the actual result. All output is based on the assumption that we want to know the data belonging to the consumer with the id "1".
| step with description | expected result | real cache result | real database result |
|---|---|---|---|
| Output the connected item_collection | 1 | 1 | 1 |
| Output the connected items | 1, 2 | 1, 2 | 1, 2 |
| Update the entry with the id "1" in the table item_collection_to_consumer by setting the value of the column "item_collection_id" to "2" | - | - | - |
| Output the connected item_collection | 2 | 2 | 2 |
| Output the connected items | 3, 4 | 1, 2 | 3, 4 |
As you can see, the cache and the database differ. My only explanation is the following: while the cached data for the table "item_collection_to_consumer" is purged, the cached data for the items is not. That is a fact, since I can see and read it in the Redis cache.
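The stale read can be reproduced with a toy model of the behavior (a minimal Python sketch of the concept only, not the actual PHP implementation; the cache layout and function names are made up for illustration):

```python
# Per-table cache, keyed by (table, query key). This mimics a behavior
# that can only purge entries belonging to the table that was modified.
cache = {}

def cached_query(table, key, fetch):
    """Return the cached result for (table, key), computing it on a miss."""
    if (table, key) not in cache:
        cache[(table, key)] = fetch()
    return cache[(table, key)]

def purge(table):
    """Purge only the entries belonging to one table."""
    for k in [k for k in cache if k[0] == table]:
        del cache[k]

items_db = {1: [1, 2], 2: [3, 4]}  # item_collection_id -> item ids
link = {1: 1}                      # consumer_id -> item_collection_id

# The item list for consumer 1 gets cached: [1, 2].
cached_query("item", ("consumer", 1), lambda: items_db[link[1]])

# Re-point consumer 1 to item_collection 2; only the link table's
# cache is purged, as observed in Redis.
link[1] = 2
purge("item_collection_to_consumer")

# The stale item list survives the purge -- the bug described above.
stale = cached_query("item", ("consumer", 1), lambda: items_db[link[1]])
assert stale == [1, 2]               # the cache still says 1, 2 ...
assert items_db[link[1]] == [3, 4]   # ... while the database says 3, 4
```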
Final Words
The big question: how can we trigger the purging of the cache, ideally without doing anything on the application side? Do you have any advice? Have you experienced this kind of error before? Do you need more information?
Kind regards, stev
If the relationships are complex, we know that the deletion of the cache does not work. Unfortunately this problem is difficult, and we have not been able to solve it even now.
For this reason, we use the cache feature only in limited cases.
We also do not have a method for solving your problem. I am sorry that I was unable to be of any assistance.
Hey @SNakano, don't worry. You already helped me by confirming this edge case as known behavior. Would you mind adding this to the documentation? I am asking you since you are probably more familiar with it. Otherwise, if you like, I could create a pull request with my suggested note.
Kind regards, Stev
Thank you for your great proposal. Would you send me the pull request?
Hey @SNakano,
I will do my best. Please be patient. I have spotted some issues already and am trying to implement a workable solution that fits more than just one edge case :-).
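The direction I am exploring can be sketched like this (a Python sketch of the idea only, not the behavior's PHP code; the foreign-key list and helper name are made up for illustration): when a row changes, purge not just that table's cache but the cache of every table connected to it through foreign keys, in either direction, since any of them may back a cached query that joins through the changed row.

```python
# Foreign keys from the schema in this issue: (referencing, referenced).
FOREIGN_KEYS = [
    ("item_collection_to_consumer", "consumer"),
    ("item_collection_to_consumer", "item_collection"),
    ("item_to_item_collection", "item_collection"),
    ("item_to_item_collection", "item"),
]

def related_tables(table):
    """Return `table` plus every table transitively connected to it by
    a foreign key, treating the foreign-key graph as undirected."""
    graph = {}
    for a, b in FOREIGN_KEYS:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    seen, stack = set(), [table]
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(graph.get(t, ()))
    return seen

# An update to the link table now also forces a purge of the item cache:
assert "item" in related_tables("item_collection_to_consumer")
```

Of course, in this schema every table is transitively connected to every other, so a change anywhere purges everything — which matches my earlier worry that caching may not pay off for tightly coupled tables. But at least it would be correct.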