Online store latency (redis) is growing over time
Expected Behavior
The online-store latency remains the same and doesn't grow over time. We assume that the amount of data in Redis is not growing.
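One way to sanity-check that assumption (a sketch using redis-py; the host name is hypothetical, not our real instance):

import redis

# Sanity check that the Redis dataset is not growing: key count and memory usage.
r = redis.Redis(host="our-redis-host", port=6379)  # hypothetical host/port
print("keys:", r.dbsize())
print("used_memory_human:", r.info("memory")["used_memory_human"])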
Current Behavior

The median latency growth over the past 30 days is shown in the chart above.
Steps to reproduce
We use a custom aiohttp Python service to relay requests to the online store, where we invoke the native Feast client. The service is completely stateless, and I don't expect it to be the source of the problem.
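Roughly, the relay looks like this (a sketch; the route and payload field names are hypothetical, the real service has more around it):

import asyncio
from aiohttp import web
from feast import FeatureStore

store = FeatureStore(config=config)  # `config` is the RepoConfig shown below

async def get_features(request: web.Request) -> web.Response:
    body = await request.json()
    loop = asyncio.get_running_loop()
    # get_online_features() is blocking, so run it in the default executor
    response = await loop.run_in_executor(
        None,
        lambda: store.get_online_features(
            features=body["features"],        # e.g. ["view:feature"], hypothetical
            entity_rows=body["entity_rows"],  # e.g. [{"entity_id": 1}], hypothetical
        ),
    )
    return web.json_response(response.to_dict())

app = web.Application()
app.add_routes([web.post("/get-online-features", get_features)])
web.run_app(app, port=8080)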
Here is our Feast config:
from feast import RepoConfig
from feast.infra.offline_stores.bigquery import BigQueryOfflineStoreConfig
from feast.infra.online_stores.redis import RedisOnlineStoreConfig

config = RepoConfig(
    project='feast',
    registry='gs://our-bucket/feast/registry_file_3.db',
    provider='gcp',
    online_store=RedisOnlineStoreConfig(
        connection_string=get_redis_connection_string(),
        redis_type='redis',
        key_ttl_seconds=REDIS_TTL_SECONDS,
    ),
    offline_store=BigQueryOfflineStoreConfig(
        dataset='feature_store',
        project_id='project-id',
    ),
    entity_key_serialization_version=2,
)
As you can see, we use the default registry cache TTL (600 seconds).
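For reference, this is how the TTL could be set explicitly, if I read the 0.28 config classes right (a sketch; we have never changed it from the default):

from feast.repo_config import RegistryConfig

# Hypothetical explicit setting; we rely on the default (600 s) instead.
registry = RegistryConfig(
    path='gs://our-bucket/feast/registry_file_3.db',
    cache_ttl_seconds=600,
)
# ... and then RepoConfig(registry=registry, ...) instead of a plain path.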
Specifications
- Version: 0.28.0
- Platform: amd64
- Subsystem: Debian 10
Possible Solution
We noticed that changing the path to the file-based registry (i.e. effectively re-creating it) eliminates the latency growth and brings it back to normal (today's chart):
Therefore, a solution might be related to fixing the registry caching mechanism.
Let me know if further details are needed!
I have noticed that timestamps of incremental materialization are stored in the registry and are sent to the client as well. We run incremental materialization every 15 minutes, so over a month this yields 30 * 24 * 4 = 2880 timestamps per view, which might explain the gradual increase in response time. Not sure if it is the reason, but I decided to share the info here.
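A quick way to check this on a live registry (a sketch; I'm assuming the materialization_intervals attribute I see on FeatureView in the 0.28 sources):

from feast import FeatureStore

store = FeatureStore(config=config)

# Count the materialization intervals kept per feature view in the registry;
# with incremental materialization every 15 minutes this number keeps growing.
for fv in store.list_feature_views():
    print(fv.name, len(fv.materialization_intervals))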
Thanks for filing! Do you mean that you changed the file-based registry to have TTL=0 and it was OK?
I have never adjusted registry cache TTL, it has always been set to the default value.
Hi guys. Any news on this? This issue forces us to update the registry file in production every couple of weeks to reset the latency.
Hi @adchia Do you have any plans to fix this soon?
An update from us: having disabled incremental materialization for a week, we no longer notice increases in latency, which confirms my hypothesis above. As you can see in the chart below, we recreated the registry and disabled incremental materialization around the 25th of May:
Unfortunately, neither the cause nor a resolution is obvious to me at the moment.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
We found that usage logging to Feast (telemetry) caused the problem; we disabled it and everything runs without leaks now.
@RadionBik hey, thanks for investigating this, can you clarify the last comments? Did you find that both the incremental materialization payload and Feast usage logging were the culprits here?
I might have left the last comment in a wrong issue.
So the current status is:
- we disabled Feast's telemetry to help with a memory leak problem we had (not this issue); there is a sketch of how to do that below
- we do not use incremental materialization anymore because of the increasing latency problem we reported in this issue
Hope this clarifies the situation a bit.
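In case it helps someone, telemetry is controlled by the FEAST_USAGE environment variable as far as I know (a sketch; set it before feast is imported):

import os

# Disable Feast usage reporting (telemetry); to my understanding the flag is read
# when the feast.usage module is first imported, so set it before importing feast.
os.environ["FEAST_USAGE"] = "False"

from feast import FeatureStore  # imported only after the flag is set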
@RadionBik thanks for the clarification