
Online store latency (redis) is growing over time

Open RadionBik opened this issue 1 year ago • 11 comments

Expected Behavior

The online-store latency remains constant and does not grow over time. We assume that the amount of data in Redis is not growing.

Current Behavior

[chart omitted]

The chart above shows the growth of the median latency over the past 30 days.

Steps to reproduce

We use a custom aiohttp python service to relay requests to the online-store, where we invoke the native feast client. The service is completely stateless, and I don't expect it to be the source of the problem.
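Roughly, the relay handler looks like the sketch below (the endpoint path, feature references and entity keys are illustrative, not our real ones; config is the RepoConfig shown further down):

from aiohttp import web
from feast import FeatureStore

store = FeatureStore(config=config)  # the RepoConfig shown below

async def get_features(request: web.Request) -> web.Response:
    body = await request.json()
    # get_online_features() is synchronous; a production service would likely
    # offload it to an executor so the event loop is not blocked.
    response = store.get_online_features(
        features=body["features"],        # e.g. ["stats_view:feature_a"]
        entity_rows=body["entity_rows"],  # e.g. [{"entity_id": 123}]
    )
    return web.json_response(response.to_dict())

app = web.Application()
app.add_routes([web.post("/features", get_features)])
# web.run_app(app, port=8080)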

Here is our feast config:

from feast.repo_config import RepoConfig
from feast.infra.offline_stores.bigquery import BigQueryOfflineStoreConfig
from feast.infra.online_stores.redis import RedisOnlineStoreConfig

# get_redis_connection_string() and REDIS_TTL_SECONDS are defined elsewhere
# in our service.
config = RepoConfig(
    project='feast',
    registry='gs://our-bucket/feast/registry_file_3.db',
    provider='gcp',
    online_store=RedisOnlineStoreConfig(
        connection_string=get_redis_connection_string(),
        redis_type='redis',
        key_ttl_seconds=REDIS_TTL_SECONDS,
    ),
    offline_store=BigQueryOfflineStoreConfig(
        dataset='feature_store',
        project_id='project-id',
    ),
    entity_key_serialization_version=2,
)

As you can see, we use the default registry cache TTL (600 seconds).
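For reference, the same setting could be made explicit via a RegistryConfig, something like the sketch below (we simply rely on the default instead):

from feast.repo_config import RegistryConfig

# Equivalent to the default: the cached registry is refreshed every 600 s.
registry_config = RegistryConfig(
    path='gs://our-bucket/feast/registry_file_3.db',
    cache_ttl_seconds=600,
)
# ...and then passed as RepoConfig(registry=registry_config, ...) instead of
# the plain path string.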

Specifications

  • Version: 0.28.0
  • Platform: amd64
  • Subsystem: debian 10

Possible Solution

We noticed that changing the path to the file-based registry (i.e. effectively re-creating it) eliminates the latency growth and brings it back to normal (today's chart below):

[chart omitted]

Therefore, a solution might be related to fixing the registry caching mechanism.

Let me know if further details are needed!

RadionBik avatar Apr 13 '23 11:04 RadionBik

I have noticed that the timestamps of incremental materialization runs are stored in the registry and sent to the client as well. We run incremental materialization every 15 minutes, so over a month that yields 30 * 24 * 4 = 2880 timestamps per feature view, which might explain the gradual increase in response time. Not sure if it is the reason, but I decided to share the info here.
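A quick way to check this from the client side is to count the stored intervals per feature view, e.g. (a sketch; it prints whatever views the project defines):

from feast import FeatureStore

store = FeatureStore(config=config)
for fv in store.list_feature_views():
    # Each materialization run appends a (start, end) interval to the
    # feature view's metadata in the registry, so this list keeps growing.
    print(fv.name, len(fv.materialization_intervals))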

RadionBik avatar Apr 14 '23 14:04 RadionBik

Thanks for filing! Do you mean that you changed the file-based registry to have a TTL of 0 and it was OK?

adchia avatar Apr 21 '23 15:04 adchia

I have never adjusted registry cache TTL, it has always been set to the default value.

RadionBik avatar Apr 21 '23 16:04 RadionBik

Hi guys. Any news on this? This issue forces us to update the registry file in production every couple of weeks to reset the latency.

nturusin avatar May 05 '23 08:05 nturusin

Hi @adchia Do you have any plans to fix this soon?

nturusin avatar May 23 '23 09:05 nturusin

An update from us: after disabling incremental materialization for a week, we no longer see increases in latency, which confirms my hypothesis above. As you can see on the chart below, we recreated the registry and disabled incremental materialization around the 25th of May:

[chart omitted]

Unfortunately, neither the cause nor a resolution is obvious to me at the moment.

RadionBik avatar May 30 '23 14:05 RadionBik

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale[bot] avatar Oct 15 '23 16:10 stale[bot]

We found that Feast's usage logging caused the problem; we disabled it and everything runs without leaks now.
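For anyone else hitting this, usage reporting can be switched off via the FEAST_USAGE environment variable before feast is imported (a sketch; setting it in the deployment environment works too):

import os

# Opt out of Feast usage reporting; the flag is read when feast's usage
# module is first imported, so set it before importing feast.
os.environ["FEAST_USAGE"] = "False"

import feast  # noqa: E402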

RadionBik avatar Nov 08 '23 12:11 RadionBik

@RadionBik hey, thanks for investigating this. Can you clarify the last comments? Did you find that both the incremental materialization payload and feast usage logging were the culprits here?

tokoko avatar Mar 23 '24 06:03 tokoko

I might have left the last comment in the wrong issue.

So the current status is:

  • we disabled feast's telemetry to help with a memory leak problem we had (not this issue)
  • we do not use incremental materialisation anymore because of the increasing latency problem we reported in this issue

Hope this clarifies the situation a bit

RadionBik avatar Mar 23 '24 10:03 RadionBik

@RadionBik thanks for the clarification

tokoko avatar Mar 28 '24 10:03 tokoko