mark_all does not work for aggregated activities whose updated_at falls within the same second
To reproduce:
- create a few activities, all within one second of each other, and create a NotificationFeed containing all of them.
- add each activity to a separate AggregatedActivity (whether it is an existing one or a new one does not matter).
- call mark_all on the feed and examine all AggregatedActivity instances updated within the same second: only one of them will be marked as seen (the others remain unchanged).
This issue is due to AggregatedActivity's serialization_id not being precise enough. Activity's serialization_id implementation multiplies the timestamp by 1000 to allow for millisecond precision; AggregatedActivity's implementation does not. If two AggregatedActivities have updated_at times of 4.242 and 4.424 seconds, their serialization_ids are both 4. Since the serialization_ids are the same, only one of these activities is updated when add_many and remove_many are called on timeline_storage.
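The collision can be sketched with plain arithmetic (the 4.242/4.424 values are the illustrative ones from above, not real epoch timestamps):

```python
# Two updated_at timestamps, 182 ms apart, within the same second.
a, b = 4.242, 4.424

# Second precision (current AggregatedActivity behaviour):
# both collapse to the same id, 4, so the ids collide.
assert int(a) == int(b) == 4

# Millisecond precision (Activity-style, * 1000): ids stay distinct.
assert int(a * 1000) != int(b * 1000)
```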
In general, I think serialization_id for AggregatedActivity is a bit misleading right now: the ids are likely not to be unique. Can we consider giving it millisecond precision by multiplying by 1000, the same as Activity's serialization_id?
Git blame shows that this used to be the case until https://github.com/tschellenbach/Stream-Framework/commit/078cb3e98734a790910cb1e6303929995e052701
I agree that the implementation of serialization_id is misleading. It has probably not been fixed so far for backwards-compatibility reasons. Here is the implementation that I use:
```python
from stream_framework.activity import AggregatedActivity as BaseAggregatedActivity
from stream_framework.utils import datetime_to_epoch


class AggregatedActivity(BaseAggregatedActivity):

    @property
    def serialization_id(self):
        # Microsecond precision instead of the default second precision
        return int(datetime_to_epoch(self.updated_at) * (10 ** 6))
```
I am using the same approach for the Activity class.
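A self-contained sanity check of this override might look like the sketch below. The stand-in classes and the local `datetime_to_epoch` are assumptions made only to avoid depending on stream_framework here; they mimic the second-precision default and the microsecond-precision override:

```python
from datetime import datetime, timezone


def datetime_to_epoch(dt):
    # Stand-in for stream_framework.utils.datetime_to_epoch (assumed to
    # return seconds since the epoch as a float); included only to keep
    # this sketch self-contained.
    return dt.replace(tzinfo=timezone.utc).timestamp()


class BaseAggregatedActivity:
    # Minimal stand-in for stream_framework's AggregatedActivity:
    # second-precision serialization_id, as in the current implementation.
    def __init__(self, updated_at):
        self.updated_at = updated_at

    @property
    def serialization_id(self):
        return int(datetime_to_epoch(self.updated_at))


class AggregatedActivity(BaseAggregatedActivity):
    # The microsecond-precision override from the snippet above.
    @property
    def serialization_id(self):
        return int(datetime_to_epoch(self.updated_at) * (10 ** 6))


# Two updates 182 ms apart, within the same second.
t1 = datetime(2024, 1, 1, 12, 0, 0, 242000)
t2 = datetime(2024, 1, 1, 12, 0, 0, 424000)

base_a, base_b = BaseAggregatedActivity(t1), BaseAggregatedActivity(t2)
a, b = AggregatedActivity(t1), AggregatedActivity(t2)

# Second precision collides; microsecond precision stays distinct.
assert base_a.serialization_id == base_b.serialization_id
assert a.serialization_id != b.serialization_id
```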
@Anislav Awesome, thanks for the feedback. That's the same workaround we are using right now.