Konrad `ktoso` Malawski

259 issues

Currently the API is only offered via the `LifecycleWatch` protocol; we should perhaps make it accessible to non-distributed actors as well

1 - triaged
t:deathwatch
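Conceptually, death watch lets one party register interest in another actor's termination and receive a signal when it happens. A minimal, library-independent sketch of that idea (the `TerminationWatcher` type and `notifyTerminated` method are hypothetical illustrations, not the `LifecycleWatch` API):

```swift
import Foundation

// Hypothetical sketch: a registry that delivers "terminated" signals to watchers.
// This models the idea behind death watch, not the actual LifecycleWatch API.
final class TerminationWatcher {
    private var watchers: [String: [(String) -> Void]] = [:]

    // Register interest in `watcheeID` terminating.
    func watch(_ watcheeID: String, onTerminated: @escaping (String) -> Void) {
        watchers[watcheeID, default: []].append(onTerminated)
    }

    // Called when an actor terminates; notifies all watchers exactly once.
    func notifyTerminated(_ watcheeID: String) {
        watchers.removeValue(forKey: watcheeID)?.forEach { $0(watcheeID) }
    }
}

let watcher = TerminationWatcher()
var signals: [String] = []
watcher.watch("worker-1") { signals.append($0) }
watcher.notifyTerminated("worker-1")
print(signals) // ["worker-1"]
```

Making such an API available to non-distributed actors would mean lifting it out of the distributed-actor-only protocol into something any watcher can adopt.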

```
20:28:19 Test Case 'ClusterSingletonPluginClusteredTests.test_remoteCallShouldFailAfterAllocationTimedOut' started at 2022-07-26 11:28:07.389
20:28:19 :0: error: ClusterSingletonPluginClusteredTests.test_remoteCallShouldFailAfterAllocationTimedOut : threw error "
20:28:19 try await self.assertMemberStatus(on: second, node: firstNode, is: .down, within: .seconds(10))
20:28:19 ^~~~~~...
```

failed 💥

We have `ActorTags` now and can use them to express "give me actors tagged `"team": "x"`" rather than having separate reception keys, which we needed before. We can have a specific...

1 - triaged
s:medium
t:receptionist
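The idea above is that a lookup can filter on actor metadata instead of requiring a dedicated reception key per team. A library-independent sketch of that shape (the `TaggedRegistry` type and its methods are illustrative assumptions, not the receptionist API):

```swift
// Hypothetical sketch of tag-based lookup instead of per-team reception keys.
struct ActorRef: Hashable {
    let id: String
    let tags: [String: String]

    func hash(into hasher: inout Hasher) { hasher.combine(id) }
    static func == (lhs: ActorRef, rhs: ActorRef) -> Bool { lhs.id == rhs.id }
}

final class TaggedRegistry {
    private var actors: [ActorRef] = []

    func register(_ ref: ActorRef) { actors.append(ref) }

    // "Give me actors tagged `"team": "x"`" without a dedicated reception key.
    func lookup(tag key: String, value: String) -> [ActorRef] {
        actors.filter { $0.tags[key] == value }
    }
}

let registry = TaggedRegistry()
registry.register(ActorRef(id: "a", tags: ["team": "x"]))
registry.register(ActorRef(id: "b", tags: ["team": "y"]))
print(registry.lookup(tag: "team", value: "x").map(\.id)) // ["a"]
```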

```
21:43:00 :0: error: DowningClusteredTests.test_stopLeader_by_leaveSelfNode_shouldPropagateToOtherNodes : threw error "
21:43:00 try await self.joinNodes(node: first, with: second, ensureMembers: .up)
21:43:00 ^~~~~~
21:43:00 error: MembershipError(awaitStatusTimedOut(20.0 seconds, Optional(DistributedActors.Cluster.MembershipError.statusRequirementNotMet(expected: DistributedActors.Cluster.MemberStatus.up, found: Member(sact://second:[email protected]:9002, status: joining,...
```

failed 💥

```
21:34:55 Test Case 'LifecycleWatchTests.test_watch_shouldTriggerTerminatedWhenNodeTerminates' started at 2022-07-20 12:34:35.741
21:34:55 /code/Tests/DistributedActorsTests/LifecycleWatchTests.swift:138: error: LifecycleWatchTests.test_watch_shouldTriggerTerminatedWhenNodeTerminates : failed -
21:34:55 try await joinNodes(node: first, with: second, ensureMembers: .up)
21:34:55 ^~~~~~
21:34:55 error: MembershipError(awaitStatusTimedOut(20.0...
```

failed 💥

We could consider offering a distributed rate limiter. We had some ideas around this in Akka back in the day, based on a replenishing token-bucket design. I was recently reminded...

9 - maybe some day
feature-request
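The local building block of such a design is a token bucket that refills at a fixed rate; the distributed part is coordinating that state across nodes. A minimal sketch of the bucket itself (names and parameters are illustrative, not a proposed API):

```swift
import Foundation

// Sketch of a replenishing token bucket, the local building block a
// distributed rate limiter could be based on.
struct TokenBucket {
    let capacity: Double
    let refillPerSecond: Double
    private var tokens: Double
    private var lastRefill: Date

    init(capacity: Double, refillPerSecond: Double, now: Date = Date()) {
        self.capacity = capacity
        self.refillPerSecond = refillPerSecond
        self.tokens = capacity
        self.lastRefill = now
    }

    // Try to take one token; refill proportionally to elapsed time first.
    mutating func tryAcquire(now: Date = Date()) -> Bool {
        let elapsed = now.timeIntervalSince(lastRefill)
        tokens = min(capacity, tokens + elapsed * refillPerSecond)
        lastRefill = now
        guard tokens >= 1 else { return false }
        tokens -= 1
        return true
    }
}

var bucket = TokenBucket(capacity: 2, refillPerSecond: 1)
let start = Date()
print(bucket.tryAcquire(now: start)) // true
print(bucket.tryAcquire(now: start)) // true
print(bucket.tryAcquire(now: start)) // false (bucket empty)
print(bucket.tryAcquire(now: start.addingTimeInterval(1))) // true after refill
```

In a cluster, the open question is where the bucket lives: a single owning actor (simple but a bottleneck), or sharded/partitioned buckets with periodic reconciliation.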

If a distributed actor conforms to some protocol, we can check it in based on its metadata, rather than having to pass the key explicitly:

```
await local.receptionist.checkIn(forwarder, with: .stringForwarders)...
```

help wanted
s:small
t:receptionist
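The proposal amounts to deriving the reception key from the conformed-to type, so the `with:` argument becomes implicit. A library-independent sketch of that mechanism (the `TypeKeyedRegistry` type and `StringForwarder` protocol are hypothetical, not the receptionist API):

```swift
// Sketch: derive a reception key from a protocol/type instead of passing it
// explicitly.
protocol StringForwarder { func forward(_ s: String) }

struct Greeter: StringForwarder {
    func forward(_ s: String) { print(s) }
}

final class TypeKeyedRegistry {
    private var byKey: [String: [Any]] = [:]

    // The key is derived from the static type, so callers write
    // `checkIn(instance)` instead of `checkIn(instance, with: .someKey)`.
    func checkIn<T>(_ value: T, as type: T.Type = T.self) {
        byKey[String(describing: type), default: []].append(value)
    }

    func lookup<T>(_ type: T.Type) -> [T] {
        (byKey[String(describing: type)] ?? []).compactMap { $0 as? T }
    }
}

let registry = TypeKeyedRegistry()
registry.checkIn(Greeter() as any StringForwarder)
print(registry.lookup((any StringForwarder).self).count) // 1
```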

This is still a ref-based receptionist; we should hide it and remove it as soon as we can (and only support distributed actors)

2 - pick next
s:medium

For the simplest case, using the existing API, we're able to do this with the receptionist by checking in with "user id" reception keys. It is somewhat suboptimal, but that is...

1 - triaged
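The "simplest case" above can be pictured as checking in under a key derived from the user id, and looking that key up to find the per-user actor. A library-independent sketch under those assumptions (the `KeyedReceptionist` type and `sessionKey` helper are hypothetical):

```swift
// Sketch: per-"user id" reception keys. Each session actor checks in under a
// key like "session/<userID>", and callers look that key up.
final class KeyedReceptionist {
    private var entries: [String: [String]] = [:] // key -> actor ids

    func checkIn(actorID: String, key: String) {
        entries[key, default: []].append(actorID)
    }

    func lookup(key: String) -> [String] {
        entries[key] ?? []
    }
}

func sessionKey(userID: String) -> String { "session/\(userID)" }

let receptionist = KeyedReceptionist()
receptionist.checkIn(actorID: "session-actor-1", key: sessionKey(userID: "alice"))
print(receptionist.lookup(key: sessionKey(userID: "alice"))) // ["session-actor-1"]
```

The suboptimal part is that every lookup goes through the shared receptionist machinery and key space, rather than a purpose-built per-user routing mechanism.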