Aaron Stannard

761 comments of Aaron Stannard

Another example of slow message handling inside the shard / shard region actors going across nodes: ![image](https://user-images.githubusercontent.com/326939/129968459-a736141d-f604-446b-ba9f-60d7a2b52557.png)

I think I know where to look now - it looks like there's a combination of: 1. Dispatch / context switching overhead - the gaps between spans on these charts,...

@carl-camilleri-uom that's great work. Explains why this issue is unique to sharding. I also noticed we're doing some things like converting the `IMessageExtractor` into a delegate at startup inside the...
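For context, a hedged sketch of what "converting the `IMessageExtractor` into a delegate" could look like, assuming the `ExtractEntityId`/`ExtractShardId` delegate types from Akka.Cluster.Sharding's public API (the adapter class and method names here are hypothetical, not the library's internals):

```csharp
using System;
using Akka.Cluster.Sharding;

public static class ExtractorAdapter
{
    // Wraps an IMessageExtractor as the delegate pair that the sharding
    // Start(...) overloads also accept. Done once at startup, this trades a
    // virtual interface dispatch for delegate invocations on the hot path.
    public static (ExtractEntityId, ExtractShardId) ToDelegates(IMessageExtractor extractor)
    {
        ExtractEntityId extractEntityId = message =>
            Tuple.Create(extractor.EntityId(message), extractor.EntityMessage(message));
        ExtractShardId extractShardId = message => extractor.ShardId(message);
        return (extractEntityId, extractShardId);
    }
}
```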

The changes you included in your patch file - I can't get them to run.

```csharp
System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.ArgumentNullException: The...
```

So this is really an Akka.Remote performance issue. I replicated the actor hierarchy in its essence here: https://github.com/Aaronontheweb/RemotingBenchmark

```ini
BenchmarkDotNet=v0.13.1, OS=Windows 10.0.19041.1165 (2004/May2020Update/20H1)
AMD Ryzen 7 1700, 1 CPU,...
```

> The results are below: I still get >3s to complete 10000 messages, so it seems not using ASK on the remote path does not improve much on the results...

> I understand it's not ideal to await, however I'm not sure I understand what should be the correct approach for the use case at hand

Your original benchmark code...
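For illustration, a minimal sketch of the Tell-vs-Ask distinction being discussed, using Akka.NET's standard `Tell` and `Ask<T>` calls. The region name and the `Envelope`/`Ack` message types are hypothetical placeholders, not from the benchmark under discussion:

```csharp
// Hedged sketch: awaiting Ask on every send serializes the workload on
// network round trips; Tell is fire-and-forget, so messages can pipeline
// through the remote connection.
var shardRegion = await ClusterSharding.Get(system)
    .StartAsync("entities", EntityActor.Props, settings, messageExtractor); // assumed setup

// Ask: one awaited round trip per message - throughput bound by latency.
var reply = await shardRegion.Ask<Ack>(new Envelope(entityId, msg), TimeSpan.FromSeconds(3));

// Tell: no await on the send path; handle replies inside the sending actor
// (they arrive as ordinary messages) instead of blocking per request.
shardRegion.Tell(new Envelope(entityId, msg));
```

The design point is that `Ask` allocates a temporary actor and a `Task` per request, which is fine for occasional request/response but dominates a throughput benchmark.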

Since writing my last comment we've improved the performance of Akka.Remote by ~40%: https://github.com/akkadotnet/akka.net/pull/5247

However, the improvement in the Akka.Cluster.Sharding benchmark is only about ~15% - the thing that really...

> At this stage I start wondering whether this is due to a single actor being responsible to marshal calls on a per-shard basis. And therefore any request to the...

> If we were able to spec serializer IDs as having a max value, we probably could...

The literature on those is a max value of 1024 with 0-99 reserved...
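To ground the serializer-ID discussion, a hedged HOCON sketch of how a custom serializer is bound to an explicit identifier in Akka.NET configuration. The type names and the chosen id are hypothetical; the only constraint taken from the comment is that ids run up to a max of 1024 with 0-99 reserved for built-in serializers:

```hocon
# Hedged sketch: user serializer ids should be chosen at 100 or above,
# since (per the comment) 0-99 are reserved and 1024 is the ceiling.
akka.actor {
  serializers {
    my-ser = "MyApp.Serialization.MySerializer, MyApp"   # hypothetical type
  }
  serialization-identifiers {
    "MyApp.Serialization.MySerializer, MyApp" = 101      # user-chosen id >= 100
  }
  serialization-bindings {
    "MyApp.Messages.IDomainMessage, MyApp" = my-ser      # hypothetical marker interface
  }
}
```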