Sort benchmark results in README.md by column: Mean
As the maintainer of Friflo.Engine.ECS I want to ask if the benchmark results in README.md can be ordered by the column: Mean.
Every component benchmark now has 44 rows, so you get lost in the numbers. As a result you go with Arch, assuming the tables are already sorted.
This assumption comes from the coincidence that Arch has good results on average. Morpeh & SveltoECS sit at the bottom merely because of the alphabetical sort order.
So there is no real incentive to improve performance: you never get to the top because of your project's name.
Alternatively we could rename our projects to "Abc-..." :)
Any thoughts?
Well, it wasn't really supposed to become a scoreboard, but I get you, haha. It could probably come with the next PR #36.
Ah, I did not read through this PR. Too much text for me :) But the idea is the same.
Speaking of leaderboards, this is the reference: https://www.techempower.com/benchmarks/#hw=ph&test=fortune&section=data-r22
Best C# implementation: aspcore-ado-pg. Loser! :)
> Speaking of leaderboards, this is the reference: https://www.techempower.com/benchmarks/#hw=ph&test=fortune&section=data-r22
Oh, the race is absolutely on, @friflo! (in only the best of sporting spirits). I am growing really curious about where either of our EC-Systems can be improved beyond what we as the creators (as well as the others) already know - true discoveries and learnings, so to speak. I gotta say, I admire your Carmack-style indentation, type hierarchies, and project structure - so slick and clean! Honestly one of the nicest repos I have seen in a long time.
Best regards, Tiger
PS: I also agree that Arch is at best middling, despite bold claims to the contrary in its repo's README.MD. Since the results are not sorted by value, Arch appears to benefit greatly from recency bias when reading. Perhaps I should rename fennecs to 123ecs, mwahaha.
Just to put into perspective where Arch kinda scores right now...
(this is CreateEntityWithThreeComponents 100k after the PR #36 as run on my machine)
| Method | EntityCount | Mean | Error | StdDev | Median | Allocated |
|---|---|---|---|---|---|---|
| fennecs | 100000 | 916.0 us | 20.43 us | 57.62 us | 898.7 us | 4.5 MB |
| FrifloEngineEcs | 100000 | 1,925.7 us | 155.23 us | 457.69 us | 2,050.8 us | 6.59 MB |
| Arch | 100000 | 3,327.8 us | 47.46 us | 72.48 us | 3,324.5 us | 3.86 MB |
| DefaultEcs | 100000 | 6,412.8 us | 127.84 us | 113.32 us | 6,401.3 us | 19.06 MB |
Hi @thygrrr!
Challenge accepted! I don't think I can beat this number; it would require dropping features.
I am already looking at where I can improve performance. I've found one hotspot already, and there are two more candidates for improvement.
I checked your implementation. I guess you should add a Reserve() or EnsureCapacity() like Arch or Friflo ECS have.
I'd expect that to give a speedup of around 20%.
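For illustration, here is the general effect such a Reserve()/EnsureCapacity() call has, sketched with .NET's own `List<T>.EnsureCapacity` (a minimal analogy only; the ECS-specific method names suggested above are the real candidates):

```csharp
using System;
using System.Collections.Generic;

// Without pre-sizing, the backing array starts tiny and doubles repeatedly
// while 100k items are added (4 -> 8 -> ... -> 131072), each step copying
// the whole array. Pre-sizing reserves the final capacity once up front.
var reserved = new List<long>();
reserved.EnsureCapacity(100_000);      // single allocation (.NET 6+)
int capacityBefore = reserved.Capacity;

for (int i = 0; i < 100_000; i++)
    reserved.Add(i);

// No intermediate grow/copy steps happened: the capacity never changed.
Console.WriteLine(reserved.Capacity == capacityBefore); // True
```

The same idea applies to an ECS world's internal entity and component storage: knowing the entity count ahead of time turns O(log n) reallocations plus copies into a single allocation.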
@Doraku Is it okay to reuse a World in the benchmark to avoid the cost of memory allocations? Arch is doing this indirectly, as World.Create() recycles disposed worlds.
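To make the question concrete, a reused-world setup in BenchmarkDotNet would look roughly like this (a sketch only: the `EntityStore`/`CreateEntity` names follow Friflo.Engine.ECS, but the exact benchmark shape and cleanup strategy here are assumptions, not this repo's actual code):

```csharp
using BenchmarkDotNet.Attributes;
using Friflo.Engine.ECS;

public class CreateEntityBenchmark
{
    private EntityStore store;

    // The world is allocated once, outside the measured code path,
    // so its internal buffers are already warm for every iteration.
    [GlobalSetup]
    public void Setup() => store = new EntityStore();

    [IterationCleanup]
    public void Cleanup()
    {
        // Delete the created entities here but keep the store alive,
        // so each iteration measures entity creation rather than
        // world construction and its allocations.
    }

    [Benchmark]
    public void CreateEntities()
    {
        for (int i = 0; i < 100_000; i++)
            store.CreateEntity();
    }
}
```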
> ... Carmack-style indentation ...
Far too much praise, but I appreciate it. I like indentation; it helps me parse code into my few synapses faster.
btw: Arch is very good at selling its ECS, even if I don't like the exaggerated wording.
I'm warming up my queries for the Job benchmarks (because of how ThreadPool.UnsafeQueueUserWorkItem works, and also because fennecs probably won't win any trophies with its bare-bones thread scheduling either way; warming up just makes the results a lot less random, because the runtime & OS can take a near-arbitrary time to fire up the necessary number of worker threads for the first time).
Additionally, I'm now also pre-sizing the World and the underlying IdentityPool based on the number of entities to expect.
This saves unnecessary cascading and repeated dictionary and storage-array resizes, etc., that in real-life runs would never occur more than once anyway, and that a user in the know would work around in exactly the same way after reading the docs.
I find this is common sense and common practice for any real ECS workload.
Otherwise I'd just set the default capacities from 4K to 128K in the next release. ;)
> Otherwise I'd just set the default capacities from 4K to 128K in the next release. ;)
LOL
Hi @thygrrr,
I've finished optimizing entity creation. See: https://github.com/Doraku/Ecs.CSharp.Benchmark/pull/38
The main optimization was to minimize the administrative memory footprint per entity,
the struct EntityNode, from 48 bytes to 16 bytes.
The counterpart in your lib is called Meta I guess.
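As a hypothetical sketch of what a 16-byte per-entity record can hold (the actual EntityNode fields in Friflo.Engine.ECS, and the Meta layout in fennecs, will differ):

```csharp
using System;
using System.Runtime.CompilerServices;

// 4 + 4 + 4 + 2 + 2 = 16 bytes with no padding needed.
// For 100k entities that is 1.6 MB of bookkeeping instead of 4.8 MB at
// 48 bytes per entity, and 3x as many records fit per cache line.
Console.WriteLine(Unsafe.SizeOf<CompactNode>()); // 16

struct CompactNode
{
    public int   ArchetypeIndex; // which archetype the entity lives in
    public int   RowIndex;       // row inside that archetype's storage
    public int   Pid;            // stable public entity id
    public short Revision;       // generation counter for recycled ids
    public short Flags;          // misc state bits
}
```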
Now I don't see any significant optimization possibilities left. Entity creation could be optimized a little further, but entity deletion would get slower.
Best regards, Ulli
This has been done now