arche
Some additional tests for competition
I've added the tests for sedyh/mizu@experimental and some additional fixes. From this discussion: #200.
🤔 Any idea how I can approve the CI to run? Or why I can't? No such option anywhere...
EDIT: Ok, seems it is because actions are restricted to push. Will fix...
Ok, fixed. Could you rebase onto main, please?
Ok, I'll do it tomorrow. It's 4 AM right now in my time zone.
Ok. I've synced the fork and rebased my branch onto main.
GLFW dependencies are missing. They need to be installed before the benchmarks run. See here for an example: https://github.com/mlange-42/arche-pixel/blob/main/.github/workflows/tests.yml#L24
Is it about changing the build stage? Added two commands under the run key.
No, it is required for the failing jobs in benchmarks.yml.
Ah, I get it. The problem is that I wrapped Ebitengine myself. That's because I wanted to make a simple API for the user, but the engine API doesn't really work with DI and the "accept interfaces, return structs" principle.
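For context, "accept interfaces, return structs" means constructors hand back concrete types while consumers depend only on small interfaces; a minimal sketch with hypothetical names (none of these types are from Arche or Ebitengine):

```go
package main

import "fmt"

// Renderer is the small interface a consumer accepts.
// Hypothetical name, for illustration only.
type Renderer interface {
	Draw(frame int) string
}

// Engine is the concrete struct the constructor returns.
type Engine struct {
	title string
}

// NewEngine returns the concrete type ("return structs").
func NewEngine(title string) *Engine {
	return &Engine{title: title}
}

func (e *Engine) Draw(frame int) string {
	return fmt.Sprintf("%s: frame %d", e.title, frame)
}

// RunOnce accepts the interface ("accept interfaces"), so callers
// can inject any Renderer, e.g. a test double instead of a real engine.
func RunOnce(r Renderer, frame int) string {
	return r.Draw(frame)
}

func main() {
	fmt.Println(RunOnce(NewEngine("demo"), 1)) // prints "demo: frame 1"
}
```

The difficulty described above is that a real engine's API is built around concrete types and callbacks, which resists being hidden behind such small interfaces.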
Yeah, I would try to avoid such dependencies where possible, esp. if it includes something as heavy as OpenGL.
Mh, another error around GLFW. But this time, it should not happen when just importing it. It looks like Ebitengine actually does some GLFW stuff. I got this error only when actually trying to create a window, or something else that requires a "display".
So I guess this won't run in the CI in its current form.
@sedyh Finally, I found a way to circumvent the problem of no display in the CI runners. There is xvfb, which emulates a display. Here is an example of how to use it:
```yaml
test:
  name: Run tests
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v3
    - name: Setup Go
      uses: actions/setup-go@v3
      with:
        go-version: '1.20.x'
    - name: Install dependencies
      run: |
        sudo apt-get update -y
        sudo apt-get install -y libgl1-mesa-dev xorg-dev xvfb
        go get .
    - name: Run tests
      run: |
        xvfb-run -a go test -v ./... -covermode=count -coverprofile="coverage.out"
        go tool cover -func="coverage.out"
```
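Presumably the failing jobs in benchmarks.yml can be wrapped the same way; a sketch of the relevant step commands (the exact bench flags and paths depend on that workflow):

```shell
# Install the GL/X11 headers and the virtual framebuffer (Debian/Ubuntu runners).
sudo apt-get update -y
sudo apt-get install -y libgl1-mesa-dev xorg-dev xvfb

# Run the benchmarks under a virtual display; -a picks a free server number.
xvfb-run -a go test -bench=. -benchmem ./...
```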
Yeah... if we are going to merge that, then probably only into a separate branch, so as not to clutter things with additional dependencies? Or would it be better to just close the PR after getting the test results?
I am a bit undecided. Maybe I will merge it, as it adds the dependencies only to the benchmark module, which is separate from arche.
But maybe I will treat it like the Entitas benchmarks, and merge both branches together. Would then rebase when re-running due to improvements here or in the other implementations.
Will prepare and upload the plots in the next days.
Another thing I noticed while preparing the plots is that something in your PR (a dependency?) is slowing down some of the benchmarks, particularly in the Build category. Not sure what it could be.
Take iterations as a baseline, and compare Build against it...
On main:

```
BenchmarkIterArche-2                284737      4069 ns/op         0 B/op      0 allocs/op
BenchmarkBuildArche-2                  945   1354442 ns/op   2538583 B/op  10089 allocs/op
BenchmarkIterArcheGeneric-2         235425      5002 ns/op         0 B/op      0 allocs/op
BenchmarkBuildArcheGeneric-2          2462    506672 ns/op   1093978 B/op     70 allocs/op
```
On your branch:

```
BenchmarkIterArche-2                246014      4768 ns/op         0 B/op      0 allocs/op
BenchmarkBuildArche-2                  632   1785171 ns/op   2538583 B/op  10089 allocs/op
BenchmarkIterArcheGeneric-2         241064      5120 ns/op         0 B/op      0 allocs/op
BenchmarkBuildArcheGeneric-2          1890    627462 ns/op   1093978 B/op     70 allocs/op
```
Hello, sorry for the late reply. The situation with the build slowdown is quite strange; I'll try my best to solve it later.