
FLAME GPU 2 is a GPU-accelerated agent-based modelling framework for CUDA C++ and Python

124 FLAMEGPU2 issues

Occasionally, producing "nightly" wheels has been useful between releases. We can generate the full set of wheels by manually invoking the draft release CI workflow, but this will generate ~48...

CI
speculative

As of #1090, MPI-backed distributed ensembles will be implemented, with an MPI-only test suite, which will only strictly test the use of MPI in a multi-GPU scenario. However, Google Test...

Tests
Priority: Low

test_codegen.py can be executed on CI, as it does not require the rest of pyflamegpu / libcuda.so to be available at runtime. This is of very little use, however, as...

Our current (functional) matrix syntax is not technically correct according to the web editor:

> Matrix options must only contain primitive values

I.e. `cudacxx` is an object, so not a...

CI

Following on from #316, there are improvements that can be made to improve quality of life with CMake:

+ [ ] Add messages when dependencies are being downloaded, in case...

cmake

If an agent function should be common to agents in all states, can we provide a clean method to avoid the manual duplication? https://github.com/FLAMEGPU/FLAMEGPU2/discussions/1132

enhancement
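The duplication being discussed can be illustrated with a minimal sketch. This is not the real pyflamegpu API; the `Agent`/`new_function` names here are hypothetical stand-ins showing how a behaviour common to all states currently has to be registered once per state by hand:

```python
# Hypothetical sketch, NOT the real pyflamegpu API: a toy model of
# registering agent functions per state, to show the manual duplication.
class Agent:
    def __init__(self, states):
        self.states = states
        self.functions = {}  # state name -> list of registered function names

    def new_function(self, state, name):
        # Attach a function to a single state.
        self.functions.setdefault(state, []).append(name)

agent = Agent(["resting", "moving", "infected"])

# Today, a behaviour common to every state must be attached state by state:
for state in agent.states:
    agent.new_function(state, "output_location")

# The proposal would replace this loop with a single "all states" registration.
assert all("output_location" in fns for fns in agent.functions.values())
```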

Conda is an alternative Python package distribution mechanism to pip/PyPI. It is not as widely used, due to `pip` and PyPI being the de facto defaults, but is widely used in...

python

CUDA 12.3 includes additions to the CUDA graph API which should make it usable for FLAMEGPU:

> CUDA Graphs:
>
> Conditional nodes, allowing you to conditionally execute or iterate...

optimistic
optimisation
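For reference, the host-side shape of the CUDA 12.3 conditional-node API looks roughly like the sketch below. This is an illustrative, non-runnable fragment based on the public CUDA 12.3 graph API (a real use would need error checking, a device kernel calling `cudaGraphSetConditional`, and a populated body graph):

```cpp
// Sketch: wrapping work in a WHILE conditional node (CUDA >= 12.3).
cudaGraph_t graph;
cudaGraphCreate(&graph, 0);

// Handle whose device-set value controls the conditional node.
cudaGraphConditionalHandle handle;
cudaGraphConditionalHandleCreate(&handle, graph, 1, cudaGraphCondAssignDefault);

cudaGraphNodeParams params = {};
params.type = cudaGraphNodeTypeConditional;
params.conditional.handle = handle;
params.conditional.type = cudaGraphCondTypeWhile;  // loop while handle != 0
params.conditional.size = 1;

cudaGraphNode_t node;
cudaGraphAddNode(&node, graph, nullptr, 0, &params);
// params.conditional.phGraphs_out[0] is the body graph to populate;
// a kernel inside it calls cudaGraphSetConditional(handle, keep_going)
// to decide whether another iteration runs.
```

For FLAMEGPU, this is what would make data-dependent step/exit conditions expressible inside a single graph launch, rather than returning to the host each step.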

Environment directed graphs, introduced by #1089, are currently only importable/exportable via dedicated HostAPI functions. They are not currently present in existing IO methods which encapsulate a whole model: * Model...

enhancement

This should be possible. For shared message output, it requires tracking a shared offset into the message output buffer (and initially sizing it correctly). For shared message input, it requires...

enhancement
Priority: Low