Jørgen Schartum Dokken
> > Usually, meshes are created with the `dolfinx.mesh.create_mesh` function, as shown, for instance, in: https://jsdokken.com/FEniCS23-tutorial/src/mesh_generation.html#mesh-generation This is usually easier to work with from Python, as it works on numpy...
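For reference, a minimal sketch of that approach, building a mesh directly from numpy arrays (assuming the v0.7.x API; the two-triangle unit-square data here is mine, for illustration):

```python
import numpy as np
from mpi4py import MPI
import basix.ufl
import ufl
import dolfinx

# Two triangles forming the unit square
x = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]], dtype=np.float64)
cells = np.array([[0, 1, 2], [1, 3, 2]], dtype=np.int64)
# Coordinate element describing the (affine) triangle geometry
domain = ufl.Mesh(basix.ufl.element("Lagrange", "triangle", 1, shape=(2,)))
mesh = dolfinx.mesh.create_mesh(MPI.COMM_WORLD, cells, x, domain)
```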
I've been running your script through docker (v0.7.3) for 100% of the runtime (using 3 processes), and see no increase in memory usage :/ Could you try adding explicit usage...
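If it helps, here is one way to print per-rank peak memory inside the loop, using only the standard library (a sketch of my suggestion, not code from the original script; Unix-only, and `ru_maxrss` is in kilobytes on Linux):

```python
import resource
from mpi4py import MPI

def report_memory(tag: str):
    # Peak resident set size of this process so far
    peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print(f"[rank {MPI.COMM_WORLD.rank}] {tag}: peak RSS {peak_kb} kB")
```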
> ```python
> from mpi4py import MPI
> import dolfinx as dlx
> import dolfinx.fem.petsc
> import ufl
>
> def run(Vh):
>     u = dlx.fem.Function(Vh)
>     v = ufl.TestFunction(Vh)
>     form...
> ```
With the following code, I do see a steady increase in memory usage:

```python
from mpi4py import MPI
import dolfinx as dlx
import dolfinx.fem.petsc
import ufl
import gc

def run(Vh):...
```
@uvilla Changing your code to:

```python
def run(Vh):
    u = dlx.fem.Function(Vh)
    v = ufl.TestFunction(Vh)
    form = dlx.fem.form(ufl.inner(u, v) * ufl.dx)
    vec = dlx.fem.petsc.assemble_vector(form)
    vec.destroy()
```

or

```python
from mpi4py import MPI...
```
The `dolfinx.la.Vector` does not have a `.vector` property that interfaces with `PETSc`. Only `dolfinx.fem.Function` has a `.vector` property, which creates a wrapper around its data compatible...
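A minimal sketch of the distinction (assuming DOLFINx v0.7.x with PETSc available; the mesh and space are placeholders for illustration):

```python
from mpi4py import MPI
import dolfinx

mesh = dolfinx.mesh.create_unit_square(MPI.COMM_WORLD, 4, 4)
V = dolfinx.fem.FunctionSpace(mesh, ("Lagrange", 1))
u = dolfinx.fem.Function(V)

la_vec = u.x          # dolfinx.la.Vector, no PETSc involved
array = u.x.array     # numpy view of the local (owned + ghost) data
petsc_vec = u.vector  # PETSc Vec wrapping the same underlying data
```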
Is there a specific motivation for using the PETSc vector over the DOLFINx vector when working with DOLFINx? The DOLFINx one has clear memory management and easy access to ghost...
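To show what easy ghost access looks like with the native vector, a sketch (again assuming v0.7.x; the mesh and space are placeholders):

```python
from mpi4py import MPI
import dolfinx

mesh = dolfinx.mesh.create_unit_square(MPI.COMM_WORLD, 4, 4)
V = dolfinx.fem.FunctionSpace(mesh, ("Lagrange", 1))
u = dolfinx.fem.Function(V)

imap = V.dofmap.index_map
u.x.array[: imap.size_local] = 1.0    # write the owned entries
u.x.scatter_forward()                 # push owner values to the ghosts
ghosts = u.x.array[imap.size_local:]  # ghost values as a plain numpy view
```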
Could you elaborate on the last point there, i.e. "Step towards supporting periodic meshes"? I don't quite see how that would be related.
I guess the issue is that a lot of the functions in C++ are not vectorized. It would mean a lot of strided views of data in: https://github.com/FEniCS/dolfinx/blob/main/cpp/dolfinx/geometry/utils.h https://github.com/FEniCS/dolfinx/blob/main/cpp/dolfinx/geometry/gjk.h I...
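To illustrate what strided views buy here, a small numpy analogy (my illustration, not code from those headers): reinterpreting flat data as shaped views costs no copies, which is what vectorized kernels need.

```python
import numpy as np

flat = np.arange(12, dtype=np.float64)  # 4 points with interleaved xyz
points = flat.reshape(-1, 3)            # strided view, shares memory
assert points.base is flat              # no copy was made
xs = points[:, 0]                       # strided view of the x-coordinates
```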
I'm in favor of this, simply because ADIOS makes it easier for us to support multiple file formats (h5, BP, and maybe some others)