underworld3
**Describe the bug** - What is the bug? A memory leak: memory usage goes up on each iteration. It does not occur in serial, only when running in parallel. - What version...
Unifying the two BC functions `add_dirichlet` and `add_natural` in the generic `solvers.pyx`. + The single `add_bc()` function is much easier to maintain. + Improvements in the handling of what 'conds' can...
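A minimal sketch of how a unified entry point might dispatch to the two existing code paths; the `kind` keyword and the `_normalise_conds` / `_add_*_terms` helpers are assumptions for illustration, not the actual underworld3 signature.

```python
# Hypothetical sketch of a single add_bc() replacing add_dirichlet/add_natural.
# The dispatch keyword and internal helper names are illustrative only.

def add_bc(self, conds, boundary, components=None, kind="dirichlet"):
    """Register a boundary condition of the given kind.

    Normalising what `conds` may be (a sympy expression, a tuple of
    components, or a plain number) now happens in one place instead of
    being duplicated across two functions.
    """
    conds = self._normalise_conds(conds, components)  # hypothetical helper
    if kind == "dirichlet":
        self._add_dirichlet_terms(conds, boundary, components)
    elif kind == "natural":
        self._add_natural_terms(conds, boundary, components)
    else:
        raise ValueError(f"unknown boundary condition kind: {kind!r}")
```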
UW constants are `sympy` symbols that have an attached numerical value and a description. There is a substitution operation attached to the class that will replace all constants with the...
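As a sketch of the mechanism described above (the class name and attributes are illustrative, not the actual underworld3 API), a `sympy` symbol can carry a value and a description, with a class-level substitution that swaps every such constant for its number:

```python
import sympy

class UWConstant(sympy.Symbol):
    """Illustrative sympy symbol carrying a value and a description."""

    def __new__(cls, name, value, description=""):
        obj = super().__new__(cls, name)
        obj.value = value
        obj.description = description
        return obj

    @staticmethod
    def substitute_all(expr):
        """Replace every UWConstant in `expr` with its numerical value."""
        subs = {s: s.value for s in expr.atoms(UWConstant)}
        return expr.subs(subs)


# Constants behave as ordinary sympy symbols until substituted.
g = UWConstant("g", 9.81, "gravitational acceleration")
rho = UWConstant("rho", 3300.0, "reference density")
expr = rho * g * sympy.Symbol("h")
print(UWConstant.substitute_all(expr))  # 32373.0*h
```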
Hi @lmoresi, in the following snippet, normals are not defined: https://github.com/underworldcode/underworld3/blob/4fe6bf4e6916a6d7b392840b1a8c81780f2fe4b4/src/underworld3/meshing.py#L719 I have two questions about this. Q1) Is it necessary to define normals? Normals are defined for all other...
It appears that `refinement_callback` is the culprit when it comes to issues in spherical models, especially when both `refinement` and `refinement_callback` are enabled. However, when I ran the model...
Here are the details of null-space removal from the benchmark paper: https://gmd.copernicus.org/articles/14/1899/2021/#:~:text=Solving%20the%20linear%20system%20and%20dealing%20with%20null%20spaces Do we also need to remove zero modes from the approximate solution at every iteration?
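For reference, this is roughly how a constant null space is attached and removed with `petsc4py`; the toy 1D Neumann Laplacian below is only an example operator, not how underworld3 assembles its systems.

```python
from petsc4py import PETSc

# Toy operator: a 1D Laplacian with pure Neumann BCs, whose null space
# is the constant vector (analogous to the pressure null space).
n = 5
A = PETSc.Mat().createAIJ([n, n])
A.setUp()
for i in range(n):
    diag = 0.0
    if i > 0:
        A.setValue(i, i - 1, -1.0)
        diag += 1.0
    if i < n - 1:
        A.setValue(i, i + 1, -1.0)
        diag += 1.0
    A.setValue(i, i, diag)
A.assemble()

# Attaching the null space lets the Krylov solver project the zero mode
# out of the residual at every iteration ...
nullsp = PETSc.NullSpace().create(constant=True)
A.setNullSpace(nullsp)

ksp = PETSc.KSP().create()
ksp.setOperators(A)
b = A.createVecRight()  # zero RHS is consistent with the null space
x = A.createVecRight()
ksp.solve(b, x)

# ... and the same object can explicitly remove the mode from a solution.
nullsp.remove(x)
```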
We currently do not utilise the full functionality of labels in boundary conditions, but we should, to save ourselves having to document every deviation we make from PETSc.
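For example, PETSc's label machinery is already queryable from `petsc4py` on a `DMPlex`; the mesh and label name below are just an illustration of what leaning on the native functionality could look like:

```python
from petsc4py import PETSc

# Small example mesh; real underworld3 meshes carry their own labels
# (e.g. Gmsh's "Face Sets"), so this label name is illustrative.
dm = PETSc.DMPlex().createBoxMesh([4, 4], simplex=True)
dm.markBoundaryFaces("Boundary")  # create and populate a boundary label

# Iterate over the label's values and the faces in each stratum.
values = dm.getLabelIdIS("Boundary").getIndices()
for v in values:
    faces = dm.getStratumIS("Boundary", v)
    print(f"label value {v}: {faces.getSize()} boundary faces")
```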
The `meshVariable.sym` value should be read-only. I don't think there is any reason to change this once the variable has been defined. If there is, it should not be...
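A minimal sketch of what read-only would mean here: expose `sym` as a property with no setter (the class below is illustrative, not the actual underworld3 implementation).

```python
import sympy


class MeshVariable:
    """Illustrative stand-in for the real meshVariable class."""

    def __init__(self, name):
        self.name = name
        self._sym = sympy.Symbol(name)  # fixed once the variable is defined

    @property
    def sym(self):
        """Symbolic form of the variable; no setter, so it is read-only."""
        return self._sym


v = MeshVariable("V")
print(v.sym)   # V
# v.sym = 0    # would raise AttributeError: property has no setter
```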
I'm having an issue evaluating on the swarm: I first need to do an evaluate on the mesh for the swarm evaluate to then work. No issues with the evalf...
Using conda, parallel models produce this error along with the usual PETSc error message: `ERROR: SCOTCH_dgraphInit: Scotch compiled with SCOTCH_PTHREAD and program not launched with MPI_THREAD_MULTIPLE` The workaround...
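One way to satisfy SCOTCH's requirement is to request `MPI_THREAD_MULTIPLE` before MPI is initialised, e.g. via `mpi4py.rc`; this is a sketch of that approach, which may or may not be the workaround the issue goes on to describe.

```python
# Must run before `from mpi4py import MPI` and before importing
# underworld3 / petsc4py, both of which trigger MPI initialisation.
import mpi4py
mpi4py.rc.thread_level = "multiple"

from mpi4py import MPI

# Check which thread level the MPI library actually granted.
print("MPI_THREAD_MULTIPLE granted:",
      MPI.Query_thread() == MPI.THREAD_MULTIPLE)
```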