libCEED

CEED Library: Code for Efficient Extensible Discretizations

99 libCEED issues, sorted by recently updated

@jedbrown and I have been discussing the possibility of using JAX to write qfunctions, since it supports JIT compilation and automatic differentiation. I see several ways to go about this,...

enhancement
design
backend
GPU
Python
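
For context, a QFunction today is a plain C point-wise kernel, as in the simplified mass-operator sketch below; the idea above is to express the same body in JAX so that `jax.jit` can compile it and `jax.grad` can differentiate it. The calling convention follows the documented `CEED_QFUNCTION` macro, but the single-pair field layout (`u` and `qdata` in, `v` out) is an illustrative assumption rather than a quote from any particular example.

```c
#include <ceed.h>

// Simplified mass-operator QFunction: v = qdata * u at each quadrature point.
// The field layout here is an assumption for illustration.
CEED_QFUNCTION(Mass)(void *ctx, const CeedInt Q,
                     const CeedScalar *const *in, CeedScalar *const *out) {
  const CeedScalar *u = in[0], *qdata = in[1];
  CeedScalar       *v = out[0];
  for (CeedInt i = 0; i < Q; i++) v[i] = qdata[i] * u[i];
  return 0;
}
```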

Some of you may have noticed that there is a `release` branch in the repository now. I'm curious if we should start distinguishing main development from ABI-compatible bug fixes, perhaps...

This topic was briefly mentioned in the CEED telecon. We have talked before about possible methods for caching JIT-compiled kernels.

enhancement
GPU
CUDA
HIP
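
One common approach to this kind of caching is to key compiled kernels on a hash of the generated source plus the compile flags, so each kernel is JIT-compiled at most once per process (and could later be persisted to disk). Below is a minimal in-memory sketch with hypothetical names and a placeholder hash; it is not the actual caching code in the CUDA or HIP backends.

```c
#include <stddef.h>
#include <stdint.h>

// Hypothetical cache entry: maps a hash of (kernel source + compile flags) to an
// opaque handle for the already-compiled kernel (e.g. a CUmodule or hipModule_t).
typedef struct {
  uint64_t key;
  void    *compiled_kernel;
} KernelCacheEntry;

static KernelCacheEntry cache[256];
static size_t           cache_size = 0;

// FNV-1a hash of the concatenated source and flags (placeholder choice of hash).
static uint64_t HashSource(const char *source, const char *flags) {
  uint64_t h = 1469598103934665603ull;
  for (const char *s = source; *s; s++) h = (h ^ (uint64_t)(unsigned char)*s) * 1099511628211ull;
  for (const char *s = flags; *s; s++)  h = (h ^ (uint64_t)(unsigned char)*s) * 1099511628211ull;
  return h;
}

// Return a cached kernel for this (source, flags) pair if one exists; otherwise
// compile through the caller-supplied callback (standing in for nvrtc/hiprtc
// calls) and remember the result for the rest of the process.
void *GetOrCompileKernel(const char *source, const char *flags,
                         void *(*compile)(const char *source, const char *flags)) {
  const uint64_t key = HashSource(source, flags);
  for (size_t i = 0; i < cache_size; i++) {
    if (cache[i].key == key) return cache[i].compiled_kernel;
  }
  void *kernel = compile(source, flags);
  if (cache_size < sizeof(cache) / sizeof(cache[0])) {
    cache[cache_size].key             = key;
    cache[cache_size].compiled_kernel = kernel;
    cache_size++;
  }
  return kernel;
}
```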

For those with LLNL CZ access, I created a mirror repository that will be able to run CI jobs. You can log in and request access if you don't already...

GPU
CUDA
CI

This is a WIP PR for the shallow-water equations solver miniapp. Note: this PR moves the `fluids/navierstokes` miniapp into the subdirectory `fluids/navier-stokes/navierstokes` (relative `Makefile` and `.gitignore` files...

help wanted
examples
PETSc
0-WIP

Eventually, we should move to a more sophisticated strategy for managing the number of threads used for GPU testing in CI. See the discussion in https://github.com/CEED/libCEED/pull/706

CI

I am currently reviewing libCEED as part of https://github.com/openjournals/joss-reviews/issues/2945. The method CeedRequestWait is documented and seems extremely interesting. However, looking through the source it does not appear to be actually...
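
For reference, the usage pattern the documentation suggests is roughly the sketch below: start a (possibly asynchronous) operator apply against a request, then wait on it. How a `CeedRequest` is meant to be declared and initialized by the caller is an assumption here; whether the wait currently does anything beyond the immediate case is exactly the question raised.

```c
#include <ceed.h>

// Hedged sketch of the documented request/wait pattern; the way the CeedRequest
// is declared and passed here is an assumption, not taken from a libCEED example.
static int ApplyThenWait(CeedOperator op, CeedVector u, CeedVector v) {
  CeedRequest request;                    // assumed caller-owned request handle
  CeedOperatorApply(op, u, v, &request);  // start the (possibly asynchronous) apply
  // ... other host-side work could overlap here ...
  CeedRequestWait(&request);              // block until the apply has completed
  return 0;
}
```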

We should support batched application of CeedOperator to vectors. One approach is to make a new constructor

```c
int CeedOperatorCreateKroneckerProduct(CeedOperator J, CeedInt m, const CeedScalar *T, CeedOperator *JxT);
```

where...

enhancement
performance
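
If the proposed constructor above were added, a hedged usage sketch might look like the following. The constructor does not exist yet, and the reading of `T` as an m-by-m coefficient matrix combining the m vectors (identity here, i.e. apply `J` to each vector independently) is an assumption for illustration.

```c
#include <ceed.h>

// Hedged sketch: CeedOperatorCreateKroneckerProduct is the *proposed* API from
// this issue, not an existing libCEED function.
static int ApplyBatched(CeedOperator J, CeedVector u_batched, CeedVector v_batched) {
  const CeedInt    m    = 2;
  const CeedScalar T[4] = {1.0, 0.0,
                           0.0, 1.0};

  CeedOperator JxT;
  CeedOperatorCreateKroneckerProduct(J, m, T, &JxT);                     // proposed constructor
  CeedOperatorApply(JxT, u_batched, v_batched, CEED_REQUEST_IMMEDIATE);  // one batched apply
  CeedOperatorDestroy(&JxT);
  return 0;
}
```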

This issue is to continue the discussion raised in #642, regarding a framework that would allow users to explicitly set parameters related to GPU kernel launch configurations at runtime. @jedbrown...

enhancement
interface
GPU
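
A purely hypothetical sketch of the kind of runtime knob being discussed: a string-keyed setter the GPU backends could consult when choosing launch configurations. None of these names exist in libCEED today; the prototype is declared locally only to make the sketch self-contained.

```c
#include <ceed.h>

// Hypothetical interface only: this prototype is *not* part of libCEED; it
// stands in for whatever launch-configuration framework comes out of this
// discussion.
int CeedSetKernelLaunchParameter(Ceed ceed, const char *name, CeedInt value);

// A user or benchmark driver could then tune a kernel at run time:
static int TuneLaunch(Ceed ceed) {
  // e.g. request a 1D thread-block size of 256 for operator-apply kernels
  return CeedSetKernelLaunchParameter(ceed, "op_apply_block_size", 256);
}
```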

Arm SVE intrinsics are vector-length agnostic, so they represent a nontrivial difference in strategy from Intel intrinsics. https://developer.arm.com/documentation/100891/0612/coding-considerations/using-sve-intrinsics-directly-in-your-c-code shows how one might program with these intrinsics directly. Scatters and gathers...

backend
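
As a concrete illustration of the vector-length-agnostic style, here is a hedged sketch of a predicated axpy loop written with the SVE ACLE intrinsics (assuming an SVE-capable compiler, e.g. `-march=armv8-a+sve`). The loop never hard-codes the vector width, and the tail is handled by the predicate rather than a scalar remainder loop.

```c
#include <arm_sve.h>
#include <stdint.h>

// y[i] += alpha * x[i], written in the vector-length-agnostic SVE style:
// svcntd() reports the hardware vector length in doubles at run time, and the
// predicate from svwhilelt masks off inactive lanes in the final iteration.
void axpy_sve(int64_t n, double alpha, const double *x, double *y) {
  for (int64_t i = 0; i < n; i += svcntd()) {
    svbool_t    pg = svwhilelt_b64(i, n);        // active lanes for this iteration
    svfloat64_t xv = svld1(pg, &x[i]);           // predicated loads
    svfloat64_t yv = svld1(pg, &y[i]);
    yv = svmla_x(pg, yv, xv, svdup_f64(alpha));  // yv += xv * alpha
    svst1(pg, &y[i], yv);                        // predicated store
  }
}
```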