FEniCS Project - Latest posts https://fenicsproject.discourse.group
Announcing FEniCS 2026: 17-19 June 2026 at Paris, at the University of Chicago | John W. Boyer Center, France
Good news everyone,

We have just sent the official emails regarding abstract acceptance for FEniCS 2026. If you have not received yours, please let us know, and kindly double-check your spam folder as well.

Due to the high volume of submissions, we have adopted a shorter 10-minute presentation format. We understand that this condensed format may not be suitable for every contribution, so delegates who submitted a presentation have been given the option to change the format of their participation. Please check your email and let us know of any requested changes by April 30, 2026.

Please also note that the number of places is limited, so we strongly encourage all participants who submitted an abstract to register for the conference as soon as possible in order to secure their place. For registration instructions, please visit the FEniCS 2026 website: FEniCS 2026 | FEniCS Project.

Important dates:
(optional) Change the format of the talk (for those who submitted a presentation): 30 April 2026
Early-bird registration deadline: 15 May 2026
Standard registration deadline: 1 June 2026

Other highlights:

  • For payment-related issues, please contact the University of Chicago Center directly. The contact email was provided in the official abstract acceptance email.
  • The preliminary programme is available on the website: FEniCS 2026 Preliminary programme | FEniCS Project.
  • The conference host, the University of Chicago | John W. Boyer Center, has provided a list of hotels offering special booking rates, available here. When making your reservation at any of these establishments, please request the University of Chicago Paris Center rate.
  • Participants who wish to attend the optional Advanced Tutorial Session on 17 June are encouraged to arrive in Paris by early morning, as the session begins at 9:20. @dokken and @jackhale will guide attendees through the new features and best practices of FEniCSx. Please note that the conference opening ceremony will begin at 13:20 on the same day.

If you have any questions, please contact us at: [email protected].

The FEniCS 2026 Organizing Committee

]]>
https://fenicsproject.discourse.group/t/announcing-fenics-2026-17-19-june-2026-at-paris-at-the-university-of-chicago-john-w-boyer-center-france/19613#post_6 Wed, 22 Apr 2026 09:33:44 +0000 fenicsproject.discourse.group-post-58485
Solving two Poisson problems on subdomains of a parent mesh yielding non-deterministic success
The first definition should in theory just integrate over all cells of the submesh, as it should ignore all cells not in the submap.
It is a bit surprising that it doesn't work consistently.

To me it is a bit unclear why you are not using dx0 = ufl.Measure("dx", domain=submesh0), and similarly for dx1, as there is no coupling between the two submeshes. Then you wouldn't need the meshtags.

]]>
https://fenicsproject.discourse.group/t/solving-two-poisson-problems-on-subdomains-of-a-parent-mesh-yielding-non-deterministic-success/19661#post_3 Tue, 21 Apr 2026 15:30:28 +0000 fenicsproject.discourse.group-post-58483
Errors while running asimov-contact demos
Hi, following your installation steps I ran into the same error, and it turned out to be a nanobind version mismatch. I'd suggest checking your nanobind version and comparing it with the one required by fenics-dolfinx=0.9.0 from conda-forge (it should be nanobind 2.8).
If it doesn't match (in my case it was 2.12.0), the fix was:

pip uninstall -y nanobind  
conda install -c conda-forge nanobind=2.8.0

and then rebuilding dolfinx_contact against the correct nanobind version. After that the demos ran fine.

Hope it helps!
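To make that check reproducible, here is a small FEniCS-independent sketch (the 2.8 pin comes from the observation above; `versions_match` and `check_nanobind` are hypothetical helper names, not part of any package):

```python
from importlib import metadata

def versions_match(installed: str, required: str) -> bool:
    """True if the installed version's major.minor matches the required one."""
    return installed.split(".")[:2] == required.split(".")[:2]

def check_nanobind(required: str = "2.8") -> bool:
    """Check the environment's nanobind against the required major.minor."""
    try:
        installed = metadata.version("nanobind")
    except metadata.PackageNotFoundError:
        return False  # not installed at all
    return versions_match(installed, required)
```

If `check_nanobind()` returns False, reinstall nanobind as above and rebuild dolfinx_contact.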

]]>
https://fenicsproject.discourse.group/t/errors-while-running-asimov-contact-demos/19663#post_2 Tue, 21 Apr 2026 13:30:23 +0000 fenicsproject.discourse.group-post-58472
Errors while running asimov-contact demos
Hello everyone,
I have installed dolfinx_contact and can build it successfully, but I run into an error when executing the demo scripts (e.g. demo_friction_cylinders.py). The issue seems similar to the one discussed here: TypeError: ContactProblem argument mismatch — ‘incompatible function arguments’ in dolfinx_contact · Issue #185 · Wells-Group/asimov-contact · GitHub
I followed the installation steps using:

  • conda install -c conda-forge fenics-dolfinx=0.9.0
  • then building asimov-contact from source with CMake/Ninja and installing the Python package
git clone https://github.com/Wells-Group/asimov-contact.git
cd asimov-contact/
export VERBOSE=1
cmake -G Ninja -DCMAKE_BUILD_TYPE=Developer  -B build-contact -S cpp/
ninja -C build-contact install
python3 -m pip -v install -r ./python/build-requirements.txt
python3 -m pip -v install --no-build-isolation python/
cd python/demos/hertz_contact
python3 -m pip install scipy
python3 -m pip install matplotlib
mkdir -p ../meshes
python3 demo_friction_cylinders.py 

I also tried an alternative installation approach from the FEniCS discourse thread (Problems installing Asimov-Contact - #7 by dokken), but the same error persists.

The error I get is:

TypeError: __init__(): incompatible function arguments. The following argument types are supported:
    1. __init__(self, markers: collections.abc.Sequence[dolfinx::mesh::MeshTags<int>], surfaces: dolfinx::graph::AdjacencyList<int>, contact_pairs: collections.abc.Sequence[collections.abc.Sequence[int]], mesh: dolfinx::mesh::Mesh<double>, search_method: collections.abc.Sequence[dolfinx_contact.cpp.ContactMode], quadrature_degree: int = 3) -> None

Invoked with types: dolfinx_contact.general_contact.contact_problem.ContactProblem, list, dolfinx.cpp.graph.AdjacencyList_int32, list, dolfinx.cpp.mesh.Mesh_float64, kwargs = { quadrature_degree: int, search_method: list }

I also experimented with changing the argument order in the ContactProblem call, but it didn’t resolve the issue.

I am not sure whether this is due to an installation/version mismatch or something subtle in how the demo is being run. Has anyone recently been able to run these demos successfully? Any guidance would be appreciated.

Thanks for your time.

]]>
https://fenicsproject.discourse.group/t/errors-while-running-asimov-contact-demos/19663#post_1 Mon, 20 Apr 2026 17:24:00 +0000 fenicsproject.discourse.group-post-58470
Can Fenics handle Dirichlet boundary conditions on DG spaces?
In principle, the DG formulation is not “designed” for strong imposition of Dirichlet data. The huge advantage of the DG method is the rich selection of employable basis functions; e.g., one could choose nodal or modal bases. Furthermore, the weak imposition of Dirichlet data yields a global approximate solution closer to the true solution than strongly imposing an interpolation or trace projection of your Dirichlet data.

All that said, if you really want to strongly impose Dirichlet data for a DG scheme, this could be achieved with a nodal basis and a spatial lookup for the degrees of freedom located on the exterior facets on which you wish to impose that data.

I’m not sure what basis legacy FEniCS uses by default for 'DG' function spaces. I think it may be a nodal basis of interpolatory Lagrange polynomials defined in each cell. So if you use the geometric or pointwise search to find those degrees of freedom aligned with the facets on which you want to impose Dirichlet boundary conditions, it should work. You can find the documentation here: Bitbucket
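As a language-agnostic illustration of that spatial lookup (a pure-Python toy, not a FEniCS API; `boundary_dofs` is a made-up helper): given the coordinates of the nodal DOFs, select those lying on the boundary circle of the disk.

```python
import math

def boundary_dofs(dof_coords, radius=1.0, tol=1e-10):
    """Geometric search: indices of DOF coordinates lying on |x| = radius,
    i.e. the DOFs aligned with the exterior facets of a disk mesh."""
    return [i for i, (x, y) in enumerate(dof_coords)
            if abs(math.hypot(x, y) - radius) < tol]
```

With a nodal (interpolatory) basis the Dirichlet values can then be written directly into those entries; with a modal basis no such pointwise identification exists, which is why the choice of basis matters here.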

]]>
https://fenicsproject.discourse.group/t/can-fenics-handle-dirichlet-boundary-conditions-on-dg-spaces/19662#post_2 Sun, 19 Apr 2026 19:40:59 +0000 fenicsproject.discourse.group-post-58469
Can Fenics handle Dirichlet boundary conditions on DG spaces?
Hello,
I want to solve the Poisson equation by defining its fields on DG spaces. The boundary value problem is
\nabla^2 u = f \, \text{in }\Omega
u = u_0 \text{ on } \partial\Omega
where \Omega is a disk.

I managed to solve this by imposing the boundary condition (BC) with a penalty term, but I would like to enforce it strongly as a DirichletBC, for reasons related to future developments.
Is this possible?

Here is the minimal working example where the BC is imposed with a penalty term (yes, I am purposely using a nonlinear solver for future developments):

import numpy as np
from fenics import *
import ufl

# ── Mesh ─────────────────────────────────────────────────────────────────────
r = 1.0          # disk radius
N = 32           # mesh resolution (number of cells along diameter)
mesh = UnitDiscMesh.create(MPI.comm_world, N, 1, 2)

# ── Parameters ───────────────────────────────────────────────────────────────
degree = 2       # polynomial degree of DG space
alpha  = 10.0    # penalty parameter (must be large enough)
h      = CellDiameter(mesh)
n      = FacetNormal(mesh)

# ── Function space ────────────────────────────────────────────────────────────
Q      = FunctionSpace(mesh, 'DG', degree)
u      = Function(Q)
nu     = TestFunction(Q)
J_u    = TrialFunction(Q)

# ── Exact solution and RHS ───────────────────────────────────────────────────
class u_exact_expression(UserExpression):
    def eval(self, values, x):
        values[0] = 1 + x[0]**2 + 2*x[1]**2

    def value_shape(self):
        return (1,)

class f_expression(UserExpression):
    def eval(self, values, x):
        values[0] = 6
        
    def value_shape(self):
        return (1,)

u_exact = Function(Q)
f       = Function(Q)
u_exact.interpolate(u_exact_expression(element=Q.ufl_element()))
f.interpolate(f_expression(element=Q.ufl_element()))

i = ufl.indices(1)[0]

# ── Variational form ─────────────────────────────────────────────────────────

def jump(v, nn): return v('+') * nn('+') + v('-') * nn('-')
def avg(v):      return 0.5 * (v('+') + v('-'))

# F_0: bulk term + natural Neumann term
F_0 = (u.dx(i)*nu.dx(i) + f*nu) * dx \
    - n[i] * u.dx(i) * nu * ds

# F_I: interior penalty
F_I = (
    - avg(u.dx(i)) * jump(nu, n)[i]
    + alpha/h('+') * jump(u, n)[i] * jump(nu, n)[i]
) * dS

F_E = (
    alpha/h * (u - u_exact) * nu
) * ds

F = F_0 + F_I + F_E

solve(F == 0, u, [])

error_L2 = errornorm(u_exact, u, 'L2')
print(f"L2 error = {error_L2:.3e}")

and here the one where I try to impose it with DirichletBC

import numpy as np
from fenics import *
import ufl

# ── Mesh ─────────────────────────────────────────────────────────────────────
r = 1.0
N = 32
mesh = UnitDiscMesh.create(MPI.comm_world, N, 1, 2)

# ── Parameters ───────────────────────────────────────────────────────────────
degree = 2
h      = CellDiameter(mesh)
n      = FacetNormal(mesh)

# ── Function space ────────────────────────────────────────────────────────────
Q      = FunctionSpace(mesh, 'DG', degree)
u      = Function(Q)
nu     = TestFunction(Q)
J_u    = TrialFunction(Q)

# ── Exact solution and RHS ───────────────────────────────────────────────────
class u_exact_expression(UserExpression):
    def eval(self, values, x):
        values[0] = 1 + x[0]**2 + 2*x[1]**2
    def value_shape(self):
        return (1,)

class f_expression(UserExpression):
    def eval(self, values, x):
        values[0] = 6
    def value_shape(self):
        return (1,)

u_exact = Function(Q)
f       = Function(Q)
u_exact.interpolate(u_exact_expression(element=Q.ufl_element()))
f.interpolate(f_expression(element=Q.ufl_element()))

alpha = 10.0

i = ufl.indices(1)[0]

# ── Strong Dirichlet BC ───────────────────────────────────────────────────────
class Boundary(SubDomain):
    def inside(self, x, on_boundary):
        return on_boundary

bc = DirichletBC(Q, u_exact, Boundary())
print(f"Constrained DOFs = {len(bc.get_boundary_values())}")

# ── Variational form ─────────────────────────────────────────────────────────

def jump(v, nn): return v('+') * nn('+') + v('-') * nn('-')
def avg(v):      return 0.5 * (v('+') + v('-'))

F_0 = (u.dx(i)*nu.dx(i) + f*nu) * dx \
    - n[i] * u.dx(i) * nu * ds

F_I = (
    - avg(u.dx(i)) * jump(nu, n)[i]
    + alpha/h('+') * jump(u, n)[i] * jump(nu, n)[i]
) * dS

F = F_0 + F_I

solve(F == 0, u, [bc])

error_L2 = errornorm(u_exact, u, 'L2')
print(f"L2 error = {error_L2:.3e}")

The first one works; it gives:

# python3 mwe_penalty.py 
No Jacobian form specified for nonlinear variational problem.
Differentiating residual form F to obtain Jacobian J = F'.
Solving nonlinear variational problem.
  Newton iteration 0: r (abs) = 2.048e+02 (tol = 1.000e-10) r (rel) = 1.000e+00 (tol = 1.000e-09)
  Newton iteration 1: r (abs) = 1.054e-12 (tol = 1.000e-10) r (rel) = 5.146e-15 (tol = 1.000e-09)
  Newton solver finished in 1 iterations and 1 linear solver iterations.
*** Warning: Degree of exact solution may be inadequate for accurate result in errornorm.
L2 error = 1.395e-12

The second one fails (0 constrained DOFs); it gives:

# python3 mwe_dirichlet.py 
Constrained DOFs = 0
No Jacobian form specified for nonlinear variational problem.
Differentiating residual form F to obtain Jacobian J = F'.
Solving nonlinear variational problem.
  Newton iteration 0: r (abs) = 1.388e-01 (tol = 1.000e-10) r (rel) = 1.000e+00 (tol = 1.000e-09)
  Newton iteration 1: r (abs) = 1.119e-12 (tol = 1.000e-10) r (rel) = 8.058e-12 (tol = 1.000e-09)
  Newton solver finished in 1 iterations and 1 linear solver iterations.
*** Warning: Degree of exact solution may be inadequate for accurate result in errornorm.
L2 error = 4.462e+00

How can I make the Dirichlet code work by keeping DG spaces?
Thank you

]]>
https://fenicsproject.discourse.group/t/can-fenics-handle-dirichlet-boundary-conditions-on-dg-spaces/19662#post_1 Sun, 19 Apr 2026 13:54:58 +0000 fenicsproject.discourse.group-post-58468
Solving two Poisson problems on subdomains of a parent mesh yielding non-deterministic success
I think I may have poorly defined the subdomain integration measures. Making the following change

# Integration measures on each subdomain (before: no subdomain_id)
dx0 = ufl.Measure("dx", domain=mesh, subdomain_data=sub_mt0)
dx1 = ufl.Measure("dx", domain=mesh, subdomain_data=sub_mt1)
# Integration measures on each subdomain (after: restricted to the tagged cells)
dx0 = ufl.Measure("dx", domain=mesh, subdomain_data=sub_mt0, subdomain_id=1)
dx1 = ufl.Measure("dx", domain=mesh, subdomain_data=sub_mt1, subdomain_id=1)

This seems to give me consistently good results.
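The difference can be mimicked with a toy cell-sum in plain Python (a loose analogy, nothing dolfinx-specific; `integrate`, `cell_values`, and `cell_tags` are made up for illustration): a measure carrying `subdomain_data` but used without a `subdomain_id` behaves like plain `dx` and picks up every cell, while `subdomain_id=1` restricts it to the tagged ones.

```python
def integrate(cell_values, cell_tags, subdomain_id=None):
    """Toy analogue of a cell integral with subdomain_data/subdomain_id."""
    if subdomain_id is None:
        # no id given: every cell contributes, tagged or not
        return sum(cell_values)
    return sum(v for v, t in zip(cell_values, cell_tags) if t == subdomain_id)

cell_values = [1.0, 2.0, 4.0]  # hypothetical per-cell contributions
cell_tags = [1, 1, 0]          # cells 0 and 1 carry the tag 1

everything = integrate(cell_values, cell_tags)      # 7.0
tagged_only = integrate(cell_values, cell_tags, 1)  # 3.0
```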

]]>
https://fenicsproject.discourse.group/t/solving-two-poisson-problems-on-subdomains-of-a-parent-mesh-yielding-non-deterministic-success/19661#post_2 Sat, 18 Apr 2026 22:31:38 +0000 fenicsproject.discourse.group-post-58467
Solving two Poisson problems on subdomains of a parent mesh yielding non-deterministic success
I’m trying to solve a pretty straightforward problem:

Find u_0 \in V_0(\Omega_0) and u_1 \in V_1(\Omega_1) such that

\begin{align} (\nabla u_0, \nabla v_0)_{\Omega_0} &= (1, v_0)_{\Omega_0} \\ (\nabla u_1, \nabla v_1)_{\Omega_1} &= (1, v_1)_{\Omega_1} \end{align}

for all v_0 \in V_{0,E}(\Omega_0) and v_1 \in V_{1,E}(\Omega_1). Here V_{\cdot, E}(\cdot) corresponds to the space modified to accommodate homogeneous Dirichlet boundary data.

The following code is my attempt to approximate the solution of the above problem.

from mpi4py import MPI
import dolfinx
import dolfinx.fem.petsc
import numpy as np
import ufl

# Parent mesh
mesh = dolfinx.mesh.create_unit_square(MPI.COMM_WORLD, 8, 8)

# Extract submeshes, mesh0 left, mesh1 right
cells = np.arange(mesh.topology.index_map(mesh.topology.dim).size_local, dtype=np.int32)
sub_cells0 = cells[dolfinx.mesh.compute_midpoints(mesh, mesh.topology.dim, cells)[:, 0] <= 0.5]
sub_cells1 = cells[dolfinx.mesh.compute_midpoints(mesh, mesh.topology.dim, cells)[:, 0] >= 0.5]
sub_mesh0, mapping_entity0, mapping_vertex0, mapping_geom0 = dolfinx.mesh.create_submesh(mesh, mesh.topology.dim, sub_cells0)
sub_mesh1, mapping_entity1, mapping_vertex1, mapping_geom1 = dolfinx.mesh.create_submesh(mesh, mesh.topology.dim, sub_cells1)

# Create domain markers
sub_mt0 = dolfinx.mesh.meshtags(mesh, mesh.topology.dim, sub_cells0, np.ones_like(sub_cells0))
sub_mt1 = dolfinx.mesh.meshtags(mesh, mesh.topology.dim, sub_cells1, np.ones_like(sub_cells1))

# Create spaces on each subdomain and mixed element
V0 = dolfinx.fem.functionspace(sub_mesh0, ("CG", 1))
V1 = dolfinx.fem.functionspace(sub_mesh1, ("CG", 1))
W = ufl.MixedFunctionSpace(V0, V1)

# Integration measures on each subdomain
dx0 = ufl.Measure("dx", domain=mesh, subdomain_data=sub_mt0)
dx1 = ufl.Measure("dx", domain=mesh, subdomain_data=sub_mt1)

# FE formulation
u0, u1 = ufl.TrialFunctions(W)
v0, v1 = ufl.TestFunctions(W)

a = ufl.inner(ufl.grad(u0), ufl.grad(v0)) * dx0
a += ufl.inner(ufl.grad(u1), ufl.grad(v1)) * dx1

L = ufl.inner(dolfinx.fem.Constant(sub_mesh0, 1.0), v0) * dx0
L += ufl.inner(dolfinx.fem.Constant(sub_mesh1, 1.0), v1) * dx1

# Extract block system
a_blocked = ufl.extract_blocks(a)
L_blocked = ufl.extract_blocks(L)

# Create BCs
sub_mesh0.topology.create_connectivity(sub_mesh0.topology.dim, sub_mesh0.topology.dim - 1)
sub_mesh0.topology.create_connectivity(sub_mesh0.topology.dim - 1, sub_mesh0.topology.dim)
u0_bdry = dolfinx.fem.Function(V0)
u0_bc = dolfinx.fem.dirichletbc(
    u0_bdry, dolfinx.fem.locate_dofs_topological(
        V0, sub_mesh0.topology.dim - 1, dolfinx.mesh.exterior_facet_indices(sub_mesh0.topology)
))

sub_mesh1.topology.create_connectivity(sub_mesh1.topology.dim, sub_mesh1.topology.dim - 1)
sub_mesh1.topology.create_connectivity(sub_mesh1.topology.dim - 1, sub_mesh1.topology.dim)
u1_bdry = dolfinx.fem.Function(V1)
u1_bc = dolfinx.fem.dirichletbc(
    u1_bdry, dolfinx.fem.locate_dofs_topological(
        V1, sub_mesh1.topology.dim - 1, dolfinx.mesh.exterior_facet_indices(sub_mesh1.topology)
))

# Solve problem
bcs = [u0_bc, u1_bc]
u0h = dolfinx.fem.Function(V0)
u1h = dolfinx.fem.Function(V1)
problem = dolfinx.fem.petsc.LinearProblem(
    a_blocked,
    L_blocked,
    u=[u0h, u1h],
    bcs=bcs,
    entity_maps=[mapping_entity0, mapping_entity1],
    petsc_options={
        "ksp_type": "preonly",
        "pc_type": "lu",
        "pc_factor_mat_solver_type": "mumps",
        "ksp_error_if_not_converged": True,
        "ksp_monitor": None,
    },
    petsc_options_prefix="monolithic_",
)
problem.solve()

I notice that I’m getting non-deterministic success. Sometimes this solves giving me the expected result, and sometimes it does not, yielding the error:

Traceback (most recent call last):
  File "$SCRIPT_PATH/poisson_subdomain_coupled.py", line 65, in <module>
    problem = dolfinx.fem.petsc.LinearProblem(
        a_blocked,
    ...<11 lines>...
        petsc_options_prefix="monolithic_",
    )
  File "$LIB_PATH/python3.14/site-packages/dolfinx/fem/petsc.py", line 804, in __init__
    self._A = create_matrix(self._a, kind=kind)
              ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^
  File "$LIB_PATH/python3.14/site-packages/dolfinx/fem/petsc.py", line 193, in create_matrix
    return _cpp.fem.petsc.create_matrix_block(_a, kind)
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^
RuntimeError: Cannot insert rows that do not exist in the IndexMap.
Exception ignored while calling deallocator <function LinearProblem.__del__ at 0x12aee5380>:
Traceback (most recent call last):
  File "$LIB_PATH/python3.14/site-packages/dolfinx/fem/petsc.py", line 882, in __del__
    lambda obj: obj is not None, (self._solver, self._A, self._b, self._x, self._P_mat)
AttributeError: 'LinearProblem' object has no attribute '_solver'

Am I doing something silly in my original script?

>>> import dolfinx
>>> dolfinx.git_commit_hash
'338f32f692a1500571c910a29e69a3ff1d5c6549'
>>> dolfinx.__version__
'0.11.0.dev0'
]]>
https://fenicsproject.discourse.group/t/solving-two-poisson-problems-on-subdomains-of-a-parent-mesh-yielding-non-deterministic-success/19661#post_1 Sat, 18 Apr 2026 19:20:39 +0000 fenicsproject.discourse.group-post-58466
Announcing FEniCS 2026: 17-19 June 2026 at Paris, at the University of Chicago | John W. Boyer Center, France
@BillS thanks for pointing that out!

We have just updated the website with some suggestions for accommodation provided by the University of Chicago | John W. Boyer Center in Paris. They include special booking rates. When making your reservation at any of these establishments, please request the University of Chicago Paris Center rate.

]]>
https://fenicsproject.discourse.group/t/announcing-fenics-2026-17-19-june-2026-at-paris-at-the-university-of-chicago-john-w-boyer-center-france/19613#post_5 Sat, 18 Apr 2026 10:02:42 +0000 fenicsproject.discourse.group-post-58465
Create periodic mesh of a sector geometry
I have a sector geometry where the sector boundaries should be periodic. I am using routines defined in script to turn the mesh periodic. The first challenge I encounter is that script.create_periodic_mesh does not work with 0.10.0 (conda). Switching to 0.9.0 allows me to create the periodic mesh. However, I am then unable to port cell tags from the original to the periodic mesh because of a duplicate mesh entity error. Facet tags get ported fine, though.

I would appreciate it if someone could suggest a fix, ideally for 0.10. Below is the MWE that reproduces the problem on both 0.9 and 0.10:

import dolfinx, gmsh, ufl, mpi4py, petsc4py, script
import numpy as np

sec_angle = np.pi/6
R = 1

if gmsh.isInitialized():
    gmsh.finalize()
gmsh.initialize()
# gmsh.option.setNumber("General.Terminal", 0)
gmsh.clear()

occ = gmsh.model.occ
gdim = 2

sec = occ.addCircle(0, 0, 0, R, angle1=np.pi/2-sec_angle/2, angle2=np.pi/2+sec_angle/2)
sec_pnts = occ.get_entities(0)
p0 = occ.addPoint(0, 0, 0)
p1 = occ.addPoint(0, R, 0)
l1 = occ.addLine(sec_pnts[0][1], p0)
l2 = occ.addLine(sec_pnts[1][1], p0)
curve = occ.addCurveLoop([sec, l1, l2])
surf = occ.addPlaneSurface([curve])
l3 = occ.addLine(p0, p1)
frag = occ.fragment([(gdim, surf)], [(gdim-1, l3)])
occ.removeAllDuplicates()
occ.synchronize()

all_doms = gmsh.model.getEntities(gdim)
for j, dom in enumerate(all_doms):
    gmsh.model.addPhysicalGroup(dom[0], [dom[1]], j + 1)  # create the main group/node

# number all boundaries
all_edges = gmsh.model.getEntities(gdim - 1)
for j, edge in enumerate(all_edges):
    gmsh.model.addPhysicalGroup(edge[0], [edge[1]], edge[1])  # create the main group/node

# sector boundary tags by visual inspection
left_edge_tag = (7,)
right_edge_tag = (5,)
gmsh.model.mesh.setPeriodic(gdim-1, left_edge_tag, right_edge_tag,
                            [np.cos(sec_angle), -np.sin(sec_angle), 0, 0, 
                            np.sin(sec_angle), np.cos(sec_angle), 0, 0, 
                            0, 0, 1, 0,
                            0, 0, 0, 1])

gmsh.model.mesh.generate(gdim)

mpi_rank = 0
# dolfinx_model = dolfinx.io.gmsh.model_to_mesh(gmsh.model, mpi4py.MPI.COMM_WORLD, mpi_rank, gdim)
if dolfinx.__version__ == "0.9.0":
    from dolfinx.io import gmshio
    dolfinx_model = gmshio.model_to_mesh(gmsh.model, mpi4py.MPI.COMM_WORLD, mpi_rank, gdim)
    mesh0, ct0, ft0 = dolfinx_model[0], dolfinx_model[1], dolfinx_model[2]
if dolfinx.__version__ == '0.10.0':
    dolfinx_model = dolfinx.io.gmsh.model_to_mesh(gmsh.model, mpi4py.MPI.COMM_WORLD, mpi_rank, gdim)
    mesh0, ct0, ft0 = dolfinx_model.mesh, dolfinx_model.cell_tags, dolfinx_model.facet_tags

# create periodic mesh
def indicator(x):
    angle = np.arctan2(x[1], x[0])
    edge_indices = np.isclose(angle, np.pi/2-sec_angle/2) # right edge
    return edge_indices

def mapping(x):
    rot_matrix = np.array([[np.cos(sec_angle), -np.sin(sec_angle)], 
                           [np.sin(sec_angle), np.cos(sec_angle)]])
    values = np.zeros(x.shape)
    values[:2,:] = rot_matrix @ x[:2]
    return values

# works with 0.9.0 but not 0.10.0
mesh, replaced_vertices, replacement_map = script.create_periodic_mesh(mesh0, indicator, mapping)

ct = script.transfer_meshtags_to_periodic_mesh(
        mesh0, mesh, replaced_vertices, ct0
    )
ft = script.transfer_meshtags_to_periodic_mesh(
        mesh0, mesh, replaced_vertices, ft0
    )
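One thing worth verifying independently of gmsh/dolfinx is the rotation convention in mapping. A quick pure-Python check (same sec_angle as above; `rotate` is just a scalar version of rot_matrix): a point on the right edge, at angle π/2 − sec_angle/2, should be mapped onto the left edge at π/2 + sec_angle/2.

```python
import math

sec_angle = math.pi / 6  # same sector angle as in the MWE

def rotate(x, y, angle=sec_angle):
    """Counter-clockwise rotation by `angle`; scalar form of rot_matrix."""
    c, s = math.cos(angle), math.sin(angle)
    return c * x - s * y, s * x + c * y

theta_right = math.pi / 2 - sec_angle / 2        # right-edge direction
x, y = rotate(math.cos(theta_right), math.sin(theta_right))
theta_mapped = math.atan2(y, x)                  # should be the left edge
```

If the mapped angle instead came out on the far side of the right edge, the sign of the sine entries (i.e. the rotation direction) would be the thing to flip.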
]]>
https://fenicsproject.discourse.group/t/create-periodic-mesh-of-a-sector-geometry/19660#post_1 Sat, 18 Apr 2026 02:03:51 +0000 fenicsproject.discourse.group-post-58464
FFCX compiler files size limit with petsc.assemble_matrix
It finally worked!!! :confetti_ball: :confetti_ball:

I just had to define function spaces for auxiliary variables like this



V_main_a = fem.functionspace(domain, element)
V_main_b = fem.functionspace(domain, element)
V_aux_a = fem.functionspace(domain, element) 
V_aux_b = fem.functionspace(domain, element)

p_a = ufl.TrialFunction(V_main_a)  # solutions
p_b = ufl.TrialFunction(V_main_b)  # solutions
u_a = ufl.TrialFunction(V_aux_a)   # derivative of p_a
u_b = ufl.TrialFunction(V_aux_b)   # derivative of p_b
w_aux_a = ufl.TestFunction(V_aux_a)

########### .... defining other test functions, etc ...... #####

EQ_aux = ufl.inner(u_a, w_aux_a)*ufl.dx \
    + ...

And performing block assembly. I’ll be posting the final working code soon in an edit!
Now not only does it go far beyond p >= 80, but it also compiles much faster.

I am immensely thankful! :blush:
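For intuition on why the auxiliary variables pay off, here is a one-dimensional toy (pure Python, not FEniCS; `solve_via_aux` and its shooting approach are made up for illustration): rewriting y'' = f as the first-order system y' = u, u' = f means no term ever carries a second derivative, mirroring the formulation above where u_a, u_b stand in for the derivatives of p_a, p_b.

```python
def solve_via_aux(f=2.0, y0=0.0, y1=1.0, n=1000):
    """Shooting method for y'' = f on [0, 1] written as the first-order
    system y' = u, u' = f.  y(1) depends affinely on the guess u(0),
    so two explicit-Euler sweeps pin down the correct initial slope."""
    h = 1.0 / n

    def final_y(u0):
        y, u = y0, u0
        for _ in range(n):
            y, u = y + h * u, u + h * f
        return y

    a = final_y(0.0)        # y(1) for zero initial slope
    b = final_y(1.0) - a    # sensitivity of y(1) to u(0)
    return (y1 - a) / b     # initial slope matching y(1) = y1
```

With f = 2 the exact solution is y = x², so the recovered initial slope u(0) tends to 0 as the step size shrinks.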

]]>
https://fenicsproject.discourse.group/t/ffcx-compiler-files-size-limit-with-petsc-assemble-matrix/19654#post_12 Fri, 17 Apr 2026 14:52:21 +0000 fenicsproject.discourse.group-post-58463
FFCX compiler files size limit with petsc.assemble_matrix
I didn’t know about block elements; thank you very much! That takes care of the mixed-element issue.

I attach here a simple version of what I’m attempting (the spectral eigenvalue solver). This is the “raw” version, with raw second derivatives (u.dx(0).dx(0)):

 
import datetime
debug_mode = 1
now = datetime.datetime.now()
if debug_mode == 1:
    print(f"Process started at {str(now.time())}")
 
import numpy as np
from math import pi
import dill as pickle
from kerrnewman_functions import extremality
import os
from mpi4py import MPI
from petsc4py import PETSc

import warnings
warnings.filterwarnings("ignore")    
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

def eig_solver(A,ME,target,pep_params):
    from slepc4py import SLEPc
    import numpy as np
    from petsc4py import PETSc
    from mpi4py import MPI
    from dolfinx import fem
    import basix.ufl
    nev, tol, max_it = pep_params
    comm = MPI.COMM_WORLD
    V_gr = ME
    pep = SLEPc.PEP().create(comm)
    pep.setOperators(A)
    pep.setProblemType(SLEPc.PEP.ProblemType.GENERAL)
    pep.setDimensions(nev=nev, ncv=max(15, 2*nev + 10), mpd=max(15, 2*nev + 10))
    pep.setType(SLEPc.PEP.Type.TOAR)
    st = pep.getST()
    st.setType(SLEPc.ST.Type.SINVERT)
    ksp = st.getKSP()
    ksp.setType(PETSc.KSP.Type.PREONLY)
    pep.setRefine(ref = SLEPc.PEP.Refine.SIMPLE, npart = 1, tol = 0.1*tol, its= 3, scheme= SLEPc.PEP.RefineScheme.SCHUR)
    pep.setTarget(target)
    pep.setWhichEigenpairs(SLEPc.PEP.Which.TARGET_MAGNITUDE)
    pep.setTolerances(tol, max_it)

    # Solver
    pep.solve()


    nconv = pep.getConverged()

    all_eigvals = []
    efuns = fem.Function(ME) 
    efuns_list_gr = []
    efuns_list_em = []

    error_list = []

    for i in range(nconv):   
        eigval = pep.getEigenpair(i, efuns.x.petsc_vec)
        error = pep.computeError(i)
        error_list.append(error) 
        efun_vals_gr = efuns.x.array[:]
        efuns_list_gr.append(efun_vals_gr)
        all_eigvals.append(eigval)
        
    return all_eigvals, efuns_list_gr, efuns_list_em, error_list

def new_assembler(phys_params, poly_degree, eps, debug_mode):
    import numpy as np
    import ufl
    from mpi4py import MPI
    from dolfinx.fem.petsc import assemble_matrix
    from dolfinx import mesh
    from dolfinx import fem
    import basix.ufl
    from dolfinx.fem.petsc import assemble_matrix
    from ufl import inner
    import warnings
    import time
    warnings.filterwarnings("ignore")


    ############## PARAMETERS UNPACKING ################

    M_mass_num, alpha, q_ch, m = phys_params[:] 
    Q = q_ch*M_mass_num
    a_rot = alpha*M_mass_num
    r_p = M_mass_num + np.sqrt(M_mass_num**2 - a_rot**2 - Q**2)
    lambda_ = r_p

    ############## MESH BUILDING ###############

    cell_type = basix.CellType.quadrilateral
    element = basix.ufl.wrap_element(
        basix.create_tp_element(basix.ElementFamily.P, cell_type, poly_degree, basix.LagrangeVariant.gll_warped)
    )
    c_el = basix.ufl.blocked_element(
        basix.ufl.wrap_element(
            basix.create_tp_element(basix.ElementFamily.P, cell_type, 1, basix.LagrangeVariant.gll_warped)
        ), (2,)
    )

    nodes = np.array([[eps, eps],[1-eps, eps],[eps, 1-eps],[1-eps, 1-eps]], dtype=np.float64)
    connectivity = np.array([[0, 1, 2, 3]], dtype=np.int64)
    domain = mesh.create_mesh(MPI.COMM_WORLD, cells=connectivity, x=nodes, e=c_el)
    print("MESH CREATED")

    ########## FUNCTION SPACE ##########
    V_gr = fem.functionspace(domain, element)

    ######## TRIAL AND TEST FUNCTIONS##########
    v_test_gr = ufl.TestFunction(V_gr)
    psi_gr = ufl.TrialFunction(V_gr)
    dx_meas = ufl.dx
 
    ################ COEFFICIENTS DEFINITION #####################
    x = ufl.SpatialCoordinate(domain)
    M_psi_gr = 4*(a_rot**2*lambda_*x[1]**4 - a_rot**2*lambda_*x[1]**2 
                                      + a_rot**2*lambda_ + a_rot**2*x[0] + lambda_**3 + 2*lambda_**2*x[0] - lambda_**2 + 
                                      lambda_*x[0]**2 - 2*lambda_*x[0] + lambda_ - x[0]**2 + x[0])/lambda_ 

    C_psi_gr = 2*1j*(a_rot**2*x[0] + 1j*a_rot*lambda_*m - 4*1j*a_rot*lambda_*x[1]**2 + 2*1j*a_rot*lambda_ + 
                                         2*1j*a_rot*m*x[0] + 2*lambda_**2*x[0] + 3*lambda_*x[0]**2 - 2*lambda_*x[0] + 2*lambda_ - 3*x[0]**2)/lambda_
    
    C_dpgr_dsigma = fem.Function(V_gr)
    C_dpgr_dsigma.interpolate(lambda x: 2*1j*(a_rot**2*x[0]**2 + 2*lambda_**2*x[0]**2 - lambda_**2 + 2*lambda_*x[0]**3 - 2*lambda_*x[0]**2 - 2*x[0]**3 + 2*x[0]**2)/lambda_)

    K_psi_gr =  -1j*(8*a_rot*m_mode*x[0]*x[1]**4 - 8*a_rot*m_mode*x[0]*x[1]**2 + 1j*lambda_*m_mode**2 - 8*1j*lambda_*m_mode*x[1]**2 + 
                                       4*1j*lambda_*m_mode - 8*1j*lambda_*x[0]**2*x[1]**4 + 8*1j*lambda_*x[0]**2*x[1]**2 - 16*1j*lambda_*x[1]**4 + 
                                       32*1j*lambda_*x[1]**2 - 12*1j*lambda_ + 8*1j*x[0]**2*x[1]**4 - 8*1j*x[0]**2*x[1]**2 + 4*1j*x[0]*x[1]**4 
                                       - 4*1j*x[0]*x[1]**2)/(4*lambda_*x[1]**2*(x[1] - 1)*(x[1] + 1))
    
    n = ufl.FacetNormal(domain)
    ds_meas = ufl.ds(domain=domain)
    
    K_dpgr_dx = -(11*x[1]**2 - 9)/(4*x[1])

    K_dpgr_sigma = -1j*x[0]*(2*a_rot*m_mode*x[0] - 4*1j*lambda_*x[0]**2 - 2*1j*lambda_ + 4*1j*x[0]**2 - 1j*x[0])/lambda_
    
    K_d2p_gr_xx =  -(x[1] - 1)*(x[1] + 1)/4
    
    K_d2p_gr_sigma =  -x[0]**2*(x[0] - 1)*(lambda_*x[0] + lambda_ - x[0])/lambda_
    Kdev = (x[0]*(2*lambda_ - 3*x[0] - 4*lambda_*x[0]**2 + 4*x[0]**2))/lambda_ 
    
 
    ########## EQUATION BUILDING
  
    K_form = K_d2p_gr_sigma * ufl.inner(psi_gr.dx(0).dx(0), v_test_gr) * dx_meas
              
    K_form +=  K_d2p_gr_xx * ufl.inner(psi_gr.dx(1).dx(1), v_test_gr) * dx_meas
               
    K_form += K_dpgr_dx * ufl.inner(psi_gr.dx(1), v_test_gr) * dx_meas \
               + K_psi_gr * ufl.inner(psi_gr, v_test_gr) * dx_meas \
               + K_dpgr_sigma * ufl.inner(psi_gr.dx(0), v_test_gr) * dx_meas

    
    C_form = C_dpgr_dsigma*inner(psi_gr.dx(0),v_test_gr)*dx_meas + C_psi_gr*inner(psi_gr, v_test_gr)*dx_meas

    M_form = M_psi_gr*inner(psi_gr,v_test_gr)*dx_meas

    time_seq = time.time()

    if debug_mode == 1:
        print("assembling forms and matrices...")


    K_formed = fem.form(K_form, form_compiler_options={"sum_factorization": False})  # if True, an error is raised
    C_formed = fem.form(C_form, form_compiler_options={"sum_factorization": True}) 
    M_formed = fem.form(M_form, form_compiler_options={"sum_factorization": True})

    K_mat = assemble_matrix(K_formed)
    K_mat.assemble() 

    C_mat = assemble_matrix(C_formed)
    C_mat.assemble()
    
    M_mat = assemble_matrix(M_formed)
    M_mat.assemble()
    # Add them together in PETSc
 
    if debug_mode == 1:
        print(f"Time of assembly: {time.time() - time_seq}")

    A = [ ]

    A.append(K_mat)
    A.append(C_mat)
    A.append(M_mat)

    return A, V_gr


###########################################
############   START
###########################################

M_mass = 0.5
m_mode = 2
alpha_rot = 0
q_charge = 0
print(f"extremality is {100*extremality(alpha_rot*M_mass, q_charge*M_mass, M_mass)} %")

Q_ch = q_charge*M_mass
a_rot_num = alpha_rot*M_mass
r_p = M_mass + np.sqrt(M_mass**2 - a_rot_num**2 - Q_ch**2)

phys_params = [M_mass, alpha_rot, q_charge, m_mode]

poly_degree = 18
# No need to cut the domain to avoid singularities in this case
eps = 0
debug_mode = 0
target =  0.85

if rank == 0:
    print(f"Physical parameters: (M,a,Q) = ({M_mass}, {a_rot_num}, {Q_ch})")
    print(f"Event horizon located at {2*r_p} M")
    if a_rot_num > 0:
        print(f"Computing QNMs for m = {m_mode}, at polynomial degree {poly_degree}")
    else:
        print(f"Computing QNMs for any m, at polynomial degree  {poly_degree}")
    print(f"Target magnitude is {target}")

A, ME = new_assembler(phys_params, poly_degree, eps, debug_mode = 1)

for i, mat in enumerate(A):  # your PEP matrices
    viewer = PETSc.Viewer().createASCII("/dev/null")
    arr = mat.convert("dense").getDenseArray()
    print(f"A_{i}: max={np.max(np.abs(arr)):.2e}, "
        f"has_nan={np.any(np.isnan(arr))}, "
        f"has_inf={np.any(np.isinf(arr))}")
    
target =  0.85
nev = 12
tol = 1e-7
max_it = 2000
pep_params = [nev, tol, max_it]

all_eigvals, efuns_list_gr, efuns_list_em, error_list = eig_solver(A,ME,target,pep_params)        

if rank == 0:
    print("\n--- Sorted Quasinormal Modes (QNMs) ---")
    print(f"Found {len(all_eigvals)} physical eigenpairs after sorting.")
    
    for i, eigval in enumerate(all_eigvals):
        print(f"QNM {i}: {eigval.real:.6e} + {eigval.imag:.6e}i, error {error_list[i]}")

If I turn on form_compiler_options={"sum_factorization": True} for my K_form (the one including second derivatives), I get the following error:

Traceback (most recent call last):
  File "/home/mmisas/PhD/eigenvalues/local_kn/prueba_2nd.py", line 228, in <module>
    A, ME = new_assembler(phys_params, poly_degree, eps, debug_mode = 1)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mmisas/PhD/eigenvalues/local_kn/prueba_2nd.py", line 171, in new_assembler
    K_formed = fem.form(K_form, form_compiler_options={"sum_factorization": True}) # If true, an error is returned
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mmisas/miniconda3/envs/fenicsx-env/lib/python3.12/site-packages/dolfinx/fem/forms.py", line 449, in form
    return _create_form(form)
           ^^^^^^^^^^^^^^^^^^
  File "/home/mmisas/miniconda3/envs/fenicsx-env/lib/python3.12/site-packages/dolfinx/fem/forms.py", line 441, in _create_form
    return _form(form)
           ^^^^^^^^^^^
  File "/home/mmisas/miniconda3/envs/fenicsx-env/lib/python3.12/site-packages/dolfinx/fem/forms.py", line 361, in _form
    ufcx_form, module, code = jit.ffcx_jit(
                              ^^^^^^^^^^^^^
  File "/home/mmisas/miniconda3/envs/fenicsx-env/lib/python3.12/site-packages/dolfinx/jit.py", line 60, in mpi_jit
    return local_jit(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mmisas/miniconda3/envs/fenicsx-env/lib/python3.12/site-packages/dolfinx/jit.py", line 215, in ffcx_jit
    r = ffcx.codegeneration.jit.compile_forms([ufl_object], options=p_ffcx, **p_jit)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mmisas/miniconda3/envs/fenicsx-env/lib/python3.12/site-packages/ffcx/codegeneration/jit.py", line 244, in compile_forms
    raise e
  File "/home/mmisas/miniconda3/envs/fenicsx-env/lib/python3.12/site-packages/ffcx/codegeneration/jit.py", line 224, in compile_forms
    impl = _compile_objects(
           ^^^^^^^^^^^^^^^^^
  File "/home/mmisas/miniconda3/envs/fenicsx-env/lib/python3.12/site-packages/ffcx/codegeneration/jit.py", line 349, in _compile_objects
    _, code_body = ffcx.compiler.compile_ufl_objects(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mmisas/miniconda3/envs/fenicsx-env/lib/python3.12/site-packages/ffcx/compiler.py", line 118, in compile_ufl_objects
    code = generate_code(ir, options)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mmisas/miniconda3/envs/fenicsx-env/lib/python3.12/site-packages/ffcx/codegeneration/codegeneration.py", line 49, in generate_code
    integral_generator(integral_ir, domain, options)
  File "/home/mmisas/miniconda3/envs/fenicsx-env/lib/python3.12/site-packages/ffcx/codegeneration/C/integrals.py", line 42, in generator
    parts = ig.generate(domain)
            ^^^^^^^^^^^^^^^^^^^
  File "/home/mmisas/miniconda3/envs/fenicsx-env/lib/python3.12/site-packages/ffcx/codegeneration/integral_generator.py", line 167, in generate
    all_preparts += self.generate_piecewise_partition(rule, cell)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mmisas/miniconda3/envs/fenicsx-env/lib/python3.12/site-packages/ffcx/codegeneration/integral_generator.py", line 309, in generate_piecewise_partition
    return self.generate_partition(arraysymbol, F, "piecewise", None, None)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mmisas/miniconda3/envs/fenicsx-env/lib/python3.12/site-packages/ffcx/codegeneration/integral_generator.py", line 337, in generate_partition
    vdef = self.backend.definitions.get(mt, tabledata, quadrature_rule, vaccess)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mmisas/miniconda3/envs/fenicsx-env/lib/python3.12/site-packages/ffcx/codegeneration/definitions.py", line 111, in get
    return handler(mt, tabledata, quadrature_rule, access)  # type: ignore
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mmisas/miniconda3/envs/fenicsx-env/lib/python3.12/site-packages/ffcx/codegeneration/definitions.py", line 200, in _define_coordinate_dofs_lincomb
    FE, tables = self.access.table_access(tabledata, self.entity_type, mt.restriction, iq, ic)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mmisas/miniconda3/envs/fenicsx-env/lib/python3.12/site-packages/ffcx/codegeneration/access.py", line 468, in table_access
    iq_i = quadrature_index.local_index(i)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mmisas/miniconda3/envs/fenicsx-env/lib/python3.12/site-packages/ffcx/codegeneration/lnodes.py", line 387, in local_index
    assert idx < len(self.symbols)
           ^^^^^^^^^^^^^^^^^^^^^^^
AssertionError

I have also tried to integrate by parts, coding the integration over the boundaries explicitly. It’s the same code as before, but with the equation block replaced by this one:

    n = ufl.FacetNormal(domain)
    ds_meas = ufl.ds(domain=domain)

    K_bnd_ufl = K_d2p_gr_sigma * ufl.inner(psi_gr.dx(0), v_test_gr) * n[0] * ds_meas \
              + K_d2p_gr_xx * ufl.inner(psi_gr.dx(1), v_test_gr) * n[1] * ds_meas

    # Derivative of the K_d2p_gr_sigma term with respect to x[0], the derivative of
    # K_d2p_gr_xx with respect to x[1] is just -(x[1]/2)

    Kdev = (x[0]*(2*lambda_ - 3*x[0] - 4*lambda_*x[0]**2 + 4*x[0]**2))/lambda_ 
    K_form = -(x[1]/2) * ufl.inner(psi_gr.dx(1), v_test_gr) * dx_meas \
              - Kdev * ufl.inner(psi_gr.dx(0), v_test_gr) * dx_meas
              
    K_form += -K_d2p_gr_sigma * ufl.inner(psi_gr.dx(0), v_test_gr.dx(0)) * dx_meas \
               - K_d2p_gr_xx * ufl.inner(psi_gr.dx(1), v_test_gr.dx(1)) * dx_meas
               
    K_form += K_dpgr_dx * ufl.inner(psi_gr.dx(1), v_test_gr) * dx_meas \
               + K_psi_gr * ufl.inner(psi_gr, v_test_gr) * dx_meas \
               + K_dpgr_sigma * ufl.inner(psi_gr.dx(0), v_test_gr) * dx_meas

    # Compile them separately
    C_form = C_dpgr_dsigma*inner(psi_gr.dx(0),v_test_gr)*dx_meas + C_psi_gr*inner(psi_gr, v_test_gr)*dx_meas

    M_form = M_psi_gr*inner(psi_gr,v_test_gr)*dx_meas

And what I get is

Traceback (most recent call last):
  File "/home/mmisas/PhD/eigenvalues/local_kn/prueba.py", line 307, in <module>
    A, ME = new_assembler(phys_params, poly_degree, eps, debug_mode = 1)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mmisas/PhD/eigenvalues/local_kn/prueba.py", line 231, in new_assembler
    K_bnd_form = fem.form(K_bnd_ufl, form_compiler_options={"sum_factorization": True})
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mmisas/miniconda3/envs/fenicsx-env/lib/python3.12/site-packages/dolfinx/fem/forms.py", line 449, in form
    return _create_form(form)
           ^^^^^^^^^^^^^^^^^^
  File "/home/mmisas/miniconda3/envs/fenicsx-env/lib/python3.12/site-packages/dolfinx/fem/forms.py", line 441, in _create_form
    return _form(form)
           ^^^^^^^^^^^
  File "/home/mmisas/miniconda3/envs/fenicsx-env/lib/python3.12/site-packages/dolfinx/fem/forms.py", line 361, in _form
    ufcx_form, module, code = jit.ffcx_jit(
                              ^^^^^^^^^^^^^
  File "/home/mmisas/miniconda3/envs/fenicsx-env/lib/python3.12/site-packages/dolfinx/jit.py", line 60, in mpi_jit
    return local_jit(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mmisas/miniconda3/envs/fenicsx-env/lib/python3.12/site-packages/dolfinx/jit.py", line 215, in ffcx_jit
    r = ffcx.codegeneration.jit.compile_forms([ufl_object], options=p_ffcx, **p_jit)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mmisas/miniconda3/envs/fenicsx-env/lib/python3.12/site-packages/ffcx/codegeneration/jit.py", line 244, in compile_forms
    raise e
  File "/home/mmisas/miniconda3/envs/fenicsx-env/lib/python3.12/site-packages/ffcx/codegeneration/jit.py", line 224, in compile_forms
    impl = _compile_objects(
           ^^^^^^^^^^^^^^^^^
  File "/home/mmisas/miniconda3/envs/fenicsx-env/lib/python3.12/site-packages/ffcx/codegeneration/jit.py", line 349, in _compile_objects
    _, code_body = ffcx.compiler.compile_ufl_objects(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mmisas/miniconda3/envs/fenicsx-env/lib/python3.12/site-packages/ffcx/compiler.py", line 113, in compile_ufl_objects
    ir = compute_ir(analysis, _object_names, _prefix, options, visualise)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mmisas/miniconda3/envs/fenicsx-env/lib/python3.12/site-packages/ffcx/ir/representation.py", line 151, in compute_ir
    _compute_integral_ir(
  File "/home/mmisas/miniconda3/envs/fenicsx-env/lib/python3.12/site-packages/ffcx/ir/representation.py", line 402, in _compute_integral_ir
    integral_ir = compute_integral_ir(
                  ^^^^^^^^^^^^^^^^^^^^
  File "/home/mmisas/miniconda3/envs/fenicsx-env/lib/python3.12/site-packages/ffcx/ir/integral.py", line 170, in compute_integral_ir
    mt_table_reference = build_optimized_tables(
                         ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mmisas/miniconda3/envs/fenicsx-env/lib/python3.12/site-packages/ffcx/ir/elementtables.py", line 557, in build_optimized_tables
    raise RuntimeError("Sum factorization not available for this quadrature rule.")
RuntimeError: Sum factorization not available for this quadrature rule.

I’ll try to come up with a workaround and let you know if I can get by somehow. Thanks again :slightly_smiling_face:
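As an aside, the derivative coefficients used in the integrated-by-parts version above (the Kdev term and the -(x[1]/2) term) can be sanity-checked symbolically. A quick sympy sketch, with x0, x1, lam standing in for x[0], x[1], lambda_:

```python
import sympy as sp

x0, x1, lam = sp.symbols("x0 x1 lam", positive=True)

# Coefficients of the second-derivative terms, as defined in the form above
K_d2p_gr_xx = -(x1 - 1) * (x1 + 1) / 4
K_d2p_gr_sigma = -x0**2 * (x0 - 1) * (lam * x0 + lam - x0) / lam

# Derivative claimed for the K_d2p_gr_sigma term after integrating by parts
Kdev = x0 * (2 * lam - 3 * x0 - 4 * lam * x0**2 + 4 * x0**2) / lam

assert sp.simplify(sp.diff(K_d2p_gr_sigma, x0) - Kdev) == 0
assert sp.simplify(sp.diff(K_d2p_gr_xx, x1) + x1 / 2) == 0  # d/dx1 = -x1/2
print("integration-by-parts coefficients check out")
```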

https://fenicsproject.discourse.group/t/ffcx-compiler-files-size-limit-with-petsc-assemble-matrix/19654#post_11 Fri, 17 Apr 2026 12:28:48 +0000 fenicsproject.discourse.group-post-58462
FFCX compiler files size limit with petsc.assemble_matrix

There are several ways one can work around the issues of the mixed system.
For example:

  1. Instead of mixed spaces, use a blocked element. You can see this for the coordinate element in my example:

Could you make a minimal example (similar to what I showed in the post above) for each of these forms? Then I can investigate each of them.

https://fenicsproject.discourse.group/t/ffcx-compiler-files-size-limit-with-petsc-assemble-matrix/19654#post_10 Thu, 16 Apr 2026 18:04:24 +0000 fenicsproject.discourse.group-post-58460
FFCX compiler files size limit with petsc.assemble_matrix

Thank you very much for your working example! Unfortunately, I’ve been trying to implement it over the day, and got some (sad :smiling_face_with_tear:) conclusions:

  • The problem is the generation of a file named libffcx_forms_5b2b2baa72db4ea6cb4e41304c5e40c4338e795f.c / .o / .so, whose size skyrockets as the polynomial degree grows. If it ever gets above 2 GB (for me, that happens when p > 80), the program crashes, returning this:

      File "/home/mario/.conda/envs/fenicsxv11-env/lib/python3.12/site-packages/dolfinx/jit.py", line 215, in ffcx_jit
        r = ffcx.codegeneration.jit.compile_forms([ufl_object], options=p_ffcx, **p_jit)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/mario/.conda/envs/fenicsxv11-env/lib/python3.12/site-packages/ffcx/codegeneration/jit.py", line 244, in compile_forms
        raise e
      File "/home/mario/.conda/envs/fenicsxv11-env/lib/python3.12/site-packages/ffcx/codegeneration/jit.py", line 224, in compile_forms
        impl = _compile_objects(
               ^^^^^^^^^^^^^^^^^
      File "/home/mario/.conda/envs/fenicsxv11-env/lib/python3.12/site-packages/ffcx/codegeneration/jit.py", line 399, in _compile_objects
        ffibuilder.compile(tmpdir=cache_dir, verbose=True, debug=cffi_debug)
      File "/home/mario/.conda/envs/fenicsxv11-env/lib/python3.12/site-packages/cffi/api.py", line 727, in compile
        return recompile(self, module_name, source, tmpdir=tmpdir,
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/mario/.conda/envs/fenicsxv11-env/lib/python3.12/site-packages/cffi/recompiler.py", line 1581, in recompile
        outputfilename = ffiplatform.compile('.', ext,
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/mario/.conda/envs/fenicsxv11-env/lib/python3.12/site-packages/cffi/ffiplatform.py", line 20, in compile
        outputfilename = _build(tmpdir, ext, compiler_verbose, debug)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/mario/.conda/envs/fenicsxv11-env/lib/python3.12/site-packages/cffi/ffiplatform.py", line 54, in _build
        raise VerificationError('%s: %s' % (e.__class__.__name__, e))
    cffi.VerificationError: LinkError: command '/home/mario/.conda/envs/fenicsxv11-env/bin/gcc' failed with exit code 1
    
    
  • The key that makes tensor product elements work seems to be adding form_compiler_options={"sum_factorization": True}. It makes the size of that file drop from 22 MB to approximately 50 KB (for degree ~ 20), which made me hopeful that with degree ~ 80 the size would plummet. However…

  • If any of my forms (to which I wish to apply form_compiler_options={"sum_factorization": True}) contains any of:

    • second derivatives of trial functions
    • two different test functions (as in a mixed-element setup)
    • the UFL boundary integration measure ufl.ds

I get the error:

    K_formed = fem.form(K_form, form_compiler_options={"sum_factorization": True})
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mario/.conda/envs/fenicsxv11-env/lib/python3.12/site-packages/dolfinx/fem/forms.py", line 449, in form
    return _create_form(form)
           ^^^^^^^^^^^^^^^^^^
  File "/home/mario/.conda/envs/fenicsxv11-env/lib/python3.12/site-packages/dolfinx/fem/forms.py", line 441, in _create_form
    return _form(form)
           ^^^^^^^^^^^
  File "/home/mario/.conda/envs/fenicsxv11-env/lib/python3.12/site-packages/dolfinx/fem/forms.py", line 361, in _form
    ufcx_form, module, code = jit.ffcx_jit(
                              ^^^^^^^^^^^^^
  File "/home/mario/.conda/envs/fenicsxv11-env/lib/python3.12/site-packages/dolfinx/jit.py", line 60, in mpi_jit
    return local_jit(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mario/.conda/envs/fenicsxv11-env/lib/python3.12/site-packages/dolfinx/jit.py", line 215, in ffcx_jit
    r = ffcx.codegeneration.jit.compile_forms([ufl_object], options=p_ffcx, **p_jit)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mario/.conda/envs/fenicsxv11-env/lib/python3.12/site-packages/ffcx/codegeneration/jit.py", line 244, in compile_forms
    raise e
  File "/home/mario/.conda/envs/fenicsxv11-env/lib/python3.12/site-packages/ffcx/codegeneration/jit.py", line 224, in compile_forms
    impl = _compile_objects(
           ^^^^^^^^^^^^^^^^^
  File "/home/mario/.conda/envs/fenicsxv11-env/lib/python3.12/site-packages/ffcx/codegeneration/jit.py", line 349, in _compile_objects
    _, code_body = ffcx.compiler.compile_ufl_objects(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mario/.conda/envs/fenicsxv11-env/lib/python3.12/site-packages/ffcx/compiler.py", line 118, in compile_ufl_objects
    code = generate_code(ir, options)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mario/.conda/envs/fenicsxv11-env/lib/python3.12/site-packages/ffcx/codegeneration/codegeneration.py", line 49, in generate_code
    integral_generator(integral_ir, domain, options)
  File "/home/mario/.conda/envs/fenicsxv11-env/lib/python3.12/site-packages/ffcx/codegeneration/C/integrals.py", line 42, in generator
    parts = ig.generate(domain)
            ^^^^^^^^^^^^^^^^^^^
  File "/home/mario/.conda/envs/fenicsxv11-env/lib/python3.12/site-packages/ffcx/codegeneration/integral_generator.py", line 167, in generate
    all_preparts += self.generate_piecewise_partition(rule, cell)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mario/.conda/envs/fenicsxv11-env/lib/python3.12/site-packages/ffcx/codegeneration/integral_generator.py", line 309, in generate_piecewise_partition
    return self.generate_partition(arraysymbol, F, "piecewise", None, None)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mario/.conda/envs/fenicsxv11-env/lib/python3.12/site-packages/ffcx/codegeneration/integral_generator.py", line 337, in generate_partition
    vdef = self.backend.definitions.get(mt, tabledata, quadrature_rule, vaccess)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mario/.conda/envs/fenicsxv11-env/lib/python3.12/site-packages/ffcx/codegeneration/definitions.py", line 111, in get
    return handler(mt, tabledata, quadrature_rule, access)  # type: ignore
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mario/.conda/envs/fenicsxv11-env/lib/python3.12/site-packages/ffcx/codegeneration/definitions.py", line 200, in _define_coordinate_dofs_lincomb
    FE, tables = self.access.table_access(tabledata, self.entity_type, mt.restriction, iq, ic)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mario/.conda/envs/fenicsxv11-env/lib/python3.12/site-packages/ffcx/codegeneration/access.py", line 468, in table_access
    iq_i = quadrature_index.local_index(i)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mario/.conda/envs/fenicsxv11-env/lib/python3.12/site-packages/ffcx/codegeneration/lnodes.py", line 387, in local_index
    assert idx < len(self.symbols)
           ^^^^^^^^^^^^^^^^^^^^^^^
AssertionError

In light of these: do you happen to know of any workaround so that I either avoid saving the “long-named” file entirely, or anything similar to form_compiler_options={"sum_factorization": True} that may work in a general setting?

I can probably get around the second-derivative issues by integrating by parts and modifying the math to get rid of boundary terms, but I’m tackling a coupled PDE system, so mixed elements seem mandatory.

EDIT: I just learned you can build a coupled system without mixed elements, which I just tried and it works. Now I only need to get around the derivatives issue!

Once again, thank you very much for your help. I really wish there were a way to avoid so much information being saved into the cache. If you have any further ideas, I would be utterly glad to try them out :slight_smile:

https://fenicsproject.discourse.group/t/ffcx-compiler-files-size-limit-with-petsc-assemble-matrix/19654#post_9 Thu, 16 Apr 2026 14:35:26 +0000 fenicsproject.discourse.group-post-58459
Solve the Poisson equation on the curved domain

You can pick an isoparametric, sub-parametric, or super-parametric combination for the (mesh geometry, unknown space) pair. Depending on what you pick, DOLFINx will select the correct kernel, quadrature rule, etc. for what you have specified.
As my simple experiment above shows, you get different convergence rates depending on what parameters you select.

https://fenicsproject.discourse.group/t/solve-the-poisson-equation-on-the-curved-domain/19653#post_4 Thu, 16 Apr 2026 08:06:01 +0000 fenicsproject.discourse.group-post-58454
Contact between two deformable bodies
kumar:

I also have a follow-up question. My problem involves mechanical, thermal, and electrical interactions. While the third medium approach seems suitable for mechanical contact, it may not be appropriate for thermal and electrical fields, as the artificial medium could introduce non-physical conduction effects.

Yes, the third medium doesn’t seem right if one needs thermal and electrical field coupling.
In the repo @nate mentions, Sarah Roggendorf and I implemented contact methods (including thermal coupling) in DOLFINx. However, at the moment I do not have funding or time to keep it up to date with the latest release.

https://fenicsproject.discourse.group/t/contact-between-two-deformable-bodies/19645#post_11 Thu, 16 Apr 2026 07:46:42 +0000 fenicsproject.discourse.group-post-58452
Contact between two deformable bodies

Dear Mr. Dokken,

Thank you for your response. I found that the issue was due to the old ParaView version (v5.12) when opening the .bp file. With a newer version, the visualization works correctly.

I also have a follow-up question. My problem involves mechanical, thermal, and electrical interactions. While the third medium approach seems suitable for mechanical contact, it may not be appropriate for thermal and electrical fields, as the artificial medium could introduce non-physical conduction effects.

As I am still new to contact modeling in FEniCSx, I would greatly appreciate any guidance you might be willing to share. In particular, I was wondering if the implementation provided by Mr. Nate (asimov-contact/python/demos at main · Wells-Group/asimov-contact · GitHub) would be a reasonable starting point to explore, or if there are other approaches or resources you would recommend, as I have not been able to find much related discussion on this in the forum.

Thank you very much for your time.

https://fenicsproject.discourse.group/t/contact-between-two-deformable-bodies/19645#post_10 Thu, 16 Apr 2026 07:32:20 +0000 fenicsproject.discourse.group-post-58451
Solve the Poisson equation on the curved domain

Thank you for your reply! So, according to the isoparametric-element principle, the mesh degree must match the element degree to guarantee the correct convergence order, and the key point is to import the curved mesh correctly? Is the code for assembling and solving the same as for a straight-edged domain?

https://fenicsproject.discourse.group/t/solve-the-poisson-equation-on-the-curved-domain/19653#post_3 Thu, 16 Apr 2026 06:50:56 +0000 fenicsproject.discourse.group-post-58450
FFCX compiler files size limit with petsc.assemble_matrix

P=80 is way higher than what I would have ever expected to use in DOLFINx :stuck_out_tongue:
However, the following form compiles on my laptop:

"""Use tensor product elements to test high order assembly"""

from mpi4py import MPI
import basix.ufl
import ufl
import dolfinx
import numpy as np
P = 80
cell_type = basix.CellType.quadrilateral
element = basix.ufl.wrap_element(
    basix.create_tp_element(basix.ElementFamily.P, cell_type, P, basix.LagrangeVariant.gll_warped)
)
c_el = basix.ufl.blocked_element(
    basix.ufl.wrap_element(
        basix.create_tp_element(basix.ElementFamily.P, cell_type, 1, basix.LagrangeVariant.gll_warped)
    ), (2,)
)

eps = 2.5e-7
nodes = np.array([[eps, eps],[1-eps, eps],[eps, 1-eps],[1-eps, 1-eps]], dtype=np.float64)
connectivity = np.array([[0, 1, 2, 3]], dtype=np.int64)
mesh = dolfinx.mesh.create_mesh(MPI.COMM_WORLD, cells=connectivity, x=nodes, e=c_el)
print("MESH CREATED")
V = dolfinx.fem.functionspace(mesh, element)

v = ufl.TestFunction(V)
u = ufl.TrialFunction(V)
a = ufl.inner(u, v) * ufl.dx
print("PREFORM")
form = dolfinx.fem.form(a, form_compiler_options={"sum_factorization": True})
print("POSTFORM")
J = dolfinx.fem.assemble_matrix(form)
print("ASSEMBLE")

I’m currently stuck at assembling this, as it has a loop of 82^2 · 81^4 = 289 446 152 004 sum operations in the local tensor.
Running with order 40 is fairly fast though, with form compilation taking 1.29 seconds and assembly 18.17 seconds, while the same operations took 15.88 and 18.4 seconds without sum factorization.

Order 50 compilation takes 3.17 seconds and assembly 97.6 seconds
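The operation count quoted above can be checked directly (reading 82 and 81 as, presumably, the per-direction quadrature-point and basis-function counts for P = 80):

```python
# Local-tensor work for the P = 80 quadrilateral mass matrix:
# 82**2 quadrature points (2D tensor rule) times 81**4 local-tensor entries.
ops = 82**2 * 81**4
print(f"{ops:,}")  # 289,446,152,004
assert ops == 289_446_152_004
```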

https://fenicsproject.discourse.group/t/ffcx-compiler-files-size-limit-with-petsc-assemble-matrix/19654#post_8 Wed, 15 Apr 2026 21:29:54 +0000 fenicsproject.discourse.group-post-58447
Contact between two deformable bodies
kumar:

tried running your third medium contact example, but I could not open the .bp file in ParaView (it crashes once i tried to open the folder)

What version of Paraview are you running, and are you selecting the ADIOS2VTXReader?

You should first use the “Extract Block” filter to extract only f (so that it doesn’t say partial)

https://fenicsproject.discourse.group/t/contact-between-two-deformable-bodies/19645#post_9 Wed, 15 Apr 2026 21:14:00 +0000 fenicsproject.discourse.group-post-58446
Contact between two deformable bodies

Dear Mr. Dokken,

I tried running your third medium contact example, but I could not open the .bp file in ParaView (it crashes once I try to open the folder), so I switched the output part of the code to XDMF:

with dolfinx.io.XDMFFile(mesh.comm, "full_output.xdmf", "w") as xdmf:
    xdmf.write_mesh(mesh)

    loading_steps = 33
    load = 1. / loading_steps
    dl = load
    n = 0
    MAX_FAILURE = 5
    NUM_SUCCESSIVE_SOLVES = 0
    last_load = load

    max_load = 0.3
    while load <= max_load:
        print(f"Loading step: {load:.3e}", flush=True)
        applied_z.value = full_disp[2] * load
        applied_y.value = full_disp[1] * load
        problem.solve()
        reason = problem.solver.getConvergedReason()
        num_iterations += problem.solver.getIterationNumber()
        n += 1
        if reason < 0:
            load = last_load + dl/(n+1)
            u.x.array[:] = u_prev.copy()
            tm_func.x.array[:] = tm_prev.copy()
            NUM_SUCCESSIVE_SOLVES = 0
        else:
            n = 0
            last_load = load
            NUM_SUCCESSIVE_SOLVES += 1
            #---------------------------------------------------------
            ### WRITE SOLUTION (time = load)
            xdmf.write_function(u, load)
            xdmf.write_function(tm_func, load)
           #---------------------------------------------------------
            load += NUM_SUCCESSIVE_SOLVES * dl
            u_prev[:] = u.x.array.copy()
            tm_prev[:] = tm_func.x.array.copy()

        if n > MAX_FAILURE:
            print("Too many failures, aborting.")
            break
print(f"Total number of iterations: {num_iterations}")

This works, but I am having issues visualizing only the domain without the third medium. If I first apply Warp By Vector, I see the deformation of the full domain including the third medium. However, if I first apply Threshold using the medium field to remove the third medium, I can no longer properly apply Warp By Vector afterwards.

Am I missing the correct ParaView workflow, or is there a recommended way (possibly via code changes) to visualize only the solid body deformation (like in your visualized output using an XDMF file)? Thanks for your time.

https://fenicsproject.discourse.group/t/contact-between-two-deformable-bodies/19645#post_8 Wed, 15 Apr 2026 21:07:03 +0000 fenicsproject.discourse.group-post-58445
Solve the Poisson equation on the curved domain

The difference lies in the imported mesh (which then will cascade the changes down to the assembled forms).
Running with (mesh degree 2, element_degree 2), you get:

0.200      2.23e-03    8.57e-02      -         -
0.100      2.98e-04    2.32e-02      2.908       1.886
0.050      3.74e-05    5.89e-03      2.992       1.976
0.025      4.70e-06    1.48e-03      2.994       1.994
0.013      5.91e-07    3.72e-04      2.990       1.992
0.006      7.41e-08    9.31e-05      2.997       1.998

Running with element degree 2 and mesh degree 1 yields:

0.200      5.89e-02    6.11e-01      -         -
0.100      3.12e-02    4.53e-01      0.914       0.434
0.050      1.54e-02    3.21e-01      1.023       0.496
0.025      7.71e-03    2.29e-01      0.996       0.489
0.013      3.88e-03    1.63e-01      0.989       0.490
0.006      3.16e-06    2.11e-04      10.261       9.589

Running with 1 and 1 yields:

0.200      6.82e-02    7.52e-01      -         -
0.100      1.90e-02    3.84e-01      1.842       0.967
0.050      4.86e-03    1.92e-01      1.971       1.000
0.025      1.22e-03    9.62e-02      1.994       0.999
0.013      3.06e-04    4.82e-02      1.992       0.996
0.006      7.67e-05    2.41e-02      1.999       0.999

As you can see it clearly has an effect.
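For reference, the rate columns in these tables follow the standard estimate rate = ln(e_prev/e) / ln(h_prev/h). Checking the first rows of the (mesh degree 2, element degree 2) table with numpy:

```python
import numpy as np

h = np.array([0.200, 0.100, 0.050, 0.025])          # mesh sizes
e = np.array([2.23e-3, 2.98e-4, 3.74e-5, 4.70e-6])  # first error column

rates = np.log(e[:-1] / e[1:]) / np.log(h[:-1] / h[1:])
print(rates.round(3))  # close to the ~3.0 rates in the table
# (small differences come from the rounded values displayed)
assert np.all(np.abs(rates - 3.0) < 0.15)
```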

DOLFINx doesn’t rely on iso-parametric discretizations; the mesh can be represented by an element of a different degree than the unknown.

The calculation doesn’t differ from a finite element kernel perspective; the same kernel (with a second-order element representing the Jacobian) is used in all cells. It is just that the cells within the mesh that are straight end up with a constant Jacobian.
Some of this is discussed in:

and the previous paragraphs.

https://fenicsproject.discourse.group/t/solve-the-poisson-equation-on-the-curved-domain/19653#post_2 Wed, 15 Apr 2026 20:47:06 +0000 fenicsproject.discourse.group-post-58444
FFCX compiler files size limit with petsc.assemble_matrix

Dear Mr J. S. Dokken,

First of all, I am immensely grateful for your help. Please excuse my lackluster code-pasting; I’m still quite new here (below I paste the code correctly).

Exactly! I’m trying to do a spectral method (to solve an eigenvalue problem with SLEPc), based on this post here: Does Fenicsx support spectral element methods? - FEniCS Project

The FFCx C file (or at least that’s what I think it is; in fact there are three files called something like libffcx_forms_218d0c58cb3ff7… with .o, .c, and .so extensions) gets over 2 GB at p >= 80. Normally, I use quadrature degree = 2p - 1 (as in the example above); I tried things like int(1.5*p), but there was a significant loss in accuracy.
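On the quadrature-degree point: an n-point Gauss rule is exact only up to polynomial degree 2n - 1, so cutting the quadrature degree below what the integrand needs introduces errors, consistent with the accuracy loss seen with int(1.5*p). A small numpy illustration:

```python
import numpy as np

n = 5
x, w = np.polynomial.legendre.leggauss(n)  # n-point Gauss-Legendre on [-1, 1]

# Exact for every monomial of degree <= 2n - 1 = 9 ...
for d in range(2 * n):
    exact = 0.0 if d % 2 else 2.0 / (d + 1)  # integral of x**d over [-1, 1]
    assert np.isclose(np.sum(w * x**d), exact)

# ... but not for degree 2n = 10
assert not np.isclose(np.sum(w * x**10), 2.0 / 11)
print("n-point Gauss is exact precisely up to degree 2n - 1")
```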

I will try assembling with tensor product elements - that might be my way out of this (I spent the whole evening battling with sympy and testing different workarounds :sweat_smile:)

I’ll try it and come back here to post whether it works. Once again, thanks a lot! I leave my code (now hopefully correctly pasted) here:

import numpy as np
import os
from mpi4py import MPI
from petsc4py import PETSc

###################################################
#AUXILIARY FUNCTIONS
###################################################

######## SLEPC EIGENSOLVER #############
def eig_solver(A,ME,target,pep_params):
    from slepc4py import SLEPc
    from petsc4py import PETSc
    from mpi4py import MPI
    from dolfinx import fem

    nev, tol, max_it = pep_params
    comm = MPI.COMM_WORLD
    V_gr, dof_map_gr = ME.sub(0).collapse()
    V_em, dof_map_em = ME.sub(1).collapse()
    pep = SLEPc.PEP().create(comm)
    pep.setOperators(A)
    pep.setProblemType(SLEPc.PEP.ProblemType.GENERAL)
    pep.setDimensions(nev=nev, ncv=max(15, 2*nev + 10), mpd=max(15, 2*nev + 10))
    pep.setType(SLEPc.PEP.Type.TOAR)
    st = pep.getST()
    st.setType(SLEPc.ST.Type.SINVERT)
    ksp = st.getKSP()
    ksp.setType(PETSc.KSP.Type.PREONLY)
    pep.setRefine(ref = SLEPc.PEP.Refine.SIMPLE, npart = 1, tol = 0.1*tol, its= 3, scheme= SLEPc.PEP.RefineScheme.SCHUR)
    pep.setTarget(target)
    pep.setWhichEigenpairs(SLEPc.PEP.Which.TARGET_MAGNITUDE)
    pep.setTolerances(tol, max_it)

    # Solver
    pep.solve()


    nconv = pep.getConverged()

    all_eigvals = []
    efuns = fem.Function(ME) 
    efuns_list_gr = []
    efuns_list_em = []

    error_list = []

    for i in range(nconv):   
        eigval = pep.getEigenpair(i, efuns.x.petsc_vec)
        error = pep.computeError(i)
        error_list.append(error) 
        efun_vals_gr = efuns.x.array[dof_map_gr]
        efun_vals_em = efuns.x.array[dof_map_em]
        efuns_list_gr.append(efun_vals_gr)
        efuns_list_em.append(efun_vals_em)
        all_eigvals.append(eigval)
        
    return all_eigvals, efuns_list_gr, efuns_list_em, error_list


################ SYMPY SORTING ################

def extract_qep_coeffs(eqn):
    import sympy as sp
    omega = sp.symbols('omega', complex=True)

    expanded_eqn = eqn

    K_matrix_term = expanded_eqn.subs(omega, 0)
    C_matrix_term = sp.diff(expanded_eqn, omega).subs(omega, 0)
    M_matrix_term = sp.diff(expanded_eqn, omega, 2).subs(omega, 0) / 2
    
    return (M_matrix_term), (C_matrix_term), (K_matrix_term)

######### SYMPY CONVERSION TO UFL #########

def sympy_to_ufl(expr, mapping):
    """
    Recursively walks a SymPy expression tree and builds a UFL expression.

    Args:
        expr: The SymPy expression node.
        mapping: A dictionary linking SymPy variable/function names to UFL objects or numbers.
    """
    import sympy as sp
    import ufl
    # 1. Base Cases: Numbers and Constants

    if expr == sp.I:
        return 1j
        
    if isinstance(expr, (int, float)):
        return expr
        
    if isinstance(expr, sp.Number):
        return float(expr)
    
    # 2. Base Cases: Symbols and Trial Functions
    if isinstance(expr, sp.Symbol):
        if expr.name in mapping:
            return mapping[expr.name]
        raise ValueError(f"Symbol '{expr.name}' not found in mapping dictionary.")
        
    if isinstance(expr, sp.core.function.AppliedUndef):
        func_name = expr.func.__name__
        if func_name in mapping:
            return mapping[func_name]
        raise ValueError(f"Function '{func_name}' not found in mapping dictionary.")

    # 3. Recursive Cases: Arithmetic Operations
    if isinstance(expr, sp.Add):
        return sum(sympy_to_ufl(arg, mapping) for arg in expr.args)
    
    if isinstance(expr, sp.Mul):
        result = 1
        for arg in expr.args:
            result = result * sympy_to_ufl(arg, mapping)
        return result
    
    if isinstance(expr, sp.Pow):
        base = sympy_to_ufl(expr.args[0], mapping)
        exponent = sympy_to_ufl(expr.args[1], mapping)
        return base ** exponent
    
    if isinstance(expr, sp.exp):
        argum = sympy_to_ufl(expr.args[0], mapping)
        return ufl.exp(argum)

    if isinstance(expr, sp.sin):
        argum = sympy_to_ufl(expr.args[0], mapping)
        return ufl.sin(argum)
    
    if isinstance(expr, sp.log):
        argum = sympy_to_ufl(expr.args[0], mapping)
        return ufl.ln(argum)
    
    if isinstance(expr, sp.cot):
        argum = sympy_to_ufl(expr.args[0], mapping)
        return ufl.cos(argum)/ufl.sin(argum)

    if isinstance(expr, sp.cos):
        argum = sympy_to_ufl(expr.args[0], mapping)
        return ufl.cos(argum)

    if isinstance(expr, sp.tan):
        argum = sympy_to_ufl(expr.args[0], mapping)
        return ufl.tan(argum)

    if isinstance(expr, sp.Abs):
        argum = sympy_to_ufl(expr.args[0], mapping)
        return ufl.algebra.Abs(argum)
    
    # 4. Calculus: Derivatives
    if isinstance(expr, sp.Derivative):
        ufl_func = sympy_to_ufl(expr.args[0], mapping)
        
        for var, count in expr.variable_count:
            if var.name == 'sigma':
                dim = 0
            elif var.name == 'x_coord':
                dim = 1
            else:
                raise ValueError(f"Unknown derivative variable: {var.name}")
            
            # Apply the UFL derivative .dx(dim) the correct number of times
            for _ in range(count):
                ufl_func = ufl_func.dx(dim)
        return ufl_func

    # 5. Math Functions

    if isinstance(expr, sp.conjugate):
        return ufl.conj(sympy_to_ufl(expr.args[0], mapping))

    # Catch-all for unsupported operations
    raise NotImplementedError(f"SymPy node type {type(expr)} is not implemented in the AST walker.")


############ PROBLEM ASSEMBLER ########################
def assembler(phys_params, poly_degree, eps, debug_mode):
    import dill as pickle
    import numpy as np
    import ufl
    from mpi4py import MPI
    from dolfinx.fem.petsc import assemble_matrix
    from dolfinx import mesh
    from dolfinx import fem
    import basix
    from basix import CellType, ElementFamily, LagrangeVariant

    #Physical parameters
    M_mass_num, alpha, q_ch = phys_params[:] 
    Q_ch = q_ch*M_mass_num
    a_rot_num = alpha*M_mass_num
    r_p = M_mass_num + np.sqrt(M_mass_num**2 - a_rot_num**2 - Q_ch**2)
    lambda_param = r_p
    #Mesh creation
    N_x = 1
    N_y = 1
    # general cell type
    cell_type = CellType.quadrilateral
    # spectral element
    ufl_nodal_element = basix.ufl.element(ElementFamily.P, 
                                        cell_type, 
                                        poly_degree, 
                                        LagrangeVariant.gll_warped)

    # mesh: we "chop-off" by a small eps to avoid coordinate singularities
    domain = mesh.create_rectangle(MPI.COMM_WORLD, 
                                [[eps, eps], [1-eps, 1-eps]], 
                                [N_x, N_y],
                                mesh.CellType.quadrilateral)
	
    tdim = domain.topology.dim
    fdim = tdim - 1
    domain.topology.create_connectivity(fdim, tdim)

    #Function space: needs to be mixed since we're solving a coupled PDE system
    ME_element = basix.ufl.mixed_element([ufl_nodal_element, ufl_nodal_element])
    ME = fem.functionspace(domain, ME_element)
    #Trial and test functions
    v_test_gr, v_test_em = ufl.TestFunctions(ME)
    psi_gr, psi_em = ufl.TrialFunctions(ME)

    # Spatial coordinate
    x = ufl.SpatialCoordinate(domain)

    # create the quadrature rule and measure
    metadata = {
        "quadrature_rule": "GLL", 
        "quadrature_degree": 2 * poly_degree - 1
    }
    dx_meas = ufl.dx(domain=domain, metadata=metadata)

    # Load the pickled SymPy equations
    with open('eq_A.pk', 'rb') as f:
        EQN_kerr_A = pickle.load(f)
    with open('eq_B.pk', 'rb') as f:
        EQN_kerr_B = pickle.load(f)

    # Extract into mass, stiffness...
    M_eq, C_eq, K_eq = extract_qep_coeffs(EQN_kerr_A)
    M_eq_b, C_eq_b, K_eq_b = extract_qep_coeffs(EQN_kerr_B)

    #Map for the UFL converter
    ufl_mapping = {
        'sigma': x[0],
        'x_coord': x[1],
        'M': M_mass_num,
        'lambda_': lambda_param,         
        'psi': psi_gr,
        'Q': Q_ch,
        'm': 0,
        'psi_b': psi_em,
        'a': a_rot_num,
    }

    # Translate the SymPy expressions directly into UFL operator expressions
    M_op = sympy_to_ufl(M_eq, ufl_mapping)
    C_op = sympy_to_ufl(C_eq, ufl_mapping)
    K_op = sympy_to_ufl(K_eq, ufl_mapping)

    M_op_b = sympy_to_ufl(M_eq_b, ufl_mapping)
    C_op_b = sympy_to_ufl(C_eq_b, ufl_mapping)
    K_op_b = sympy_to_ufl(K_eq_b, ufl_mapping)

    # Build the weak forms by taking the inner product with the test functions
    M_form_gr = ufl.inner(M_op, v_test_gr)*dx_meas
    M_form_em = ufl.inner(M_op_b, v_test_em)*dx_meas
    
    C_form_gr = ufl.inner(C_op, v_test_gr)*dx_meas
    C_form_em = ufl.inner(C_op_b, v_test_em)*dx_meas

    K_form_gr = ufl.inner(K_op, v_test_gr)*dx_meas
    K_form_em = ufl.inner(K_op_b, v_test_em)*dx_meas
 
    jit_options = {
    "cffi_extra_compile_args": ["-O1", "-fno-inline"]}
    
    M_1 = assemble_matrix(fem.form(M_form_gr+M_form_em, jit_options = jit_options))
    M_1.assemble()
    M_mat = M_1

    C_1 = assemble_matrix(fem.form(C_form_gr+C_form_em, jit_options = jit_options))
    C_1.assemble()
    C_mat = C_1

    K_1 = assemble_matrix(fem.form(K_form_gr+K_form_em, jit_options = jit_options))
    K_1.assemble()
    K_mat = K_1

    A = [ ]

    A.append(K_mat)
    A.append(C_mat)
    A.append(M_mat)

    return A, ME


###########################################
############   INITIALIZE
###########################################

phys_params = [0.5, 0, 0]

poly_degree = 18
eps = 2.5e-7
debug_mode = 0

A, ME = assembler(phys_params, poly_degree, eps, debug_mode = 0)

for i, mat in enumerate(A):  # PEP matrices: [K, C, M]
    arr = mat.convert("dense").getDenseArray()
    print(f"A_{i}: max={np.max(np.abs(arr)):.2e}, "
          f"has_nan={np.any(np.isnan(arr))}, "
          f"has_inf={np.any(np.isinf(arr))}")
    
#PARAMETERS FOR SLEPC
target =  0.85
nev = 12
tol = 1e-7
max_it = 2000
pep_params = [nev, tol, max_it]

all_eigvals, efuns_list_gr, efuns_list_em, error_list = eig_solver(A,ME,target,pep_params)        

print("\n--- Sorted eigenvalues ---")
print(f"Found {len(all_eigvals)} physical eigenpairs after sorting.")
for i, eigval in enumerate(all_eigvals):
    print(f"QNM {i}: {eigval.real:.6e} + {eigval.imag:.6e}i, error {error_list[i]}")
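As background on what SLEPc’s PEP solves in the code above: the quadratic problem (K + λC + λ²M)x = 0 can always be rewritten as a linear generalized eigenproblem of twice the size via a companion linearization (TOAR exploits this structure without forming it explicitly). A toy NumPy sketch with hypothetical 2x2 matrices chosen so the eigenvalues are known:

```python
import numpy as np

# Toy quadratic eigenvalue problem (K + lam*C + lam**2 * M) x = 0,
# with matrices chosen so that lam**2 = 4 or 9, i.e. lam = ±2, ±3.
M = np.eye(2)
C = np.zeros((2, 2))
K = np.diag([-4.0, -9.0])

n = M.shape[0]
I = np.eye(n)

# First companion linearization: A z = lam * B z with z = [x, lam*x]
A = np.block([[np.zeros((n, n)), I],
              [-K, -C]])
B = np.block([[I, np.zeros((n, n))],
              [np.zeros((n, n)), M]])

lam = np.linalg.eigvals(np.linalg.solve(B, A))
print(np.sort(lam.real))  # approximately [-3, -2, 2, 3]
```

Substituting z = [x, λx] back into A z = λ B z recovers the original quadratic relation, which is why the 2n linear eigenvalues are exactly the n-dimensional quadratic ones.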
]]>
https://fenicsproject.discourse.group/t/ffcx-compiler-files-size-limit-with-petsc-assemble-matrix/19654#post_7 Wed, 15 Apr 2026 20:35:21 +0000 fenicsproject.discourse.group-post-58443
Custom Assembly of Linear Form Consider the following MWE (using DOLFINx):

from mpi4py import MPI
import dolfinx
from ufl import dx, And, conditional, gt, lt, SpatialCoordinate

mesh = dolfinx.mesh.create_unit_square(MPI.COMM_WORLD, 10, 10)

x = SpatialCoordinate(mesh)

lower_bound = gt(x[0], 0.25)
upper_bound = lt(x[0], 0.8)
domain = And(lower_bound, upper_bound)
conditional_expr = conditional(domain, x[0], 0.0)
f = conditional_expr * dx(metadata={"quadrature_degree": 25})
print(dolfinx.fem.assemble_scalar(dolfinx.fem.form(f)))

print((0.8**2 - 0.25**2) / 2.0)

Note that the approach with conditional should work for you in legacy dolfin as well:

from dolfin import *
try:
    from ufl import dx, And, conditional, gt, lt, SpatialCoordinate
except ModuleNotFoundError:
    from ufl_legacy import dx, And, conditional, gt, lt, SpatialCoordinate
mesh = UnitSquareMesh(10, 10)

x = SpatialCoordinate(mesh)

lower_bound = gt(x[0], 0.25)
upper_bound = lt(x[0], 0.8)
domain = And(lower_bound, upper_bound)
conditional_expr = conditional(domain, x[0], 0.0)
f = conditional_expr * dx(metadata={"quadrature_degree": 25})
print(assemble(f))

print((0.8**2 - 0.25**2) / 2.0)

Note that I picked a high quadrature degree to try to capture the discontinuity that you are adding.
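To see why the degree matters, here is a plain NumPy version of the same integral: composite Gauss-Legendre quadrature of the discontinuous integrand over a 10-cell grid. The jump at x = 0.25 falls inside a cell, so no polynomial rule integrates it exactly; more points per cell only shrink the error (a standalone sketch, not DOLFINx code):

```python
import numpy as np

def integrate(npts, ncells=10):
    """Composite Gauss-Legendre quadrature of x * 1_{0.25 < x < 0.8} on [0, 1]."""
    pts, wts = np.polynomial.legendre.leggauss(npts)  # rule on [-1, 1]
    total = 0.0
    edges = np.linspace(0.0, 1.0, ncells + 1)
    for a, b in zip(edges[:-1], edges[1:]):
        x = 0.5 * (b - a) * pts + 0.5 * (a + b)  # map points to [a, b]
        f = np.where((x > 0.25) & (x < 0.8), x, 0.0)
        total += 0.5 * (b - a) * np.sum(wts * f)
    return total

exact = (0.8**2 - 0.25**2) / 2.0
for npts in (2, 5, 13):
    print(npts, abs(integrate(npts) - exact))
```

The error stalls at the level set by the discontinuous cell rather than converging to machine precision; splitting the mesh (or the integration domain) along x = 0.25 would remove it entirely.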

]]>
https://fenicsproject.discourse.group/t/custom-assembly-of-linear-form/19655#post_2 Wed, 15 Apr 2026 20:27:25 +0000 fenicsproject.discourse.group-post-58442
FFCX compiler files size limit with petsc.assemble_matrix Here is a minimal example setting up a grid with tensor-product factors:

"""Use tensor product elements to test high order assembly"""

from mpi4py import MPI
import basix.ufl
import ufl
import dolfinx
import numpy as np
P = 15
cell_type = basix.CellType.quadrilateral
element = basix.ufl.wrap_element(
    basix.create_tp_element(basix.ElementFamily.P, cell_type, P, basix.LagrangeVariant.gll_warped)
)
c_el = basix.ufl.blocked_element(
    basix.ufl.wrap_element(
        basix.create_tp_element(basix.ElementFamily.P, cell_type, 1, basix.LagrangeVariant.gll_warped)
    ), (2,)
)

eps = 2.5e-7
nodes = np.array([[eps, eps],[1-eps, eps],[eps, 1-eps],[1-eps, 1-eps]], dtype=np.float64)
connectivity = np.array([[0, 1, 2, 3]], dtype=np.int64)
mesh = dolfinx.mesh.create_mesh(MPI.COMM_WORLD, cells=connectivity, x=nodes, e=c_el)

V = dolfinx.fem.functionspace(mesh, element)

v = ufl.TestFunction(V)
u = ufl.TrialFunction(V)
a = ufl.inner(u, v) * ufl.dx

J = dolfinx.fem.assemble_matrix(dolfinx.fem.form(a, form_compiler_options={"sum_factorization": True}))

which in turn gives the following kernel:

void tabulate_tensor_integral_b6ab92259bf3ae452d1f0a4bb20de89952479a6b_quadrilateral(double _Complex* restrict A,
                                    const double _Complex* restrict w,
                                    const double _Complex* restrict c,
                                    const double* restrict coordinate_dofs,
                                    const int* restrict entity_local_index,
                                    const uint8_t* restrict quadrature_permutation,
                                    void* custom_data)
{
// Quadrature rules
static const double weights_06e[289] = {0.0001457851328577798, 0.0003348133780675438, 0.0005133696660845011, 0.0006754512570311628, 0.0008158284885834105, 0.0009299859235246961, 0.001014253485508105, 0.001065922421123082, 0.001083331928713395, 0.001065922421123082, 0.001014253485508105, 0.0009299859235246961, 0.0008158284885834105, 0.0006754512570311648, 0.0005133696660845011, 0.0003348133780675425, 0.0001457851328577798, 0.0003348133780675438, 0.0007689398495960406, 0.001179016191361836, 0.001551256377474323, 0.001873649849143541, 0.002135826352844399, 0.002329357109624001, 0.00244802113616285, 0.002488004198444595, 0.00244802113616285, 0.002329357109624001, 0.002135826352844399, 0.001873649849143541, 0.001551256377474328, 0.001179016191361836, 0.0007689398495960377, 0.0003348133780675438, 0.0005133696660845011, 0.001179016191361836, 0.001807786630155326, 0.002378542856058727, 0.002872869068033629, 0.003274864546640099, 0.003571605437217603, 0.0037535531002175, 0.003814859167051171, 0.0037535531002175, 0.003571605437217603, 0.003274864546640099, 0.002872869068033629, 0.002378542856058734, 0.001807786630155326, 0.001179016191361832, 0.0005133696660845011, 0.0006754512570311628, 0.001551256377474323, 0.002378542856058727, 0.003129498815699238, 0.003779894200001008, 0.004308808098277363, 0.004699236323384317, 0.004938628686833649, 0.005019290367182374, 0.004938628686833649, 0.004699236323384317, 0.004308808098277363, 0.003779894200001008, 0.003129498815699247, 0.002378542856058727, 0.001551256377474317, 0.0006754512570311628, 0.0008158284885834105, 0.001873649849143541, 0.002872869068033629, 0.003779894200001008, 0.004565459520715274, 0.005204296182472571, 0.005675866063309457, 0.005965010702568406, 0.006062436084608166, 0.005965010702568406, 0.005675866063309457, 0.005204296182472571, 0.004565459520715274, 0.003779894200001019, 0.002872869068033629, 0.001873649849143534, 0.0008158284885834105, 0.0009299859235246961, 0.002135826352844399, 0.003274864546640099, 
0.004308808098277363, 0.005204296182472571, 0.005932524126433432, 0.006470079945179132, 0.006799684081509751, 0.006910742024642279, 0.006799684081509751, 0.006470079945179132, 0.005932524126433432, 0.005204296182472571, 0.004308808098277375, 0.003274864546640099, 0.00213582635284439, 0.0009299859235246961, 0.001014253485508105, 0.002329357109624001, 0.003571605437217603, 0.004699236323384317, 0.005675866063309457, 0.006470079945179132, 0.007056344585348723, 0.007415814697373854, 0.007536935784334625, 0.007415814697373854, 0.007056344585348723, 0.006470079945179132, 0.005675866063309457, 0.004699236323384331, 0.003571605437217603, 0.002329357109623993, 0.001014253485508105, 0.001065922421123082, 0.00244802113616285, 0.0037535531002175, 0.004938628686833649, 0.005965010702568406, 0.006799684081509751, 0.007415814697373854, 0.007793597231627863, 0.007920888568662418, 0.007793597231627863, 0.007415814697373854, 0.006799684081509751, 0.005965010702568406, 0.004938628686833664, 0.0037535531002175, 0.00244802113616284, 0.001065922421123082, 0.001083331928713395, 0.002488004198444595, 0.003814859167051171, 0.005019290367182374, 0.006062436084608166, 0.006910742024642279, 0.007536935784334625, 0.007920888568662418, 0.008050258930825227, 0.007920888568662418, 0.007536935784334625, 0.006910742024642279, 0.006062436084608166, 0.005019290367182388, 0.003814859167051171, 0.002488004198444586, 0.001083331928713395, 0.001065922421123082, 0.00244802113616285, 0.0037535531002175, 0.004938628686833649, 0.005965010702568406, 0.006799684081509751, 0.007415814697373854, 0.007793597231627863, 0.007920888568662418, 0.007793597231627863, 0.007415814697373854, 0.006799684081509751, 0.005965010702568406, 0.004938628686833664, 0.0037535531002175, 0.00244802113616284, 0.001065922421123082, 0.001014253485508105, 0.002329357109624001, 0.003571605437217603, 0.004699236323384317, 0.005675866063309457, 0.006470079945179132, 0.007056344585348723, 0.007415814697373854, 0.007536935784334625, 
0.007415814697373854, 0.007056344585348723, 0.006470079945179132, 0.005675866063309457, 0.004699236323384331, 0.003571605437217603, 0.002329357109623993, 0.001014253485508105, 0.0009299859235246961, 0.002135826352844399, 0.003274864546640099, 0.004308808098277363, 0.005204296182472571, 0.005932524126433432, 0.006470079945179132, 0.006799684081509751, 0.006910742024642279, 0.006799684081509751, 0.006470079945179132, 0.005932524126433432, 0.005204296182472571, 0.004308808098277375, 0.003274864546640099, 0.00213582635284439, 0.0009299859235246961, 0.0008158284885834105, 0.001873649849143541, 0.002872869068033629, 0.003779894200001008, 0.004565459520715274, 0.005204296182472571, 0.005675866063309457, 0.005965010702568406, 0.006062436084608166, 0.005965010702568406, 0.005675866063309457, 0.005204296182472571, 0.004565459520715274, 0.003779894200001019, 0.002872869068033629, 0.001873649849143534, 0.0008158284885834105, 0.0006754512570311648, 0.001551256377474328, 0.002378542856058734, 0.003129498815699247, 0.003779894200001019, 0.004308808098277375, 0.004699236323384331, 0.004938628686833664, 0.005019290367182388, 0.004938628686833664, 0.004699236323384331, 0.004308808098277375, 0.003779894200001019, 0.003129498815699256, 0.002378542856058734, 0.001551256377474322, 0.0006754512570311648, 0.0005133696660845011, 0.001179016191361836, 0.001807786630155326, 0.002378542856058727, 0.002872869068033629, 0.003274864546640099, 0.003571605437217603, 0.0037535531002175, 0.003814859167051171, 0.0037535531002175, 0.003571605437217603, 0.003274864546640099, 0.002872869068033629, 0.002378542856058734, 0.001807786630155326, 0.001179016191361832, 0.0005133696660845011, 0.0003348133780675425, 0.0007689398495960377, 0.001179016191361832, 0.001551256377474317, 0.001873649849143534, 0.00213582635284439, 0.002329357109623993, 0.00244802113616284, 0.002488004198444586, 0.00244802113616284, 0.002329357109623993, 0.00213582635284439, 0.001873649849143534, 0.001551256377474322, 
0.001179016191361832, 0.0007689398495960349, 0.0003348133780675425, 0.0001457851328577798, 0.0003348133780675438, 0.0005133696660845011, 0.0006754512570311628, 0.0008158284885834105, 0.0009299859235246961, 0.001014253485508105, 0.001065922421123082, 0.001083331928713395, 0.001065922421123082, 0.001014253485508105, 0.0009299859235246961, 0.0008158284885834105, 0.0006754512570311648, 0.0005133696660845011, 0.0003348133780675425, 0.0001457851328577798};
// Precomputed values of basis functions and precomputations
// FE* dimensions: [permutation][entities][points][dofs]
static const double FE_TF0[1][1][17][16] = {{{{0.5310179651539434, 0.002514143263163935, 0.5884823047483387, -0.1793847948263871, 0.09732164084031325, -0.06362108595724753, 0.04577976819603085, -0.03492972494947335, 0.02770554332857452, -0.02256350657595092, 0.01869763014686472, -0.01564139383462264, 0.01309572661657683, -0.01084094025003909, 0.008673660364852842, -0.006306936264939094},
  {-0.1279643740930969, -0.003235687285707545, 0.825276019527148, 0.4016052578050424, -0.1536075508159097, 0.09097174773003154, -0.06269562416475484, 0.04675394028868764, -0.03658362653399562, 0.02953633333506332, -0.02433380644796956, 0.02027471838678221, -0.0169273774697448, 0.01398560334933942, -0.01117514119019908, 0.008119567579283157},
  {0.04506847775101008, 0.002870613041430765, -0.1492565638184692, 0.9323133249205195, 0.236219340013864, -0.1018419901991895, 0.06319363895274897, -0.04483303014671571, 0.0341130338495718, -0.02707010731090384, 0.02205080070300053, -0.01823148475814101, 0.01514049433573509, -0.01246336839301106, 0.009934557924711289, -0.007207736866162262},
  {-0.01229554036123029, -0.001507932790634431, 0.03528793100807117, -0.07476331401308049, 0.9884767900729207, 0.08944352429381004, -0.04196240826588792, 0.02688770122719832, -0.01944302869602156, 0.01497995718888914, -0.01197709268493107, 0.009780881755744455, -0.008054536715507075, 0.006592445566051996, -0.00523505612868456, 0.003789678543292256},
  {-0.003582234564908075, -0.000739774109720467, 0.00971234526043426, -0.01662921035517776, 0.03524923230686862, 0.9980461370946736, -0.03297846445881575, 0.01652159743212438, -0.01085112819950925, 0.007953873646228531, -0.006174506294946994, 0.004948273372891968, -0.004024346742651605, 0.003266432998359481, -0.002579816169174316, 0.001861588783323459},
  {0.01139891269024134, 0.003672131136338329, -0.03003351311013579, 0.04707080384154232, -0.07679234987354021, 0.1709490109412144, 0.9626287483056376, -0.1258109160332868, 0.06647574244713014, -0.04453582436867183, 0.03295929578191496, -0.02566796309725861, 0.02049754424751977, -0.01644016709043109, 0.01288579309629175, -0.009257248914506041},
  {-0.01455857691406362, -0.006990023863127238, 0.03773300702719259, -0.05645426657445396, 0.08274433532555152, -0.1348356252830777, 0.3233027662258365, 0.8858877501707699, -0.1860605036384325, 0.1027438196202606, -0.06994074440908214, 0.05206092659087528, -0.04046019944613791, 0.03190210201316634, -0.02473958412712687, 0.01766481728184915},
  {0.01476942609043248, 0.01029569794319792, -0.03788738811311202, 0.055139045998481, -0.07636531428674226, 0.1100722191402691, -0.1820572953998159, 0.4820557925418167, 0.7741419307270463, -0.2134701215720267, 0.1222953984338127, -0.08426281753751952, 0.06278183258882505, -0.04827648106171816, 0.03687498906184535, -0.02610691455479212},
  {-0.01309204101562505, -0.01309204101562488, 0.03335544248995533, -0.04768643702639053, 0.06383014874598097, -0.08627598963282575, 0.1243076624552219, -0.2105115081265031, 0.6360727221101858, 0.6360727221101865, -0.2105115081265024, 0.1243076624552216, -0.08627598963282571, 0.06383014874598111, -0.04768643702639108, 0.03335544248995551},
  {0.01029569794319817, 0.01476942609043239, -0.02610691455479202, 0.03687498906184507, -0.04827648106171822, 0.06278183258882525, -0.08426281753751999, 0.1222953984338136, -0.2134701215720279, 0.7741419307270471, 0.4820557925418165, -0.182057295399816, 0.1100722191402696, -0.07636531428674288, 0.05513904599848175, -0.03788738811311237},
  {-0.006990023863127413, -0.01455857691406353, 0.01766481728184911, -0.02473958412712672, 0.03190210201316653, -0.0404601994461381, 0.05206092659087558, -0.06994074440908271, 0.1027438196202611, -0.1860605036384328, 0.8858877501707697, 0.3233027662258372, -0.1348356252830784, 0.08274433532555206, -0.05645426657445486, 0.03773300702719305},
  {0.003672131136338363, 0.01139891269024114, -0.00925724891450588, 0.01288579309629153, -0.01644016709043094, 0.02049754424751953, -0.0256679630972587, 0.03295929578191473, -0.04453582436867162, 0.06647574244712956, -0.1258109160332856, 0.962628748305638, 0.1709490109412131, -0.07679234987353983, 0.04707080384154243, -0.0300335131101359},
  {-0.0007397741097204323, -0.003582234564907856, 0.001861588783323354, -0.002579816169174141, 0.003266432998359122, -0.004024346742651431, 0.004948273372891867, -0.006174506294946454, 0.007953873646227993, -0.01085112819950874, 0.01652159743212317, -0.03297846445881378, 0.998046137094674, 0.03524923230686623, -0.01662921035517689, 0.009712345260433847},
  {-0.001507932790634611, -0.01229554036123084, 0.003789678543292389, -0.005235056128685074, 0.006592445566052121, -0.008054536715507425, 0.009780881755745248, -0.01197709268493141, 0.01497995718889036, -0.01944302869602261, 0.0268877012271997, -0.04196240826589059, 0.08944352429381575, 0.988476790072919, -0.07476331401308516, 0.03528793100807356},
  {0.002870613041430889, 0.04506847775100992, -0.007207736866162262, 0.009934557924711391, -0.01246336839301107, 0.01514049433573512, -0.01823148475814129, 0.02205080070300094, -0.02707010731090406, 0.03411303384957206, -0.044833030146716, 0.06319363895274903, -0.1018419901991901, 0.2362193400138683, 0.9323133249205199, -0.1492565638184722},
  {-0.003235687285707548, -0.1279643740930907, 0.008119567579282649, -0.01117514119019856, 0.0139856033493392, -0.01692737746974434, 0.0202747183867812, -0.024333806447969, 0.02953633333506164, -0.03658362653399398, 0.0467539402886867, -0.06269562416475231, 0.09097174773002825, -0.1536075508159051, 0.4016052578050343, 0.8252760195271474},
  {0.002514143263164119, 0.5310179651539475, -0.006306936264939108, 0.008673660364852594, -0.01084094025003951, 0.01309572661657724, -0.01564139383462328, 0.01869763014686473, -0.02256350657595175, 0.0277055433285752, -0.03492972494947359, 0.04577976819603195, -0.06362108595724893, 0.09732164084031523, -0.1793847948263935, 0.5884823047483395}}}};
static const double FE_TF1[1][1][17][2] = {{{{-1.0, 1.0},
  {-1.0, 1.0},
  {-1.0, 1.0},
  {-1.0, 1.0},
  {-1.0, 1.0},
  {-1.0, 1.0},
  {-1.0, 1.0},
  {-1.0, 1.0},
  {-1.0, 1.0},
  {-1.0, 1.0},
  {-1.0, 1.0},
  {-1.0, 1.0},
  {-1.0, 1.0},
  {-1.0, 1.0},
  {-1.0, 1.0},
  {-1.0, 1.0},
  {-1.0, 1.0}}}};
static const double FE_TF2[1][1][17][2] = {{{{0.9952877376572087, 0.004712262342791262},
  {0.9753377608843838, 0.0246622391156161},
  {0.940119576863493, 0.05988042313650699},
  {0.8907570019484007, 0.1092429980515993},
  {0.8288355796083454, 0.1711644203916546},
  {0.7563452685432385, 0.2436547314567615},
  {0.6756158817269382, 0.3243841182730618},
  {0.5892420907479239, 0.4107579092520761},
  {0.5, 0.5},
  {0.4107579092520761, 0.5892420907479239},
  {0.3243841182730618, 0.6756158817269382},
  {0.2436547314567615, 0.7563452685432385},
  {0.1711644203916547, 0.8288355796083453},
  {0.1092429980515995, 0.8907570019484006},
  {0.05988042313650704, 0.9401195768634929},
  {0.0246622391156161, 0.9753377608843838},
  {0.004712262342791262, 0.9952877376572087}}}};
for (int iq0 = 0; iq0 < 17; ++iq0)
{
  for (int iq1 = 0; iq1 < 17; ++iq1)
  {
    // ------------------------ 
    // Section: Jacobian
    // Inputs: FE_TF1, coordinate_dofs, FE_TF2
    // Outputs: J0_c3, J0_c1, J0_c0, J0_c2
    double J0_c0 = 0.0;
    double J0_c3 = 0.0;
    double J0_c1 = 0.0;
    double J0_c2 = 0.0;
    {
      for (int ic0 = 0; ic0 < 2; ++ic0)
      {
        for (int ic1 = 0; ic1 < 2; ++ic1)
        {
          J0_c0 += coordinate_dofs[(2 * ic0 + ic1) * 3] * (FE_TF1[0][0][iq0][ic0] * FE_TF2[0][0][iq1][ic1]);
        }
        for (int ic1 = 0; ic1 < 2; ++ic1)
        {
          J0_c3 += coordinate_dofs[(2 * ic0 + ic1) * 3 + 1] * (FE_TF2[0][0][iq0][ic0] * FE_TF1[0][0][iq1][ic1]);
        }
        for (int ic1 = 0; ic1 < 2; ++ic1)
        {
          J0_c1 += coordinate_dofs[(2 * ic0 + ic1) * 3] * (FE_TF2[0][0][iq0][ic0] * FE_TF1[0][0][iq1][ic1]);
        }
        for (int ic1 = 0; ic1 < 2; ++ic1)
        {
          J0_c2 += coordinate_dofs[(2 * ic0 + ic1) * 3 + 1] * (FE_TF1[0][0][iq0][ic0] * FE_TF2[0][0][iq1][ic1]);
        }
      }
    }
    // ------------------------ 
    // ------------------------ 
    // Section: Intermediates
    // Inputs: J0_c3, J0_c1, J0_c0, J0_c2
    // Outputs: fw0
    double _Complex fw0 = 0;
    {
      double sv_06e_0 = J0_c0 * J0_c3;
      double sv_06e_1 = J0_c1 * J0_c2;
      double sv_06e_2 = -sv_06e_1;
      double sv_06e_3 = sv_06e_0 + sv_06e_2;
      double sv_06e_4 = fabs(sv_06e_3);
      fw0 = sv_06e_4 * weights_06e[17 * iq0 + iq1];
    }
    // ------------------------ 
    // ------------------------ 
    // Section: Tensor Computation
    // Inputs: FE_TF0, fw0
    // Outputs: A
    {
      for (int j0 = 0; j0 < 16; ++j0)
      {
        for (int j1 = 0; j1 < 16; ++j1)
        {
          for (int i0 = 0; i0 < 16; ++i0)
          {
            for (int i1 = 0; i1 < 16; ++i1)
            {
              A[256 * (16 * i0 + i1) + (16 * j0 + j1)] += fw0 * (FE_TF0[0][0][iq0][i0] * FE_TF0[0][0][iq1][i1]) * (FE_TF0[0][0][iq0][j0] * FE_TF0[0][0][iq1][j1]);
            }
          }
        }
      }
    }
    // ------------------------ 
  }
}

}

Turning off tensor product factorization yields massive tables such as:

 FE2_C0_Q06e[1][1][289][256]
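The size difference comes from sum factorization: a 2D table like FE2[17*iq0 + iq1][16*i0 + i1] is the outer product of the 1D table FE_TF0[iq][i] with itself, so the kernel can contract the two 1D factors one at a time instead of storing the 289x256 product. A NumPy sketch of the same identity for applying a mass matrix (random stand-ins for the 1D basis table and weights):

```python
import numpy as np

rng = np.random.default_rng(0)
nq, nd = 5, 4                        # 1D quadrature points and basis functions
B = rng.standard_normal((nq, nd))    # 1D basis tabulated at quadrature points
w = rng.random(nq)                   # 1D quadrature weights

# Unfactorized 2D mass matrix: (B kron B)^T diag(w kron w) (B kron B),
# an (nd^2 x nd^2) dense object -- this is what the big FE2 table encodes.
B2 = np.kron(B, B)
W2 = np.kron(w, w)
M2 = B2.T @ (W2[:, None] * B2)

u = rng.standard_normal((nd, nd))

# Sum-factorized application: four small contractions instead of one big one.
t = np.einsum("qi,ij->qj", B, u)   # contract trial index in direction 0
t = np.einsum("rj,qj->qr", B, t)   # contract trial index in direction 1
t = w[:, None] * w[None, :] * t    # quadrature weights at each (q, r)
t = np.einsum("qi,qr->ir", B, t)   # contract test index in direction 0
v = np.einsum("rj,ir->ij", B, t)   # contract test index in direction 1

print(np.allclose(M2 @ u.ravel(), v.ravel()))  # True
```

Beyond the storage saving, each contraction costs O(nq * nd^2) per direction instead of O(nq^2 * nd^2) for the fused table, which is where the high-order speedup comes from.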
]]>
https://fenicsproject.discourse.group/t/ffcx-compiler-files-size-limit-with-petsc-assemble-matrix/19654#post_6 Wed, 15 Apr 2026 20:17:08 +0000 fenicsproject.discourse.group-post-58441
FFCX compiler files size limit with petsc.assemble_matrix
mariom121:

I also attempted to reduce the quadrature degree, but I lost too much precision in exchange. Thanks a lot anyways for suggesting it.

Are you saying that using

and

gives you a loss of precision?

Edit
Right, you are trying to do a spectral method. At what polynomial degree does your problem currently fail?
I guess you could try using the tensor product element from basix (ffcx/demo/MassAction.py at 54893260dbf19a75fb15f3560e46a28c8cb73c58 · FEniCS/ffcx · GitHub)
i.e. if one considers:

"""Use tensor product elements to test high order assembly"""


import basix.ufl
import ufl

P = 13
cell_type = basix.CellType.hexahedron
element = basix.ufl.wrap_element(
    basix.create_tp_element(basix.ElementFamily.P, cell_type, P, basix.LagrangeVariant.gll_warped)
)
c_el = basix.ufl.blocked_element(
    basix.ufl.wrap_element(
        basix.create_tp_element(basix.ElementFamily.P, cell_type, 1, basix.LagrangeVariant.gll_warped)
    ), (3,)
)

mesh = ufl.Mesh(c_el)
V = ufl.FunctionSpace(mesh, element)
x = ufl.SpatialCoordinate(mesh)

v = ufl.TestFunction(V)
u = ufl.TrialFunction(V)
a = ufl.inner(u, v) * ufl.dx

w = ufl.Coefficient(V)
L = ufl.action(a, w)

which can be compiled with:

python3 -m ffcx mwe.py --sum_factorization

you get the following (it seems the quadrature weights do not use a tensor-product structure):

void tabulate_tensor_integral_54a463af79997cceebe0a7740ccc42adfa5a9ff6_hexahedron(double* restrict A,
                                    const double* restrict w,
                                    const double* restrict c,
                                    const double* restrict coordinate_dofs,
                                    const int* restrict entity_local_index,
                                    const uint8_t* restrict quadrature_permutation,
                                    void* custom_data)
{
// Quadrature rules
static const double weights_843[4913] = {...,
  {0.002514143263164119, 0.5310179651539475, -0.006306936264939108, 0.008673660364852594, -0.01084094025003951, 0.01309572661657724, -0.01564139383462328, 0.01869763014686473, -0.02256350657595175, 0.0277055433285752, -0.03492972494947359, 0.04577976819603195, -0.06362108595724893, 0.09732164084031523, -0.1793847948263935, 0.5884823047483395}}}};
static const double FE_TF1[1][1][17][2] = {{{{-1.0, 1.0},
  {-1.0, 1.0},
  {-1.0, 1.0},
  {-1.0, 1.0},
  {-1.0, 1.0},
  {-1.0, 1.0},
  {-1.0, 1.0},
  {-1.0, 1.0},
  {-1.0, 1.0},
  {-1.0, 1.0},
  {-1.0, 1.0},
  {-1.0, 1.0},
  {-1.0, 1.0},
  {-1.0, 1.0},
  {-1.0, 1.0},
  {-1.0, 1.0},
  {-1.0, 1.0}}}};
static const double FE_TF2[1][1][17][2] = {{{{0.9952877376572087, 0.004712262342791262},
  {0.9753377608843838, 0.0246622391156161},
  {0.940119576863493, 0.05988042313650699},
  {0.8907570019484007, 0.1092429980515993},
  {0.8288355796083454, 0.1711644203916546},
  {0.7563452685432385, 0.2436547314567615},
  {0.6756158817269382, 0.3243841182730618},
  {0.5892420907479239, 0.4107579092520761},
  {0.5, 0.5},
  {0.4107579092520761, 0.5892420907479239},
  {0.3243841182730618, 0.6756158817269382},
  {0.2436547314567615, 0.7563452685432385},
  {0.1711644203916547, 0.8288355796083453},
  {0.1092429980515995, 0.8907570019484006},
  {0.05988042313650704, 0.9401195768634929},
  {0.0246622391156161, 0.9753377608843838},
  {0.004712262342791262, 0.9952877376572087}}}};
for (int iq0 = 0; iq0 < 17; ++iq0)
{
  for (int iq1 = 0; iq1 < 17; ++iq1)
  {
    for (int iq2 = 0; iq2 < 17; ++iq2)
    {
      // ------------------------ 
      // Section: Jacobian
      // Inputs: FE_TF2, FE_TF1, coordinate_dofs
      // Outputs: J0_c5, J0_c3, J0_c1, J0_c4, J0_c8, J0_c2, J0_c0, J0_c7, J0_c6
      double J0_c0 = 0.0;
      double J0_c4 = 0.0;
      double J0_c8 = 0.0;
      double J0_c5 = 0.0;
      double J0_c7 = 0.0;
      double J0_c3 = 0.0;
      double J0_c6 = 0.0;
      double J0_c1 = 0.0;
      double J0_c2 = 0.0;
      {
        for (int ic0 = 0; ic0 < 2; ++ic0)
        {
          for (int ic1 = 0; ic1 < 2; ++ic1)
          {
            for (int ic2 = 0; ic2 < 2; ++ic2)
            {
              J0_c0 += coordinate_dofs[(4 * ic0 + 2 * ic1 + ic2) * 3] * (FE_TF1[0][0][iq0][ic0] * FE_TF2[0][0][iq1][ic1] * FE_TF2[0][0][iq2][ic2]);
            }
          }
          for (int ic1 = 0; ic1 < 2; ++ic1)
          {
            for (int ic2 = 0; ic2 < 2; ++ic2)
            {
              J0_c4 += coordinate_dofs[(4 * ic0 + 2 * ic1 + ic2) * 3 + 1] * (FE_TF2[0][0][iq0][ic0] * FE_TF1[0][0][iq1][ic1] * FE_TF2[0][0][iq2][ic2]);
            }
          }
          for (int ic1 = 0; ic1 < 2; ++ic1)
          {
            for (int ic2 = 0; ic2 < 2; ++ic2)
            {
              J0_c8 += coordinate_dofs[(4 * ic0 + 2 * ic1 + ic2) * 3 + 2] * (FE_TF2[0][0][iq0][ic0] * FE_TF2[0][0][iq1][ic1] * FE_TF1[0][0][iq2][ic2]);
            }
          }
          for (int ic1 = 0; ic1 < 2; ++ic1)
          {
            for (int ic2 = 0; ic2 < 2; ++ic2)
            {
              J0_c5 += coordinate_dofs[(4 * ic0 + 2 * ic1 + ic2) * 3 + 1] * (FE_TF2[0][0][iq0][ic0] * FE_TF2[0][0][iq1][ic1] * FE_TF1[0][0][iq2][ic2]);
            }
          }
          for (int ic1 = 0; ic1 < 2; ++ic1)
          {
            for (int ic2 = 0; ic2 < 2; ++ic2)
            {
              J0_c7 += coordinate_dofs[(4 * ic0 + 2 * ic1 + ic2) * 3 + 2] * (FE_TF2[0][0][iq0][ic0] * FE_TF1[0][0][iq1][ic1] * FE_TF2[0][0][iq2][ic2]);
            }
          }
          for (int ic1 = 0; ic1 < 2; ++ic1)
          {
            for (int ic2 = 0; ic2 < 2; ++ic2)
            {
              J0_c3 += coordinate_dofs[(4 * ic0 + 2 * ic1 + ic2) * 3 + 1] * (FE_TF1[0][0][iq0][ic0] * FE_TF2[0][0][iq1][ic1] * FE_TF2[0][0][iq2][ic2]);
            }
          }
          for (int ic1 = 0; ic1 < 2; ++ic1)
          {
            for (int ic2 = 0; ic2 < 2; ++ic2)
            {
              J0_c6 += coordinate_dofs[(4 * ic0 + 2 * ic1 + ic2) * 3 + 2] * (FE_TF1[0][0][iq0][ic0] * FE_TF2[0][0][iq1][ic1] * FE_TF2[0][0][iq2][ic2]);
            }
          }
          for (int ic1 = 0; ic1 < 2; ++ic1)
          {
            for (int ic2 = 0; ic2 < 2; ++ic2)
            {
              J0_c1 += coordinate_dofs[(4 * ic0 + 2 * ic1 + ic2) * 3] * (FE_TF2[0][0][iq0][ic0] * FE_TF1[0][0][iq1][ic1] * FE_TF2[0][0][iq2][ic2]);
            }
          }
          for (int ic1 = 0; ic1 < 2; ++ic1)
          {
            for (int ic2 = 0; ic2 < 2; ++ic2)
            {
              J0_c2 += coordinate_dofs[(4 * ic0 + 2 * ic1 + ic2) * 3] * (FE_TF2[0][0][iq0][ic0] * FE_TF2[0][0][iq1][ic1] * FE_TF1[0][0][iq2][ic2]);
            }
          }
        }
      }
      // ------------------------ 
      // ------------------------ 
      // Section: Intermediates
      // Inputs: J0_c5, J0_c3, J0_c1, J0_c4, J0_c8, J0_c2, J0_c0, J0_c7, J0_c6
      // Outputs: fw0
      double fw0 = 0;
      {
        double sv_843_0 = J0_c4 * J0_c8;
        double sv_843_1 = J0_c5 * J0_c7;
        double sv_843_2 = -sv_843_1;
        double sv_843_3 = sv_843_0 + sv_843_2;
        double sv_843_4 = J0_c0 * sv_843_3;
        double sv_843_5 = J0_c3 * J0_c8;
        double sv_843_6 = J0_c5 * J0_c6;
        double sv_843_7 = -sv_843_6;
        double sv_843_8 = sv_843_5 + sv_843_7;
        double sv_843_9 = -J0_c1;
        double sv_843_10 = sv_843_8 * sv_843_9;
        double sv_843_11 = sv_843_4 + sv_843_10;
        double sv_843_12 = J0_c3 * J0_c7;
        double sv_843_13 = J0_c4 * J0_c6;
        double sv_843_14 = -sv_843_13;
        double sv_843_15 = sv_843_12 + sv_843_14;
        double sv_843_16 = J0_c2 * sv_843_15;
        double sv_843_17 = sv_843_11 + sv_843_16;
        double sv_843_18 = fabs(sv_843_17);
        fw0 = sv_843_18 * weights_843[289 * iq0 + 17 * iq1 + iq2];
      }
      // ------------------------ 
      // ------------------------ 
      // Section: Tensor Computation
      // Inputs: FE_TF0, fw0
      // Outputs: A
      {
        for (int j0 = 0; j0 < 16; ++j0)
        {
          for (int j1 = 0; j1 < 16; ++j1)
          {
            for (int j2 = 0; j2 < 16; ++j2)
            {
              for (int i0 = 0; i0 < 16; ++i0)
              {
                for (int i1 = 0; i1 < 16; ++i1)
                {
                  for (int i2 = 0; i2 < 16; ++i2)
                  {
                    A[4096 * (256 * i0 + 16 * i1 + i2) + (256 * j0 + 16 * j1 + j2)] += fw0 * (FE_TF0[0][0][iq0][i0] * FE_TF0[0][0][iq1][i1] * FE_TF0[0][0][iq2][i2]) * (FE_TF0[0][0][iq0][j0] * FE_TF0[0][0][iq1][j1] * FE_TF0[0][0][iq2][j2]);
                  }
                }
              }
            }
          }
        }
      }
      // ------------------------ 
    }
  }
}

}
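To make the parenthetical concrete, here is a hedged numpy illustration (using a generic 17-point Gauss rule as a stand-in, not the GLL rule FFCx actually tabulates): the 4913-entry 3D weight table above is just the triple tensor product of a single 17-point 1D rule, so it could in principle be stored as 17 numbers and recombined on the fly inside the quadrature loops.

```python
import numpy as np

# Any 17-point 1D rule stands in for the GLL rule FFCx tabulates.
w1d = np.polynomial.legendre.leggauss(17)[1]
# The full 3D table the kernel stores is just the triple tensor product:
W3d = np.einsum("i,j,k->ijk", w1d, w1d, w1d)
assert W3d.size == 4913  # the size of weights_843 above
# Entry (iq0, iq1, iq2) could instead be recomputed on the fly:
assert np.isclose(W3d[2, 5, 7], w1d[2] * w1d[5] * w1d[7])
```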
]]>
https://fenicsproject.discourse.group/t/ffcx-compiler-files-size-limit-with-petsc-assemble-matrix/19654#post_5 Wed, 15 Apr 2026 19:42:57 +0000 fenicsproject.discourse.group-post-58440
FFCX compiler files size limit with petsc.assemble_matrix Please note that your code formatting is not correct.
Please use

```python

#add code here
```
]]>
https://fenicsproject.discourse.group/t/ffcx-compiler-files-size-limit-with-petsc-assemble-matrix/19654#post_4 Wed, 15 Apr 2026 19:38:43 +0000 fenicsproject.discourse.group-post-58439
Custom Assembly of Linear Form Hi everyone,

I have a program that numerically solves the conservative phase-field equation

\frac{\partial \phi}{\partial t} =\nabla\cdot(M(\phi)\nabla\phi).

Using a simple forward-Euler time discretization, we have

\phi^{n+1} = \phi^n - \Delta t \nabla\cdot(M(\phi^n)\nabla\phi^n).

Since the second term on the right-hand side is only non-zero in a small region of the domain (roughly corresponding to the set of points x \in \Omega for which \phi(x) \in (0.4, 0.6)), I would like to assemble the right-hand side vector conditionally - in other words, I only want to perform integration over quadrature points if \phi \in (0.4,0.6) on the given cell.

Is there a way to do this in legacy FEniCS? Find my program below. Here, the term I'm interested in corresponds to lin_form2. Also, I know there are significant problems with the program; the program itself is only a stepping stone - I'm not using it for anything serious.

import fenics as fe
import os
import numpy as np
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import matplotlib.tri as tri

from mpi4py import MPI
comm = fe.MPI.comm_world
rank = fe.MPI.rank(comm)

plt.close('all')

# Where to save the plots
WORKDIR = os.getcwd()

def plot_solution_1d(phi, mesh, t, step, outdir):
   """
   Plot and save 1D phase-field solution
   """
   # Get mesh coordinates and solution values
   x = mesh.coordinates().flatten()
   phi_vals = phi.vector().get_local()

   # Sort (important for plotting)
   idx = np.argsort(x)
   x = x[idx]
   phi_vals = phi_vals[idx]

   plt.figure(figsize=(6, 4))
   plt.plot(x, phi_vals, lw=2)
   plt.xlabel("x")
   plt.ylabel(r"$\phi$")
   plt.title(f"t = {t:.4f}")
   plt.ylim(-0.5, 1.5)
   plt.grid(True)

   filename = os.path.join(outdir, f"phi_{step:05d}.png")
   plt.savefig(filename, dpi=150, bbox_inches="tight")
   plt.close()

M_tilde = 0.005

outDirName = f"1D"

if rank == 0:
   os.makedirs(outDirName, exist_ok=True)
comm.Barrier()


class PeriodicBoundary(fe.SubDomain):
   # Left boundary is "target"
   def inside(self, x, on_boundary):
       return bool(on_boundary and fe.near(x[0], -1.0))

   # Map right boundary (x=1) to left (x=-1)
   def map(self, x, y):
       if fe.near(x[0], 1.0):
           y[0] = x[0] - 1.0
       else:
           y[0] = x[0]

pbc = PeriodicBoundary()

T = 100
N_pts = 100

mesh = fe.IntervalMesh(N_pts, -1, 1)
h = mesh.hmin()
eps = 0.7*h
dt = 0.1*h 
num_steps = int(T/dt)

V = fe.FunctionSpace(mesh, "CG", 1, constrained_domain=pbc)

# Define trial and test functions, as well as
# finite element functions at previous timesteps

f_trial = fe.TrialFunction(V)

phi_n = fe.Function(V)
mu_nP1 = fe.Function(V)

v = fe.TestFunction(V)

phi_nP1 = fe.Function(V)


# Define Allen-Cahn mobility
def mobility(phi_n):
   grad_phi_n = fe.grad(phi_n)

   abs_grad_phi_n = fe.sqrt(fe.dot(grad_phi_n, grad_phi_n) + 1e-6)
   inv_abs_grad_phi_n = 1.0 / abs_grad_phi_n

   mob = M_tilde*( 1 - 4*phi_n*(1 - phi_n)/eps * inv_abs_grad_phi_n )
   return mob


# Initialize \phi
phi_init_expr = fe.Expression(
   "0.5 * (1.0 + tanh((x[0] - x0)/(sqrt(2)*eps)))",
   degree=4,
   eps=eps,
   x0=0.0
)


phi_n = fe.project(phi_init_expr, V)

bilin_form_AC = f_trial * v * fe.dx

lin_form1 = phi_n * v * fe.dx
lin_form2 = - dt*fe.dot(fe.grad(v), mobility(phi_n)*fe.grad(phi_n))*fe.dx


phi_mat = fe.assemble(bilin_form_AC)

phi_solver = fe.KrylovSolver("cg", "hypre_amg")
phi_solver.set_operator(phi_mat)
prm = phi_solver.parameters
prm["absolute_tolerance"] = 1e-14
prm["relative_tolerance"] = 1e-8
prm["maximum_iterations"] = 1000
prm["nonzero_initial_guess"] = False

outfile = fe.XDMFFile(comm, f"{outDirName}/solution.xdmf")
outfile.parameters["flush_output"] = True
outfile.parameters["functions_share_mesh"] = True
outfile.parameters["rewrite_function_mesh"] = False

# Timestepping
t = 0.0
for n in range(num_steps):
   t += dt
   
   if n % 10 == 0:  # plot every 10 steps
       outfile.write(phi_n, t)

       if rank == 0:
           plot_solution_1d(phi_n, mesh, t, n, outDirName)

   rhs_AC = fe.assemble(lin_form1) + fe.assemble(lin_form2)
   phi_solver.solve(phi_nP1.vector(), rhs_AC)


   # Update previous solutions
   phi_n.assign(phi_nP1)
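For reference, here is what cell-wise conditional assembly of a term like lin_form2 amounts to, as a hand-rolled numpy sketch on a 1D P1 mesh. This is an illustration only, not legacy-FEniCS API: the mobility is set to 1, the tanh profile stands in for phi^n, and the (0.4, 0.6) band test is applied to the nodal values of each cell.

```python
import numpy as np

# Hand-rolled P1 assembly of b_i = -dt * \int grad(v_i) . grad(phi) dx on a
# 1D mesh, skipping every cell on which phi stays outside the band (0.4, 0.6).
n, dt = 10, 0.01
x = np.linspace(-1.0, 1.0, n + 1)
phi = 0.5 * (1.0 + np.tanh(x / 0.2))  # stand-in for phi^n
b = np.zeros(n + 1)
for c in range(n):  # loop over cells [x_c, x_{c+1}]
    if not np.any((phi[c:c + 2] > 0.4) & (phi[c:c + 2] < 0.6)):
        continue  # phi outside the band on this cell: skip its contribution
    h = x[c + 1] - x[c]
    grad_phi = (phi[c + 1] - phi[c]) / h  # constant per cell for P1
    # grad of the two hat functions on the cell is -1/h and +1/h, and the
    # cell integral multiplies by the cell length h
    b[c] += -dt * (-1.0) * grad_phi
    b[c + 1] += -dt * (+1.0) * grad_phi
```

With this profile only the two cells touching the interface at x = 0 contribute, so b is nonzero only near the interface.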
]]>
https://fenicsproject.discourse.group/t/custom-assembly-of-linear-form/19655#post_1 Wed, 15 Apr 2026 16:05:29 +0000 fenicsproject.discourse.group-post-58437
LEoPart won't compile on Mac I honestly thought the API and ABI would have changed so much since v0.7.0 or so, that Leopart-x wouldn’t work. This surprises me!

]]>
https://fenicsproject.discourse.group/t/leopart-wont-compile-on-mac/19650#post_4 Wed, 15 Apr 2026 15:36:43 +0000 fenicsproject.discourse.group-post-58436
Using mpc with ufl.MixedFunctionSpace I am eyeing moving (rotating, sliding) electric machines in 3D. Generally the sliding interface is in the air (or non-conducting) domain, which is modeled with a scalar magnetic potential (CG, Poisson equation). I believe MPC is useful here to enforce continuity. But I would like to push the boundaries by considering sliding contact between two metals, a problem beyond the commercial tools I know. Due to current flow, the sliding domains have to be treated with Ampere's law and edge elements. I would appreciate a referral to existing FEniCS examples involving such interfaces, irrespective of the context. Once I understand how to work with FEniCS, I can search for appropriate mathematical formulations on my own.

]]>
https://fenicsproject.discourse.group/t/using-mpc-with-ufl-mixedfunctionspace/19651#post_5 Wed, 15 Apr 2026 11:58:19 +0000 fenicsproject.discourse.group-post-58435
FFCX compiler files size limit with petsc.assemble_matrix Thank you very much for your reply!

Here I paste a reproducible example:

You will need the files containing the physical equations; I uploaded them to Google Drive, and they can be downloaded here (corrected version):
https://drive.google.com/drive/folders/1IhCmBS0manIEjtggFl-zs-NgmShIMoPZ?usp=sharing

I also attempted to reduce the quadrature degree, but I lost too much precision in exchange. Thanks a lot anyway for suggesting it.

import numpy as np
import os

from mpi4py import MPI
from petsc4py import PETSc

###################################################
# AUXILIARY FUNCTIONS
###################################################

######## SLEPC EIGENSOLVER #############

def eig_solver(A, ME, target, pep_params):
    from slepc4py import SLEPc
    from petsc4py import PETSc
    from mpi4py import MPI
    from dolfinx import fem

    nev, tol, max_it = pep_params
    comm = MPI.COMM_WORLD

    V_gr, dof_map_gr = ME.sub(0).collapse()
    V_em, dof_map_em = ME.sub(1).collapse()

    pep = SLEPc.PEP().create(comm)
    pep.setOperators(A)
    pep.setProblemType(SLEPc.PEP.ProblemType.GENERAL)
    pep.setDimensions(nev=nev, ncv=max(15, 2*nev + 10), mpd=max(15, 2*nev + 10))
    pep.setType(SLEPc.PEP.Type.TOAR)
    st = pep.getST()
    st.setType(SLEPc.ST.Type.SINVERT)
    ksp = st.getKSP()
    ksp.setType(PETSc.KSP.Type.PREONLY)
    pep.setRefine(ref=SLEPc.PEP.Refine.SIMPLE, npart=1, tol=0.1*tol, its=3,
                  scheme=SLEPc.PEP.RefineScheme.SCHUR)
    pep.setTarget(target)
    pep.setWhichEigenpairs(SLEPc.PEP.Which.TARGET_MAGNITUDE)
    pep.setTolerances(tol, max_it)

    # Solve
    pep.solve()

    nconv = pep.getConverged()

    all_eigvals = []
    efuns = fem.Function(ME)
    efuns_list_gr = []
    efuns_list_em = []
    error_list = []

    for i in range(nconv):
        eigval = pep.getEigenpair(i, efuns.x.petsc_vec)
        error = pep.computeError(i)
        error_list.append(error)
        efun_vals_gr = efuns.x.array[dof_map_gr]
        efun_vals_em = efuns.x.array[dof_map_em]
        efuns_list_gr.append(efun_vals_gr)
        efuns_list_em.append(efun_vals_em)
        all_eigvals.append(eigval)

    return all_eigvals, efuns_list_gr, efuns_list_em, error_list

################ SYMPY SORTING ################

def extract_qep_coeffs(eqn):
    import sympy as sp

    omega = sp.symbols('omega', complex=True)
    expanded_eqn = eqn

    K_matrix_term = expanded_eqn.subs(omega, 0)
    C_matrix_term = sp.diff(expanded_eqn, omega).subs(omega, 0)
    M_matrix_term = sp.diff(expanded_eqn, omega, 2).subs(omega, 0) / 2

    return M_matrix_term, C_matrix_term, K_matrix_term

######### SYMPY CONVERSION TO UFL #########

def sympy_to_ufl(expr, mapping):
    """
    Recursively walks a SymPy expression tree and builds a UFL expression.

    Args:
        expr: The SymPy expression node.
        mapping: A dictionary linking SymPy variable/function names to UFL objects or numbers.
    """
    import sympy as sp
    import ufl

    # 1. Base cases: numbers and constants
    if expr == sp.I:
        return 1j

    if isinstance(expr, (int, float)):
        return expr

    if isinstance(expr, sp.Number):
        return float(expr)

    # 2. Base cases: symbols and trial functions
    if isinstance(expr, sp.Symbol):
        if expr.name in mapping:
            return mapping[expr.name]
        raise ValueError(f"Symbol '{expr.name}' not found in mapping dictionary.")

    if isinstance(expr, sp.core.function.AppliedUndef):
        func_name = expr.func.__name__
        if func_name in mapping:
            return mapping[func_name]
        raise ValueError(f"Function '{func_name}' not found in mapping dictionary.")

    # 3. Recursive cases: arithmetic operations
    if isinstance(expr, sp.Add):
        return sum(sympy_to_ufl(arg, mapping) for arg in expr.args)

    if isinstance(expr, sp.Mul):
        result = 1
        for arg in expr.args:
            result = result * sympy_to_ufl(arg, mapping)
        return result

    if isinstance(expr, sp.Pow):
        base = sympy_to_ufl(expr.args[0], mapping)
        exponent = sympy_to_ufl(expr.args[1], mapping)
        return base ** exponent

    if isinstance(expr, sp.exp):
        argum = sympy_to_ufl(expr.args[0], mapping)
        return ufl.exp(argum)

    if isinstance(expr, sp.sin):
        argum = sympy_to_ufl(expr.args[0], mapping)
        return ufl.sin(argum)

    if isinstance(expr, sp.log):
        argum = sympy_to_ufl(expr.args[0], mapping)
        return ufl.ln(argum)

    if isinstance(expr, sp.cot):
        argum = sympy_to_ufl(expr.args[0], mapping)
        return ufl.cos(argum) / ufl.sin(argum)

    if isinstance(expr, sp.cos):
        argum = sympy_to_ufl(expr.args[0], mapping)
        return ufl.cos(argum)

    if isinstance(expr, sp.tan):
        argum = sympy_to_ufl(expr.args[0], mapping)
        return ufl.tan(argum)

    if isinstance(expr, sp.Abs):
        argum = sympy_to_ufl(expr.args[0], mapping)
        return ufl.algebra.Abs(argum)

    # 4. Calculus: derivatives
    if isinstance(expr, sp.Derivative):
        ufl_func = sympy_to_ufl(expr.args[0], mapping)
        for var, count in expr.variable_count:
            if var.name == 'sigma':
                dim = 0
            elif var.name == 'x_coord':
                dim = 1
            else:
                raise ValueError(f"Unknown derivative variable: {var.name}")
            # Apply the UFL derivative .dx(dim) the correct number of times
            for _ in range(count):
                ufl_func = ufl_func.dx(dim)
        return ufl_func

    # 5. Math functions
    if isinstance(expr, sp.conjugate):
        return ufl.conj(sympy_to_ufl(expr.args[0], mapping))

    # Catch-all for unsupported operations
    raise NotImplementedError(f"SymPy node type {type(expr)} is not implemented in the AST walker.")

############ PROBLEM ASSEMBLER ########################

def assembler(phys_params, poly_degree, eps, debug_mode):
    import dill as pickle
    import numpy as np
    import ufl
    from mpi4py import MPI
    from dolfinx import mesh
    from dolfinx import fem
    from dolfinx.fem.petsc import assemble_matrix
    import basix
    from basix import CellType, ElementFamily, LagrangeVariant

    # Physical parameters
    M_mass_num, alpha, q_ch = phys_params[:]
    Q_ch = q_ch * M_mass_num
    a_rot_num = alpha * M_mass_num
    r_p = M_mass_num + np.sqrt(M_mass_num**2 - a_rot_num**2 - Q_ch**2)
    lambda_param = r_p

    # Mesh creation
    N_x = 1
    N_y = 1

    # general cell type
    cell_type = CellType.quadrilateral

    # spectral element
    ufl_nodal_element = basix.ufl.element(ElementFamily.P,
                                          cell_type,
                                          poly_degree,
                                          LagrangeVariant.gll_warped)

    # mesh: we "chop off" a small eps to avoid coordinate singularities
    domain = mesh.create_rectangle(MPI.COMM_WORLD,
                                   [[eps, eps], [1 - eps, 1 - eps]],
                                   [N_x, N_y],
                                   mesh.CellType.quadrilateral)

    tdim = domain.topology.dim
    fdim = tdim - 1
    domain.topology.create_connectivity(fdim, tdim)

    # Function space: needs to be mixed since we're solving a coupled PDE system
    ME_element = basix.ufl.mixed_element([ufl_nodal_element, ufl_nodal_element])
    ME = fem.functionspace(domain, ME_element)

    # Trial and test functions
    v_test_gr, v_test_em = ufl.TestFunctions(ME)
    psi_gr, psi_em = ufl.TrialFunctions(ME)

    # Spatial coordinate
    x = ufl.SpatialCoordinate(domain)

    # create the quadrature rule and measure
    metadata = {
        "quadrature_rule": "GLL",
        "quadrature_degree": 2 * poly_degree - 1
    }
    dx_meas = ufl.dx(domain=domain, metadata=metadata)

    # Getting back the pickled symbolic equations
    with open('eq_A.pk', 'rb') as f:
        EQN_kerr_A = pickle.load(f)
    with open('eq_B.pk', 'rb') as f:
        EQN_kerr_B = pickle.load(f)

    # Extract into mass, damping and stiffness terms
    M_eq, C_eq, K_eq = extract_qep_coeffs(EQN_kerr_A)
    M_eq_b, C_eq_b, K_eq_b = extract_qep_coeffs(EQN_kerr_B)

    # Map for the UFL converter
    ufl_mapping = {
        'sigma': x[0],
        'x_coord': x[1],
        'M': M_mass_num,
        'lambda_': lambda_param,
        'psi': psi_gr,
        'Q': Q_ch,
        'm': 0,
        'psi_b': psi_em,
        'a': a_rot_num,
    }

    # Translate the SymPy expressions directly into UFL operator expressions
    M_op = sympy_to_ufl(M_eq, ufl_mapping)
    C_op = sympy_to_ufl(C_eq, ufl_mapping)
    K_op = sympy_to_ufl(K_eq, ufl_mapping)

    M_op_b = sympy_to_ufl(M_eq_b, ufl_mapping)
    C_op_b = sympy_to_ufl(C_eq_b, ufl_mapping)
    K_op_b = sympy_to_ufl(K_eq_b, ufl_mapping)

    # Build the weak forms by taking the inner product with the test functions
    M_form_gr = ufl.inner(M_op, v_test_gr) * dx_meas
    M_form_em = ufl.inner(M_op_b, v_test_em) * dx_meas

    C_form_gr = ufl.inner(C_op, v_test_gr) * dx_meas
    C_form_em = ufl.inner(C_op_b, v_test_em) * dx_meas

    K_form_gr = ufl.inner(K_op, v_test_gr) * dx_meas
    K_form_em = ufl.inner(K_op_b, v_test_em) * dx_meas

    jit_options = {"cffi_extra_compile_args": ["-O1", "-fno-inline"]}

    M_mat = assemble_matrix(fem.form(M_form_gr + M_form_em, jit_options=jit_options))
    M_mat.assemble()

    C_mat = assemble_matrix(fem.form(C_form_gr + C_form_em, jit_options=jit_options))
    C_mat.assemble()

    K_mat = assemble_matrix(fem.form(K_form_gr + K_form_em, jit_options=jit_options))
    K_mat.assemble()

    A = [K_mat, C_mat, M_mat]

    return A, ME

###########################################
############ INITIALIZE
###########################################

phys_params = [0.5, 0, 0]
poly_degree = 18
eps = 2.5e-7
debug_mode = 0

A, ME = assembler(phys_params, poly_degree, eps, debug_mode=0)

for i, mat in enumerate(A):  # PEP matrices
    viewer = PETSc.Viewer().createASCII("/dev/null")
    arr = mat.convert("dense").getDenseArray()
    print(f"A_{i}: max={np.max(np.abs(arr)):.2e}, "
          f"has_nan={np.any(np.isnan(arr))}, "
          f"has_inf={np.any(np.isinf(arr))}")

# PARAMETERS FOR SLEPC
target = 0.85
nev = 12
tol = 1e-7
max_it = 2000
pep_params = [nev, tol, max_it]

all_eigvals, efuns_list_gr, efuns_list_em, error_list = eig_solver(A, ME, target, pep_params)

print("\n--- Sorted eigenvalues ---")
print(f"Found {len(all_eigvals)} physical eigenpairs after sorting.")
for i, eigval in enumerate(all_eigvals):
    print(f"QNM {i}: {eigval.real:.6e} + {eigval.imag:.6e}i, error {error_list[i]}")
]]>
https://fenicsproject.discourse.group/t/ffcx-compiler-files-size-limit-with-petsc-assemble-matrix/19654#post_3 Wed, 15 Apr 2026 11:06:37 +0000 fenicsproject.discourse.group-post-58434
FFCX compiler files size limit with petsc.assemble_matrix We would need a reproducible example to be able to give some proper guidance here.
One thing you can try is to fix the quadrature degree to something specific (say 15), as shown in:

]]>
https://fenicsproject.discourse.group/t/ffcx-compiler-files-size-limit-with-petsc-assemble-matrix/19654#post_2 Wed, 15 Apr 2026 10:44:12 +0000 fenicsproject.discourse.group-post-58433
FFCX compiler files size limit with petsc.assemble_matrix Dear FEniCS community: I'm building a FEniCSx/DOLFINx code for a physical equation built with SymPy, which is very intricate. The physics of the equation is fine, but I need to emulate spectral methods and thus use a high polynomial degree (I then solve an eigenvalue problem via SLEPc). My problem is that above a high enough degree (let's call it p), the FFCx compiler crashes when calling dolfinx.fem.petsc.assemble_matrix, because some cache files exceed the size limit (2 GB). Please forgive my lack of expertise - I was never very good with computers :sweat_smile:

I need to get around that somehow, because for low polynomial degrees it works wonderfully, but I need to get past p to increase my accuracy. Something like 1.5*p would work.
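For a sense of scale (a back-of-the-envelope sketch, not an FFCx measurement): on a quadrilateral cell at degree p, each of the two coupled fields carries (p+1)^2 local degrees of freedom, so the local element tensor the generated kernel writes, and with it the emitted code and constant tables, grows roughly like p^4 in 2D.

```python
# Local dofs on a quadrilateral cell at degree p, for a two-field mixed space;
# the local matrix has ndofs**2 entries, hence the rough p**4 growth.
p = 18
ndofs = 2 * (p + 1) ** 2  # 722 local dofs for the mixed space
print(ndofs * ndofs)  # 521284 local matrix entries per cell
```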

Here’s a list of the things I tried and did not work at all (not even reducing my cache files by 1 kB):

  • Simplifying the symbolic equation with sp.cancel term by term

  • Assembling “smaller” matrices calling dolfinx.petsc.assemble_matrix on each equation term and then adding with axpy.

  • Toggling jit compiler options

  • Writing the weak forms with UFL expressions instead of fem.Function objects

I would be extremely grateful if anyone could suggest workarounds. Right now, I assemble the symbolic equation separately and then import it. Here's my code from that point (I paste the UFL-expressions version, which is much cleaner):

with open(f'eq_A_m_{m_mode_num}_simp.pk', 'rb') as f:  # Python 3: open(..., 'wb')
    EQN_kerr_A = pickle.load(f)
with open(f'eq_B_m_{m_mode_num}_simp.pk', 'rb') as f:  # Python 3: open(..., 'wb')
    EQN_kerr_B = pickle.load(f)   

#Sort the equation terms to make SLEPc understand my problem
M_eq, C_eq, K_eq = extract_qep_coeffs(EQN_kerr_A)
M_eq_b, C_eq_b, K_eq_b = extract_qep_coeffs(EQN_kerr_B)

ufl_mapping = {
    'sigma': x[0],
    'x_coord': x[1],
    'M': M_mass_num,
    'lambda_': lambda_param,                    
    'psi': psi_gr,
    'Q': Q_ch,
    'm': m_mode_num,
    'psi_b': psi_em,
    'a': a_rot_num,
}

# Translate the SymPy expressions directly into UFL operator expressions

M_op = sympy_to_ufl(M_eq, ufl_mapping)
C_op = sympy_to_ufl(C_eq, ufl_mapping)
K_op = sympy_to_ufl(K_eq, ufl_mapping)

M_op_b = sympy_to_ufl(M_eq_b, ufl_mapping)
C_op_b = sympy_to_ufl(C_eq_b, ufl_mapping)
K_op_b = sympy_to_ufl(K_eq_b, ufl_mapping)

# Build the weak forms by taking the inner product with the test functions
M_form_gr = ufl.inner(M_op, v_test_gr)*dx_meas
M_form_em = ufl.inner(M_op_b, v_test_em)*dx_meas

C_form_gr = ufl.inner(C_op, v_test_gr)*dx_meas
C_form_em = ufl.inner(C_op_b, v_test_em)*dx_meas

K_form_gr = ufl.inner(K_op, v_test_gr)*dx_meas
K_form_em = ufl.inner(K_op_b, v_test_em)*dx_meas

M_mat = assemble_matrix(fem.form(M_form_gr+M_form_em))
M_mat.assemble()

C_mat = assemble_matrix(fem.form(C_form_gr+C_form_em))
C_mat.assemble()

K_mat = assemble_matrix(fem.form(K_form_gr+K_form_em))
K_mat.assemble()

A = [ ]

A.append(K_mat)
A.append(C_mat)
A.append(M_mat)

return A
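For context, the omega-sorting done by extract_qep_coeffs just reads off the polynomial coefficients of the operator in omega. A self-contained SymPy sketch with a toy expression (the names here are illustrative, not from the pickled equations):

```python
import sympy as sp

omega, u = sp.symbols('omega u', complex=True)
# Toy quadratic pencil P(omega) = K + C*omega + M*omega**2
P = 3*u + 2*u*omega - 5*omega**2

K = P.subs(omega, 0)                         # constant term -> 3*u
C = sp.diff(P, omega).subs(omega, 0)         # coefficient of omega -> 2*u
M = sp.diff(P, omega, 2).subs(omega, 0) / 2  # coefficient of omega**2 -> -5
# (M, C, K) is exactly the matrix triple handed to SLEPc's PEP solver
```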
]]>
https://fenicsproject.discourse.group/t/ffcx-compiler-files-size-limit-with-petsc-assemble-matrix/19654#post_1 Wed, 15 Apr 2026 09:16:12 +0000 fenicsproject.discourse.group-post-58432
LEoPart won't compile on Mac I can reproduce this on:

Not really sure why it only happens on UFL integration tests, and not on main:

It seems one should build DOLFINx with the ADIOS2 flag enabled to avoid this error, as the following change on CI works:

]]>
https://fenicsproject.discourse.group/t/leopart-wont-compile-on-mac/19650#post_3 Wed, 15 Apr 2026 08:08:46 +0000 fenicsproject.discourse.group-post-58431
Using mpc with ufl.MixedFunctionSpace
sbhasan:

While we are on this, can I quickly ask if there are alternatives for enforcing field (dis)continuity across non-conformal interfaces?

There probably are alternatives, like Nitsche/mortar methods. But they are quite hard to implement. What specific problem are you considering?

]]>
https://fenicsproject.discourse.group/t/using-mpc-with-ufl-mixedfunctionspace/19651#post_4 Wed, 15 Apr 2026 04:51:33 +0000 fenicsproject.discourse.group-post-58430
Solve the Poisson equation on the curved domain Hello, everyone! Currently, I want to run numerical simulations on a curved domain. Since I'm not very familiar with this, I started with the simplest Poisson equation. Below is the code I used to compute it with FEniCSx.

import gmsh
from dolfinx.io import gmshio
from dolfinx.fem.petsc import LinearProblem
from mpi4py import MPI
import numpy as np
from dolfinx import fem
import ufl
from dolfinx import default_scalar_type
def solve_poisson_on_mesh(h, element_order, FE_order):
    gmsh.initialize()

    membrane = gmsh.model.occ.addDisk(0, 0, 0, 1, 1)
    gmsh.model.occ.synchronize()
    gdim = 2
    gmsh.model.addPhysicalGroup(gdim, [membrane], 1)
    gmsh.option.setNumber("Mesh.CharacteristicLengthMin", h)
    gmsh.option.setNumber("Mesh.CharacteristicLengthMax", h)
    gmsh.option.setNumber("Mesh.ElementOrder",  element_order)
    gmsh.option.setNumber("Mesh.SecondOrderLinear", 0)  
    gmsh.option.setNumber("Mesh.SecondOrderIncomplete", 0)  
    gmsh.model.mesh.generate(gdim)

    gmsh_model_rank = 0
    mesh_comm = MPI.COMM_WORLD
    domain, cell_markers, facet_markers = gmshio.model_to_mesh(gmsh.model, mesh_comm, gmsh_model_rank, gdim=gdim)

    print(f"Mesh geometry degree: {domain.geometry.cmap.degree}")
    print(f"Geometry map type: {domain.geometry.cmap.__class__.__name__}")

    print(f"Topological dimension: {domain.topology.dim}")
    print(f"Number of vertices: {domain.geometry.x.shape[0]}")
    print(f"Number of cells: {domain.topology.index_map(domain.topology.dim).size_global}")

    boundary_points = domain.geometry.x
    radii = np.sqrt(boundary_points[:, 0]**2 + boundary_points[:, 1]**2)
    on_boundary = np.isclose(radii, 1.0, atol=0.02)  
    print(f"Total number of points: {len(boundary_points)}")
    print(f"Number of boundary points: {np.sum(on_boundary)}")
    if np.sum(on_boundary) > 0:
        print(f"Boundary point radius range: [{radii[on_boundary].min():.6f}, {radii[on_boundary].max():.6f}]")


    def exact_solution(x):
        """Exact solution: w = (1 - x² - y²) * sin(πx) * cos(πy)"""
        r2 = x[0]**2 + x[1]**2
        return (1 - r2) * np.sin(np.pi*x[0]) * np.cos(np.pi*x[1])
    
    def source_term(x):
        """Manually derived source term: f = -∇²w"""
        x_val = x[0]
        y_val = x[1]
        r2 = x_val**2 + y_val**2
        
        term1 = (4 + 2*np.pi**2 * (1 - r2)) * np.sin(np.pi*x_val) * np.cos(np.pi*y_val)
        term2 = 4*np.pi*x_val * np.cos(np.pi*x_val) * np.cos(np.pi*y_val)
        term3 = -4*np.pi*y_val * np.sin(np.pi*x_val) * np.sin(np.pi*y_val)
        
        return term1 + term2 + term3

    def on_boundary(x):
        return np.isclose(np.sqrt(x[0]**2 + x[1]**2), 1)

    x = ufl.SpatialCoordinate(domain)
    V = fem.functionspace(domain, ("Lagrange", FE_order))
   # f = fem.Constant(domain, default_scalar_type(4))
    f = fem.Function(V)
    f.interpolate(source_term)


    boundary_dofs = fem.locate_dofs_geometrical(V, on_boundary)
    bc = fem.dirichletbc(default_scalar_type(0), boundary_dofs, V)
    u = ufl.TrialFunction(V)
    v = ufl.TestFunction(V)
    a = ufl.dot(ufl.grad(u), ufl.grad(v)) * ufl.dx
    L = f * v * ufl.dx
    problem = LinearProblem(a, L, bcs=[bc], petsc_options={"ksp_type": "preonly", "pc_type": "lu"})
    uh = problem.solve()

    V_exact = fem.functionspace(domain, ("Lagrange", FE_order +2))  
    u_exact_func = fem.Function(V_exact)
    u_exact_func.interpolate(exact_solution)

    error_L2 = fem.form(ufl.inner(uh - u_exact_func, uh - u_exact_func) * ufl.dx)
    L2_error = np.sqrt(fem.assemble_scalar(error_L2))

    error_H1 = fem.form(ufl.inner(uh - u_exact_func, uh - u_exact_func) * ufl.dx + 
                   ufl.inner(ufl.grad(uh - u_exact_func), ufl.grad(uh - u_exact_func)) * ufl.dx)
    H1_error = np.sqrt(fem.assemble_scalar(error_H1))
    gmsh.finalize()
    return L2_error, H1_error,h

def main():
    mesh_sizes = [0.2, 0.1, 0.05, 0.025, 0.0125, 0.00625]
    
    L2_errors = []
    H1_errors = []
    h_list = []
    
    print("Starting convergence analysis...")
    print("=" * 60)
    
    for i, h in enumerate(mesh_sizes):
        print(f"\nComputing with mesh size h = {h}")
        L2_error, H1_error, actual_h = solve_poisson_on_mesh(h, 2, 2)
        
        L2_errors.append(L2_error)
        H1_errors.append(H1_error)
        h_list.append(actual_h)
        
        print(f"  网格尺寸: {h:.3f}")
        print(f"  L2误差: {L2_error:.6e}")
        print(f"  H1误差: {H1_error:.6e}")
    
    
    print("\n" + "=" * 60)
    print("收敛阶分析:")
    print("=" * 60)
    print("网格尺寸     L2误差        H1误差        L2收敛阶    H1收敛阶")
    print("-" * 75)
    
    for i in range(len(mesh_sizes)):
        if i == 0:
            print(f"{h_list[i]:.3f}      {L2_errors[i]:.2e}    {H1_errors[i]:.2e}      -         -")
        else:
            # Compute the convergence rates
            rate_L2 = np.log(L2_errors[i-1] / L2_errors[i]) / np.log(h_list[i-1] / h_list[i])
            rate_H1 = np.log(H1_errors[i-1] / H1_errors[i]) / np.log(h_list[i-1] / h_list[i])
            
            print(f"{h_list[i]:.3f}      {L2_errors[i]:.2e}    {H1_errors[i]:.2e}      {rate_L2:.3f}       {rate_H1:.3f}")
if __name__ == "__main__":
    print("程序开始执行...")
    main()
    print("程序执行完成")

The result is:

============================================================
Convergence order analysis:
============================================================
h             L2 error        H1 error     L2 convergence order    H1 convergence order
---------------------------------------------------------------------------
0.200      2.23e-03    8.57e-02      -         -
0.100      2.98e-04    2.32e-02      2.908       1.886
0.050      3.74e-05    5.89e-03      2.992       1.976
0.025      4.70e-06    1.48e-03      2.994       1.994
0.013      5.91e-07    3.72e-04      2.990       1.992
0.006      7.41e-08    9.31e-05      2.997       1.998
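As a quick cross-check, the rates in the table can be reproduced from the error columns alone with the standard formula rate = ln(e_{i-1}/e_i) / ln(h_{i-1}/h_i) (using the nominal mesh sizes here, so the first rate differs slightly from the table, which uses the actual ones):

```python
import numpy as np

# Errors copied from the table above; nominal mesh sizes halve each step
h = np.array([0.200, 0.100, 0.050, 0.025, 0.0125, 0.00625])
eL2 = np.array([2.23e-03, 2.98e-04, 3.74e-05, 4.70e-06, 5.91e-07, 7.41e-08])
eH1 = np.array([8.57e-02, 2.32e-02, 5.89e-03, 1.48e-03, 3.72e-04, 9.31e-05])

# rate_i = ln(e_{i-1}/e_i) / ln(h_{i-1}/h_i)
rate_L2 = np.log(eL2[:-1] / eL2[1:]) / np.log(h[:-1] / h[1:])
rate_H1 = np.log(eH1[:-1] / eH1[1:]) / np.log(h[:-1] / h[1:])

print(rate_L2)  # each entry close to 3 (= p + 1 for P2 elements in L2)
print(rate_H1)  # each entry close to 2 (= p in H1)
```

The observed orders p + 1 in L2 and p in H1 are the optimal rates for degree-p Lagrange elements, so the table is consistent with theory.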

What I don’t quite understand is whether the computation on the curved domain differs from the computation on a straight-sided domain only in the Gmsh import step, or whether there are differences elsewhere as well. Is the result I obtained reasonable? And is the degree of the geometric approximation controlled by the line gmsh.option.setNumber("Mesh.ElementOrder", element_order)? Looking forward to your reply!

]]>
https://fenicsproject.discourse.group/t/solve-the-poisson-equation-on-the-curved-domain/19653#post_1 Wed, 15 Apr 2026 01:36:07 +0000 fenicsproject.discourse.group-post-58429
Using mpc with ufl.MixedFunctionSpace Thanks a lot for the quick fix! The weak form in the shared MWE is just a filler to reproduce the error. The actual problem is that of a rotating electric generator which is working with this correction.

While we are on this, can I quickly ask if there are alternatives for enforcing field (dis)continuity across non-conformal interfaces?

]]>
https://fenicsproject.discourse.group/t/using-mpc-with-ufl-mixedfunctionspace/19651#post_3 Tue, 14 Apr 2026 22:21:12 +0000 fenicsproject.discourse.group-post-58428
Dolfinx mpc contact inelastic condition with MPI I’ve located the issue and made a fix with:

With this your example gives consistent results on various sets of processes.
When I implemented the inelastic contact constraint, I didn’t account for cases where the block size of the space is not equal to the dimension of the space (scalar spaces or manifolds). This is fixed in the PR.

EDIT
I’ve made a 0.10.5 release that fixes this.

It should be on conda within a few days.

]]>
https://fenicsproject.discourse.group/t/dolfinx-mpc-contact-inelastic-condition-with-mpi/19652#post_3 Tue, 14 Apr 2026 20:22:21 +0000 fenicsproject.discourse.group-post-58427
Dolfinx mpc contact inelastic condition with MPI There seems to be a bug here, as:

import dolfinx
import dolfinx_mpc
from dolfinx.io import gmsh
import numpy as np
import ufl
from mpi4py import MPI
from dolfinx import plot

#cell tags are -> 1: disk, 2 right square, 3 left square
comm = MPI.COMM_WORLD
partitioner = (
        dolfinx.mesh.create_cell_partitioner(
            dolfinx.mesh.GhostMode.none,
        )
        if comm.Get_size() > 1
        else None
    )

mesh,ct,ft,*_ = gmsh.read_from_msh("2squares.msh",comm,partitioner=partitioner)


#%%px
dx = ufl.Measure("dx", domain=mesh, subdomain_data=ct)
disk = 1
right_square = 2
left_square = 3
left_square_contact = 104
right_square_contact = 107
tdim = mesh.topology.dim
tags_to_shift = right_square
mesh.topology.create_connectivity(tdim, 0)

sp = dolfinx.fem.functionspace(mesh, ("CG", 2))
nu0 = 1/(4*3.14*1e-7)

#dirichlet bc on left edge
bc = dolfinx.fem.dirichletbc(0.0, dolfinx.fem.locate_dofs_geometrical(sp, lambda x: np.isclose(x[0], 2.0)), sp)
mpc = dolfinx_mpc.MultiPointConstraint(sp)
mpc.create_contact_inelastic_condition(ft, left_square_contact, right_square_contact, eps2=1e-11,allow_missing_masters=True)
mpc.finalize()
print(len(mpc.slaves), len(ft.find(left_square_contact)), len(ft.find(right_square_contact)))

gives way too many slaves on 3 processes and segfaults on 2.
I’ll have a look.

]]>
https://fenicsproject.discourse.group/t/dolfinx-mpc-contact-inelastic-condition-with-mpi/19652#post_2 Tue, 14 Apr 2026 20:05:58 +0000 fenicsproject.discourse.group-post-58426
Contact between two deformable bodies
dokken:

Depending on what kind of elastic model you want for the body, it might be easier to get convergence than it was for the test in Wriggers’ paper, for which I haven’t found a satisfactory linear loading method (it probably needs an adaptive loading method).

Dear Mr Dokken,
Thank you for your suggestion regarding the contact method. It provides a very interesting perspective on handling contact between two bodies, and I find the approach very insightful.

At the moment, I am working with an elasto-plastic formulation based on the von Mises material model (Elasto-plastic analysis of a 2D von Mises material — Computational Mechanics Numerical Tours with FEniCSx). I find it challenging to directly integrate the third medium approach into this setting, particularly in terms of consistency between the energy-based formulation and the history-dependent plasticity model.

Additionally, I am using this elasto-plastic setting in an application involving contact between an electrode and a sheet in a spot-welding simulation, where both contact and material nonlinearity play important roles.

However, I will continue exploring the third medium method and attempt to integrate it into this formulation. In addition, I would greatly appreciate any recommendations you may have, based on your experience, on alternative contact approaches in FEniCSx for such problems.

Thanks again for your time and consideration.

]]>
https://fenicsproject.discourse.group/t/contact-between-two-deformable-bodies/19645#post_7 Tue, 14 Apr 2026 17:25:49 +0000 fenicsproject.discourse.group-post-58425
LEoPart won't compile on Mac LEoPart has stagnated since the main authors are now in industry. It’s unlikely they have the motivation to update it for the latest versions of DOLFIN and DOLFINx.

You can probably get it to work with an older version of DOLFINx from around the time it was last updated.
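If you go that route, a hedged sketch of pinning an older stack with conda (the exact DOLFINx version to pin is an assumption — check the date of leopart-x's last commit and pick the release from around then):

```shell
# Hypothetical: pin DOLFINx to an older release contemporary with
# leopart-x's last update (0.8.0 here is a guess, not a verified match)
conda create -n leopart-old -c conda-forge fenics-dolfinx=0.8.0 pybind11 mpich
conda activate leopart-old
pip install .   # run from the leopart-x checkout
```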

]]>
https://fenicsproject.discourse.group/t/leopart-wont-compile-on-mac/19650#post_2 Tue, 14 Apr 2026 15:43:29 +0000 fenicsproject.discourse.group-post-58424
GMRES 3D Naviers Stokes No luck with the PCD method?
Regarding the augmented Lagrangian, did you find it to be more robust with respect to the Reynolds number?
I will have a look at that article, thank you.

]]>
https://fenicsproject.discourse.group/t/gmres-3d-naviers-stokes/17636#post_16 Tue, 14 Apr 2026 14:30:34 +0000 fenicsproject.discourse.group-post-58423
Dolfinx mpc contact inelastic condition with MPI Hi, I have noticed strange behaviour with MPI when using the inelastic condition. As in the post here Dolfinx mpc overlapping constraints , I am defining a non-conformal mesh, and I would like to use the inelastic condition to enforce continuity of the potential. Without MPI everything seems fine; with MPI I get unexpected results. This is the mesh creation with gmsh:

import gmsh

gmsh.initialize()
gmsh.option.setNumber("General.Terminal", 1)
gmsh.model.add("2squares")

r = 0.1
cx, cy = 0.5, 0.6

# Create base surfaces
rect_left  = gmsh.model.occ.addRectangle(0, 0, 0, 1, 1)
disk       = gmsh.model.occ.addDisk(cx, cy, 0.0, r, r)
rect_right = gmsh.model.occ.addRectangle(1, 0.15, 0, 1, 1)

gmsh.model.occ.synchronize()

# Fragment: object = left rect, tool = disk
outDimTags, outMap = gmsh.model.occ.fragment([(2, rect_left)], [(2, disk)])
gmsh.model.occ.synchronize()


for i, (dim, ctag) in enumerate(gmsh.model.getEntities(2), start=1):
     pg = gmsh.model.addPhysicalGroup(2, [ctag], tag=i)
     gmsh.model.setPhysicalName(2, pg, f"surf_{ctag}")

for i, (dim, ctag) in enumerate(gmsh.model.getEntities(1), start=1):
     pg = gmsh.model.addPhysicalGroup(1, [ctag], tag=100 + i)
     gmsh.model.setPhysicalName(1, pg, f"line_{ctag}")
     
# Mesh and write
# Keep left-square sizes unchanged; refine only in the right square.
def right_square_refine(dim, tag, x, y, z, lc):
    if x > 1.0 and 0.0 <= y <= 1.0:
        return min(lc, 0.02)
    return lc

#make right square mesh finer to see the non conformality
field = gmsh.model.mesh.field.add("Constant")
gmsh.model.mesh.field.setNumber(field, "VIn", 0.19)   # size inside selected entities
gmsh.model.mesh.field.setNumbers(field, "SurfacesList", [rect_right])
gmsh.model.mesh.field.setAsBackgroundMesh(field)

gmsh.model.mesh.generate(2)
gmsh.model.mesh.refine()
gmsh.write("2squares.msh")
gmsh.finalize()

The mesh is non-conformal where the two squares are in contact. In this notebook cell I load and plot the mesh:

#%%px

import dolfinx
import dolfinx_mpc
from dolfinx.io import gmsh
import numpy as np
import pyvista
import ufl
from mpi4py import MPI
from dolfinx import plot

#cell tags are -> 1: disk, 2 right square, 3 left square
comm = MPI.COMM_WORLD
partitioner = (
        dolfinx.mesh.create_cell_partitioner(
            dolfinx.mesh.GhostMode.shared_facet,
        )
        if comm.Get_size() > 1
        else None
    )

mesh,ct,ft,*_ = gmsh.read_from_msh("2squares.msh",comm,partitioner=partitioner)
# Each rank builds its own mesh
owned = mesh.topology.index_map(mesh.topology.dim).size_local
ghosts = mesh.topology.index_map(mesh.topology.dim).num_ghosts
topology, cell_types, geometry = plot.vtk_mesh(mesh, mesh.topology.dim,np.arange(owned + ghosts))
mesh_data = (topology, cell_types, geometry)
all_data = comm.gather(mesh_data, root=0)
all_owned = comm.gather(owned, root=0)


if comm.rank == 0:
    plotter = pyvista.Plotter(shape=(1, comm.size))
    counter=0
    for (topology, cell_types, geometry),num_owned in zip(all_data,all_owned,strict=True):
        grid = pyvista.UnstructuredGrid(topology, cell_types, geometry)
        owned_cells = grid.extract_cells(range(num_owned))
        ghost_cells = grid.extract_cells(np.arange(num_owned, grid.n_cells))
        plotter.subplot(0, counter)
        plotter.add_mesh(owned_cells, show_edges=True)
        if ghost_cells.is_empty is False:
            plotter.add_mesh(ghost_cells, show_edges=True,color="red")
        plotter.view_xy()
        counter+=1  # noqa: SIM113

    plotter.show()

The code that I solve is the following:

#%%px
dx = ufl.Measure("dx", domain=mesh, subdomain_data=ct)
disk = 1
right_square = 2
left_square = 3
left_square_contact = 104
right_square_contact = 107
tdim = mesh.topology.dim
tags_to_shift = right_square
mesh.topology.create_connectivity(tdim, 0)

sp = dolfinx.fem.functionspace(mesh, ("CG", 2))
nu0 = 1/(4*3.14*1e-7)

#dirichlet bc on left edge
bc = dolfinx.fem.dirichletbc(0.0, dolfinx.fem.locate_dofs_geometrical(sp, lambda x: np.isclose(x[0], 2.0)), sp)
mpc = dolfinx_mpc.MultiPointConstraint(sp)
mpc.create_contact_inelastic_condition(ft, left_square_contact, right_square_contact, eps2=1e-11,allow_missing_masters=True)
mpc.finalize()

#linear form: source inside domain disk, all materials are linear with nu0
a = nu0*ufl.inner(ufl.grad(ufl.TrialFunction(sp)), ufl.grad(ufl.TestFunction(sp)))*ufl.dx
L= 10000.0*ufl.TestFunction(sp)*dx(disk)

#solve linear problem
problem = dolfinx_mpc.LinearProblem(a,L, mpc=mpc,bcs=[bc])
sol_lin= problem.solve()

Without MPI, I get the solution I expect:

#%%px
owned = mesh.topology.index_map(mesh.topology.dim).size_local # “How many cells do I own on this MPI rank?”
ghosts = mesh.topology.index_map(mesh.topology.dim).num_ghosts # “How many ghost cells do I own on this MPI rank?”
topology, cell_types, geometry = plot.vtk_mesh(mpc.function_space,np.arange(owned + ghosts))
mesh_data = (topology, cell_types, geometry)
all_data = comm.gather(mesh_data, root=0)
all_owned = comm.gather(owned, root=0)
all_sol = comm.gather(sol_lin.x.array[:] , root=0)
if comm.rank == 0:
    plotter = pyvista.Plotter(shape=(1, comm.size))
    counter=0
    for (topology, cell_types, geometry),num_owned,sol in zip(all_data,all_owned,all_sol,strict=True):
        grid = pyvista.UnstructuredGrid(topology, cell_types, geometry)
        grid.point_data["u"] = sol
        grid.set_active_scalars("u")
        levels = np.linspace(0, 6e-4, 25)
        iso_lin = grid.contour(isosurfaces=levels, scalars="u")
        owned_cells = grid.extract_cells(range(num_owned))
        ghost_cells = grid.extract_cells(np.arange(num_owned, grid.n_cells))
        plotter.subplot(0, counter)
        plotter.add_mesh(owned_cells, show_edges=True)
        plotter.add_mesh(iso_lin, color="white", line_width=2)
        if ghost_cells.is_empty is False:
            plotter.add_mesh(ghost_cells, show_edges=True,color="red")

        plotter.view_xy()
        counter+=1

    plotter.show()

If I use MPI, I get the following partition of the mesh (ghost cells in red):

and mpc.create_contact_inelastic_condition causes an MPI_ABORT with a CG2 function space, while with CG1 it gives different results:

Am I missing something?

Mattia

]]>
https://fenicsproject.discourse.group/t/dolfinx-mpc-contact-inelastic-condition-with-mpi/19652#post_1 Tue, 14 Apr 2026 14:19:08 +0000 fenicsproject.discourse.group-post-58422
GMRES 3D Naviers Stokes Hi, I’ve managed to simulate incompressible flows at Re ~ 1000–5000 using a modified augmented Lagrangian method (O. Marquet has an excellent article on this), but I haven’t been able to go any higher than that. I didn’t try with a turbulence model. I’m now focusing on compressible flows with gmres

]]>
https://fenicsproject.discourse.group/t/gmres-3d-naviers-stokes/17636#post_15 Tue, 14 Apr 2026 14:15:57 +0000 fenicsproject.discourse.group-post-58421
Using mpc with ufl.MixedFunctionSpace There are several mistakes in your code.

This is not correct; you should use ufl.system and then ufl.extract_blocks.

You need to pass in an empty constraint for the second variable.

Here is something that runs:

import dolfinx, gmsh, ufl, mpi4py, scifem
import numpy as np
import dolfinx_mpc

gmsh.initialize()
gmsh.option.setNumber("General.Terminal", 0)
gmsh.clear()
# gmsh.model.add("generator_2d")

occ = gmsh.model.occ
gdim = 2

b1 = occ.addRectangle(0, 0, 0, 0.5, 1)
b2 = occ.addRectangle(0.5, 0, 0, 0.5, 1)
occ.synchronize()

all_doms = gmsh.model.getEntities(gdim)
for j, dom in enumerate(all_doms):
    gmsh.model.addPhysicalGroup(dom[0], [dom[1]], j + 1)  # create the main group/node

# number all boundaries
all_edges = gmsh.model.getEntities(gdim - 1)
for j, edge in enumerate(all_edges):
    gmsh.model.addPhysicalGroup(edge[0], [edge[1]], edge[1])  # create the main group/node

gmsh.model.mesh.generate(gdim)
# for _ in range(refine):
#     gmsh.model.mesh.refine()
# gmsh.model.mesh.setOrder(fem_order)
gmsh.model.mesh.optimize()

mpi_rank = 0
dolfinx_model = dolfinx.io.gmsh.model_to_mesh(gmsh.model, mpi4py.MPI.COMM_WORLD, mpi_rank, gdim)
mesh, ct, ft = dolfinx_model.mesh, dolfinx_model.cell_tags, dolfinx_model.facet_tags

interface_bnd_idx = [2, 8] # by visual inspection of the mesh

Omega_A = dolfinx.fem.functionspace(mesh, ("CG", 2))
Omega_r = scifem.create_real_functionspace(mesh)

# mpc
mpc = dolfinx_mpc.MultiPointConstraint(Omega_A)
tol = float(1e-8)
mpc.create_contact_inelastic_condition(
    ft, interface_bnd_idx[0], interface_bnd_idx[1], eps2=tol)
mpc.finalize()
mpc_r  = dolfinx_mpc.MultiPointConstraint(Omega_r)
mpc_r.finalize()

W = ufl.MixedFunctionSpace(mpc.function_space, Omega_r)
A, Vc = ufl.TrialFunctions(W)
At, Vct = ufl.TestFunctions(W)

dx = ufl.Measure("dx", domain=mesh, subdomain_data=ct)

zero = dolfinx.fem.Constant(mesh, dolfinx.default_scalar_type(0))
F = ufl.inner(A, At)*dx + ufl.inner(Vc, Vct)*dx + ufl.inner(A, Vct)*dx + ufl.inner(Vc, At)*dx
one = dolfinx.fem.Constant(mesh, dolfinx.default_scalar_type(1))
F += ufl.inner(one, Vct)*dx + ufl.inner(zero, At)*dx

left_dofs = dolfinx.fem.locate_dofs_geometrical(Omega_A, lambda x: np.isclose(x[0], 0))
dbc = dolfinx.fem.dirichletbc(1.0, left_dofs, Omega_A)

a_block, L_block = ufl.system(F)
a_form =  ufl.extract_blocks(a_block)
L_form = ufl.extract_blocks(L_block)

# solve the linear model
petsc_options = {"pc_type": "lu", "pc_factor_mat_solver_type": "mumps", "ksp_error_if_not_converged": True,"ksp_monitor": None,
                 "ksp_type": "preonly"}
problem = dolfinx_mpc.problem.LinearProblem(a_form, L_form, [mpc,mpc_r], bcs=[dbc], petsc_options=petsc_options)
A_sol = problem.solve()
print(A_sol[0].x.array, A_sol[1].x.array)

Note that I changed the volume force from 0 to 1.
I am still not sure whether this is the expected solution of the problem.
I would need the full variational form (in mathematical syntax) to verify what you are doing.

]]>
https://fenicsproject.discourse.group/t/using-mpc-with-ufl-mixedfunctionspace/19651#post_2 Tue, 14 Apr 2026 14:15:19 +0000 fenicsproject.discourse.group-post-58420
Using mpc with ufl.MixedFunctionSpace I am enforcing continuity across a non-conformal interface using MPC and need to enforce further constraints with real elements. Following the advice here related to the main branch, I constructed a model, but it seems to run into problems with form compilation. I would appreciate it if someone familiar with this could point out my mistake. The MWE below (dolfinx 0.10, conda build) reproduces the error:

import dolfinx, gmsh, ufl, mpi4py, petsc4py, pyvista, scifem
import numpy as np
import matplotlib.pyplot as plt
import dolfinx_mpc

gmsh.initialize()
gmsh.option.setNumber("General.Terminal", 0)
gmsh.clear()
# gmsh.model.add("generator_2d")

occ = gmsh.model.occ
gdim = 2

b1 = occ.addRectangle(0, 0, 0, 0.5, 1)
b2 = occ.addRectangle(0.5, 0, 0, 0.5, 1)
occ.synchronize()

all_doms = gmsh.model.getEntities(gdim)
for j, dom in enumerate(all_doms):
    gmsh.model.addPhysicalGroup(dom[0], [dom[1]], j + 1)  # create the main group/node

# number all boundaries
all_edges = gmsh.model.getEntities(gdim - 1)
for j, edge in enumerate(all_edges):
    gmsh.model.addPhysicalGroup(edge[0], [edge[1]], edge[1])  # create the main group/node

gmsh.model.mesh.generate(gdim)
# for _ in range(refine):
#     gmsh.model.mesh.refine()
# gmsh.model.mesh.setOrder(fem_order)
gmsh.model.mesh.optimize()

mpi_rank = 0
dolfinx_model = dolfinx.io.gmsh.model_to_mesh(gmsh.model, mpi4py.MPI.COMM_WORLD, mpi_rank, gdim)
mesh, ct, ft = dolfinx_model.mesh, dolfinx_model.cell_tags, dolfinx_model.facet_tags

interface_bnd_idx = [2, 8] # by visual inspection of the mesh

Omega_A = dolfinx.fem.functionspace(mesh, ("CG", 2))
Omega_r = scifem.create_real_functionspace(mesh)

# mpc
mpc = dolfinx_mpc.MultiPointConstraint(Omega_A)
tol = float(1e-8)
mpc.create_contact_inelastic_condition(
    ft, interface_bnd_idx[0], interface_bnd_idx[1], eps2=tol)
mpc.finalize()

W = ufl.MixedFunctionSpace(mpc.function_space, Omega_r)
A, Vc = ufl.TrialFunctions(W)
At, Vct = ufl.TestFunctions(W)

dx = ufl.Measure("dx", domain=mesh, subdomain_data=ct)

zero = dolfinx.fem.Constant(mesh, dolfinx.default_scalar_type(0))
F = ufl.inner(A, At)*dx + ufl.inner(Vc, Vct)*dx + ufl.inner(A, Vct)*dx + ufl.inner(Vc, At)*dx
F += ufl.inner(zero, Vct)*dx + ufl.inner(zero, At)*dx

left_dofs = dolfinx.fem.locate_dofs_geometrical(Omega_A, lambda x: np.isclose(x[0], 0))
dbc = dolfinx.fem.dirichletbc(1.0, left_dofs, Omega_A)

a_block, L_block = ufl.extract_blocks(F)
a_form = dolfinx.fem.form(a_block)
L_form = dolfinx.fem.form(L_block)

# solve the linear model
petsc_options = {"pc_type": "lu", "pc_factor_mat_solver_type": "mumps"}
problem = dolfinx_mpc.problem.LinearProblem(a_form, L_form, [mpc, None], bcs=[dbc], petsc_options=petsc_options)
A_sol = problem.solve()

The form compiler produces a long error, of which I suspect the following part is relevant:

---------------------------------------------------------------------------
ArityMismatch                             Traceback (most recent call last)
Cell In[17], line 64
     61 dbc = dolfinx.fem.dirichletbc(1.0, left_dofs, Omega_A)
     63 a_block, L_block = ufl.extract_blocks(F)
---> 64 a_form = dolfinx.fem.form(a_block)
     65 L_form = dolfinx.fem.form(L_block)
     67 # solve the linear model

File ~/install/miniforge3/envs/fenics-0.10/lib/python3.11/site-packages/dolfinx/fem/forms.py:449, in form(form, dtype, form_compiler_options, jit_options, entity_maps)
    446     else:
    447         return form
--> 449 return _create_form(form)

File ~/install/miniforge3/envs/fenics-0.10/lib/python3.11/site-packages/dolfinx/fem/forms.py:445, in form.<locals>._create_form(form)
    443     return _zero_form(form)
    444 elif isinstance(form, collections.abc.Iterable):
--> 445     return list(map(lambda sub_form: _create_form(sub_form), form))
    446 else:
    447     return form

File ~/install/miniforge3/envs/fenics-0.10/lib/python3.11/site-packages/dolfinx/fem/forms.py:445, in form.<locals>._create_form.<locals>.<lambda>(sub_form)
    443     return _zero_form(form)
    444 elif isinstance(form, collections.abc.Iterable):
--> 445     return list(map(lambda sub_form: _create_form(sub_form), form))
...
     60         f"{tuple(map(_afmt, a))} vs {tuple(map(_afmt, b))}."
     61     )
     62 return a

ArityMismatch: Adding expressions with non-matching form arguments ('v_0^0',) vs ('v_0^0', 'v_1^0').
]]>
https://fenicsproject.discourse.group/t/using-mpc-with-ufl-mixedfunctionspace/19651#post_1 Tue, 14 Apr 2026 11:56:29 +0000 fenicsproject.discourse.group-post-58419
LEoPart won't compile on Mac (I’m not sure if this is the correct place to ask about this - I’ve seen that someone else has posted an issue on the LEoPart GitHub, but I wanted to see if anyone has figured it out here.)

I’m trying to install LEoPart on my Mac, but every time I do, I get an error from the compiler (pasted at the bottom of the question).

I’ve done a clean install of everything, including macOS, so I don’t know if the computer is the problem. Here’s what the computer is running:

  • Apple M1 Macbook Pro
  • MacOS (Tahoe 26.3.1)
  • Conda-forge (26.1.1-3)
  • Python (3.14.4)
  • Pip3 (26.0.1)
  • CMake (4.3.1)
  • FEniCSx (0.1.0) installed with conda-forge
  • fenicsx-basix == 0.10.0
  • fenics-ffcx == 0.10.1
  • fenics-ufl == 2025.2.1
  • pybind11 == 3.0.3
  • clang == 22.1.0 - although I think LEoPart forces clang 20.something which is also installed
  • LEoPart-x (from their github, most recent commit 1ced51b)

Here’s the code I ran to install it:


conda create --name fenicsx-env fenics-dolfinx=0.10.0 fenics-basix=0.10.0 fenics-ufl=2025.2.1 fenics-ffcx=0.10.1 pybind11=3.0.3 mpich pyvista gcc

conda activate fenicsx-env

git clone https://github.com/LEoPart-project/leopart-x.git

cd leopart-x

pip3 install .

Here’s the error message:


(fenicsx-env11) Lev1@ leopart-x % pip3 install . 
Processing ./.
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Installing backend dependencies ... done
  Preparing metadata (pyproject.toml) ... done
Building wheels for collected packages: leopart
  Building wheel for leopart (pyproject.toml) ... error
  error: subprocess-exited-with-error
  
  × Building wheel for leopart (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [150 lines of output]
      WARNING: Use build.verbose instead of cmake.verbose for scikit-build-core >= 0.10
      2026-04-10 08:32:50,030 - scikit_build_core - INFO - RUN: /private/var/folders/w5/4jh2nv210wl3r04h8gxn_69h0000gp/T/pip-build-env-ytz9capg/normal/lib/python3.11/site-packages/cmake/data/bin/cmake -E capabilities
      2026-04-10 08:32:50,039 - scikit_build_core - INFO - CMake version: 4.3.1
      *** scikit-build-core 0.12.2 using CMake 4.3.1 (wheel)
      2026-04-10 08:32:50,040 - scikit_build_core - INFO - Implementation: cpython darwin on arm64
      2026-04-10 08:32:50,043 - scikit_build_core - INFO - Build directory: /Users/Lev1/Desktop/PlasmaPhysics/leopart-x/build
      2026-04-10 08:32:50,045 - scikit_build_core - INFO - New isolated environment /private/var/folders/w5/4jh2nv210wl3r04h8gxn_69h0000gp/T/pip-build-env-3v_4h5pd/overlay/lib/python3.14/site-packages/scikit_build_core -> /private/var/folders/w5/4jh2nv210wl3r04h8gxn_69h0000gp/T/pip-build-env-ytz9capg/overlay/lib/python3.11/site-packages/scikit_build_core, clearing cache
      *** Configuring CMake...
      2026-04-10 08:32:50,050 - scikit_build_core - WARNING - Unsupported CMAKE_ARGS ignored: -DCMAKE_BUILD_TYPE=Release
      fatal error: lipo: can't open input file: ninja (No such file or directory)
      2026-04-10 08:32:50,074 - scikit_build_core - INFO - RUN: ninja --version
      2026-04-10 08:32:50,397 - scikit_build_core - INFO - Ninja version: 1.13.0
      2026-04-10 08:32:50,399 - scikit_build_core - WARNING - Unsupported CMAKE_ARGS ignored: -DCMAKE_BUILD_TYPE=Release
      2026-04-10 08:32:50,399 - scikit_build_core - INFO - RUN: /private/var/folders/w5/4jh2nv210wl3r04h8gxn_69h0000gp/T/pip-build-env-ytz9capg/normal/lib/python3.11/site-packages/cmake/data/bin/cmake -S. -Bbuild -DCMAKE_BUILD_TYPE:STRING=Release -Cbuild/CMakeInit.txt -DCMAKE_INSTALL_PREFIX=/var/folders/w5/4jh2nv210wl3r04h8gxn_69h0000gp/T/tmp4ippbae2/wheel/platlib -DCMAKE_MAKE_PROGRAM=ninja -DCMAKE_AR=/Users/Lev1/miniforge3/envs/fenicsx-env11/bin/arm64-apple-darwin20.0.0-ar -DCMAKE_CXX_COMPILER_AR=/Users/Lev1/miniforge3/envs/fenicsx-env11/bin/arm64-apple-darwin20.0.0-ar -DCMAKE_C_COMPILER_AR=/Users/Lev1/miniforge3/envs/fenicsx-env11/bin/arm64-apple-darwin20.0.0-ar -DCMAKE_RANLIB=/Users/Lev1/miniforge3/envs/fenicsx-env11/bin/arm64-apple-darwin20.0.0-ranlib -DCMAKE_CXX_COMPILER_RANLIB=/Users/Lev1/miniforge3/envs/fenicsx-env11/bin/arm64-apple-darwin20.0.0-ranlib -DCMAKE_C_COMPILER_RANLIB=/Users/Lev1/miniforge3/envs/fenicsx-env11/bin/arm64-apple-darwin20.0.0-ranlib -DCMAKE_LINKER=/Users/Lev1/miniforge3/envs/fenicsx-env11/bin/arm64-apple-darwin20.0.0-ld -DCMAKE_STRIP=/Users/Lev1/miniforge3/envs/fenicsx-env11/bin/arm64-apple-darwin20.0.0-strip -DCMAKE_INSTALL_NAME_TOOL=/Users/Lev1/miniforge3/envs/fenicsx-env11/bin/arm64-apple-darwin20.0.0-install_name_tool -DCMAKE_LIBTOOL=/Users/Lev1/miniforge3/envs/fenicsx-env11/bin/arm64-apple-darwin20.0.0-libtool -DCMAKE_OSX_SYSROOT=/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk
      loading initial cache file build/CMakeInit.txt
      -- The C compiler identification is Clang 19.1.7
      -- The CXX compiler identification is Clang 19.1.7
      -- Detecting C compiler ABI info
      -- Detecting C compiler ABI info - done
      -- Check for working C compiler: /Users/Lev1/miniforge3/envs/fenicsx-env11/bin/arm64-apple-darwin20.0.0-clang - skipped
      -- Detecting C compile features
      -- Detecting C compile features - done
      -- Detecting CXX compiler ABI info
      -- Detecting CXX compiler ABI info - done
      -- Check for working CXX compiler: /Users/Lev1/miniforge3/envs/fenicsx-env11/bin/arm64-apple-darwin20.0.0-clang++ - skipped
      -- Detecting CXX compile features
      -- Detecting CXX compile features - done
      -- Found MPI_C: /Users/Lev1/miniforge3/envs/fenicsx-env11/lib/libmpi.dylib (found version "3.1")
      -- Found MPI_CXX: /Users/Lev1/miniforge3/envs/fenicsx-env11/lib/libmpi.dylib (found version "3.1")
      -- Found MPI: TRUE (found version "3.1")
      -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
      -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
      -- Found Threads: TRUE
      -- Found Boost 1.88.0 at /Users/Lev1/miniforge3/envs/fenicsx-env11/lib/cmake/Boost-1.88.0
      --   Requested configuration: REQUIRED
      -- Found boost_headers 1.88.0 at /Users/Lev1/miniforge3/envs/fenicsx-env11/lib/cmake/boost_headers-1.88.0
      -- HDF5 C compiler wrapper is unable to compile a minimal HDF5 program.
      -- Found HDF5: /Users/Lev1/miniforge3/envs/fenicsx-env11/lib/libhdf5.dylib (found version "1.14.6") found components: C
      -- HDF5_DIR: HDF5_DIR-NOTFOUND
      -- HDF5_DEFINITIONS:
      -- HDF5_INCLUDE_DIRS: /Users/Lev1/miniforge3/envs/fenicsx-env11/include
      -- HDF5_LIBRARIES: /Users/Lev1/miniforge3/envs/fenicsx-env11/lib/libhdf5.dylib
      -- HDF5_HL_LIBRARIES:
      -- HDF5_C_DEFINITIONS:
      -- HDF5_C_INCLUDE_DIR: /Users/Lev1/miniforge3/envs/fenicsx-env11/include
      -- HDF5_C_INCLUDE_DIRS: /Users/Lev1/miniforge3/envs/fenicsx-env11/include
      -- HDF5_C_LIBRARY:
      -- HDF5_C_LIBRARIES: /Users/Lev1/miniforge3/envs/fenicsx-env11/lib/libhdf5.dylib
      -- HDF5_C_HL_LIBRARY:
      -- HDF5_C_HL_LIBRARIES:
      -- Defined targets (if any):
      -- ... hdf5::hdf5
      -- Found PkgConfig: /Users/Lev1/miniforge3/envs/fenicsx-env11/bin/pkg-config (found version "0.29.2")
      -- Checking for one of the modules 'PETSc;petsc'
      -- Checking for one of the modules 'SLEPc;slepc'
      -- Found ADIOS2: /Users/Lev1/miniforge3/envs/fenicsx-env11/lib/cmake/adios2/adios2-config.cmake (found suitable version "2.11.0", minimum required is "2.8.1") found components: CXX
      -- Found Python3: /Users/Lev1/miniforge3/envs/fenicsx-env11/bin/python3.11 (found version "3.11.15") found components: Interpreter Development.Module
      -- Performing Test HAS_FLTO_THIN
      -- Performing Test HAS_FLTO_THIN - Success
      -- Performing Test HAS_FLTO
      -- Performing Test HAS_FLTO - Success
      -- Found pybind11: /private/var/folders/w5/4jh2nv210wl3r04h8gxn_69h0000gp/T/pip-build-env-ytz9capg/overlay/lib/python3.11/site-packages/pybind11/include (found version "3.0.3")
      -- Configuring done (16.0s)
      -- Generating done (0.0s)
      CMake Warning:
        Manually-specified variables were not used by the project:
      
          CMAKE_LIBTOOL
      
      
      CMake Error:
        Error evaluating generator expression:
      
          $<TARGET_OBJECTS:adios2sys_objects>
      
        Objects of target "adios2sys_objects" referenced but no such target exists.
      
      
      -- Build files have been written to: /Users/Lev1/Desktop/PlasmaPhysics/leopart-x/build
      2026-04-10 08:33:06,407 - scikit_build_core - WARNING - Unsupported CMAKE_ARGS ignored: -DCMAKE_BUILD_TYPE=Release
      *** Building project with Ninja...
      2026-04-10 08:33:06,407 - scikit_build_core - INFO - RUN: /private/var/folders/w5/4jh2nv210wl3r04h8gxn_69h0000gp/T/pip-build-env-ytz9capg/normal/lib/python3.11/site-packages/cmake/data/bin/cmake --build build -v
      Change Dir: '/Users/Lev1/Desktop/PlasmaPhysics/leopart-x/build'
      
      Run Build Command(s): ninja -v
      [1/4] /Users/Lev1/miniforge3/envs/fenicsx-env11/bin/arm64-apple-darwin20.0.0-clang++ -DADIOS2_USE_MPI -DDOLFINX_VERSION=\"0.10.0\" -DFMT_SHARED -DHAS_ADIOS2 -DHAS_KAHIP -DHAS_PARMETIS -DHAS_PETSC -DHAS_PTSCOTCH -DHAS_SLEPC -DMDSPAN_USE_BRACKET_OPERATOR=0 -DMDSPAN_USE_PAREN_OPERATOR=1 -DSPDLOG_COMPILED_LIB -DSPDLOG_FMT_EXTERNAL -DSPDLOG_SHARED_LIB -Dcpp_EXPORTS -I/Users/Lev1/Desktop/PlasmaPhysics/leopart-x/leopart/cpp -isystem /Users/Lev1/miniforge3/envs/fenicsx-env11/include/python3.11 -isystem /private/var/folders/w5/4jh2nv210wl3r04h8gxn_69h0000gp/T/pip-build-env-ytz9capg/overlay/lib/python3.11/site-packages/pybind11/include -ftree-vectorize -fPIC -fstack-protector-strong -O2 -pipe -stdlib=libc++ -fvisibility-inlines-hidden -fmessage-length=0 -isystem /Users/Lev1/miniforge3/envs/fenicsx-env11/include -O3 -DNDEBUG -std=gnu++23 -arch arm64 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -fPIC -fvisibility=hidden -Wno-comment -Wall -Wextra -pedantic -Werror -Wfatal-errors -flto -MD -MT CMakeFiles/cpp.dir/leopart/cpp/Particles.cpp.o -MF CMakeFiles/cpp.dir/leopart/cpp/Particles.cpp.o.d -o CMakeFiles/cpp.dir/leopart/cpp/Particles.cpp.o -c /Users/Lev1/Desktop/PlasmaPhysics/leopart-x/leopart/cpp/Particles.cpp
      FAILED: [code=1] CMakeFiles/cpp.dir/leopart/cpp/Particles.cpp.o
      /Users/Lev1/miniforge3/envs/fenicsx-env11/bin/arm64-apple-darwin20.0.0-clang++ -DADIOS2_USE_MPI -DDOLFINX_VERSION=\"0.10.0\" -DFMT_SHARED -DHAS_ADIOS2 -DHAS_KAHIP -DHAS_PARMETIS -DHAS_PETSC -DHAS_PTSCOTCH -DHAS_SLEPC -DMDSPAN_USE_BRACKET_OPERATOR=0 -DMDSPAN_USE_PAREN_OPERATOR=1 -DSPDLOG_COMPILED_LIB -DSPDLOG_FMT_EXTERNAL -DSPDLOG_SHARED_LIB -Dcpp_EXPORTS -I/Users/Lev1/Desktop/PlasmaPhysics/leopart-x/leopart/cpp -isystem /Users/Lev1/miniforge3/envs/fenicsx-env11/include/python3.11 -isystem /private/var/folders/w5/4jh2nv210wl3r04h8gxn_69h0000gp/T/pip-build-env-ytz9capg/overlay/lib/python3.11/site-packages/pybind11/include -ftree-vectorize -fPIC -fstack-protector-strong -O2 -pipe -stdlib=libc++ -fvisibility-inlines-hidden -fmessage-length=0 -isystem /Users/Lev1/miniforge3/envs/fenicsx-env11/include -O3 -DNDEBUG -std=gnu++23 -arch arm64 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -fPIC -fvisibility=hidden -Wno-comment -Wall -Wextra -pedantic -Werror -Wfatal-errors -flto -MD -MT CMakeFiles/cpp.dir/leopart/cpp/Particles.cpp.o -MF CMakeFiles/cpp.dir/leopart/cpp/Particles.cpp.o.d -o CMakeFiles/cpp.dir/leopart/cpp/Particles.cpp.o -c /Users/Lev1/Desktop/PlasmaPhysics/leopart-x/leopart/cpp/Particles.cpp
      In file included from /Users/Lev1/Desktop/PlasmaPhysics/leopart-x/leopart/cpp/Particles.cpp:7:
      In file included from /Users/Lev1/miniforge3/envs/fenicsx-env11/include/dolfinx.h:10:
      In file included from /Users/Lev1/miniforge3/envs/fenicsx-env11/include/dolfinx/common/dolfinx_common.h:13:
      In file included from /Users/Lev1/miniforge3/envs/fenicsx-env11/include/dolfinx/common/MPI.h:9:
      In file included from /Users/Lev1/miniforge3/envs/fenicsx-env11/include/dolfinx/common/Timer.h:10:
      In file included from /Users/Lev1/miniforge3/envs/fenicsx-env11/include/dolfinx/common/TimeLogger.h:10:
      In file included from /Users/Lev1/miniforge3/envs/fenicsx-env11/include/dolfinx/common/timing.h:10:
      In file included from /Users/Lev1/miniforge3/envs/fenicsx-env11/bin/../include/c++/v1/chrono:969:
      In file included from /Users/Lev1/miniforge3/envs/fenicsx-env11/bin/../include/c++/v1/__chrono/formatter.h:25:
      In file included from /Users/Lev1/miniforge3/envs/fenicsx-env11/bin/../include/c++/v1/__chrono/ostream.h:33:
      In file included from /Users/Lev1/miniforge3/envs/fenicsx-env11/bin/../include/c++/v1/__format/format_functions.h:29:
      In file included from /Users/Lev1/miniforge3/envs/fenicsx-env11/bin/../include/c++/v1/__format/formatter_floating_point.h:38:
      In file included from /Users/Lev1/miniforge3/envs/fenicsx-env11/bin/../include/c++/v1/cmath:328:
      In file included from /Users/Lev1/Desktop/PlasmaPhysics/leopart-x/leopart/cpp/math.h:8:
      In file included from /Users/Lev1/miniforge3/envs/fenicsx-env11/include/basix/math.h:9:
      In file included from /Users/Lev1/miniforge3/envs/fenicsx-env11/include/basix/mdspan.hpp:5510:
      /Users/Lev1/miniforge3/envs/fenicsx-env11/bin/../include/c++/v1/complex:938:10: fatal error: no member named 'hypot' in namespace 'std'; did you mean '__math::hypot'?
        938 |   return std::hypot(__c.real(), __c.imag());
            |          ^~~~~
      /Users/Lev1/miniforge3/envs/fenicsx-env11/bin/../include/c++/v1/__math/hypot.h:35:36: note: '__math::hypot' declared here
         35 | inline _LIBCPP_HIDE_FROM_ABI float hypot(float __x, float __y) _NOEXCEPT { return __builtin_hypotf(__x, __y); }
            |                                    ^
      1 error generated.
      [2/4] /Users/Lev1/miniforge3/envs/fenicsx-env11/bin/arm64-apple-darwin20.0.0-clang++ -DADIOS2_USE_MPI -DDOLFINX_VERSION=\"0.10.0\" -DFMT_SHARED -DHAS_ADIOS2 -DHAS_KAHIP -DHAS_PARMETIS -DHAS_PETSC -DHAS_PTSCOTCH -DHAS_SLEPC -DMDSPAN_USE_BRACKET_OPERATOR=0 -DMDSPAN_USE_PAREN_OPERATOR=1 -DSPDLOG_COMPILED_LIB -DSPDLOG_FMT_EXTERNAL -DSPDLOG_SHARED_LIB -Dcpp_EXPORTS -I/Users/Lev1/Desktop/PlasmaPhysics/leopart-x/leopart/cpp -isystem /Users/Lev1/miniforge3/envs/fenicsx-env11/include/python3.11 -isystem /private/var/folders/w5/4jh2nv210wl3r04h8gxn_69h0000gp/T/pip-build-env-ytz9capg/overlay/lib/python3.11/site-packages/pybind11/include -ftree-vectorize -fPIC -fstack-protector-strong -O2 -pipe -stdlib=libc++ -fvisibility-inlines-hidden -fmessage-length=0 -isystem /Users/Lev1/miniforge3/envs/fenicsx-env11/include -O3 -DNDEBUG -std=gnu++23 -arch arm64 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -fPIC -fvisibility=hidden -Wno-comment -Wall -Wextra -pedantic -Werror -Wfatal-errors -flto -MD -MT CMakeFiles/cpp.dir/leopart/cpp/external/quadprog_mdspan/QuadProg++.cc.o -MF CMakeFiles/cpp.dir/leopart/cpp/external/quadprog_mdspan/QuadProg++.cc.o.d -o CMakeFiles/cpp.dir/leopart/cpp/external/quadprog_mdspan/QuadProg++.cc.o -c /Users/Lev1/Desktop/PlasmaPhysics/leopart-x/leopart/cpp/external/quadprog_mdspan/QuadProg++.cc
      FAILED: [code=1] CMakeFiles/cpp.dir/leopart/cpp/external/quadprog_mdspan/QuadProg++.cc.o
      /Users/Lev1/miniforge3/envs/fenicsx-env11/bin/arm64-apple-darwin20.0.0-clang++ -DADIOS2_USE_MPI -DDOLFINX_VERSION=\"0.10.0\" -DFMT_SHARED -DHAS_ADIOS2 -DHAS_KAHIP -DHAS_PARMETIS -DHAS_PETSC -DHAS_PTSCOTCH -DHAS_SLEPC -DMDSPAN_USE_BRACKET_OPERATOR=0 -DMDSPAN_USE_PAREN_OPERATOR=1 -DSPDLOG_COMPILED_LIB -DSPDLOG_FMT_EXTERNAL -DSPDLOG_SHARED_LIB -Dcpp_EXPORTS -I/Users/Lev1/Desktop/PlasmaPhysics/leopart-x/leopart/cpp -isystem /Users/Lev1/miniforge3/envs/fenicsx-env11/include/python3.11 -isystem /private/var/folders/w5/4jh2nv210wl3r04h8gxn_69h0000gp/T/pip-build-env-ytz9capg/overlay/lib/python3.11/site-packages/pybind11/include -ftree-vectorize -fPIC -fstack-protector-strong -O2 -pipe -stdlib=libc++ -fvisibility-inlines-hidden -fmessage-length=0 -isystem /Users/Lev1/miniforge3/envs/fenicsx-env11/include -O3 -DNDEBUG -std=gnu++23 -arch arm64 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -fPIC -fvisibility=hidden -Wno-comment -Wall -Wextra -pedantic -Werror -Wfatal-errors -flto -MD -MT CMakeFiles/cpp.dir/leopart/cpp/external/quadprog_mdspan/QuadProg++.cc.o -MF CMakeFiles/cpp.dir/leopart/cpp/external/quadprog_mdspan/QuadProg++.cc.o.d -o CMakeFiles/cpp.dir/leopart/cpp/external/quadprog_mdspan/QuadProg++.cc.o -c /Users/Lev1/Desktop/PlasmaPhysics/leopart-x/leopart/cpp/external/quadprog_mdspan/QuadProg++.cc
      In file included from /Users/Lev1/Desktop/PlasmaPhysics/leopart-x/leopart/cpp/external/quadprog_mdspan/QuadProg++.cc:16:
      In file included from /Users/Lev1/Desktop/PlasmaPhysics/leopart-x/leopart/cpp/external/quadprog_mdspan/QuadProg++.hh:68:
      In file included from /Users/Lev1/miniforge3/envs/fenicsx-env11/include/basix/mdspan.hpp:5510:
      In file included from /Users/Lev1/miniforge3/envs/fenicsx-env11/bin/../include/c++/v1/complex:266:
      In file included from /Users/Lev1/miniforge3/envs/fenicsx-env11/bin/../include/c++/v1/cmath:328:
      In file included from /Users/Lev1/Desktop/PlasmaPhysics/leopart-x/leopart/cpp/math.h:8:
      /Users/Lev1/miniforge3/envs/fenicsx-env11/include/basix/math.h:191:9: fatal error: no member named 'mdarray' in namespace 'std::experimental'
        191 |   mdex::mdarray<T, md::dextents<std::size_t, 2>, md::layout_left> _A(
            |   ~~~~~~^
      1 error generated.
      [3/4] /Users/Lev1/miniforge3/envs/fenicsx-env11/bin/arm64-apple-darwin20.0.0-clang++ -DADIOS2_USE_MPI -DDOLFINX_VERSION=\"0.10.0\" -DFMT_SHARED -DHAS_ADIOS2 -DHAS_KAHIP -DHAS_PARMETIS -DHAS_PETSC -DHAS_PTSCOTCH -DHAS_SLEPC -DMDSPAN_USE_BRACKET_OPERATOR=0 -DMDSPAN_USE_PAREN_OPERATOR=1 -DSPDLOG_COMPILED_LIB -DSPDLOG_FMT_EXTERNAL -DSPDLOG_SHARED_LIB -Dcpp_EXPORTS -I/Users/Lev1/Desktop/PlasmaPhysics/leopart-x/leopart/cpp -isystem /Users/Lev1/miniforge3/envs/fenicsx-env11/include/python3.11 -isystem /private/var/folders/w5/4jh2nv210wl3r04h8gxn_69h0000gp/T/pip-build-env-ytz9capg/overlay/lib/python3.11/site-packages/pybind11/include -ftree-vectorize -fPIC -fstack-protector-strong -O2 -pipe -stdlib=libc++ -fvisibility-inlines-hidden -fmessage-length=0 -isystem /Users/Lev1/miniforge3/envs/fenicsx-env11/include -O3 -DNDEBUG -std=gnu++23 -arch arm64 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -fPIC -fvisibility=hidden -Wno-comment -Wall -Wextra -pedantic -Werror -Wfatal-errors -flto -MD -MT CMakeFiles/cpp.dir/leopart/cpp/wrapper.cpp.o -MF CMakeFiles/cpp.dir/leopart/cpp/wrapper.cpp.o.d -o CMakeFiles/cpp.dir/leopart/cpp/wrapper.cpp.o -c /Users/Lev1/Desktop/PlasmaPhysics/leopart-x/leopart/cpp/wrapper.cpp
      FAILED: [code=1] CMakeFiles/cpp.dir/leopart/cpp/wrapper.cpp.o
      /Users/Lev1/miniforge3/envs/fenicsx-env11/bin/arm64-apple-darwin20.0.0-clang++ -DADIOS2_USE_MPI -DDOLFINX_VERSION=\"0.10.0\" -DFMT_SHARED -DHAS_ADIOS2 -DHAS_KAHIP -DHAS_PARMETIS -DHAS_PETSC -DHAS_PTSCOTCH -DHAS_SLEPC -DMDSPAN_USE_BRACKET_OPERATOR=0 -DMDSPAN_USE_PAREN_OPERATOR=1 -DSPDLOG_COMPILED_LIB -DSPDLOG_FMT_EXTERNAL -DSPDLOG_SHARED_LIB -Dcpp_EXPORTS -I/Users/Lev1/Desktop/PlasmaPhysics/leopart-x/leopart/cpp -isystem /Users/Lev1/miniforge3/envs/fenicsx-env11/include/python3.11 -isystem /private/var/folders/w5/4jh2nv210wl3r04h8gxn_69h0000gp/T/pip-build-env-ytz9capg/overlay/lib/python3.11/site-packages/pybind11/include -ftree-vectorize -fPIC -fstack-protector-strong -O2 -pipe -stdlib=libc++ -fvisibility-inlines-hidden -fmessage-length=0 -isystem /Users/Lev1/miniforge3/envs/fenicsx-env11/include -O3 -DNDEBUG -std=gnu++23 -arch arm64 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -fPIC -fvisibility=hidden -Wno-comment -Wall -Wextra -pedantic -Werror -Wfatal-errors -flto -MD -MT CMakeFiles/cpp.dir/leopart/cpp/wrapper.cpp.o -MF CMakeFiles/cpp.dir/leopart/cpp/wrapper.cpp.o.d -o CMakeFiles/cpp.dir/leopart/cpp/wrapper.cpp.o -c /Users/Lev1/Desktop/PlasmaPhysics/leopart-x/leopart/cpp/wrapper.cpp
      In file included from /Users/Lev1/Desktop/PlasmaPhysics/leopart-x/leopart/cpp/wrapper.cpp:8:
      In file included from /Users/Lev1/Desktop/PlasmaPhysics/leopart-x/leopart/cpp/advect.h:8:
      In file included from /Users/Lev1/miniforge3/envs/fenicsx-env11/bin/../include/c++/v1/iostream:44:
      In file included from /Users/Lev1/miniforge3/envs/fenicsx-env11/bin/../include/c++/v1/ostream:180:
      In file included from /Users/Lev1/miniforge3/envs/fenicsx-env11/bin/../include/c++/v1/__ostream/print.h:16:
      In file included from /Users/Lev1/miniforge3/envs/fenicsx-env11/bin/../include/c++/v1/format:205:
      In file included from /Users/Lev1/miniforge3/envs/fenicsx-env11/bin/../include/c++/v1/__format/format_functions.h:29:
      In file included from /Users/Lev1/miniforge3/envs/fenicsx-env11/bin/../include/c++/v1/__format/formatter_floating_point.h:38:
      In file included from /Users/Lev1/miniforge3/envs/fenicsx-env11/bin/../include/c++/v1/cmath:328:
      In file included from /Users/Lev1/Desktop/PlasmaPhysics/leopart-x/leopart/cpp/math.h:8:
      In file included from /Users/Lev1/miniforge3/envs/fenicsx-env11/include/basix/math.h:9:
      In file included from /Users/Lev1/miniforge3/envs/fenicsx-env11/include/basix/mdspan.hpp:5510:
      /Users/Lev1/miniforge3/envs/fenicsx-env11/bin/../include/c++/v1/complex:938:10: fatal error: no member named 'hypot' in namespace 'std'; did you mean '__math::hypot'?
        938 |   return std::hypot(__c.real(), __c.imag());
            |          ^~~~~
      /Users/Lev1/miniforge3/envs/fenicsx-env11/bin/../include/c++/v1/__math/hypot.h:35:36: note: '__math::hypot' declared here
         35 | inline _LIBCPP_HIDE_FROM_ABI float hypot(float __x, float __y) _NOEXCEPT { return __builtin_hypotf(__x, __y); }
            |                                    ^
      1 error generated.
      ninja: build stopped: subcommand failed.
      
      
      *** CMake build failed
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for leopart
Failed to build leopart
error: failed-wheel-build-for-install

× Failed to build installable wheels for some pyproject.toml based projects
╰─> leopart
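A possible reading of the log above (an assumption, not a confirmed diagnosis): the include chains show libc++'s `<cmath>` (line 328) pulling in the project's own `leopart/cpp/math.h`, because the build passes `-I/Users/Lev1/Desktop/PlasmaPhysics/leopart-x/leopart/cpp` and `<cmath>` internally does `#include <math.h>`, so the project header shadows the C standard one. The sketch below shows a generic way to check a source tree for such collisions; the directory layout, file contents, and `find_shadowing_headers` helper are hypothetical.

```python
# Hedged sketch: scan an include directory for files whose names collide
# with C/C++ standard headers, which can be picked up instead of the
# system copies when the directory is on an -I path. Layout and names
# below are hypothetical, modelled on the leopart-x log above.
from pathlib import Path

# A few headers the standard library expects to resolve to system copies;
# a same-named project file on an -I path can shadow them.
STANDARD_HEADERS = {"math.h", "complex.h", "limits.h", "new", "version"}

def find_shadowing_headers(include_dir: Path) -> list[Path]:
    """Return project files whose names collide with standard headers."""
    return sorted(p for p in include_dir.rglob("*")
                  if p.is_file() and p.name in STANDARD_HEADERS)

# Hypothetical reproduction of the project layout from the log:
root = Path("demo/leopart/cpp")
root.mkdir(parents=True, exist_ok=True)
(root / "math.h").write_text("// project-local math helpers\n")
(root / "Particles.h").write_text("// no collision with a standard header\n")

print(find_shadowing_headers(Path("demo")))
```

If such a collision is the cause, renaming the project header (and its includes) so it no longer matches `math.h` would be one way to sidestep it.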
]]>
https://fenicsproject.discourse.group/t/leopart-wont-compile-on-mac/19650#post_1 Tue, 14 Apr 2026 00:45:58 +0000 fenicsproject.discourse.group-post-58418
GMRES 3D Naviers Stokes Hi, did you find a solution to this? I was having a similar issue recently. I tried the PCD preconditioner, but it started performing really badly for higher-Reynolds flows, and I need a preconditioner that can handle a problem in the range of a million.
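As a hedged sketch (not from this thread): a common alternative to PCD is PETSc's Schur-complement fieldsplit preconditioner, configured purely through solver options. The option names below are standard PETSc, but the tolerance and sub-solver choices are illustrative assumptions, and the `as_cli_args` helper is hypothetical.

```python
# Sketch of PETSc options for a Schur-complement fieldsplit
# preconditioner on a (velocity, pressure) saddle-point system.
# Whether this holds up at high Reynolds number depends heavily on the
# discretisation; treat every value here as a starting point.
petsc_options = {
    "ksp_type": "fgmres",            # flexible GMRES: tolerates inexact inner solves
    "ksp_rtol": 1e-8,
    "pc_type": "fieldsplit",
    "pc_fieldsplit_type": "schur",   # block factorisation of the saddle-point system
    "pc_fieldsplit_schur_fact_type": "full",
    "pc_fieldsplit_schur_precondition": "selfp",  # approximate Schur complement
    "fieldsplit_0_ksp_type": "preonly",
    "fieldsplit_0_pc_type": "gamg",  # algebraic multigrid on the velocity block
    "fieldsplit_1_ksp_type": "preonly",
    "fieldsplit_1_pc_type": "jacobi",
}

def as_cli_args(options: dict) -> list[str]:
    """Flatten an options dict into PETSc command-line style arguments."""
    args = []
    for key, value in options.items():
        args.append(f"-{key}")
        if value is not None:        # bare flags (value None) take no argument
            args.append(str(value))
    return args

print(" ".join(as_cli_args(petsc_options)))
```

The same dict can be handed to petsc4py/DOLFINx solver wrappers that accept an options prefix, or passed on the command line in the flattened form printed above.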

]]>
https://fenicsproject.discourse.group/t/gmres-3d-naviers-stokes/17636#post_14 Mon, 13 Apr 2026 21:52:15 +0000 fenicsproject.discourse.group-post-58416
Incompressible (Steady) Navier Stokes equations with FEniCS - Preconditioners Hi,

I have been facing similar issues recently. Did you manage to find an optimal solution?

]]>
https://fenicsproject.discourse.group/t/incompressible-steady-navier-stokes-equations-with-fenics-preconditioners/13328#post_2 Mon, 13 Apr 2026 21:46:42 +0000 fenicsproject.discourse.group-post-58415
Announcing FEniCS 2026: 17-19 June 2026 at Paris, at the University of Chicago | John W. Boyer Center, France A quick question that may be of interest to everyone: will there be recommended accommodation for conference-goers?
Cheers!

]]>
https://fenicsproject.discourse.group/t/announcing-fenics-2026-17-19-june-2026-at-paris-at-the-university-of-chicago-john-w-boyer-center-france/19613#post_4 Mon, 13 Apr 2026 19:40:18 +0000 fenicsproject.discourse.group-post-58414