
Conversation

@JordiManyer (Collaborator) commented Jul 3, 2025

Adds support and solvers for the H1-H1 and HDiv-HDiv discretizations of the MHD equations.

Requires:

  • Gridap#polytopal
  • GridapDistributed#polytopal
  • GridapSolvers#develop
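
For reference, the sketch below shows how the new H1-H1 path is selected from the hunt driver. It is adapted from the MPI command exercised later in this thread (and, like it, is meant to be launched with mpirun -n 4); the option names and values are taken from that command, and the comments reflect how they are used there.

using GridapPETSc, SparseMatricesCSR, SparseArrays
using GridapMHD: hunt

hunt(
  nc = (4,4),                  # coarse cells per direction
  np = (2,2),                  # MPI partition
  backend = :mpi,
  L = 1.0,
  B = (0.0, 50.0, 0.0),
  vtk = true,
  title = "hunt-H1H1-gmg",
  solver = Dict(
    :solver        => :h1h1blocks,   # block solver for the H1-H1 formulation
    :matrix_type   => SparseMatrixCSC{Float64,Int},
    :vector_type   => Vector{Float64},
    :block_solvers => [:gmg, :julia, :julia],
    :petsc_options => "-ksp_error_if_not_converged true -ksp_converged_reason",
  ),
  fluid_disc   = :Qk_dPkm1,    # Qk velocity with discontinuous P(k-1) pressure
  current_disc = :H1,          # together with fluid_disc, selects the H1H1 formulation
  ranks_per_level = [(2,2,1), (2,2,1)],  # 2-level hierarchy for the GMG block
  ζᵤ = 10.0,
)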

@codecov-commenter commented Jul 9, 2025


Codecov Report

❌ Patch coverage is 0% with 913 lines in your changes missing coverage. Please review.
✅ Project coverage is 0.00%. Comparing base (41193d6) to head (1370e08).
⚠️ Report is 108 commits behind head on master.

Files with missing lines Patch % Lines
src/weakforms.jl 0.00% 400 Missing ⚠️
src/Solvers/gmg.jl 0.00% 189 Missing ⚠️
src/fespaces.jl 0.00% 80 Missing ⚠️
src/parameters.jl 0.00% 67 Missing ⚠️
src/Applications/cavity.jl 0.00% 58 Missing ⚠️
src/gridap_extras.jl 0.00% 28 Missing ⚠️
src/Solvers/h1h1blocks.jl 0.00% 26 Missing ⚠️
src/geometry.jl 0.00% 23 Missing ⚠️
src/Applications/hunt.jl 0.00% 20 Missing ⚠️
src/Meshers/hunt_mesher.jl 0.00% 11 Missing ⚠️
... and 4 more
Additional details and impacted files
@@           Coverage Diff            @@
##           master     #48     +/-   ##
========================================
  Coverage    0.00%   0.00%             
========================================
  Files          19      27      +8     
  Lines        2220    3396   +1176     
========================================
- Misses       2220    3396   +1176     
Flag Coverage Δ
unittests 0.00% <0.00%> (ø)

Flags with carried forward coverage won't be shown.


@principejavier (Collaborator)

I'm getting an error with GMG in the Hunt problem. The command

mpirun -n 4 julia --project=. -e 'using GridapPETSc; using SparseMatricesCSR; using SparseArrays;  using GridapMHD:hunt;
  hunt(
    nc=(4,4),
    np=(2,2),
    backend=:mpi,
    L=1.0,
    B=(0.,50.,0.),
    debug=false,
    vtk=true,
    title="hunt-H1H1-gmg",
    solver= Dict(
       :solver => :h1h1blocks,
       :matrix_type    => SparseMatrixCSC{Float64,Int},
       :vector_type    => Vector{Float64},
       :block_solvers  => [:gmg,:julia,:julia],
       :petsc_options => "-ksp_error_if_not_converged true -ksp_converged_reason"
       ),
    fluid_disc = :Qk_dPkm1,
    current_disc = :H1,
    ranks_per_level = [(2,2,1),(2,2,1)],
    ζᵤ = 10.0
  )
'

gives this output

...
[7.805e+00 s in MAIN] fe_spaces
┌ Error: 
│   exception =
│    ArgumentError: Collection is empty, must contain exactly 1 element
│    Stacktrace:
│      [1] _only
│        @ ./iterators.jl:1550 [inlined]
│      [2] only
│        @ ./iterators.jl:1545 [inlined]
│      [3] #1
│        @ ~/.julia/packages/GridapSolvers/gqvqX/src/PatchBasedSmoothers/PatchTransferOperators.jl:25 [inlined]
│      [4] iterate
│        @ ./generator.jl:48 [inlined]
│      [5] _collect(c::Gridap.Arrays.Table{Int64, Vector{Int64}, Vector{Int64}}, itr::Base.Generator{Gridap.Arrays.Table{Int64, Vector{Int64}, Vector{Int64}}, GridapSolvers.PatchBasedSmoothers.var"#1#4"{Gridap.Arrays.Table{Int32, Vector{Int32}, Vector{Int32}}, GridapSolvers.PatchBasedSmoothers.var"#is_interior#3"{Vector{Int32}}}}, ::Base.EltypeUnknown, isz::Base.HasShape{1})
│        @ Base ./array.jl:800
│      [6] collect_similar(cont::Gridap.Arrays.Table{Int64, Vector{Int64}, Vector{Int64}}, itr::Base.Generator{Gridap.Arrays.Table{Int64, Vector{Int64}, Vector{Int64}}, GridapSolvers.PatchBasedSmoothers.var"#1#4"{Gridap.Arrays.Table{Int32, Vector{Int32}, Vector{Int32}}, GridapSolvers.PatchBasedSmoothers.var"#is_interior#3"{Vector{Int32}}}})
│        @ Base ./array.jl:709
│      [7] map(f::Function, A::Gridap.Arrays.Table{Int64, Vector{Int64}, Vector{Int64}})
│        @ Base ./abstractarray.jl:3371
│      [8] CoarsePatchTopology(model::Gridap.Adaptivity.AdaptedDiscreteModel{3, 3, Gridap.Geometry.CartesianDiscreteModel{3, Float64, GridapMHD.Meshers.var"#map1#19"{Float64, Float64, Float64}}, Gridap.Geometry.CartesianDiscreteModel{3, Float64, GridapMHD.Meshers.var"#map1#19"{Float64, Float64, Float64}}, Gridap.Adaptivity.AdaptivityGlue{Gridap.Adaptivity.RefinementGlue, 3, Vector{Vector{Int64}}, Vector{Int64}, FillArrays.Fill{Gridap.Adaptivity.RefinementRule{Gridap.ReferenceFEs.ExtrusionPolytope{3}}, 1, Tuple{Base.OneTo{Int64}}}, Gridap.Arrays.Table{Int64, Vector{Int64}, Vector{Int64}}, FillArrays.Fill{Bool, 1, Tuple{Base.OneTo{Int64}}}}}, coarse_ids::Vector{Int32})
│        @ GridapSolvers.PatchBasedSmoothers ~/.julia/packages/GridapSolvers/gqvqX/src/PatchBasedSmoothers/PatchTransferOperators.jl:23
│      [9] #6
│        @ ~/.julia/packages/GridapSolvers/gqvqX/src/PatchBasedSmoothers/PatchTransferOperators.jl:36 [inlined]
...

@JordiManyer am I doing something wrong? Full log:

Details
mpirun -n 4 julia --project=. -e 'using GridapPETSc; using SparseMatricesCSR; using SparseArrays;  using GridapMHD:hunt;
  hunt(
    nc=(4,4),
    np=(2,2),
    backend=:mpi,
    L=1.0,
    B=(0.,50.,0.),
    debug=false,
    vtk=true,
    title="hunt-H1H1-gmg",
    solver= Dict(
       :solver => :h1h1blocks,
       :matrix_type    => SparseMatrixCSC{Float64,Int},
       :vector_type    => Vector{Float64},
       :block_solvers  => [:gmg,:julia,:julia],
       :petsc_options => "-ksp_error_if_not_converged true -ksp_converged_reason"
       ),
    fluid_disc = :Qk_dPkm1,
    current_disc = :H1,
    ranks_per_level = [(2,2,1),(2,2,1)],
    ζᵤ = 10.0
  )
'
No protocol specified
┌ Info: 2-level CartesianModelHierarchy:
│   > Level 1: 8x8x3 CartesianDescriptor distributed in 2x2x1 ranks
└   > Level 2: 4x4x3 CartesianDescriptor distributed in 2x2x1 ranks
[8.786e+00 s in MAIN] pre_process
┌ Info: Parameters :
│   > x0 => zero
│   > debug => false
│   > multigrid => {
│     > tests => {
│     }
│     > trials => {
│     }
│     > num_refs_coarse => 0
│   }
│   > jac_assemble => false
│   > bcs => {
│     > φ => {
│       > values => 0.0
│       > tags => conducting
│     }
│     > j => {
│       > values => (0.0, 0.0, 0.0)
│       > tags => insulating
│     }
│     > u => {
│       > values => (0.0, 0.0, 0.0)
│       > tags => noslip
│     }
│   }
│   > solve => true
│   > solid => nothing
│   > fespaces => {
│     > φ_conformity => H1
│     > fluid_disc => Qk_dPkm1
│     > φ_constraint => nothing
│     > u_conformity => H1
│     > p_constraint => nothing
│     > order_u => 2
│     > order_φ => 2
│     > j_conformity => L2
│     > k => 2
│     > order_p => 1
│     > rt_scaling => nothing
│     > q => 5
│     > current_disc => H1
│     > p_conformity => L2
│     > order_j => 2
│     > formulation => H1H1
│   }
│   > solver => {
│     > petsc_options => -ksp_error_if_not_converged true -ksp_converged_reason
│     > rtol => 1.0e-10
│     > atol => 1.0e-8
│     > solver => h1h1blocks
│     > niter => 20
│     > niter_ls => 15
│   }
│   > fluid => {
│     > α => 1.0
│     > B => (0.0, 1.0, 0.0)
│     > domain => nothing
│     > μ => 10.0
│     > σ => 1.0
│     > convection => newton
│     > ζⱼ => 0.0
│     > g => (0.0, 0.0, 0.0)
│     > β => 1.0
│     > ζᵤ => 10.0
│     > γ => 2500.0
│     > f => (0.0, 0.0, 1.0)
│     > divg => 0.0
│   }
│   > check_valid => true
│   > continuation => nothing
│   > transient => nothing
│   > res_assemble => false
[7.805e+00 s in MAIN] fe_spaces
┌ Error: 
│   exception =
│    ArgumentError: Collection is empty, must contain exactly 1 element
│    Stacktrace:
│      [1] _only
│        @ ./iterators.jl:1550 [inlined]
│      [2] only
│        @ ./iterators.jl:1545 [inlined]
│      [3] #1
│        @ ~/.julia/packages/GridapSolvers/gqvqX/src/PatchBasedSmoothers/PatchTransferOperators.jl:25 [inlined]
│      [4] iterate
│        @ ./generator.jl:48 [inlined]
│      [5] _collect(c::Gridap.Arrays.Table{Int64, Vector{Int64}, Vector{Int64}}, itr::Base.Generator{Gridap.Arrays.Table{Int64, Vector{Int64}, Vector{Int64}}, GridapSolvers.PatchBasedSmoothers.var"#1#4"{Gridap.Arrays.Table{Int32, Vector{Int32}, Vector{Int32}}, GridapSolvers.PatchBasedSmoothers.var"#is_interior#3"{Vector{Int32}}}}, ::Base.EltypeUnknown, isz::Base.HasShape{1})
│        @ Base ./array.jl:800
│      [6] collect_similar(cont::Gridap.Arrays.Table{Int64, Vector{Int64}, Vector{Int64}}, itr::Base.Generator{Gridap.Arrays.Table{Int64, Vector{Int64}, Vector{Int64}}, GridapSolvers.PatchBasedSmoothers.var"#1#4"{Gridap.Arrays.Table{Int32, Vector{Int32}, Vector{Int32}}, GridapSolvers.PatchBasedSmoothers.var"#is_interior#3"{Vector{Int32}}}})
│        @ Base ./array.jl:709
│      [7] map(f::Function, A::Gridap.Arrays.Table{Int64, Vector{Int64}, Vector{Int64}})
│        @ Base ./abstractarray.jl:3371
│      [8] CoarsePatchTopology(model::Gridap.Adaptivity.AdaptedDiscreteModel{3, 3, Gridap.Geometry.CartesianDiscreteModel{3, Float64, GridapMHD.Meshers.var"#map1#19"{Float64, Float64, Float64}}, Gridap.Geometry.CartesianDiscreteModel{3, Float64, GridapMHD.Meshers.var"#map1#19"{Float64, Float64, Float64}}, Gridap.Adaptivity.AdaptivityGlue{Gridap.Adaptivity.RefinementGlue, 3, Vector{Vector{Int64}}, Vector{Int64}, FillArrays.Fill{Gridap.Adaptivity.RefinementRule{Gridap.ReferenceFEs.ExtrusionPolytope{3}}, 1, Tuple{Base.OneTo{Int64}}}, Gridap.Arrays.Table{Int64, Vector{Int64}, Vector{Int64}}, FillArrays.Fill{Bool, 1, Tuple{Base.OneTo{Int64}}}}}, coarse_ids::Vector{Int32})
│        @ GridapSolvers.PatchBasedSmoothers ~/.julia/packages/GridapSolvers/gqvqX/src/PatchBasedSmoothers/PatchTransferOperators.jl:23
│      [9] #6
│        @ ~/.julia/packages/GridapSolvers/gqvqX/src/PatchBasedSmoothers/PatchTransferOperators.jl:36 [inlined]
│     [10] map(::GridapSolvers.PatchBasedSmoothers.var"#6#7", ::PartitionedArrays.MPIArray{Gridap.Adaptivity.AdaptedDiscreteModel{3, 3, Gridap.Geometry.CartesianDiscreteModel{3, Float64, GridapMHD.Meshers.var"#map1#19"{Float64, Float64, Float64}}, Gridap.Geometry.CartesianDiscreteModel{3, Float64, GridapMHD.Meshers.var"#map1#19"{Float64, Float64, Float64}}, Gridap.Adaptivity.AdaptivityGlue{Gridap.Adaptivity.RefinementGlue, 3, Vector{Vector{Int64}}, Vector{Int64}, FillArrays.Fill{Gridap.Adaptivity.RefinementRule{Gridap.ReferenceFEs.ExtrusionPolytope{3}}, 1, Tuple{Base.OneTo{Int64}}}, Gridap.Arrays.Table{Int64, Vector{Int64}, Vector{Int64}}, FillArrays.Fill{Bool, 1, Tuple{Base.OneTo{Int64}}}}}, 1}, ::PartitionedArrays.MPIArray{PartitionedArrays.PermutedLocalIndices{PartitionedArrays.LocalIndicesWithConstantBlockSize{3}}, 1})
│        @ PartitionedArrays ~/.julia/packages/PartitionedArrays/py6uo/src/mpi_array.jl:229
│     [11] CoarsePatchTopology(model::GridapDistributed.GenericDistributedDiscreteModel{3, 3, PartitionedArrays.MPIArray{Gridap.Adaptivity.AdaptedDiscreteModel{3, 3, Gridap.Geometry.CartesianDiscreteModel{3, Float64, GridapMHD.Meshers.var"#map1#19"{Float64, Float64, Float64}}, Gridap.Geometry.CartesianDiscreteModel{3, Float64, GridapMHD.Meshers.var"#map1#19"{Float64, Float64, Float64}}, Gridap.Adaptivity.AdaptivityGlue{Gridap.Adaptivity.RefinementGlue, 3, Vector{Vector{Int64}}, Vector{Int64}, FillArrays.Fill{Gridap.Adaptivity.RefinementRule{Gridap.ReferenceFEs.ExtrusionPolytope{3}}, 1, Tuple{Base.OneTo{Int64}}}, Gridap.Arrays.Table{Int64, Vector{Int64}, Vector{Int64}}, FillArrays.Fill{Bool, 1, Tuple{Base.OneTo{Int64}}}}}, 1}, Vector{PartitionedArrays.PRange}, GridapDistributed.DistributedAdaptedDiscreteModelCache{GridapDistributed.DistributedCartesianDescriptor{PartitionedArrays.MPIArray{Int64, 1}, Tuple{Int64, Int64, Int64}, Gridap.Geometry.CartesianDescriptor{3, Float64, GridapMHD.Meshers.var"#map1#19"{Float64, Float64, Float64}}}, GridapDistributed.DistributedCartesianDescriptor{PartitionedArrays.MPIArray{Int64, 1}, Tuple{Int64, Int64, Int64}, Gridap.Geometry.CartesianDescriptor{3, Float64, GridapMHD.Meshers.var"#map1#19"{Float64, Float64, Float64}}}, PartitionedArrays.PRange{PartitionedArrays.MPIArray{PartitionedArrays.PermutedLocalIndices{PartitionedArrays.LocalIndicesWithConstantBlockSize{3}}, 1}}}})
│        @ GridapSolvers.PatchBasedSmoothers ~/.julia/packages/GridapSolvers/gqvqX/src/PatchBasedSmoothers/PatchTransferOperators.jl:35
│     [12] (::GridapMHD.var"#73#74"{GridapSolvers.MultilevelTools.FESpaceHierarchy{Vector{GridapSolvers.MultilevelTools.FESpaceHierarchyLevel{A, Nothing} where A}, Vector{Union{GridapDistributed.MPIVoidVector{Int64}, PartitionedArrays.MPIArray{Int64, 1}}}, GridapSolvers.MultilevelTools.FESpaceHierarchyLevel{A, Nothing} where A}, GridapMHD.var"#weakform#76"{Dict{Symbol, Any}}})(lev::Int64)
│        @ GridapMHD ~/Prog/GridapMHD.jl/src/Solvers/gmg.jl:66
│     [13] (::GridapSolvers.MultilevelTools.var"#11#12"{GridapMHD.var"#73#74"{GridapSolvers.MultilevelTools.FESpaceHierarchy{Vector{GridapSolvers.MultilevelTools.FESpaceHierarchyLevel{A, Nothing} where A}, Vector{Union{GridapDistributed.MPIVoidVector{Int64}, PartitionedArrays.MPIArray{Int64, 1}}}, GridapSolvers.MultilevelTools.FESpaceHierarchyLevel{A, Nothing} where A}, GridapMHD.var"#weakform#76"{Dict{Symbol, Any}}}})(ranks::PartitionedArrays.MPIArray{Int64, 1}, ai::Int64)
│        @ GridapSolvers.MultilevelTools ~/.julia/packages/GridapSolvers/gqvqX/src/MultilevelTools/HierarchicalArrays.jl:109
│     [14] #4
│        @ ./generator.jl:37 [inlined]
│     [15] iterate
│        @ ./generator.jl:48 [inlined]
│     [16] collect(itr::Base.Generator{Base.Iterators.Zip{Tuple{SubArray{Union{GridapDistributed.MPIVoidVector{Int64}, PartitionedArrays.MPIArray{Int64, 1}}, 1, Vector{Union{GridapDistributed.MPIVoidVector{Int64}, PartitionedArrays.MPIArray{Int64, 1}}}, Tuple{UnitRange{Int64}}, true}, SubArray{Int64, 1, LinearIndices{1, Tuple{Base.OneTo{Int64}}}, Tuple{UnitRange{Int64}}, true}}}, Base.var"#4#5"{GridapSolvers.MultilevelTools.var"#11#12"{GridapMHD.var"#73#74"{GridapSolvers.MultilevelTools.FESpaceHierarchy{Vector{GridapSolvers.MultilevelTools.FESpaceHierarchyLevel{A, Nothing} where A}, Vector{Union{GridapDistributed.MPIVoidVector{Int64}, PartitionedArrays.MPIArray{Int64, 1}}}, GridapSolvers.MultilevelTools.FESpaceHierarchyLevel{A, Nothing} where A}, GridapMHD.var"#weakform#76"{Dict{Symbol, Any}}}}}})
│        @ Base ./array.jl:780
│     [17] map
│        @ ./abstractarray.jl:3495 [inlined]
│     [18] map(f::GridapMHD.var"#73#74"{GridapSolvers.MultilevelTools.FESpaceHierarchy{Vector{GridapSolvers.MultilevelTools.FESpaceHierarchyLevel{A, Nothing} where A}, Vector{Union{GridapDistributed.MPIVoidVector{Int64}, PartitionedArrays.MPIArray{Int64, 1}}}, GridapSolvers.MultilevelTools.FESpaceHierarchyLevel{A, Nothing} where A}, GridapMHD.var"#weakform#76"{Dict{Symbol, Any}}}, a::GridapSolvers.MultilevelTools.HierarchicalArray{Int64, SubArray{Int64, 1, LinearIndices{1, Tuple{Base.OneTo{Int64}}}, Tuple{UnitRange{Int64}}, true}, SubArray{Union{GridapDistributed.MPIVoidVector{Int64}, PartitionedArrays.MPIArray{Int64, 1}}, 1, Vector{Union{GridapDistributed.MPIVoidVector{Int64}, PartitionedArrays.MPIArray{Int64, 1}}}, Tuple{UnitRange{Int64}}, true}})
│        @ GridapSolvers.MultilevelTools ~/.julia/packages/GridapSolvers/gqvqX/src/MultilevelTools/HierarchicalArrays.jl:107
│     [19] gmg_patch_prolongations(tests::GridapSolvers.MultilevelTools.FESpaceHierarchy{Vector{GridapSolvers.MultilevelTools.FESpaceHierarchyLevel{A, Nothing} where A}, Vector{Union{GridapDistributed.MPIVoidVector{Int64}, PartitionedArrays.MPIArray{Int64, 1}}}, GridapSolvers.MultilevelTools.FESpaceHierarchyLevel{A, Nothing} where A}, weakform::GridapMHD.var"#weakform#76"{Dict{Symbol, Any}})
│        @ GridapMHD ~/Prog/GridapMHD.jl/src/Solvers/gmg.jl:62
│     [20] gmg_solver(::Val{:H1H1}, ::Val{:h1h1blocks}, params::Dict{Symbol, Any})
│        @ GridapMHD ~/Prog/GridapMHD.jl/src/Solvers/gmg.jl:94
│     [21] get_block_solver(::Val{:gmg}, params::Dict{Symbol, Any})
│        @ GridapMHD ~/Prog/GridapMHD.jl/src/Solvers/gmg.jl:5
│     [22] (::GridapMHD.var"#97#99"{Dict{Symbol, Any}})(s::Symbol)
│        @ GridapMHD ~/Prog/GridapMHD.jl/src/Solvers/h1h1blocks.jl:15
│     [23] iterate
│        @ ./generator.jl:48 [inlined]
│     [24] _collect(c::Vector{Symbol}, itr::Base.Generator{Vector{Symbol}, GridapMHD.var"#97#99"{Dict{Symbol, Any}}}, ::Base.EltypeUnknown, isz::Base.HasShape{1})
│        @ Base ./array.jl:800
│     [25] collect_similar(cont::Vector{Symbol}, itr::Base.Generator{Vector{Symbol}, GridapMHD.var"#97#99"{Dict{Symbol, Any}}})
│        @ Base ./array.jl:709
│     [26] map(f::Function, A::Vector{Symbol})
│        @ Base ./abstractarray.jl:3371
│     [27] H1H1BlockSolver(op::Gridap.FESpaces.FEOperatorFromWeakForm, params::Dict{Symbol, Any})
│        @ GridapMHD ~/Prog/GridapMHD.jl/src/Solvers/h1h1blocks.jl:15
│     [28] _solver(::Val{:h1h1blocks}, op::Gridap.FESpaces.FEOperatorFromWeakForm, params::Dict{Symbol, Any})
│        @ GridapMHD ~/Prog/GridapMHD.jl/src/main.jl:190
│     [29] _solver(op::Gridap.FESpaces.FEOperatorFromWeakForm, params::Dict{Symbol, Any})
│        @ GridapMHD ~/Prog/GridapMHD.jl/src/main.jl:174
│     [30] main(_params::Dict{Symbol, Any}; output::Dict{Symbol, Any})
│        @ GridapMHD ~/Prog/GridapMHD.jl/src/main.jl:147
│     [31] (::GridapMHD.var"#111#112"{Dict{Symbol, Any}})()
│        @ GridapMHD ~/Prog/GridapMHD.jl/src/Applications/hunt.jl:201
│     [32] with(f::GridapMHD.var"#111#112"{Dict{Symbol, Any}}; kwargs::@Kwargs{args::Vector{SubString{String}}})
│        @ GridapPETSc ~/.julia/packages/GridapPETSc/iYLxh/src/Environment.jl:38
│     [33] _hunt(; distribute::PartitionedArrays.var"#88#89"{MPI.Comm, Bool}, rank_partition::Tuple{Int64, Int64}, nc::Tuple{Int64, Int64}, ν::Float64, ρ::Float64, σ::Float64, B::Tuple{Float64, Float64, Float64}, f::Tuple{Float64, Float64, Float64}, ζᵤ::Float64, ζⱼ::Float64, μ::Int64, L::Float64, u0::Float64, B0::Float64, σw1::Float64, σw2::Float64, tw::Float64, order::Int64, order_j::Int64, nsums::Int64, vtk::Bool, title::String, path::String, debug::Bool, res_assemble::Bool, jac_assemble::Bool, solve::Bool, solver::Dict{Symbol, Any}, formulation::Symbol, initial_value::Symbol, rt_scaling::Bool, verbose::Bool, BL_adapted::Bool, kmap_x::Int64, kmap_y::Int64, ranks_per_level::Vector{Tuple{Int64, Int64, Int64}}, adaptivity_method::Int64, fluid_disc::Symbol, current_disc::Symbol)
│        @ GridapMHD ~/Prog/GridapMHD.jl/src/Applications/hunt.jl:200
│     [34] _hunt
│        @ ~/Prog/GridapMHD.jl/src/Applications/hunt.jl:41 [inlined]
│     [35] #105
│        @ ~/Prog/GridapMHD.jl/src/Applications/hunt.jl:23 [inlined]
│     [36] with_mpi(f::GridapMHD.var"#105#108"{Tuple{Int64, Int64}, String, @Kwargs{nc::Tuple{Int64, Int64}, L::Float64, B::Tuple{Float64, Float64, Float64}, debug::Bool, vtk::Bool, solver::Dict{Symbol, Any}, fluid_disc::Symbol, current_disc::Symbol, ranks_per_level::Vector{Tuple{Int64, Int64, Int64}}, ζᵤ::Float64}, String}; comm::MPI.Comm, duplicate_comm::Bool)
│        @ PartitionedArrays ~/.julia/packages/PartitionedArrays/py6uo/src/mpi_array.jl:73
│     [37] with_mpi
│        @ ~/.julia/packages/PartitionedArrays/py6uo/src/mpi_array.jl:64 [inlined]
│     [38] hunt(; backend::Symbol, np::Tuple{Int64, Int64}, title::String, nruns::Int64, path::String, kwargs::@Kwargs{nc::Tuple{Int64, Int64}, L::Float64, B::Tuple{Float64, Float64, Float64}, debug::Bool, vtk::Bool, solver::Dict{Symbol, Any}, fluid_disc::Symbol, current_disc::Symbol, ranks_per_level::Vector{Tuple{Int64, Int64, Int64}}, ζᵤ::Float64})
│        @ GridapMHD ~/Prog/GridapMHD.jl/src/Applications/hunt.jl:22
│     [39] top-level scope
│        @ none:2
│     [40] eval
│        @ ./boot.jl:430 [inlined]
│     [41] exec_options(opts::Base.JLOptions)
│        @ Base ./client.jl:296
│     [42] _start()
│        @ Base ./client.jl:531
└ @ PartitionedArrays ~/.julia/packages/PartitionedArrays/py6uo/src/mpi_array.jl:75
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------

@JordiManyer (Collaborator, Author)

Can you update all branches and try again?

@principejavier (Collaborator) commented Jul 30, 2025

Already done; cavity works. Could the periodic mesh be an issue?

BTW, cavity only works with PETSc 3.23.4, not with 3.15.4; it seems to be a requirement of GridapSolvers, which I remember you mentioned. Would it be worth adding a note about this somewhere? Let me know whether you want me to open an issue about it; I can still reproduce the error I get with 3.15.4.

@JordiManyer (Collaborator, Author)

> Already done; cavity works. Could the periodic mesh be an issue?

Ok yeah, the periodic meshes thing makes sense. I'll have a look.

> BTW, cavity only works with PETSc 3.23.4, not with 3.15.4; it seems to be a requirement of GridapSolvers, which I remember you mentioned. Would it be worth adding a note about this somewhere? Let me know whether you want me to open an issue about it; I can still reproduce the error I get with 3.15.4.

It has nothing to do with GridapSolvers; it has to do with GridapPETSc. Basically, the old version of GridapPETSc only worked with old versions of PETSc. I made changes so that it works with newer versions of PETSc, but that broke the old ones. I have a new PR open in GridapPETSc with some nice changes to how memory is handled. Can you try that to see if it makes a difference? Otherwise I can just add a note to GridapPETSc's documentation; I don't think it's a big deal, since one can simply update PETSc.
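
In the meantime, switching the environment to a newer PETSc is a matter of pointing GridapPETSc at it and rebuilding; here is a sketch assuming the usual JULIA_PETSC_LIBRARY mechanism, with a purely illustrative library path:

# Illustrative path only; point it at a recent PETSc build (e.g. the 3.23.4 that works above)
ENV["JULIA_PETSC_LIBRARY"] = "/path/to/petsc-3.23.4/lib/libpetsc.so"

using Pkg
Pkg.build("GridapPETSc")   # rebuild so the new library is picked up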

@JordiManyer (Collaborator, Author)

This works for me:

using Gridap
using Gridap.Geometry, Gridap.Arrays

using GridapSolvers
using GridapSolvers: PatchBasedSmoothers
using PartitionedArrays, GridapDistributed

# Mimic the failing Hunt setup: 2x2x1 ranks on the debug backend
parts = (2,2,1)
ranks = with_debug() do distribute
  distribute(LinearIndices((prod(parts),)))
end

# Two coarse models (the second overwrites the first): a serial 2D fully periodic one,
# and a distributed 3D one that is periodic in z, as in the Hunt mesh
cmodel = CartesianDiscreteModel((0,1,0,1),(5,5);isperiodic=(true,true))
cmodel = CartesianDiscreteModel(ranks,parts,(0,1,0,1,0,1),(4,4,3);isperiodic=(false,false,true))
model = Gridap.Adaptivity.refine(cmodel)

# Build the coarse-patch topology used by the GMG patch prolongations,
# which is where the reported error was triggered
glue = Gridap.Adaptivity.get_adaptivity_glue(model)
ptopo = PatchBasedSmoothers.CoarsePatchTopology(model);

so I think you did not update all your branches as I suggested. Can you go into your environment and update all your packages?
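
For the record, updating everything in the project used above amounts to something like the sketch below; branch-tracked dependencies such as Gridap#polytopal, GridapDistributed#polytopal and GridapSolvers#develop get their latest commits, while packages added with Pkg.develop track a local clone and need a git pull there instead.

using Pkg
Pkg.activate(".")   # the --project=. environment used in the mpirun commands above
Pkg.update()        # fetch the latest commits of branch-tracked packages and update the rest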

@principejavier (Collaborator)

Confirmed. Hunt with isotropic refinement (0a1dad9) runs.

JordiManyer and others added 29 commits August 4, 2025 15:59