
Conversation

@astroC86
Contributor

@astroC86 astroC86 commented Aug 7, 2025

  • Add Tests
  • Add Documentation

@astroC86 astroC86 marked this pull request as draft August 7, 2025 23:14
@mrava87
Contributor

mrava87 commented Jan 8, 2026

@astroC86 I had a first spin at the Halo operator. Seems very promising and I can see various things we could do with it ❤️

I am going to summarize my experience so far here and tomorrow I will push an updated version of plot_halo and a new plot_halo1d which uses an updated version of the actual implementation (will push it in a file called Halo_new.py so you can see what changes I made - so far they seem to break the 3d implementation though...):

  • I tried to check if the operator passes the dottest with this very simple example, but I got AssertionError: Dot test failed, v^H(Opu)=61.67157626475711 - u^H(Op^Hv)=65.33767302699903... not really sure why, but it seems that the adjoint is not correct (I trust the forward as I used it in various examples and it always seems to make sense)
import numpy as np
from mpi4py import MPI

import pylops_mpi
from pylops_mpi.basicoperators.Halo import BoundaryType
from pylops_mpi.utils.dottest import dottest

np.random.seed(42)

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Halo operator: pad each rank's local chunk with one cell from its neighbours
nlocal = 64
n = nlocal * size
proc_grid_shape = (size, )  # number of partitions over each axis

halo = 1
halo_op = pylops_mpi.basicoperators.MPIHalo(
    dims=(n, ),
    halo=halo,
    proc_grid_shape=proc_grid_shape,
    boundary_mode=BoundaryType.ZERO,
    comm=comm
)

# Model: distributed array partitioned over the ranks
x_dist = pylops_mpi.DistributedArray(
    global_shape=n,
    base_comm=comm,
    partition=pylops_mpi.Partition.SCATTER)
x_dist[:] = np.random.normal(0., 1., nlocal)

# Data: distributed array sized to the halo-padded output
y_dist = pylops_mpi.DistributedArray(
    global_shape=halo_op.shape[0],
    base_comm=comm,
    partition=pylops_mpi.Partition.SCATTER)
y_dist[:] = np.random.normal(0., 1., y_dist.local_array.shape)

dottest(halo_op, x_dist, y_dist, verb=True)
  • I used Halo to implement a distributed FirstDerivative operator using only pylops FirstDerivative and MPIBlockDiag... I think this is a nice example of how one could use Halo instead of rewriting a distributed operator from scratch with add_ghost_cells.. this however required adding a new parameter called edge which, when set to False, does not add halos at the edges (basically where you 'fake' the value using either zero, reflect, or periodic); a rough sketch of the idea is given after this list

  • I wanted to use Halo to implement a distributed version of https://pylops.readthedocs.io/en/stable/api/generated/pylops.signalprocessing.Sliding1D.html#pylops.signalprocessing.Sliding1D.. however I realized that in this case one would need to have halos that change from rank to rank.. I can sketch why, but more in general you think this is feasible?
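For reference, here is a rough, untested sketch of the FirstDerivative idea (assuming the MPIHalo signature from the snippet above, and using the current "adjoint" simply to crop the halos back to the core; the actual implementation may differ, and the new edge flag is not used here):

import numpy as np
from mpi4py import MPI

import pylops
import pylops_mpi
from pylops_mpi.basicoperators.Halo import BoundaryType

comm = MPI.COMM_WORLD
size = comm.Get_size()

nlocal, halo = 64, 1
n = nlocal * size

# Exchange halo cells between neighbouring ranks (zeros at the global edges)
Hop = pylops_mpi.basicoperators.MPIHalo(
    dims=(n, ),
    halo=halo,
    proc_grid_shape=(size, ),
    boundary_mode=BoundaryType.ZERO,
    comm=comm
)

# Local first derivative acting on the halo-padded chunk of each rank
Dloc = pylops.FirstDerivative(nlocal + 2 * halo, kind="centered")
Dop = pylops_mpi.MPIBlockDiag(ops=[Dloc])

# Forward: pad with halos -> differentiate locally -> crop the halos of the output
x = pylops_mpi.DistributedArray(
    global_shape=n,
    base_comm=comm,
    partition=pylops_mpi.Partition.SCATTER)
x[:] = np.random.normal(0., 1., nlocal)

xpad = Hop @ x     # distributed array with halos
ypad = Dop @ xpad  # local derivatives on the padded chunks
y = Hop.H @ ypad   # cropping back to the core (what the current 'adjoint' does)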

@astroC86
Contributor Author

astroC86 commented Jan 9, 2026

I am going to summarize my experience so far here and tomorrow I will push an updated version of plot_halo and a new plot_halo1d which uses an updated version of the actual implementation (will push it in a file called Halo_new.py so you can see what changes I made - so far they seem to break the 3d implementation though...):

Perfect, will have a look once you push!

  • I tried to check if the operator passes the dottest with this very simple example, but I got AssertionError: Dot test failed, v^H(Opu)=61.67157626475711 - u^H(Op^Hv)=65.33767302699903... not really sure why, but it seems that the adjoint is not correct (I trust the forward as I used it in various examples and it always seems to make sense)

The reason is that the Halo adjoint is not a true "adjoint": it simply removes the halo from the local submatrix and returns the core. I will try to think of a way to get the adjoint working as you expect, so that the dottest passes. Thanks for the example by the way, very handy to test things, really appreciate it!
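To make the distinction concrete, here is a small standalone numpy sketch (with my own toy ordering of the padded samples, not necessarily the operator's): cropping the core undoes the padding, but it is not the adjoint, because the true adjoint also sums the halo copies back into the core samples they came from:

import numpy as np

# Toy 1D setting: 6 samples split into two blocks of 3, halo of 1,
# zeros at the two global edges
x = np.arange(1., 7.)                  # [1, 2, 3, 4, 5, 6]

P = np.zeros((10, 6))                  # forward: pad each block with halos
core = [1, 2, 3, 6, 7, 8]              # positions of the core samples in the padded output
P[core, np.arange(6)] = 1.0
P[4, 3] = 1.0                          # right halo of block 1 copies x[3]
P[5, 2] = 1.0                          # left halo of block 2 copies x[2]

R = np.zeros((6, 10))                  # "adjoint" as currently implemented: crop the core
R[np.arange(6), core] = 1.0

print(P @ x)                           # [0, 1, 2, 3, 4, 3, 4, 5, 6, 0]
print(R @ P @ x)                       # [1, 2, 3, 4, 5, 6] -> the crop undoes the padding

# But the crop is only a left inverse, not the adjoint: the true adjoint P.T
# sums the halo copies back into the core samples they came from
y = np.random.default_rng(0).normal(size=10)
print(np.dot(P @ x, y), np.dot(x, R @ y))    # different -> dot test fails with the crop
print(np.dot(P @ x, y), np.dot(x, P.T @ y))  # equal -> dot test passes with P.T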

  • I used Halo to implement a distributed FirstDerivative operator using only pylops FirstDerivative and MPIBlockDiag... I think this is a nice example of how one could use Halo instead of rewriting a distributed operator from scratch with add_ghost_cells.. this however required adding a new parameter called edge which, when set to False, does not add halos at the edges (basically where you 'fake' the value using either zero, reflect, or periodic)

Ok, so if I understand correctly you want the ability to have no halos along the edges, is that correct?

I will try to understand the Sliding1D operator before I give you a concrete answer, but I think it could be possible.

@mrava87
Contributor

mrava87 commented Jan 9, 2026

For the adjoint, I think Halo, at least the way you designed it (with edges allowing Zero, Reflect, or Periodic - not adding a chosen value), should be a linear operator, and so adjointable.

Take this example (it doesn't really matter that it is not distributed, as this does not affect the linearity of the operator): I have an array of size 6 and I want to add halos every 3 elements.

Input=[1,2,3,4,5,6]
Output=[0,1,2,3,3,4,4,5,6,0]

Now I can write this as follows:

[0,0,0,0,0,0] 
[1,0,0,0,0,0] 
[0,1,0,0,0,0] 
[0,0,1,0,0,0] 
[0,0,1,0,0,0] 
[0,0,0,1,0,0] 
[0,0,0,1,0,0] 
[0,0,0,0,1,0] 
[0,0,0,0,0,1] 
[0,0,0,0,0,0] 
* Input

this tells me that this operator is linear, so it has an adjoint 😄

It would be the same if you use Reflect, as now you will have

Input=[1,2,3,4,5,6]
Output=[1,1,2,3,3,4,4,5,6,6]

Now I can write this as follows:

[1,0,0,0,0,0] 
[1,0,0,0,0,0] 
[0,1,0,0,0,0] 
[0,0,1,0,0,0] 
[0,0,1,0,0,0] 
[0,0,0,1,0,0] 
[0,0,0,1,0,0] 
[0,0,0,0,1,0] 
[0,0,0,0,0,1] 
[0,0,0,0,0,1] 
* Input

but you could not, say, use a chosen value (e.g. 10) as

Input=[1,2,3,4,5,6]
Output=[10,1,2,3,3,4,4,5,6,10]

has no fixed matrix that goes from Input to Output... so we don't want to add that scenario.
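As a quick numerical check of the argument (a minimal standalone numpy sketch using the matrices written out above):

import numpy as np

# The zero-boundary matrix written out above
A = np.array([[0, 0, 0, 0, 0, 0],
              [1, 0, 0, 0, 0, 0],
              [0, 1, 0, 0, 0, 0],
              [0, 0, 1, 0, 0, 0],
              [0, 0, 1, 0, 0, 0],
              [0, 0, 0, 1, 0, 0],
              [0, 0, 0, 1, 0, 0],
              [0, 0, 0, 0, 1, 0],
              [0, 0, 0, 0, 0, 1],
              [0, 0, 0, 0, 0, 0]], dtype=float)

x = np.arange(1., 7.)                   # Input = [1, 2, 3, 4, 5, 6]
print(A @ x)                            # Output = [0, 1, 2, 3, 3, 4, 4, 5, 6, 0]

# Linear -> the transpose is the adjoint and the dot test holds
rng = np.random.default_rng(42)
u, v = rng.normal(size=6), rng.normal(size=10)
print(np.allclose(np.dot(A @ u, v), np.dot(u, A.T @ v)))  # True

# Padding with a chosen value (e.g. 10) is affine, not linear:
# doubling the input does not double the output at the padded entries
fill = lambda z: np.concatenate(([10.], z[:3], [z[2]], [z[3]], z[3:], [10.]))
print(fill(x))                                            # [10, 1, 2, 3, 3, 4, 4, 5, 6, 10]
print(np.allclose(fill(2 * x), 2 * fill(x)))              # False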

Makes sense?
