
Add inverse scattering problem#1020

Merged
tachella merged 32 commits intodeepinv:mainfrom
tachella:scattering
Feb 15, 2026

Conversation

@tachella
Contributor

tachella commented Jan 19, 2026

This PR adds the deepinv.physics.Scattering operator, which allows simulating measurements of a 2D non-linear scattering problem, with applications in microwave tomography and optical diffraction tomography among others.

Summary of contributions

  • A new deepinv.physics.Scattering operator
  • A demo example showcasing the main features of the operator
  • A generalization of the compute_norm method to non-linear physics
  • A series of tests checking the correctness of the operator using closed form formulas for scattered waves (Mie Theory)

I've updated black, which generated a number of small style changes.
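The compute_norm generalization can be understood as estimating the spectral norm of the Jacobian of the (possibly non-linear) forward operator at a given point, which reduces to the usual operator norm in the linear case. A minimal standalone sketch of that idea (not the deepinv implementation; the function name and iteration count are illustrative) using power iteration with torch's jvp/vjp:

```python
import torch
from torch.autograd.functional import jvp, vjp

def jacobian_spectral_norm(f, x, iters=100):
    """Estimate ||J_f(x)||_2 by power iteration on J^T J."""
    torch.manual_seed(0)
    v = torch.randn_like(x)
    v = v / v.norm()
    for _ in range(iters):
        _, jv = jvp(f, (x,), (v,))    # forward-mode product J v
        _, jtjv = vjp(f, x, jv)       # reverse-mode product J^T (J v)
        sigma_sq = jtjv.norm()        # converges to the top eigenvalue of J^T J
        v = jtjv / sigma_sq
    return sigma_sq.sqrt()

# Sanity check: for a linear map f(x) = A x, the Jacobian is A itself,
# so the estimate should match the largest singular value of A.
torch.manual_seed(1)
A = torch.randn(5, 5, dtype=torch.float64)
f = lambda x: A @ x
est = jacobian_spectral_norm(f, torch.zeros(5, dtype=torch.float64))
ref = torch.linalg.matrix_norm(A, ord=2)
```

For a non-linear operator the result depends on the linearization point, so a representative input (e.g. a typical image) should be passed rather than zeros.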

New operator description

For each of the $i=1,\dots,T$ transmitters, the 2D forward model is given by the inhomogeneous Helmholtz equation (see e.g., Soubies et al. 2017):

$$ \nabla^2 u_i(\mathbf{r}) + k^2(\mathbf{r}) u_i(\mathbf{r}) = - (k^2(\mathbf{r}) - k_b^2) v_i(\mathbf{r}) \quad \mathbf{r} \in \mathbb{R}^2 $$

where

  • $u_i$ is the (unknown) scattered field,
  • $k_b$ is the (known scalar) wavenumber of the incident wave in the background medium,
  • $k(\mathbf{r})$ is the (unknown) spatially-varying wavenumber of the object to be recovered,
  • $v_i$ is the incident field generated by the $i$-th transmitter in the absence of the object.

The total field (scattered + incident) is measured at $R$ different receiver locations surrounding the object.

Parametrizing the unknown spatially-varying wavenumber as $k^2(\mathbf{r}) = k_b^2 (x(\mathbf{r})+1)$, where

  • $k_b$ is the background wavenumber,
  • $x = k^2/k_b^2 - 1$ is the scattering potential of the object to be recovered,

the problem can be reformulated in the Lippmann-Schwinger integral equation form:

$$ \begin{align*} u_i &= g * \left(x \circ (u_i+v_i)\right) \\ y_i &= G_s \left(x \circ (u_i+v_i)\right) \end{align*} $$

where

  • $g(\mathbf{r}) = k_b^2 \frac{i}{4} H_0^{(1)}(k_b|\mathbf{r}|)$ is the 2D Green's function of the background Helmholtz equation (normalized by $k_b^2$),
  • $y_i \in \mathbb{C}^{R}$ are the measurements at the receivers for the $i$-th transmitter,
  • $G_s$ denotes convolution with the Green's function followed by sampling at the $R$ different receiver locations.
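A useful consequence of this formulation (standard in the scattering literature, not specific to this PR): when the scattering potential $x$ is weak, the first Born approximation drops the dependence of the scattered field on itself,

$$ u_i \approx g * (x \circ v_i), \qquad y_i \approx G_s \left(x \circ v_i\right), $$

which makes the measurement operator linear in $x$. The full model above retains the non-linearity through the multiple-scattering term $x \circ u_i$.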

Checks to be done before submitting your PR

  • `python3 -m pytest deepinv/tests` runs successfully.
  • `black .` runs successfully.
  • `make html` runs successfully (in the `docs/` directory).
  • Updated docstrings related to the changes (as applicable).
  • Added an entry to the changelog.rst.

@Andrewwango Andrewwango linked an issue Jan 21, 2026 that may be closed by this pull request
@Andrewwango Andrewwango self-assigned this Jan 24, 2026
:param torch.Tensor transmitters: New transmitter positions of shape `(2, T)`, where T is the number of transmitters.
"""
if transmitters is not None:
    self.transmitters = transmitters.to(self.device).to(self.dtype)
Collaborator
use register_buffer so it works with .to()
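For reference, the suggested pattern looks like this (a minimal sketch, not the deepinv code; the class and buffer names are illustrative):

```python
import torch
from torch import nn

class TinyPhysics(nn.Module):
    def __init__(self, transmitters: torch.Tensor):
        super().__init__()
        # A registered buffer (unlike a plain attribute) is moved by
        # .to(), .cuda(), etc., together with the module's parameters.
        self.register_buffer("transmitters", transmitters)

physics = TinyPhysics(torch.zeros(2, 8))
physics = physics.to(torch.float64)  # the buffer follows the module
```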

Contributor Author
I moved the buffer registration from the `__init__` to here.

Is the device handling okay now? I'm not sure given the latest updates

Collaborator
Following the latest PR #989, we should not rely on self.device internally in the future, since for example physics.to(device) will move buffers to the device but not update self.device. So either we add a dummy tensor _device_holder to dynamically track the device when calling update_parameters(), or we only register the buffer on the same device as the given tensor and users are responsible for passing transmitters on the right device. Can you confirm, @romainvo?

Contributor

@romainvo romainvo Feb 2, 2026
Yes, exactly. The same is also true for self.dtype: it is not going to be updated appropriately when calling to(). Ideally, _device_holder should just be a temporary measure to avoid breaking current workflows; I think newly implemented physics should not rely on it.

The deprecation of self.device also induces some changes in how parameters/buffers should be updated:

  • in Fix #968 and remove self.device from internal LinearPhysics logic #989, when updating parameters inside A or A_adjoint, it is implied that the inputs are on the same device as the current buffers; if new buffers are created, they are initialized on the device of the input (so update_parameters, if needed, takes a device kwarg as input).
  • if some parameters should only be of a certain dtype, I suggest we hardcode it instead of setting self.dtype = torch.complex128. If multiple dtype precisions are supported (like float16 or float32), then it should be up to the user to initialize correctly first, and then to manage the dtype of the parameters (e.g. by allowing them to pass dtype as a kwarg in the update logic).
  • a rule I've implemented in Fix #968 and remove self.device from internal LinearPhysics logic #989 is that when updating a parameter, the dtype and device of the current parameter take precedence over those of the pushed tensor; in practice this is handled by the superclass in Physics.update_parameters. This means that you need to ensure all parameter updates are done via super().update_parameters. Because we don't have access to self.dtype or self.device, this prevents discrepancies between the set of buffers, i.e. all buffers have the same dtype/device at all times, and we only rely on the to() method to change it. (https://github.com/deepinv/deepinv/pull/989/changes#diff-69993452ba3f714d704e427f51b84732addeb9b5fe672965827d28725955fe93R270-R275)
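The precedence rule described above can be sketched as follows (a standalone illustration of the convention, not the actual Physics.update_parameters code; the class and method bodies are hypothetical):

```python
import torch
from torch import nn

class TinyPhysics(nn.Module):
    def __init__(self, transmitters: torch.Tensor):
        super().__init__()
        self.register_buffer("transmitters", transmitters)

    def update_parameters(self, **params):
        for name, value in params.items():
            current = getattr(self, name, None)
            if isinstance(current, torch.Tensor):
                # The dtype/device of the existing buffer take precedence
                # over those of the pushed tensor, so all buffers stay
                # consistent and only .to() ever changes them.
                value = value.to(device=current.device, dtype=current.dtype)
            setattr(self, name, value)

physics = TinyPhysics(torch.zeros(2, 8, dtype=torch.float64))
# Pushing a float32 tensor: the value is kept, but cast to float64.
physics.update_parameters(transmitters=torch.ones(2, 8, dtype=torch.float32))
```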

Contributor Author

@tachella tachella Feb 6, 2026

Thanks for the clarifications.

I removed all the .to(self.device) and .to(self.dtype) calls from update_parameters(). I chose not to include dtype and device as optional arguments, since the user can call physics.to(device) instead of changing devices in update_parameters.

import torch


def native_hankel1(n: int, x: torch.Tensor) -> torch.Tensor:
Collaborator
I think we can remove this function. As of torch 2.10 there are only bessel_j0 and bessel_j1, and no general-order bessel_j or bessel_y.

Contributor Author
Many thanks for spotting this!

I prefer to keep it, since the most important calls are for n=0 and n=1 (except in the testing function). I've fixed the implementation and it now checks that `n in (0, 1)`.
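For orders 0 and 1, the Hankel function of the first kind, $H_n^{(1)}(x) = J_n(x) + i\,Y_n(x)$, can be built from the Bessel functions in torch.special (a sketch of the `n in (0, 1)` approach, assuming torch.special.bessel_j0/j1 and bessel_y0/y1 are available, as in recent torch versions; this is not necessarily the exact code in the PR):

```python
import torch

def native_hankel1(n: int, x: torch.Tensor) -> torch.Tensor:
    """Hankel function of the first kind, H_n^(1)(x) = J_n(x) + i Y_n(x),
    restricted to the orders n = 0, 1 provided by torch.special."""
    if n not in (0, 1):
        raise NotImplementedError("torch.special only provides orders 0 and 1")
    if n == 0:
        jn = torch.special.bessel_j0(x)
        yn = torch.special.bessel_y0(x)
    else:
        jn = torch.special.bessel_j1(x)
        yn = torch.special.bessel_y1(x)
    return torch.complex(jn, yn)

h = native_hankel1(0, torch.tensor(1.0, dtype=torch.float64))
```

Higher orders, needed only in the tests, can still come from scipy.special.hankel1.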

return torch.complex(jn, yn)


def hankel1(n: int, x: torch.Tensor) -> torch.Tensor:
Collaborator
Since PyTorch doesn't have Hankel or Bessel functions implemented for higher orders, I'm not sure whether we should keep these functions or just use the scipy functions directly.

Contributor Author
Same as above: I kept it and added a check that `n in (0, 1)`.

Collaborator

@mh-nguyen712 mh-nguyen712 left a comment
Thanks for the updates. These are my last comments and I think we're good. Apart from minor comments, I just realized that there is no bessel_j or bessel_y implemented in torch for orders > 1 as of torch 2.10, so we can only use scipy. In that case, I'm not sure if we should implement the special module and add it to the API.

Collaborator

@mh-nguyen712 mh-nguyen712 left a comment

Last fixes in the special functions

Collaborator

@Andrewwango Andrewwango left a comment

LGTM

@tachella
Contributor Author

tachella commented Feb 9, 2026

/gpu-tests

@github-actions
Contributor

github-actions bot commented Feb 9, 2026

🚀 GPU test workflows successfully triggered!

@github-actions
Contributor

github-actions bot commented Feb 9, 2026

❌ GPU docs failed

🔗 View run details

@github-actions
Contributor

❌ GPU tests failed

🔗 View run details

@tachella tachella merged commit 3ef7768 into deepinv:main Feb 15, 2026
7 checks passed


Development

Successfully merging this pull request may close these issues.

non-imaging inverse problems

4 participants