Add inverse scattering problem #1020
Conversation
# Conflicts: # deepinv/physics/forward.py
deepinv/physics/scattering.py
```python
:param torch.Tensor transmitters: New transmitter positions of shape `(2, T)`, where T is the number of transmitters.
"""
if transmitters is not None:
    self.transmitters = transmitters.to(self.device).to(self.dtype)
```
Use `register_buffer` so it works with `.to()`.
I moved the registering from the init to here.
Is the device handling okay now? I'm not sure, given the latest updates.
From the latest PR #989, we should not rely on self.device internally in the future, since for example physics.to(device) will move buffers to device but not update self.device. So either we add a dummy tensor _device_holder to dynamically update the device when calling update_parameters(), or we only register the buffer on the same device as the tensor, and users are responsible for giving the transmitter tensor on the right device. Can you confirm, @romainvo?
Yes, exactly, the same is also true for self.dtype, it is not going to be updated appropriately when calling to(). Ideally, the _device_holder should just be a temporary measure to avoid breaking current workflows; I think newly implemented physics should not rely on this.
The deprecation of self.device also induces some changes in how parameters/buffers should be updated:
- in Fix #968 and remove self.device from internal LinearPhysics logic #989, when updating parameters inside `A` or `A_adjoint`, it is implied that the inputs are on the same device as the current buffers, and if new buffers are created, they are initialized on the device of the input (so `update_parameters`, if needed, takes a `device` kwarg as input).
- if some parameters should only be of a certain dtype, I suggest we hardcode it instead of setting `self.dtype = torch.complex128`. If multiple dtype precisions are supported (like float16 or float32), then it should be up to the user to initialize correctly first, and then to manage the dtype of the parameters (e.g. by allowing them to pass `dtype` as a kwarg in the update logic).
- a rule I've implemented in Fix #968 and remove self.device from internal LinearPhysics logic #989 is that when updating a parameter, the dtype and device of the current parameter take precedence over those of the pushed tensor; in practice this is handled by the super class in `Physics.update_parameters`. This means that you need to ensure all parameter updates are done via `super().update_parameters()`. Because we don't have access to self.dtype or self.device, this prevents discrepancies between the set of buffers, i.e. all buffers have the same dtype/device at all times, and we only rely on the `to()` method to change it. (https://github.com/deepinv/deepinv/pull/989/changes#diff-69993452ba3f714d704e427f51b84732addeb9b5fe672965827d28725955fe93R270-R275)
Thanks for the clarifications.
I removed all the to(self.device) and to(self.dtype) calls from update_parameters(). I chose not to include dtype and device as optional arguments, as the user can call physics.to(device) instead of changing devices in update_parameters.
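Following that discussion, here is a minimal sketch of the buffer-based pattern being described (illustrative only, not the actual deepinv implementation; the class name `ToyPhysics` is hypothetical):

```python
import torch


class ToyPhysics(torch.nn.Module):
    """Hypothetical sketch: register transmitters as a buffer so that
    physics.to(device) (and .to(dtype)) moves it along with the module,
    with no self.device / self.dtype bookkeeping."""

    def __init__(self, transmitters: torch.Tensor):
        super().__init__()
        self.register_buffer("transmitters", transmitters)

    def update_parameters(self, transmitters=None):
        if transmitters is not None:
            # Rule from PR #989: the dtype/device of the existing buffer
            # take precedence over those of the pushed tensor.
            self.transmitters = transmitters.to(
                device=self.transmitters.device, dtype=self.transmitters.dtype
            )
```

With this pattern, calling `physics.to("cuda")` moves the buffer, and subsequent updates inherit its device/dtype automatically.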
```python
import torch


def native_hankel1(n: int, x: torch.Tensor) -> torch.Tensor:
```
I think we can remove this function. As of torch 2.10 there are only bessel_j0 and bessel_j1, and no general-order bessel_j or bessel_y.
Many thanks for spotting this!
I prefer to keep it, since the most important calls are for n=0 and n=1 (except for the testing function). I've fixed the implementation, and it now checks that n is in (0, 1).
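For reference, a self-contained sketch of what such an order-restricted implementation could look like, using the fixed-order Bessel functions that do exist in `torch.special` (an illustration, not necessarily the exact code in this PR):

```python
import torch


def native_hankel1(n: int, x: torch.Tensor) -> torch.Tensor:
    """Hankel function of the first kind, H_n^(1)(x) = J_n(x) + i Y_n(x),
    restricted to the orders n = 0, 1 available natively in torch.special."""
    if n == 0:
        jn, yn = torch.special.bessel_j0(x), torch.special.bessel_y0(x)
    elif n == 1:
        jn, yn = torch.special.bessel_j1(x), torch.special.bessel_y1(x)
    else:
        raise NotImplementedError("only orders n in (0, 1) are supported")
    return torch.complex(jn, yn)
```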
```python
    return torch.complex(jn, yn)


def hankel1(n: int, x: torch.Tensor) -> torch.Tensor:
```
Since PyTorch doesn't have Hankel or Bessel functions implemented for higher orders, I'm not sure if we should keep these functions or just use the functions from scipy directly.
Same as above: I kept it and added a check that n is in (0, 1).
mh-nguyen712
left a comment
Thanks for the updates. My last comments, and I think we're good. Apart from minor comments, I just realized that there is no bessel_j or bessel_y implemented in torch for order > 1 as of torch 2.10. So we can only use scipy. In that case, I'm not sure if we should implement the special module and add it to the API.
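If higher orders are ever needed, a scipy-based fallback is straightforward, at the cost of leaving the GPU and breaking autograd (the helper name `hankel1_scipy` is hypothetical):

```python
import torch
from scipy import special


def hankel1_scipy(n: int, x: torch.Tensor) -> torch.Tensor:
    """Fallback for arbitrary order n via scipy.special.hankel1.
    Runs on CPU and does not propagate gradients."""
    out = special.hankel1(n, x.detach().cpu().numpy())
    return torch.from_numpy(out).to(x.device)
```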
mh-nguyen712
left a comment
Last fixes in the special functions
/gpu-tests

🚀 GPU test workflows successfully triggered!

❌ GPU docs failed

❌ GPU tests failed
This PR adds the `deepinv.physics.Scattering` operator, which allows simulating measurements of a 2D non-linear scattering problem, with applications in microwave tomography and optical diffraction tomography, among others.

Summary of contributions

- new `deepinv.physics.Scattering` operator
- `compute_norm` method for non-linear physics

I've updated `black`, which generated a bunch of new small style changes.

New operator description
For each of the $i=1,\dots,T$ transmitters, the 2D forward model is given by the inhomogeneous Helmholtz equation (see e.g., Soubies et al. 2017):

$$\Delta u_i(\mathbf{r}) + k^2(\mathbf{r})\, u_i(\mathbf{r}) = 0,$$

where $u_i$ is the total field induced by transmitter $i$ and $k(\mathbf{r})$ is the spatially-varying wavenumber.

The total field (scattered + incident) is measured at $R$ different receiver locations surrounding the object.

Parametrizing the unknown spatially-varying wavenumber as $k^2(\mathbf{r}) = k_b^2\,(x(\mathbf{r})+1)$, where $k_b$ is the known background wavenumber and $x$ is the unknown contrast, the problem can be reformulated in the Lippmann-Schwinger integral equation form:

$$u_i(\mathbf{r}) = u_i^{\mathrm{in}}(\mathbf{r}) + k_b^2 \int_{\Omega} g(\mathbf{r}-\mathbf{r}')\, x(\mathbf{r}')\, u_i(\mathbf{r}')\, \mathrm{d}\mathbf{r}',$$

where $g(\mathbf{r}) = \tfrac{i}{4} H_0^{(1)}(k_b \lVert\mathbf{r}\rVert)$ is the Green's function of the 2D homogeneous Helmholtz equation, with $H_0^{(1)}$ the Hankel function of the first kind.
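As a rough illustration of how such an integral equation is typically handled numerically (a sketch under an assumed pixel discretization, not the actual operator code): after discretizing the domain into N pixels, the Lippmann-Schwinger equation becomes a dense linear system in the total field.

```python
import torch


def total_field(G, x, u_in, kb2=1.0):
    """Sketch: solve (I - kb2 * G @ diag(x)) u = u_in for the total field u.

    G    : (N, N) discretized Green's operator (complex)
    x    : (N,) contrast
    u_in : (N,) incident field for one transmitter
    """
    N = x.shape[0]
    # G * x[None, :] multiplies each column of G by x, i.e. G @ diag(x)
    A = torch.eye(N, dtype=G.dtype) - kb2 * G * x.unsqueeze(0)
    return torch.linalg.solve(A, u_in)
```

For large N, iterative solvers with FFT-based application of the convolution with g are usually preferred over a dense solve.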
Checks to be done before submitting your PR

- `python3 -m pytest deepinv/tests` runs successfully.
- `black .` runs successfully.
- `make html` runs successfully (in the `docs/` directory).