Add scale-consistency training #645
Conversation
This is great, thanks so much @zongyi-li ! Comments to follow but this looks really good. One note: the online runner does not have access to the 128x128 darcy-flow dataset by default as it is not pre-included in the library; we can either download it (e.g. call
dhpitt left a comment
Thanks again for this contribution, @zongyi-li. A few minor notes on style and clarity for future maintainability but overall this looks very good. Passing on to @JeanKossaifi for review.
if type == "darcy":
    y_small_ = model(x_small)
else:
    y_small_ = model(x_small, re)
This forward call seems like it might belong better in the custom trainer's training loop
Yeah, we added a custom trainer, but I feel it will also be beneficial to have a loss that we can use in other trainers in the future.
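One way such a reusable loss could look (a minimal sketch only; `scale_consistency_loss`, its fixed top-left crop strategy, and all names here are illustrative assumptions, not the PR's actual API):

```python
import torch
import torch.nn.functional as F

def scale_consistency_loss(model, x, loss_fn, crop_frac=0.5):
    # Hypothetical sketch: penalize disagreement between the model's
    # prediction on a zoomed-in subdomain and the corresponding crop
    # of its full-domain prediction.
    y_full = model(x)                      # full-domain forward pass
    h, w = x.shape[-2:]
    ch, cw = int(h * crop_frac), int(w * crop_frac)
    # take the top-left subdomain and upsample it to full resolution,
    # so the model sees the subdomain at the same discretization
    x_small = F.interpolate(x[..., :ch, :cw], size=(h, w),
                            mode="bilinear", align_corners=False)
    y_small = model(x_small)               # subdomain forward pass
    # reference: the matching crop of the full prediction, upsampled;
    # detached so only the subdomain branch receives the penalty
    y_ref = F.interpolate(y_full[..., :ch, :cw].detach(), size=(h, w),
                          mode="bilinear", align_corners=False)
    return loss_fn(y_small, y_ref)
```

Because the loss takes `model` and `loss_fn` as arguments, any trainer that can hand over its (unwrapped) model could call it without subclassing.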
model = self._unwrap_model()
if self.mixed_precision:
    with torch.autocast(device_type=self.autocast_device_type):
        sc_loss = self.selfconsistency_loss(model, sample["x"], loss_fn=training_loss, y=sample["y"])
As above; might make more sense if things like the model's forward call went here, and as little as possible of the computation belonged in the loss function
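A sketch of that suggested split (names and signatures are assumptions, not the PR's code): the trainer's step owns every forward call, and the loss function is left as a pure tensor comparison.

```python
import torch

def training_step(model, sample, rescale, loss_fn, sc_weight=0.5):
    # Hypothetical trainer step: both forward calls live here, so the
    # loss function never needs to see the model at all.
    x, y = sample["x"], sample["y"]
    y_pred = model(x)                          # full-domain forward
    data_loss = loss_fn(y_pred, y)
    # rescale() (assumed signature) returns a subdomain view of the
    # input and the matching view of the detached full prediction
    x_small, y_small_ref = rescale(x, y_pred.detach())
    y_small_pred = model(x_small)              # subdomain forward
    sc_loss = loss_fn(y_small_pred, y_small_ref)
    return data_loss + sc_weight * sc_loss
```

Under this split, mixed-precision autocasting would wrap the whole step in the trainer, rather than being threaded through the loss.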
This looks good, thanks @zongyi-li for the PR and @dhpitt for the review. My main question is whether there shouldn't be some more separation between the concepts (the algorithm) and the problem-specific implementation (for Darcy, Burgers, etc.). Perhaps this is just a matter of refactoring a little. What do you think?
This makes sense to me. The more we can centralize, the simpler this will be to maintain in the future.
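One possible shape for that separation (a sketch; the adapter pattern and every name here are assumptions): replace branching on a problem string, as in the `if type == "darcy"` snippet above, with a per-problem forward adapter, so the algorithm itself never mentions Darcy or Burgers.

```python
def make_scale_consistency_loss(forward_fn, base_loss):
    # forward_fn(model, x) hides any problem-specific arguments
    # (e.g. a Reynolds number for Burgers), keeping the loss generic.
    def loss(model, x_small, y_ref):
        return base_loss(forward_fn(model, x_small), y_ref)
    return loss

# Problem-specific adapters (hypothetical):
def darcy_forward(model, x):
    return model(x)

def make_burgers_forward(re):
    # close over the problem parameter instead of branching in the loss
    return lambda model, x: model(x, re)
```

New problems then only need a new adapter, not edits to the shared loss or trainer.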
Sorry for the delay! I will rename the specific implementation to a generic one.
Implement the scale-consistency training from the paper https://arxiv.org/abs/2507.18813
We added four files; no existing file is modified.
- `neuralop/losses/scale_consistency_losses.py` — the `scale_consistency` loss, which compares the global domain against subdomains.
- `neuralop/data/transforms/rescale.py` — rescaling transforms that zoom in to subdomains.
- `neuralop/training/scale_consistency.py` — the Trainer subclass that passes the model to the loss.
- `example/training/plot_scale_consistency_FNO_darcy.py` — an example of scale-consistency training on a boundary-valued Darcy problem.
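As a rough illustration of the "zoom in to subdomains" idea behind the rescaling transforms (a sketch only; the random-crop strategy and names are assumptions, not the contents of `rescale.py`):

```python
import torch
import torch.nn.functional as F

def random_subdomain_zoom(x, crop_frac=0.5, generator=None):
    # Crop a random axis-aligned subdomain and upsample it back to the
    # original resolution, mimicking a "zoomed-in" view of the field.
    h, w = x.shape[-2:]
    ch, cw = int(h * crop_frac), int(w * crop_frac)
    top = int(torch.randint(0, h - ch + 1, (1,), generator=generator))
    left = int(torch.randint(0, w - cw + 1, (1,), generator=generator))
    crop = x[..., top:top + ch, left:left + cw]
    return F.interpolate(crop, size=(h, w), mode="bilinear",
                         align_corners=False)
```

Keeping the output at the original resolution means the model can be applied to the zoomed view without any architectural changes.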