Releases: neuraloperator/neuraloperator
Major release: 2.0.0
NeuralOperator v2.0.0 - Release Notes
We are excited to release this major new version of the library, bringing plenty of new features, new models and strategies, quality-of-life improvements, and bug fixes! Notably, we worked on improving the documentation and making the library even easier to use for anyone wanting to train Neural Operators in PyTorch.
Features & Enhancements
Models
- Codano model added (#497)
- Tensor-GaLore added (#510)
- Fourier Continuation (FC) layer added (#422) (#604) (#644)
- Mollified GNO added (#546)
Training & Losses
- Train recipes improved for GINO and FNO/GNO (#518)
- `HdivLoss` implemented (#540)
- Numerical stability improvements in relative losses (#554)
- Fix powers and roots in data losses (#665)
- PINO reweighting schemes added (#618)
- Fourier Differentiation added (#604) (#643)
- Finite Differences class added (#643)
- Point cloud finite-difference added (#576)
- Divergence-free spectral projection added (#648)
Data
- Update configuration system to use `zencfg` and migrate config files from `.yaml` to `.py` (#592)
- Interface and integration with The Well (#533) (#605)
- Nonlinear Poisson dataset added (#546)
- Add legacy `load_mini_burgers_1d_time` helper (#543)
- Add a mini version of the car-cfd dataset to visualize in the docs (#581)
Refactoring & Cleanup
- Rename `localFNO` → `localNO` (#526)
- Remove `SpectralConvNd` layers (#511)
- Make `torch-harmonics` fully optional (#535)
- Remove `zarr` as a required dependency for HPC compatibility (#547)
- Remove `test_pypi` deployment (#566)
- Remove one-sided padding and Fourier continuation (#630)
- Enable "batch_norm" in FNO class (#610)
- Deprecate `central_diff` (#651)
- Reactivate decomposition and preactivation FNO options (#650)
- Remove per-dimension subclasses FNO*d*, TFNO*d* (#646)
- Remove kwargs from model definitions (#646)
- Reintroduce `use_channel_mlp` option (#553)
- Support float values for channel ratios (#598)
- Update DISCO convolutions for new filter basis functions (#520)
- Full library cleaning (#668) (#673) (#674)
Bug Fixes
- Fix slicing in `train_gino_carcfd` (#507)
- Properly center FFT modes (#508)
- Fix serialization of `gelu` in torch load (#559)
- Fix kernel size for Morlet basis + add tests (#560)
- Fix `norm=None` handling in `FNOBlocks.set_embeddings` (#575)
- Fix legacy `spectralconvnd` removal to prevent build errors (#580)
- Fix math formatting in solver examples (#582)
- Fix proper calls to `BatchNorm` in FNO (#602)
- Fix `segment_csr` tests by setting `use_scatter=False` (#572)
- Fix `burgers_1dtime` runner (#557)
- Fix loading `BaseModel` metadata with `weights_only` set to False (#530)
- Fix wandb logging (#609)
- ZenCFG Update and config fixes (#632)
- Fix multi-dim indexing (#647)
- fix: Correct L2-norm and power calculation in spectrum_2d (#661)
- Wrap FNOBlock norms with ComplexValued (#666)
- Set `download=True` for small Darcy example (#565)
Documentation & Examples
- Clarify GINO batch dimension and embedding input shape (#512) (#524)
- Add docs for `spectrum_2d` (#519)
- Document `TensorDataset` and trainer batch format (#549)
- Add documentation on CPU offloading (#635)
- Add Resampling Example (#664)
- Add Normalization Layer Example (#662)
- Add Sinusoidal Embeddings Example (#672)
- Update Theory Guide (#600) (#617) (#669)
- Add Contributing Guide (#641)
- Update User Guide (#670)
- Add Developer's Guide (#531) (#606) (#671)
- Update and clean example gallery (#522) (#528) (#667)
- Docstrings cleaning (#668) (#673) (#674)
- Update API reference (#677)
Miscellaneous Improvements
- Make unsqueezing channel dim optional in `PTDataset` (#509)
- Set `domain_padding_mode` to symmetric by default (#525)
- Parametrize `num_workers` in training scripts (#590)
- Add explicit indexing type in `torch.meshgrid(...)` for posterity (#613)
- Batch normalization support in `FNOBlocks` (#564)
Contributors
@dhpitt, @vduruiss, @JeanKossaifi, @ashiq24, @AleCorintis, @acotino-ignitioncomputing, @bizoffermark, @cwwangcal, @gluca99, @klockwood19, @kmario23, @MichalBravansky, @piano-miles, @radka-j, @rwhan, @StillJosh
Minor release 1.0.2
What's new
This is a minor release to fix an issue in the pypi package.
Full list of changes:
- Fixes to doc workflow by @dhpitt in #496
- FIX workflow syntax by @JeanKossaifi in #498
- Update sphinx gallery index by @JeanKossaifi in #504
- added sources to include for sdist by @sarthakpati in #506
New Contributors
- @sarthakpati made their first contribution in #506
Full Changelog: 1.0.1...1.0.2
1.0.1: minor fixes and a link to the new white paper
What's Changed
Minor update that fixes the model's loading/saving: previously, loading a saved training state could cause the optimizer to not update the correct parameters.
- (fix) Bump Darcy dataset version for proper shapes by @dhpitt in #490
- (fix) Remove `from_checkpoint` in `load_training_state` to avoid overwriting model by @dhpitt in #491
- Update citation for our white paper! by @dhpitt in #492
- Save metadata in `BaseModel.state_dict()`, check version tag in `BaseModel.load_state_dict()` by @dhpitt in #493
- remove ref tag from stable doc workflow by @dhpitt in #495
Full Changelog: 1.0.0...1.0.1
Major release: 1.0.0
Neuralop 1.0 release notes
We are excited to release version 1.0 of neuraloperator. The result of a large group effort a year in the making, it introduces significant updates to NeuralOperator, enhancing usability, extending capabilities, and optimizing performance for neural operator models. Key improvements span models, layers, training, data handling, and documentation.
It introduces many new features and improvements, including:
• New Training Framework: Refactored Trainer class, GPU-accelerated DataProcessor, and streamlined training and evaluation workflows.
• Cutting-edge Architectures (GINO, UQNO, LocalFNO)
• New Operator Blocks including Transformer based and DISCO convolutions
• Built-in Datasets: Classic PDE datasets (Darcy Flow, Navier-Stokes) + new Car-CFD dataset for geometric simulations.
• Smarter Algorithms: Incremental spectral learning and mixed-precision training for efficiency
• Improved Documentation: Updated Quickstart Guide, new examples, and 80%+ test coverage
In addition, a number of optimizations, including fixes for numerical stability, training state handling, and improved batch processing, make this release smoother and more efficient for users working with large-scale models.
New training framework
This release introduces a fully refactored Trainer class for easier training. A new DataProcessor module takes care of data processing directly on the GPU, allowing more flexibility for use cases such as auto-regressive training or inference. The Callbacks mechanism has also been removed in favor of a cleaner, more modular, and more extensible Trainer. This improves clarity and ease of use.
- You can now evaluate before training your model:
>>> model, optimizer, scheduler, _ = load_training_state(...)
>>> trainer = Trainer(model, n_epochs, ... device='cuda', ...)
>>> trainer.evaluate({'h1': H1Loss()}, test_loader)
{'h1': 0.0252}
- Resuming training states is also simplified, and compatible with multi-device setups:
>>> trainer = Trainer(... device=device, use_distributed=True)
>>> trainer.train(..., resume_from_dir="./checkpoints", ...)
Trainer resuming from epoch 5...
The Trainer accepts a DataProcessor module to streamline data pre- and post-processing. Explicit train() and eval() modes were also added to the DataProcessor, making the behavior during training and evaluation clearer and more explicit.
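For intuition, the pre/post-processing contract with explicit modes can be sketched as a tiny standalone class. This is an illustrative toy, not the library's actual DataProcessor (which additionally runs on GPU and composes normalization transforms); the class name and the scale factor are hypothetical:

```python
# Illustrative sketch of a data processor with explicit train/eval modes.
# Not the library's implementation: SimpleDataProcessor and `scale` are
# made up here to show the preprocess/postprocess contract.
class SimpleDataProcessor:
    def __init__(self, scale=2.0):
        self.scale = scale      # hypothetical normalization factor
        self.training = True

    def train(self):
        self.training = True
        return self

    def eval(self):
        self.training = False
        return self

    def preprocess(self, sample):
        # normalize inputs before they reach the model
        sample = dict(sample)
        sample["x"] = sample["x"] / self.scale
        return sample

    def postprocess(self, out, sample):
        # un-normalize model outputs; at eval time, also un-normalize the
        # target so metrics are computed in physical units
        out = out * self.scale
        if not self.training:
            sample = dict(sample)
            sample["y"] = sample["y"] * self.scale
        return out, sample
```

Making the mode explicit avoids the silent train/eval behavior differences that implicit callbacks used to hide.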
Last, logging has been reintroduced into the Trainer, including better integration with Weights and Biases (WandB) for experiment tracking.
For an example of the new Trainer in action, check out our interactive training example!
New neural operator architectures
GINO
The GINO (Geometry-Informed Neural Operator) architecture brings the FNO's ability to learn complex dynamics to irregular geometries. The GINO maps between functions provided at arbitrary input coordinates. For maximum modularity, the GINO is composed of configurable GNOBlock and FNOBlock operator layers.
To see example usage, check out the training recipe in scripts/train_gino_carcfd.py.
UQNO
The UQNO implements Uncertainty Quantification for Operator Learning, thanks to @ziqi-ma and @dhpitt. For an example training recipe see scripts/train_uqno_darcy.py.
LocalFNO
The Local Fourier Neural Operator shares its forward pass and architecture with the standard FNO, with the key difference that its Fourier convolution layers are replaced with LocalFNOBlocks that place differential kernel layers and local integral layers in parallel to its Fourier layers. Thanks to @mliuschi.
New neural operator blocks
Attention kernel integral
The AttentionKernelIntegral, by @zijieli-Jlee brings the multi-head attention mechanism to operator learning.
Codomain-Attention Blocks
CODABlocks, by @ashiq24, implements Codomain Attention Neural Operators, an operator block that extends transformer positional encoding, self-attention and normalization functionalities to function spaces.
Differential convolution
Implements the finite difference convolution required for Local Neural Operators. The DifferentialConv computes a finite difference convolution on a regular grid, which converges to a directional derivative as the grid is refined. Thanks to @mliuschi.
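As a quick illustration of why a finite-difference stencil is a convolution that converges to a derivative as the grid is refined, here is a pure-Python sketch (the function names are hypothetical, not the library's DifferentialConv API):

```python
# A central-difference stencil [-1/(2h), 0, 1/(2h)] applied as a 1D
# convolution approximates d/dx; the error shrinks as the grid is refined.
def central_diff_conv(f_vals, h):
    """Convolve interior points with the central-difference stencil."""
    return [(f_vals[i + 1] - f_vals[i - 1]) / (2 * h)
            for i in range(1, len(f_vals) - 1)]

def max_error(n):
    """Max error of the stencil vs. the exact derivative of f(x) = x^3."""
    h = 1.0 / n
    xs = [i * h for i in range(n + 1)]
    f = [x ** 3 for x in xs]
    approx = central_diff_conv(f, h)            # approximates 3x^2
    exact = [3 * x ** 2 for x in xs[1:-1]]
    return max(abs(a - b) for a, b in zip(approx, exact))

# The stencil is second-order accurate: halving h quarters the error.
print(max_error(32), max_error(64))
```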
DISCO convolutions
DiscreteContinuousConv2d implements Discrete-Continuous Convolutions required for Local Neural Operators. Check our documentation for an interactive demo of DISCO convolutions in use! Thanks to @mliuschi and @bonevbs.
GNOBlock
This layer implements the Graph Neural Operator architecture, which combines a spatial neighbor search with a pointwise aggregation to create a kernel integral, similar to a message-passing neural network.
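The neighbor-search-plus-aggregation idea can be sketched in a few lines of plain Python. This is a conceptual toy in one dimension with scalar values, not the GNOBlock API; the function names and the constant kernel are made up for illustration:

```python
# Conceptual sketch of a graph-based kernel integral: for each query
# point, gather input points within a radius and average kernel-weighted
# values. Names and kernel are hypothetical, not the library's API.
def radius_neighbors(query, points, radius):
    """Indices of points within `radius` of a 1D query point."""
    return [i for i, p in enumerate(points) if abs(p - query) <= radius]

def kernel_integral(queries, points, values, radius, kernel):
    out = []
    for q in queries:
        idx = radius_neighbors(q, points, radius)
        if not idx:
            out.append(0.0)
            continue
        # Monte-Carlo-style average approximating ∫ k(q, y) v(y) dy
        out.append(sum(kernel(q, points[i]) * values[i] for i in idx)
                   / len(idx))
    return out

# Sanity check: a constant kernel and constant input recover the input.
result = kernel_integral(
    queries=[0.5], points=[0.4, 0.5, 0.6], values=[1.0, 1.0, 1.0],
    radius=0.2, kernel=lambda q, y: 1.0)
```

In the real layer, the kernel is a learned neural network and the aggregation runs over batched point clouds.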
LocalFNOBlock
- Adds a `LocalFNOBlocks` layer, which mirrors the `FNOBlock` architecture with differential and discrete-continuous convolutions in parallel to normal spectral convolutions (#468)
Updates to existing models/layers
FNO
We've simplified the FNO's documentation and initialization. Parameters are now ordered by importance, and init only requires values for the most crucial parameters.
- Creating an FNO is now as simple as:
>>> from neuralop.models import FNO
>>> model = FNO(n_modes=(12,12), in_channels=1, out_channels=1, hidden_channels=64)
>>> model
FNO(
  (positional_embedding): GridEmbeddingND()
  (fno_blocks): FNOBlocks(
    (convs): SpectralConv(
      (weight): ModuleList(
        (0-3): 4 x DenseTensor(shape=torch.Size([64, 64, 12, 7]), rank=None)
      )
    )
... torch.nn.Module printout truncated ...
We've also added support for functions that take on complex values in the spatial domain (#400)
FNO-GNO
The FNOGNO combines FNOBlocks over a regular grid with a GNOBlock layer to map to arbitrary query points in the spatial domain. Updates include simplifications of parameters and documentation, and modularization to integrate the new GNOBlock layer.
New meta algorithms
Incremental learning of resolution and frequency
Implements Incremental Spectral Learning to learn the smallest possible model for a given problem.
- Adds `IncrementalDataProcessor` and accompanying meta-algorithm (#274)
- Meta-algorithm implemented as `IncrementalFNOTrainer`, a subclass of the `Trainer`
Thanks to @Robertboy18
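The core idea can be sketched as a schedule that grows the number of retained Fourier modes over training. The function and its parameters below are hypothetical, for illustration only; the actual IncrementalFNOTrainer grows modes adaptively based on the spectrum, not on a fixed epoch schedule:

```python
# Illustrative sketch of incremental spectral learning: start with few
# Fourier modes and grow them during training, keeping the model as small
# as possible. This fixed schedule is hypothetical, not the library's
# adaptive criterion.
def incremental_modes(epoch, start_modes=4, max_modes=16, grow_every=5):
    """Number of retained Fourier modes at a given epoch."""
    grown = epoch // grow_every
    return min(start_modes + grown * 2, max_modes)

# Modes retained at epochs 0, 5, 10, 15, 20, 25 — grows then caps at 16.
schedule = [incremental_modes(e) for e in range(0, 30, 5)]
```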
Out-of-the-box mixed-precision training
The Trainer's `mixed_precision` parameter automatically handles mixed-precision training using the new `torch.autocast` framework.
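Outside the Trainer, the same mechanism can be used directly with `torch.autocast`. A minimal standalone sketch (using a plain `nn.Linear` as a stand-in for a neural operator, and CPU bfloat16 since it needs no GPU):

```python
# Minimal sketch of what mixed_precision does under the hood, using
# torch.autocast directly. nn.Linear here is just a stand-in model.
import torch

model = torch.nn.Linear(8, 8)
x = torch.randn(2, 8)

# On CPU, autocast runs matmul-heavy ops in bfloat16; on CUDA devices,
# float16 is the usual choice (device_type="cuda", dtype=torch.float16).
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = model(x)

print(y.dtype)          # activations are computed in lower precision
print(model.weight.dtype)  # master weights stay in float32
```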
A new data module
Overview
We've refactored all data-associated functionality into neuralop.data. Datasets are now located at neuralop.data.datasets and transforms are located at neuralop.data.transforms. We've also updated the interfaces of our example datasets, including the option to download data from the new NeuralOperator Zenodo Community.
The Darcy-flow, Burgers, and Navier-Stokes datasets now all derive from a `PTDataset` template class, which updates all interfaces, adds a `download` option, and connects to the Zenodo data source (#299).
Car-CFD
To showcase our geometric models, we've also added a dataset of simulations of airflow over 3d ShapeNet car models (#452), as well as examples for the FNOGNO and GINO models (scripts/train_fnogno_carcfd.py, scripts/train_gino_carcfd.py).
Testing and Documentation
In this release, we expanded test coverage to 80%. To ensure users can get the most out of the library with minimal effort, we also significantly improved the documentation, simplifying docstrings and adding more user guides and new examples.
The Quickstart Guide has been fully updated to provide a smooth, hands-on experience for new users, with clear instructions on training, saving, and loading models.
Other changes
Correct complex optimization
PyTorch's default Adam implementation currently handles complex parameters by viewing each one as a stacked pair of real tensors (real_values, imaginary_values). This leads to an incorrect magnitude computation, which in turn affects the momentum estimates. Our custom AdamW implementation computes the correct complex magnitudes, and the SpectralConv's choice of parameter tensors was updated to ensure that parameters are registered as complex.
- Adds custom `AdamW` implementation (#420)
- Ensures `SpectralConv` parameters are registered with complex values (#401)
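A tiny numeric sketch shows why normalizing the real and imaginary parts separately is wrong. This is a one-step caricature of Adam's second-moment normalization (bias correction, betas, and epsilon omitted), not the library's AdamW code:

```python
# Caricature of Adam's per-element normalization for a complex gradient.
# Beta terms, bias correction, and epsilon are omitted for clarity.
import math

g = complex(3.0, 4.0)       # complex gradient, |g| = 5

# Viewed as two independent real entries, each part is divided by the
# square root of its own squared value, i.e. by its own magnitude:
real_view_step = complex(g.real / math.sqrt(g.real ** 2),
                         g.imag / math.sqrt(g.imag ** 2))

# Correct complex treatment divides by sqrt(|g|^2) = |g|, preserving
# the gradient's direction in the complex plane:
complex_step = g / math.sqrt(abs(g) ** 2)

print(real_view_step)   # (1+1j): the direction of g is distorted
print(complex_step)     # (0.6+0.8j): same direction as g
```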
Misc. bug fixes
0.3.0
Summary
We are excited to release this new version of the neuraloperator library! It brings many improvements, including new architectures (SFNO, GNO, GINO), many improvements to existing ones, out-of-the-box super resolution, super-evaluation and incremental training.
All models can now be easily saved and loaded and we provide a lightweight trainer compatible with all our neuraloperators. Head to the examples for some sample code, and to the API for a full documentation!
What's Changed
- Refactor MLP config by @JeanKossaifi in #143
- Adds super-resolution to FNO by @JeanKossaifi in #147
- Adds ADA_IN norm by @JeanKossaifi in #148
- Adds SFNO by @JeanKossaifi in #150
- Fix missing parameters output_scaling_factor by @sleepyeye in #159
- Finodev by @ashiq24 in #152
- quick_avoid by @ashiq24 in #173
- Add low-precision to TFNO by @crwhite14 in #172
- fix SFNO example by @crwhite14 in #177
- fix comma in readme file by @gegewen in #179
- Liftproj mod to mlp by @btolooshams in #182
- MLP additional statement to check for scenario where n_layers=1 by @btolooshams in #183
- marge the update on fno mlp help description by @btolooshams in #191
- Update guide on Fourier neural operator by @devzhk in #156
- Spectrum analysis of datasets. by @Robertboy18 in #193
- adding flag option to only pad the last dim by @btolooshams in #185
- Minor error in L#602 in fno.py by @ImanLiao in #194
- gino by @kovachki in #195
- Fix mlp nonlinearity by @ziqi-ma in #197
- Reformat `layers/` directory with `black` by @m4e7 in #199
- Use lifting channels in FNO.lifting if it is passed by @dhpitt in #196
- removed dead lines in FNOGNO by @dhpitt in #203
- Reformat `datasets/` directory with `black` by @m4e7 in #205
- docstring for FNOGNO by @dhpitt in #202
- Fix Python 3.6 f-string compatibility and condense documentation for FNO classes by @m4e7 in #209
- Split `preactivation` from `FNOBlock.forward()` by @m4e7 in #214
- Padding correction by @ashiq24 in #218
- general trainer class for GINO and NO by @dhpitt in #215
- Padding correction by @ashiq24 in #220
- fix example Trainer API calls by @dhpitt in #219
- Refactor rescaling in skips as transform in the Spectral Conv by @JeanKossaifi in #217
- Further simplification + UNO fix by @JeanKossaifi in #221
- Sht correction by @ashiq24 in #222
- Fix loss signatures to build doc by @dhpitt in #224
- fix small bug in the WandB logger callback by @dhpitt in #232
- Adding 4D prediction only, no nested fno by @gegewen in #225
- Revert 4D_FNO changes until they are properly tested by @dhpitt in #235
- Remove torch_scatter and torch_cluster from CI pipeline's dependencies by @dhpitt in #233
- Model checkpointing by @dhpitt in #234
- Updates to documentation and callback docstrings by @dhpitt in #237
- index dropout moduleList by @dhpitt in #239
- fix syntax error and add index.rst by @dhpitt in #240
- Fix typo in checkpoint init by @rybchuk in #241
- Bug fixes and unit testing for Callbacks by @dhpitt in #242
- Refactors SpectralConv for simpler FNO by @JeanKossaifi in #244
- BaseModel: adds checkpointing, versioning, safeguards by @JeanKossaifi in #257
- Add Burger's dataset and PINO by @crwhite14 in #256
- Enable transform wrappers by @JeanKossaifi in #254
- Update tensor_dataset.py by @slanthaler in #260
- Update to the checkpoint callback and test by @dhpitt in #258
- fix domain_padding to accept list (e.g., [0,0,1]) in addition to sc… by @btolooshams in #263
- Move to DataProcessor API by @dhpitt in #262
- Fix navier stokes preprocessor bug by @dhpitt in #265
- Fixes to make `DataProcessor` code doc build by @dhpitt in #266
- Add AutoML via Optuna by @crwhite14 in #243
- fixing the horizontal_skips_map construction, it was not going through by @btolooshams in #267
- Updates to saving and loading models by @dhpitt in #268
New Contributors
- @sleepyeye made their first contribution in #159
- @crwhite14 made their first contribution in #172
- @gegewen made their first contribution in #179
- @btolooshams made their first contribution in #182
- @devzhk made their first contribution in #156
- @Robertboy18 made their first contribution in #193
- @ImanLiao made their first contribution in #194
- @kovachki made their first contribution in #195
- @ziqi-ma made their first contribution in #197
- @m4e7 made their first contribution in #199
- @dhpitt made their first contribution in #196
- @rybchuk made their first contribution in #241
- @slanthaler made their first contribution in #260
Full Changelog: 0.2.0...0.3.0