
Releases: neuraloperator/neuraloperator

Major release: 2.0.0

22 Oct 06:18
b307030


NeuralOperator v2.0.0 - Release Notes

We are excited to release this major new version of the library! It brings plenty of new features, models, and training strategies, quality-of-life improvements, and bug fixes. Notably, we worked on improving the documentation and making the library even easier to use for anyone wanting to train Neural Operators in PyTorch.

Features & Enhancements

Models

  • Codano model added (#497)
  • Tensor-GaLore added (#510)
  • Fourier Continuation (FC) layer added (#422) (#604) (#644)
  • Mollified GNO added (#546)

Training & Losses

  • Train recipes improved for GINO and FNO/GNO (#518)
  • HdivLoss implemented (#540)
  • Numerical stability improvements in relative losses (#554)
  • Fix powers and roots in data losses (#665)
  • PINO reweighting schemes added (#618)
  • Fourier Differentiation added (#604) (#643)
  • Finite Differences class added (#643)
  • Point cloud finite-difference added (#576)
  • Divergence-free spectral projection added (#648)
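The Fourier differentiation utilities above rest on the standard spectral-derivative identity: differentiation in physical space is multiplication by ik in Fourier space. A minimal NumPy sketch of the idea (an illustration of the concept, not the library's implementation):

```python
import numpy as np

# Spectral derivative on a periodic grid: d/dx corresponds to
# multiplication by i*k in Fourier space.
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
f = np.sin(x)

k = np.fft.fftfreq(n, d=1.0 / n) * 1j        # integer wavenumbers, times i
df = np.fft.ifft(k * np.fft.fft(f)).real     # spectral derivative of f

# For band-limited signals this is accurate to machine precision.
assert np.allclose(df, np.cos(x))
```

For smooth periodic functions the spectral derivative converges far faster than any finite-difference stencil, which is what makes it attractive inside Fourier-based operators.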

Data

  • Update configuration system to use zencfg and migrate config files from .yaml to .py (#592)
  • Interface and integration with The Well (#533) (#605)
  • Nonlinear Poisson dataset added (#546)
  • Add legacy load_mini_burgers_1d_time helper (#543)
  • Add a mini version of the car-cfd dataset to visualize in the docs (#581)

Refactoring & Cleanup

  • Rename LocalFNO → LocalNO (#526)
  • Remove SpectralConvNd layers (#511)
  • Make torch-harmonics fully optional (#535)
  • Remove zarr as required dependency for HPC compatibility (#547)
  • Remove test_pypi deployment (#566)
  • Remove one-sided padding and Fourier continuation (#630)
  • Enable "batch_norm" in FNO class (#610)
  • Deprecate central_diff (#651)
  • Reactivate decomposition and preactivation FNO options (#650)
  • Remove dimension-specific subclasses FNO{n}d and TFNO{n}d (#646)
  • Remove kwargs from model definitions (#646)
  • Reintroduce use_channel_mlp option (#553)
  • Support float values for channel ratios (#598)
  • Update DISCO conv for new filter basis functions (#520)
  • Full library cleaning (#668) (#673) (#674)

Bug Fixes

  • Fix slicing in train_gino_carcfd (#507)
  • Properly center FFT modes (#508)
  • Fix serialization of gelu in torch load (#559)
  • Fix kernel size for Morlet basis + add tests (#560)
  • Fix norm=None handling in FNOBlocks.set_embeddings (#575)
  • Fix legacy spectralconvnd removal to prevent build errors (#580)
  • Fix math formatting in solver examples (#582)
  • Fix proper calls to BatchNorm in FNO (#602)
  • Fix segmentcsr tests by setting use_scatter=False (#572)
  • Fix burgers_1dtime runner (#557)
  • Fix load BaseModel metadata with weights_only set to False (#530)
  • Fix wandb logging (#609)
  • ZenCFG Update and config fixes (#632)
  • Fix multi-dim indexing (#647)
  • fix: Correct L2-norm and power calculation in spectrum_2d (#661)
  • Wrap FNOBlock norms with ComplexValued (#666)
  • Set download=True for small Darcy example (#565)

Documentation & Examples

  • Clarify GINO batch dimension and embedding input shape (#512) (#524)
  • Add docs for spectrum_2d (#519)
  • Document TensorDataset and trainer batch format (#549)
  • Add documentation on cpu offloading (#635)
  • Add Resampling Example (#664)
  • Add Normalization Layer Example (#662)
  • Add Sinusoidal Embeddings Example (#672)
  • Update Theory Guide (#600) (#617) (#669)
  • Add Contributing Guide (#641)
  • Update User Guide (#670)
  • Add Developer's Guide (#531) (#606) (#671)
  • Update and clean example gallery (#522) (#528) (#667)
  • Docstrings cleaning (#668) (#673) (#674)
  • Update API reference (#677)

Miscellaneous Improvements

  • Make unsqueezing channel dim optional in PTDataset (#509)
  • Set domain_padding_mode symmetric by default (#525)
  • Parametrize num_workers in training scripts (#590)
  • Add explicit indexing type in torch.meshgrid(...) for posterity (#613)
  • Batch normalization support in FNOBlocks (#564)

Contributors

@dhpitt, @vduruiss, @JeanKossaifi, @ashiq24, @AleCorintis, @acotino-ignitioncomputing, @bizoffermark, @cwwangcal, @gluca99, @klockwood19, @kmario23, @MichalBravansky, @piano-miles, @radka-j, @rwhan, @StillJosh

Minor release 1.0.2

30 Dec 20:51
9ab8667


What's new

This is a minor release to fix an issue in the pypi package.


Full Changelog: 1.0.1...1.0.2

1.0.1: minor fixes and a link to the new white paper

20 Dec 00:55
60fe227


What's Changed

Minor update that fixes model saving/loading: previously it could cause the optimizer to not update the correct parameters.

  • (fix) Bump Darcy dataset version for proper shapes by @dhpitt in #490
  • (fix) Remove from_checkpoint in load_training_state to avoid overwriting model by @dhpitt in #491
  • Update citation for our white paper! by @dhpitt in #492
  • Save metadata in BaseModel.state_dict(), check version tag in BaseModel.load_state_dict() by @dhpitt in #493
  • remove ref tag from stable doc workflow by @dhpitt in #495

Full Changelog: 1.0.0...1.0.1

Major release: 1.0.0

17 Dec 15:07


Neuralop 1.0 release notes

We are excited to release version 1.0 of neuraloperator. It is the result of a large group effort, a year in the making, and introduces significant updates to NeuralOperator, enhancing usability, extending capabilities, and optimizing performance for neural operator models. Key improvements span models, layers, training, data handling, and documentation.

It introduces many new features and improvements, including:

  • New training framework: refactored Trainer class, GPU-accelerated DataProcessor, and streamlined training and evaluation workflows
  • Cutting-edge architectures (GINO, UQNO, LocalFNO)
  • New operator blocks, including transformer-based and DISCO convolutions
  • Built-in datasets: classic PDE datasets (Darcy flow, Navier-Stokes) plus a new Car-CFD dataset for geometric simulations
  • Smarter algorithms: incremental spectral learning and mixed-precision training for efficiency
  • Improved documentation: updated Quickstart Guide, new examples, and 80%+ test coverage

In addition, a number of optimizations, including fixes for numerical stability, training state handling, and improved batch processing, make this release smoother and more efficient for users working with large-scale models.

New training framework

This release introduces a fully refactored Trainer class for easier training. A new DataProcessor module handles data processing directly on the GPU, allowing more flexibility for use cases such as auto-regressive training or inference. The Callbacks mechanism has also been removed in favor of a cleaner, more modular, and more extensible Trainer, improving clarity and ease of use.

  • You can now evaluate before training your model:
>>> model, optimizer, scheduler, _ = load_training_state(...)

>>> trainer = Trainer(model, n_epochs, ... device='cuda', ...) 
>>> trainer.evaluate({'h1': H1Loss()}, test_loader)
{'h1': 0.0252}

  • Resuming training states is also simplified, and compatible with multi-device setups:
>>> trainer = Trainer(... device=device, use_distributed=True)
>>> trainer.train(..., resume_from_dir="./checkpoints", ...)
Trainer resuming from epoch 5...

The Trainer accepts a DataProcessor module to streamline data pre- and post-processing. Explicit train() and eval() modes were also added to the DataProcessor, making the behavior during training and evaluation clearer and more explicit.
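To make the train()/eval() and pre-/post-processing pattern concrete, here is a hypothetical, minimal stand-in written with NumPy. All names here (ToyDataProcessor, its methods' exact signatures) are illustrative assumptions, not the library's actual DataProcessor API:

```python
import numpy as np

class ToyDataProcessor:
    """Illustrative stand-in for a DataProcessor: normalizes inputs before the
    model and un-normalizes outputs after it, with explicit train/eval modes."""

    def __init__(self, mean, std):
        self.mean, self.std = mean, std
        self.training = True

    def train(self):
        self.training = True

    def eval(self):
        self.training = False

    def preprocess(self, sample):
        # Normalize the input field (targets are left as-is in this sketch).
        sample = dict(sample)
        sample["x"] = (sample["x"] - self.mean) / self.std
        return sample

    def postprocess(self, out):
        # Map model outputs back to physical units.
        return out * self.std + self.mean

proc = ToyDataProcessor(mean=2.0, std=4.0)
sample = {"x": np.array([2.0, 6.0]), "y": np.array([0.0, 1.0])}
normed = proc.preprocess(sample)
restored = proc.postprocess(normed["x"])

assert np.allclose(normed["x"], [0.0, 1.0])
assert np.allclose(restored, sample["x"])
```

Keeping this logic in one module, rather than scattered across the training loop, is what lets the Trainer stay agnostic to dataset-specific normalization and device placement.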

Lastly, logging has been reintroduced into the Trainer, including better integration with Weights and Biases (WandB) for experiment tracking.

For an example of the new Trainer in action, check out our interactive training example!

New neural operator architectures

GINO

The GINO (Geometry-Informed Neural Operator) architecture brings the FNO's ability to learn complex dynamics to irregular geometries. The GINO maps between functions provided at arbitrary input coordinates. For maximum modularity, the GINO is composed of configurable GNOBlock and FNOBlock operator layers.

To see example usage, check out the training recipe in scripts/train_gino_carcfd.py.

UQNO

The UQNO implements Uncertainty Quantification for Operator Learning, thanks to @ziqi-ma and @dhpitt. For an example training recipe see scripts/train_uqno_darcy.py.

LocalFNO

The Local Fourier Neural Operator shares its forward pass and architecture with the standard FNO, with the key difference that its Fourier convolution layers are replaced with LocalFNOBlocks, which place differential kernel layers and local integral layers in parallel to its Fourier layers. Thanks to @mliuschi.

New neural operator blocks

Attention kernel integral

The AttentionKernelIntegral, by @zijieli-Jlee brings the multi-head attention mechanism to operator learning.

Codomain-Attention Blocks

CODABlocks, by @ashiq24, implements Codomain Attention Neural Operators, an operator block that extends transformer positional encoding, self-attention and normalization functionalities to function spaces.

Differential convolution

Implements the finite difference convolution required for Local Neural Operators. The DifferentialConv computes a finite difference convolution on a regular grid, which converges to a directional derivative as the grid is refined. Thanks to @mliuschi.
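The convergence claim above is easy to see in one dimension: a fixed central-difference stencil applied as a convolution approximates the derivative, and the error shrinks as the grid is refined. A NumPy sketch of that idea (an illustration of the principle, not the library's DifferentialConv):

```python
import numpy as np

def fd_conv_derivative(f, h):
    """Central-difference stencil [1, 0, -1] / (2h) applied as a 1D convolution."""
    kernel = np.array([1.0, 0.0, -1.0]) / (2.0 * h)
    # 'valid' mode: interior points only (no boundary handling in this sketch)
    return np.convolve(f, kernel, mode="valid")

errors = []
for n in (32, 64, 128):
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    df = fd_conv_derivative(np.sin(2 * np.pi * x), h)
    exact = 2 * np.pi * np.cos(2 * np.pi * x[1:-1])
    errors.append(np.max(np.abs(df - exact)))

# Second-order stencil: the error decreases as the grid is refined.
assert errors[0] > errors[1] > errors[2]
```

In the learned setting the stencil weights are trainable, but the same refinement argument is what makes the layer converge to a directional derivative.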

DISCO convolutions

DiscreteContinuousConv2d implements Discrete-Continuous Convolutions required for Local Neural Operators. Check our documentation for an interactive demo of DISCO convolutions in use! Thanks to @mliuschi and @bonevbs.

GNOBlock

This layer implements the Graph Neural Operator architecture, which combines a spatial neighbor search with a pointwise aggregation to create a kernel integral similar to a message-passing neural network.
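The underlying computation can be sketched naively in NumPy: for each query point, find neighbors within a radius and take a kernel-weighted mean of the function values there. All names below are hypothetical, and the real GNOBlock uses a learned kernel and an efficient neighbor search rather than this brute-force loop:

```python
import numpy as np

def naive_kernel_integral(queries, points, values, radius, kernel):
    """For each query x, average kernel(x, y) * f(y) over neighbors y within
    `radius`: a Monte-Carlo-style estimate of a kernel integral, as in a GNO."""
    out = np.zeros(len(queries))
    for i, x in enumerate(queries):
        dist = np.linalg.norm(points - x, axis=-1)
        nbrs = dist < radius                      # radius neighbor search
        if nbrs.any():
            out[i] = np.mean(kernel(x, points[nbrs]) * values[nbrs])
    return out

rng = np.random.default_rng(0)
points = rng.uniform(0, 1, size=(500, 2))        # irregular point cloud
values = np.ones(500)                            # f(y) = 1 everywhere
queries = np.array([[0.5, 0.5]])

# Sanity check: with a constant kernel and f = 1, the estimate is exactly 1.
out = naive_kernel_integral(queries, points, values, radius=0.2,
                            kernel=lambda x, ys: np.ones(len(ys)))
assert np.allclose(out, 1.0)
```

Because the query points need not coincide with the input points, this construction is what lets graph-based operator layers map between arbitrary discretizations.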

LocalFNOBlock

  • Adds a LocalFNOBlocks layer, which mirrors the FNOBlock architecture with differential and discrete-continuous convolutions in parallel to normal spectral convolutions (#468)

Updates to existing models/layers

FNO

We've simplified the FNO's documentation and initialization. Parameters are now ordered by importance, and init only requires values for the most crucial parameters.

  • Creating an FNO is now as simple as:
    >>> from neuralop.models import FNO
    >>> model = FNO(n_modes=(12,12), in_channels=1, out_channels=1, hidden_channels=64)
    >>> model
    FNO(
      (positional_embedding): GridEmbeddingND()
      (fno_blocks): FNOBlocks(
        (convs): SpectralConv(
          (weight): ModuleList(
            (0-3): 4 x DenseTensor(shape=torch.Size([64, 64, 12, 7]), rank=None)
          )
        )
        ... torch.nn.Module printout truncated ...

We've also added support for functions that take on complex values in the spatial domain (#400).

FNO-GNO

The FNOGNO combines FNOBlocks over a regular grid with a GNOBlock layer to map to arbitrary query points in the spatial domain. Updates include simplifications of parameters and documentation, and modularization to integrate the new GNOBlock layer.

New meta algorithms

Incremental learning of resolution and frequency

Implements Incremental Spectral Learning to learn the smallest possible model for a given problem.

  • Adds IncrementalDataProcessor and accompanying meta-alg (#274)
  • Meta-algorithm implemented as IncrementalFNOTrainer, a subclass of the Trainer

Thanks to @Robertboy18
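The core idea, starting from a few Fourier modes and growing the set during training, can be illustrated with a plain NumPy low-pass truncation. This is a conceptual sketch of mode truncation only, not the IncrementalFNOTrainer itself:

```python
import numpy as np

def truncate_modes(f, n_modes):
    """Keep only the `n_modes` lowest-frequency Fourier modes of a periodic signal."""
    fh = np.fft.fft(f)
    keep = np.abs(np.fft.fftfreq(len(f)) * len(f)) < n_modes
    return np.fft.ifft(np.where(keep, fh, 0.0)).real

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
f = np.sin(x) + 0.3 * np.sin(10 * x)   # low- and high-frequency content

# With few modes, only the low-frequency part survives; with enough modes,
# the full signal is recovered. Incremental training grows n_modes on a
# schedule, so early epochs fit only the coarse structure.
coarse = truncate_modes(f, n_modes=2)
fine = truncate_modes(f, n_modes=16)

assert np.allclose(coarse, np.sin(x), atol=1e-8)
assert np.allclose(fine, f, atol=1e-8)
```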

Out-of-the-box mixed-precision training

The Trainer's mixed_precision parameter automatically handles mixed-precision using the new torch.autocast framework.
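Under the hood this builds on PyTorch's torch.autocast context manager. A minimal standalone sketch of autocast itself, independent of the Trainer:

```python
import torch

x = torch.randn(8, 8)
w = torch.randn(8, 8)

# Inside autocast, eligible ops (such as matmul) run in a lower-precision
# dtype automatically; no manual casting of inputs is required.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = x @ w

assert y.dtype == torch.bfloat16
assert x.dtype == torch.float32   # the inputs themselves are unchanged
```

On CUDA devices the same pattern applies with device_type="cuda", typically paired with a gradient scaler for float16 training.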

A new data module

Overview

We've refactored all data-associated functionality into neuralop.data. Datasets are now located at neuralop.data.datasets and transforms are located at neuralop.data.transforms. We've also updated the interfaces of our example datasets, including the option to download data from the new NeuralOperator Zenodo Community.

The Darcy-flow, Burgers', and Navier-Stokes datasets now all derive from a PTDataset template class, which updates all interfaces, adds a download option, and connects to the Zenodo data source (#299).

Car-CFD

To showcase our geometric models, we've also added a dataset of simulations of airflow over 3d ShapeNet car models (#452), as well as examples for the FNOGNO and GINO models (scripts/train_fnogno_carcfd.py, scripts/train_gino_carcfd.py).

Testing and Documentation

In this release, we expanded test coverage to 80%. To ensure users can get the most out of the library with minimal effort, we also significantly improved the documentation, simplifying the docstrings and adding more user guides and new examples.

The Quickstart Guide has been fully updated to provide a smooth, hands-on experience for new users, with clear instructions on training, saving, and loading models.

Other changes

Correct complex optimization

PyTorch's default Adam implementation currently handles complex parameters by viewing them as a stacked 2-tensor (real_values, imaginary_values). This leads to an incorrect magnitude computation, which in turn affects the computation of momentum. Our custom AdamW implementation handles complex parameters correctly, and the SpectralConv's choice of parameter tensors is updated to ensure that parameters are registered as complex.

  • Adds custom AdamW implementation (#420)
  • Ensures SpectralConv parameters are registered with complex values (#401)
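The magnitude issue is easy to see with a single complex number: squaring the stacked (real, imag) components is not the same as the squared modulus |z|² that a complex-aware second-moment estimate needs. A pure-Python illustration:

```python
# Adam's second moment squares gradients elementwise. For a complex gradient z,
# the relevant quantity is |z|^2; viewing z as a (real, imag) pair instead
# yields two separate squares, distorting the effective per-parameter step size.
z = 3.0 + 4.0j

stacked_squares = (z.real ** 2, z.imag ** 2)   # what the 2-tensor view computes
true_magnitude_sq = abs(z) ** 2                # the squared complex modulus

assert stacked_squares == (9.0, 16.0)
assert true_magnitude_sq == 25.0
```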

Misc. bug fixes

  • Fixes numerical instability in UnitGaussianNormalizer (#330)
  • Fixes factorized tensor contraction in SpectralConv when separable=True (#389)
  • Fixes a bug in MGPatchingDataProcessor.to(device) that prevented proper device transfers (#449)
  • Corrects the number of training examp...

0.3.0

08 Dec 23:14


Summary

We are excited to release this new version of the neuraloperator library! It brings new architectures (SFNO, GNO, GINO), many improvements to existing ones, and out-of-the-box super-resolution, super-evaluation, and incremental training.

All models can now be easily saved and loaded, and we provide a lightweight trainer compatible with all our neural operators. Head to the examples for sample code, and to the API reference for full documentation!


Full Changelog: 0.2.0...0.3.0