
Add Hyperparameter Optimization (HPO) scripts using HyperNOs and Ray Tune #717

Open

MaxGhi8 wants to merge 1 commit into neuraloperator:main from MaxGhi8:main

Conversation


MaxGhi8 commented Mar 22, 2026

This PR adds three new scripts to the scripts/ directory for performing hyperparameter optimization on SFNO, TFNO, and UNO models using HyperNOs and Ray Tune.

Changes

  • New HPO Scripts:
    • scripts/ray_tune_sfno_swe.py: HPO for SFNO on Spherical Shallow Water Equations.
    • scripts/ray_tune_tfno_darcy.py: HPO for TFNO on Darcy Flow.
    • scripts/ray_tune_uno_darcy.py: HPO for UNO on Darcy Flow.
  • Documentation:
    • Updated README.rst to include a row on HPO and reference the new scripts.

The new scripts provide a standardized framework for optimizing hyperparameters in neural operators. They utilize the HyperNOs library for model and dataset loading, while Ray Tune manages the
search space and execution of trials. This addition facilitates more robust experimentation and model tuning for users of the library.

Users will need to install `hypernos` and `ray[tune]` to use these scripts.
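In the scripts themselves, Ray Tune samples configurations from the search space and schedules the trials. As a dependency-free illustration of the same loop, here is a minimal random-search sketch over a search space in the same style as the scripts'; the value ranges and the `objective` callback are hypothetical stand-ins for actual training plus validation:

```python
import math
import random

# Search space in the same spirit as the scripts' (hypothetical values):
# a (low, high) tuple is sampled log-uniformly, a list is a categorical choice.
search_space = {
    "learning_rate": (1e-4, 1e-2),
    "hidden_channels": [16, 32, 64],
}

def sample_config(space, rng):
    """Draw one trial configuration from the search space."""
    config = {}
    for key, spec in space.items():
        if isinstance(spec, tuple):  # (low, high): log-uniform
            low, high = spec
            config[key] = math.exp(rng.uniform(math.log(low), math.log(high)))
        else:  # list: uniform categorical choice
            config[key] = rng.choice(spec)
    return config

def run_random_search(space, objective, n_trials=8, seed=0):
    """Plain random search; Ray Tune replaces this loop with
    distributed execution, schedulers, and early stopping."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        config = sample_config(space, rng)
        score = objective(config)  # in the real scripts: train, return val loss
        if best is None or score < best[0]:
            best = (score, config)
    return best
```

For example, `run_random_search(search_space, lambda c: abs(c["learning_rate"] - 1e-3))` returns the sampled configuration whose learning rate lands closest to 1e-3.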

Copilot AI review requested due to automatic review settings March 22, 2026 13:58

Copilot AI left a comment


Pull request overview

This PR adds Ray Tune-based hyperparameter optimization (HPO) entry-point scripts for several NeuralOperator models, using HyperNOs for dataset/model integration, and updates the README to point users to these scripts.

Changes:

  • Added Ray Tune + HyperNOs HPO scripts for TFNO (Darcy), UNO (Darcy), and SFNO (Spherical SWE).
  • Updated README to mention availability of HPO scripts in scripts/.

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 4 comments.

File descriptions:

  • scripts/ray_tune_sfno_swe.py: Adds an HPO driver for SFNO on spherical SWE using Ray Tune + HyperNOs.
  • scripts/ray_tune_tfno_darcy.py: Adds an HPO driver for TFNO on Darcy Flow using Ray Tune + HyperNOs.
  • scripts/ray_tune_uno_darcy.py: Adds an HPO driver for UNO on Darcy Flow using Ray Tune + HyperNOs.
  • README.rst: Documents the new HPO scripts.


uno_n_modes=config["uno_n_modes"],
uno_scalings=[[1.0, 1.0], [0.5, 0.5], [1.0, 1.0], [2.0, 2.0]],
n_layers=4,
horizontal_skips_map={4: 0, 3: 1}

Copilot AI Mar 22, 2026


`horizontal_skips_map` is set to `{4: 0, 3: 1}` while `n_layers=4` (valid layer indices are 0–3). The `4: 0` entry is unused and the skip pattern doesn't match the default skip-map generation for 4 layers (`{3: 0, 2: 1}` in UNO). Consider removing `horizontal_skips_map` to use the model's default, or update it to the correct indices for `n_layers=4`.

Suggested change (remove this line):

- horizontal_skips_map={4: 0, 3: 1}

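The index arithmetic in this comment can be checked mechanically before any trial is launched. A minimal sketch of such a check (the `validate_skips_map` helper is hypothetical, not part of neuralop):

```python
def validate_skips_map(skips_map, n_layers):
    """Return True iff every target/source index in a horizontal skip map
    refers to an existing layer (valid indices are 0 .. n_layers - 1)."""
    for target, source in skips_map.items():
        if not (0 <= target < n_layers and 0 <= source < n_layers):
            return False
    return True
```

Under this check, `{4: 0, 3: 1}` is rejected for `n_layers=4` (target index 4 is out of range), while the default-style `{3: 0, 2: 1}` passes.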
Comment thread: README.rst
To use W&B logging features, simply create a file in ``neuraloperator/config``
called ``wandb_api_key.txt`` and paste your W&B API key there.

Hyperparameter optimization (HPO) scripts using `HyperNOs` and `Ray Tune` are also available in the ``scripts/`` directory (e.g. ``scripts/ray_tune_tfno_darcy.py``).

Copilot AI Mar 22, 2026


This README line uses single backticks around `HyperNOs` / `Ray Tune`, but the rest of the README uses double backticks for inline literals. In reStructuredText, single backticks are interpreted text and may not render as intended on GitHub/Sphinx. Use ``HyperNOs`` / ``Ray Tune`` (or proper links) for consistent rendering.

Suggested change:

- Hyperparameter optimization (HPO) scripts using `HyperNOs` and `Ray Tune` are also available in the ``scripts/`` directory (e.g. ``scripts/ray_tune_tfno_darcy.py``).
+ Hyperparameter optimization (HPO) scripts using ``HyperNOs`` and ``Ray Tune`` are also available in the ``scripts/`` directory (e.g. ``scripts/ray_tune_tfno_darcy.py``).

"learning_rate": tune.loguniform(1e-4, 1e-2),
"hidden_channels": tune.choice([16, 32, 64]),
"uno_out_channels": tune.choice([[32, 64, 64, 32], [16, 32, 32, 16]]),
"uno_n_modes": tune.choice([[16, 16], [8, 8]]),

Copilot AI Mar 22, 2026


The `uno_n_modes` search space is defined as a list of length 2, but `neuralop.models.UNO` asserts that `len(uno_n_modes) == n_layers` (here `n_layers=4`). As written, trials will fail at model construction. Define `uno_n_modes` as a list of length 4 (one entry per layer), or reduce `n_layers` accordingly.

Suggested change:

- "uno_n_modes": tune.choice([[16, 16], [8, 8]]),
+ "uno_n_modes": tune.choice([
+     [[16, 16], [16, 16], [16, 16], [16, 16]],
+     [[8, 8], [8, 8], [8, 8], [8, 8]],
+ ]),

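One way to satisfy the per-layer length requirement without enumerating the full nested lists by hand is to expand a single mode pair once per layer. A hypothetical helper (not part of the PR) sketching this:

```python
def expand_per_layer(n_modes, n_layers):
    """Repeat a single (modes_h, modes_w) pair once per layer, so the
    resulting list satisfies len(uno_n_modes) == n_layers as UNO requires."""
    return [list(n_modes) for _ in range(n_layers)]
```

The search-space entry could then be written as `"uno_n_modes": tune.choice([expand_per_layer(m, 4) for m in ([16, 16], [8, 8])])`, which keeps the sweep definition compact while producing per-layer lists.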
n_layers=config["n_layers"],
in_channels=config["input_dim"],
out_channels=config["out_dim"],
spectral_channels=config["modes"],

Copilot AI Mar 22, 2026


SFNO (a `partialclass` of FNO) does not accept a `spectral_channels` argument (the `FNO.__init__` signature has no such parameter). This will raise a `TypeError` when building the model. Remove `spectral_channels=...` or replace it with a supported FNO/SFNO argument (e.g., adjust `hidden_channels` or `n_modes`).

Suggested change (remove this line):

- spectral_channels=config["modes"],

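A defensive pattern for this class of error is to filter the trial config against the constructor's actual signature before instantiating. This is a generic sketch of that pattern, not something the PR's scripts do:

```python
import inspect

def supported_kwargs(callable_obj, config):
    """Keep only the config entries that match named parameters of
    callable_obj, so unknown keys (like spectral_channels here) are
    dropped instead of raising TypeError at model construction."""
    params = inspect.signature(callable_obj).parameters
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return dict(config)  # a **kwargs parameter accepts everything
    return {k: v for k, v in config.items() if k in params}
```

Silently dropping keys can mask genuine typos in a sweep definition, so logging which keys were discarded is usually preferable to filtering quietly.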