Thresholds for anomaly detection are often arbitrary and lack theoretical guarantees. nonconform wraps anomaly detectors (from PyOD, scikit-learn, or custom implementations) and transforms their raw anomaly scores into statistically valid p-values. It applies principles from conformal prediction to one-class anomaly detection, enabling controlled false discovery rate (FDR) workflows with explicit statistical guarantees.
Note: The methods in nonconform assume that training and test data are exchangeable. The package is therefore not suited for data with spatial or temporal autocorrelation unless such dependencies are explicitly handled in preprocessing or model design.
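For intuition, the core construction behind conformal anomaly detection can be sketched in a few lines: a test point's p-value is the (rank-corrected) fraction of calibration scores at least as anomalous as its own. This is a conceptual illustration of the standard split-conformal formula, not nonconform's internal implementation:

```python
import numpy as np

def conformal_p_value(calib_scores: np.ndarray, test_score: float) -> float:
    """Rank-based conformal p-value: the fraction of calibration scores at
    least as extreme as the test score, with a +1 correction for validity."""
    n = len(calib_scores)
    return (1 + np.sum(calib_scores >= test_score)) / (n + 1)

rng = np.random.default_rng(0)
calib = rng.normal(size=999)  # anomaly scores of held-out normal calibration data

print(conformal_p_value(calib, 5.0))   # extreme score -> small p-value
print(conformal_p_value(calib, -5.0))  # unremarkable score -> p-value near 1
```

Under exchangeability, this p-value is (super-)uniform for normal test points, which is what makes downstream FDR control valid.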
| Need | nonconform Functionality | Start Here |
|---|---|---|
| Principled anomaly decisions | `ConformalDetector.select(...)` combines conformal p-values with FDR-controlled selection | FDR Control |
| Flexible calibration strategies | `Split`, `CrossValidation`, and `JackknifeBootstrap` for different data/compute tradeoffs | Conformalization Strategies |
| Covariate-shift aware workflows | Weighted conformal prediction with density-ratio estimators and weighted FDR control (requires sufficient calibration/test support overlap) | Weighted Conformal |
| Rich p-value estimation | Empirical, probabilistic KDE, and conditional calibration estimators | Common Workflows |
| Sequential monitoring | Exchangeability martingales (`PowerMartingale`, `SimpleMixtureMartingale`, `SimpleJumperMartingale`) | Exchangeability Martingales |
| Custom detector integration | Support for any detector implementing the `AnomalyDetector` protocol | Detector Compatibility |
Installation via PyPI:

```shell
pip install nonconform
```

Note: The example below uses an external dataset API. Install it with `pip install oddball` or `pip install "nonconform[data]"`.
Example: Isolation Forest on the Shuttle benchmark. This trains a base detector, calibrates conformal scores, then applies FDR-controlled selection through `select(...)`. Raw p-values remain available via `detector.last_result.p_values`.
```python
from pyod.models.iforest import IForest

from nonconform import ConformalDetector, Split
from nonconform.metrics import false_discovery_rate, statistical_power
from oddball import Dataset, load

x_train, x_test, y_test = load(Dataset.SHUTTLE, setup=True, seed=42)

detector = ConformalDetector(
    detector=IForest(),
    strategy=Split(n_calib=1_000),
    seed=42,
)
detector.fit(x_train)
decisions = detector.select(x_test, alpha=0.2)

print(f"Empirical FDR: {false_discovery_rate(y_test, decisions)}")
print(f"Statistical Power: {statistical_power(y_test, decisions)}")
```

Output:

```
Empirical FDR: 0.18
Statistical Power: 0.99
```
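FDR-controlled selection over a batch of p-values is classically done with the Benjamini-Hochberg procedure. The following is a minimal self-contained sketch of that procedure for intuition; it is not claimed to be nonconform's exact `select(...)` implementation:

```python
import numpy as np

def benjamini_hochberg(p_values: np.ndarray, alpha: float) -> np.ndarray:
    """Return a boolean mask of discoveries under BH at level alpha."""
    m = len(p_values)
    order = np.argsort(p_values)
    sorted_p = p_values[order]
    # Find the largest k with p_(k) <= (k / m) * alpha; reject the k smallest.
    below = sorted_p <= (np.arange(1, m + 1) / m) * alpha
    mask = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0]) + 1
        mask[order[:k]] = True
    return mask

p = np.array([0.001, 0.008, 0.04, 0.2, 0.9])
print(benjamini_hochberg(p, alpha=0.1))  # flags the three smallest p-values
```

The adaptive threshold is what lets BH reject more hypotheses than a naive per-test cutoff while still controlling the expected proportion of false discoveries.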
nonconform includes advanced workflows for practitioners who need more power or robustness:

- Probabilistic Conformal Estimation (`Probabilistic`): uses KDE-based modeling of calibration scores to produce continuous p-values instead of purely empirical stepwise values.
- Weighted Conformal Prediction (`weight_estimator=...`): reweights calibration evidence for covariate-shift settings where test and calibration distributions differ, assuming enough support overlap between calibration and test features.
- Exchangeability Martingales (`nonconform.martingales`): sequential evidence monitoring over conformal p-value streams.
Probabilistic Conformal Setup:

```python
from pyod.models.iforest import IForest

from nonconform import ConformalDetector, Probabilistic, Split

detector = ConformalDetector(
    detector=IForest(),
    strategy=Split(n_calib=1_000),
    estimation=Probabilistic(n_trials=10),
    seed=42,
)
```

Weighted Conformal Setup:
```python
from pyod.models.iforest import IForest

from nonconform import ConformalDetector, Split, logistic_weight_estimator

detector = ConformalDetector(
    detector=IForest(),
    strategy=Split(n_calib=1_000),
    weight_estimator=logistic_weight_estimator(),
    seed=42,
)
```

Note: In weighted mode, `ConformalDetector.select(...)` dispatches weighted FDR control automatically.
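Conceptually, weighted conformal prediction replaces each calibration point's unit vote with its estimated density ratio w(x) = p_test(x) / p_calib(x). A self-contained sketch of the standard weighted p-value (the density-ratio estimation itself is what `weight_estimator` handles inside nonconform):

```python
import numpy as np

def weighted_conformal_p_value(
    calib_scores: np.ndarray,   # anomaly scores on calibration data
    calib_weights: np.ndarray,  # estimated density ratios w(x_i), one per calibration point
    test_score: float,
    test_weight: float,         # estimated density ratio w(x) for the test point
) -> float:
    """Weighted analogue of the rank-based conformal p-value."""
    numerator = np.sum(calib_weights * (calib_scores >= test_score)) + test_weight
    return numerator / (np.sum(calib_weights) + test_weight)

calib_scores = np.array([0.1, 0.4, 0.5, 0.9])
uniform = np.ones(4)
# With unit weights this reduces to the ordinary conformal p-value.
print(weighted_conformal_p_value(calib_scores, uniform, 0.5, 1.0))
```

When calibration and test distributions agree, all weights are 1 and the ordinary p-value is recovered; under shift, points resembling the test distribution count for more.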
Martingale Setup for Sequential Monitoring:

```python
from nonconform.martingales import AlarmConfig, PowerMartingale

martingale = PowerMartingale(
    epsilon=0.5,
    alarm_config=AlarmConfig(ville_threshold=100.0),
)

state = martingale.update(p_t)                   # single p-value
states = martingale.update_many(p_values_chunk)  # iterable of p-values
```

Note: `update(...)` already validates and normalizes numeric scalar p-values, so an explicit `float(...)` cast is optional. Martingale alarms monitor evidence over time; they do not replace cross-hypothesis FDR control.
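The core update behind a power martingale is a single multiplication: each incoming p-value contributes a betting factor epsilon * p^(epsilon - 1), whose expectation is 1 under exchangeability, so by Ville's inequality a large running product is evidence against exchangeability. A stripped-down sketch of that update loop (nonconform's classes add alarm state, validation, and bookkeeping on top):

```python
import numpy as np

def power_martingale(p_values, epsilon=0.5, threshold=100.0):
    """Track M_t = prod_t epsilon * p_t**(epsilon - 1); alarm once M_t >= threshold."""
    m = 1.0
    for t, p in enumerate(p_values, start=1):
        m *= epsilon * p ** (epsilon - 1)
        if m >= threshold:
            return t, m  # alarm time and martingale value at the alarm
    return None, m       # no alarm within the stream

rng = np.random.default_rng(7)
uniform_ps = rng.uniform(size=200)           # exchangeable stream: stays small with high probability
small_ps = rng.uniform(0.0, 0.05, size=200)  # shifted stream: p-values concentrate near 0

print(power_martingale(uniform_ps))
print(power_martingale(small_ps))
```

With `threshold=100.0`, Ville's inequality bounds the false-alarm probability on a truly exchangeable stream by 1/100.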
While nonconform is primarily designed for static (single-batch) workflows, optional online-fdr integration supports streaming FDR procedures.
Any detector implementing the `AnomalyDetector` protocol works with nonconform:

```python
from typing import Self

import numpy as np

class MyDetector:
    def fit(self, X, y=None) -> Self: ...
    def decision_function(self, X) -> np.ndarray: ...  # higher = more anomalous
    def get_params(self, deep=True) -> dict: ...
    def set_params(self, **params) -> Self: ...
```

For custom detectors, either set `score_polarity` explicitly (`"higher_is_anomalous"` in most cases), or omit it to use the pre-release default behavior. Use `score_polarity="auto"` only when you want strict detector-family validation.
See Detector Compatibility for details and examples.
For additional features, you might need optional dependencies:

- `pip install nonconform[pyod]` - Includes the PyOD anomaly detection library
- `pip install nonconform[data]` - Includes oddball for loading benchmark datasets
- `pip install nonconform[fdr]` - Includes advanced FDR control methods (online-fdr)
- `pip install nonconform[probabilistic]` - Includes KDEpy and Optuna for probabilistic estimation/tuning
- `pip install nonconform[all]` - Includes all optional dependencies
Please refer to the pyproject.toml for details.
Bug reporting: https://github.com/OliverHennhoefer/nonconform/issues

