- Adding support for multiple targets in the TimeSeriesDataSet (#199) and amended tutorials.
- Temporal fusion transformer and DeepAR with support for multiple targets (#199)
- Check for non-finite values in TimeSeriesDataSet and better validate scaler argument (#220)
- TimeSeriesDataSet's `y` returned by the dataloader is a tuple of (target(s), weight) - potentially breaking for custom model or metric implementations. Most implementations will not be affected, as the hooks in BaseModel and MultiHorizonMetric were modified accordingly.
- Fixed autocorrelation for pytorch 1.7 (#220)
- Ensure reproducibility by replacing python `set()` with `dict.fromkeys()` (#221)
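For context on the `set()` to `dict.fromkeys()` replacement: both deduplicate, but `dict.fromkeys()` preserves first-seen insertion order (guaranteed since Python 3.7), while `set()` iteration order can vary between interpreter runs due to hash randomization. A minimal illustration:

```python
# set() iteration order can vary between runs (hash randomization of strings),
# while dict.fromkeys() keeps the first-seen order deterministically.
columns = ["price", "volume", "price", "holiday", "volume"]

deduplicated = list(dict.fromkeys(columns))
print(deduplicated)  # ['price', 'volume', 'holiday']
```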
- jdb78
- JustinNeumann
- Tutorial on how to implement a new architecture covering basic and advanced use cases (#188)
- Additional and improved documentation - particularly of implementation details (#188)
- Moved multiple private methods to public methods (particularly logging) (#188)
- Moved `get_mask` method from BaseModel into utils module (#188)
- Using the `self.training` attribute instead of a label to communicate whether the model is training or validating (#188)
- Using `sample((n,))` of pytorch distributions instead of the deprecated `sample_n(n)` method (#188)
- Beta distribution loss for probabilistic models such as DeepAR (#160)
- BREAKING: Simplifying how to apply transforms (such as logit or log) before and after applying the encoder. Some transformations are included by default, but a tuple of a forward and reverse transform function can be passed for arbitrary transformations. This requires using a `transformation` keyword in target normalizers instead of, e.g., `log_scale` (#185)
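As a sketch of the idea behind the (forward, reverse) tuple - not the library's exact call signature - a custom transformation is simply a pair of callables, with the forward transform applied before encoding and the reverse transform applied after decoding:

```python
import math

# Hypothetical (forward, reverse) pair for an arbitrary transformation;
# a target normalizer would receive such a tuple via its `transformation` keyword.
log_transform = (math.log, math.exp)

forward, reverse = log_transform
encoded = forward(100.0)   # applied before encoding the target
decoded = reverse(encoded) # applied after decoding predictions
print(round(decoded, 6))   # 100.0
```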
- Incorrect target position if `len(static_reals) > 0` leading to leakage (#184)
- Fixing predicting completely unseen series (#172)
- jdb78
- JakeForsey
- Using GRU cells with DeepAR (#153)
- GPU fix for variable sequence length (#169)
- Fix incorrect syntax for warning when removing series (#167)
- Fix issue when using unknown group ids in validation or test dataset (#172)
- Run non-failing CI on PRs from forks (#166, #156)
- Improved model selection guidance and explanations on how TimeSeriesDataSet works (#148)
- Clarify how to use with conda (#168)
- jdb78
- JakeForsey
- DeepAR by Amazon (#115)
- First autoregressive model in PyTorch Forecasting
- Distribution loss: normal, negative binomial and log-normal distributions
- Currently missing: handling lag variables and tutorial (planned for 0.6.1)
- Improved documentation on TimeSeriesDataSet and how to implement a new network (#145)
- Internals of encoders and how they store center and scale (#115)
- Update to PyTorch 1.7 and PyTorch Lightning 1.0.5 which came with breaking changes for CUDA handling and with optimizers (PyTorch Forecasting Ranger version) (#143, #137, #115)
- jdb78
- JakeForsey
- Fix issue where hyperparameter verbosity controlled only part of output (#118)
- Fix occasional error when `.get_parameters()` from `TimeSeriesDataSet` failed (#117)
- Remove redundant double pass through LSTM for temporal fusion transformer (#125)
- Prevent installation of pytorch-lightning 1.0.4 as it breaks the code (#127)
- Prevent modification of model defaults in-place (#112)
- Hyperparameter tuning with Optuna added to tutorial
- Control over verbosity of hyperparameter tuning
- Interpretation error when different batches had different maximum decoder lengths
- Fix some typos (no changes to user API)
This release has only one purpose: Allow usage of PyTorch Lightning 1.0 - all tests have passed.
- Additional checks for `TimeSeriesDataSet` inputs - now flagging if series are lost due to high `min_encoder_length` and ensuring parameters are integers
- Enable classification - simply change the target in the `TimeSeriesDataSet` to a non-float variable, use the `CrossEntropy` metric to optimize and output as many classes as you want to predict
- Ensured PyTorch Lightning 0.10 compatibility
- Using `LearningRateMonitor` instead of `LearningRateLogger`
- Use `EarlyStopping` callback in trainer `callbacks` instead of `early_stopping` argument
- Update metric system `update()` and `compute()` methods
- Use `trainer.tuner.lr_find()` instead of `trainer.lr_find()` in tutorials and examples
- Update poetry to 1.1.0
- Removed attention to current datapoint in TFT decoder to generalise better over various sequence lengths
- Allow resuming optuna hyperparameter tuning study
- Fixed inconsistent naming and calculation of `encoder_length` in TimeSeriesDataSet when added as feature
- jdb78
- Backcast loss for N-BEATS network for better regularisation
- logging_metrics as explicit arguments to models
- MASE (Mean absolute scaled error) metric for training and reporting
- Metrics can be composed, e.g. `0.3 * metric1 + 0.7 * metric2`
- Aggregation metric that is computed on mean prediction over all samples to reduce mean-bias
- Increased speed of parsing data with missing datapoints. About 2s for 1M data points. If `numba` is installed, 0.2s for 1M data points
- Time-synchronize samples in batches: ensure that all samples in each batch have the same time index in decoder
- Improved subsequence detection in TimeSeriesDataSet ensures that there exists a subsequence starting and ending on each point in time.
- Fix `min_encoder_length = 0` being ignored and processed as `min_encoder_length = max_encoder_length`
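The metric composition mentioned above (`0.3 * metric1 + 0.7 * metric2`) can be mimicked with operator overloading. A minimal, library-independent sketch - class and function names here are illustrative, not pytorch-forecasting's actual API:

```python
class Metric:
    """Toy metric supporting weighted composition via `w1 * m1 + w2 * m2`."""

    def __init__(self, fn):
        self.fn = fn

    def __call__(self, y_pred, y_true):
        return self.fn(y_pred, y_true)

    def __rmul__(self, weight):
        # enables `0.3 * metric`
        return Metric(lambda p, t: weight * self.fn(p, t))

    def __add__(self, other):
        # enables `metric1 + metric2`
        return Metric(lambda p, t: self.fn(p, t) + other(p, t))


mae = Metric(lambda p, t: sum(abs(a - b) for a, b in zip(p, t)) / len(t))
mse = Metric(lambda p, t: sum((a - b) ** 2 for a, b in zip(p, t)) / len(t))

composite = 0.3 * mae + 0.7 * mse
print(composite([1.0, 2.0], [1.0, 4.0]))  # 0.3 * 1.0 + 0.7 * 2.0 = 1.7
```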
- jdb78
- dehoyosb
- More tests driving coverage to ~90%
- Performance tweaks for temporal fusion transformer
- Reformatting with isort
- Improve documentation - particularly expand on hyperparameter tuning
- Fix PoissonLoss quantiles calculation
- Fix N-Beats visualisations
- Calculating partial dependency for a variable
- Improved documentation - in particular added FAQ section and improved tutorial
- Data for examples and tutorials can now be downloaded. Cloning the repo is not a requirement anymore
- Added Ranger Optimizer from `pytorch_ranger` package and fixed its warnings (part of preparations for conda package release)
- Use GPU for tests if available as part of preparation for GPU tests in CI
- BREAKING: Fix typo `add_decoder_length` to `add_encoder_length` in TimeSeriesDataSet
- Fixing plotting predictions vs actuals by slicing variables
- Fix bug where predictions were not correctly logged in case of `decoder_length == 1`.
- Add favicon to docs page
- Update build system requirements to be parsed correctly when installing with `pip install https://github.com/jdb78/pytorch-forecasting/`
- Add tests for MacOS
- Automatic releases
- Coverage reporting
This release improves robustness of the code.
Fixing bugs across the code, in particular:
- Ensuring that code works on GPUs
- Adding tests for models, dataset and normalisers
- Test using GitHub Actions (tests on GPU are still missing)
Extend documentation by improving docstrings and adding two tutorials.
- Basic tests for data and model (mostly integration tests)
- Automatic target normalization
- Improved visualization and logging of temporal fusion transformer
- Model bugfixes and performance improvements for temporal fusion transformer
- Metrics are reduced to calculating loss. Target transformations are done by a new target transformer