All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog.
- Callbacks have been reorganized into folders.
- Update requirements.
- Update `gh-pages.yml`.
- Update `README.md`.
- Update `mixed_precision` parameter and usage.
- Progress bar will only be created under the main process.
- Logging is now handled by the `accelerate` library.
- Add `torch.distributions` example, with code taken from Romain Strock.
- Add `predict` method to `Trainer`. #38
- Add functions to freeze and unfreeze model. #43
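The freeze/unfreeze helpers themselves are not shown in this log; in PyTorch such functions typically just toggle `requires_grad` on the model's parameters. A minimal sketch (the names `freeze_model`/`unfreeze_model` are illustrative, not necessarily the library's actual API):

```python
import torch.nn as nn


def freeze_model(model: nn.Module) -> None:
    # Disable gradient tracking for every parameter so the
    # optimizer and autograd skip them (illustrative helper).
    for param in model.parameters():
        param.requires_grad_(False)


def unfreeze_model(model: nn.Module) -> None:
    # Re-enable gradient tracking for every parameter.
    for param in model.parameters():
        param.requires_grad_(True)
```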
- Add function to transform dataset into time series dataset.
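The time-series transform above is not reproduced here; a common way to frame a flat sequence as a supervised dataset is sliding windows, sketched below under assumed semantics (the library's actual function may use a different shape or signature):

```python
def sliding_windows(sequence, window_size):
    # Pair each window of `window_size` consecutive values with the
    # value that follows it, producing (inputs, target) samples.
    pairs = []
    for i in range(len(sequence) - window_size):
        window = sequence[i : i + window_size]
        target = sequence[i + window_size]
        pairs.append((window, target))
    return pairs


# sliding_windows([1, 2, 3, 4, 5], 2)
# → [([1, 2], 3), ([2, 3], 4), ([3, 4], 5)]
```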
- Metrics are now moved to the execution device. #41
- Log level is now used in the Trainer. #40
- `LearningRateScheduler` no longer crashes in the first epoch when `on_train` is False. #36
- Make regularization part of the callbacks system. #37
- Divide `utils` into three submodules: `convenience`, `preprocessing` and `data`.
- Update requirements to avoid conflicts.
- Update some tests.
- Remove old regularization module and all related code.
- Fix PyPI deployment file.
- Add PyPI deployment to the CI/CD.
- Fix `CHANGELOG.md` release dates.
- Add possibility to set the log level of the callbacks.
- Add stochastic weight averaging callback.
- Add `train_test_val_split`.
- Add `log_name` attribute to `torchfitter.callbacks.base.Callback`.
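The exact signature of `train_test_val_split` is not shown in this log; a three-way split by fractions can be sketched as follows (fraction names and defaults are assumptions for illustration):

```python
def train_test_val_split(data, train_frac=0.7, test_frac=0.2):
    # Split a sequence into contiguous train/test/validation chunks;
    # whatever remains after train and test becomes validation.
    # Illustrative sketch only; the library's signature may differ.
    n = len(data)
    train_end = int(n * train_frac)
    test_end = train_end + int(n * test_frac)
    return data[:train_end], data[train_end:test_end], data[test_end:]
```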
- Change `with torch.no_grad()` for `@torch.no_grad()` in trainer.
- Format code with Black.
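The `no_grad` change swaps the context-manager form for the equivalent decorator form; both disable gradient tracking for the whole function body. A generic PyTorch illustration (the function name is hypothetical):

```python
import torch


@torch.no_grad()
def predict(model, inputs):
    # Gradients are never tracked inside this function, exactly as if
    # the body were wrapped in `with torch.no_grad():`.
    return model(inputs)
```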
- Reorganize `utils` module.
- Remove `reset_parameters` method from callbacks.
- Fix `RichProgressBar` not logging appropriate values.
- Fix log level not being correctly set.
- Add more hooks to the callback system.
- Rich progress bar as callback.
- `accelerate.Accelerator` backend.
- `trainer.Trainer.fit` now returns a dictionary with the train history.
- Update README.
- Update metrics handling.
- Remove callback type.
- Solve doc typos.
- Fix logger and trainer tests.
- Fix incomplete `quickstart` in docs.
- Fix logging bug in `GPUStats` callback.
- Add support for mixed precision training.
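Mixed precision in PyTorch is usually driven by `torch.autocast`, which runs eligible ops (e.g. linear layers, matmuls) in a lower-precision dtype. A minimal CPU sketch under that assumption; how the library actually wires this up (e.g. through `accelerate`) is not shown in this log:

```python
import torch


def forward_amp(model, inputs):
    # Run the forward pass under autocast so eligible ops execute
    # in bfloat16 instead of float32 (generic PyTorch sketch).
    with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        return model(inputs)
```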
- Add ElasticNet regularization.
- Add testing methods and their tests: `check_monotonically_decreasing` and `compute_forward_gradient`.
- Add CUDA seed setting in Manager.
- Add option to only use deterministic algorithms in the Manager class.
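Seeding plus deterministic mode is typically a pair of PyTorch calls; a generic sketch of what a manager-style helper might do (function name and signature are assumptions, not the library's API):

```python
import torch


def set_reproducible(seed: int, deterministic: bool = True) -> None:
    # Seed the CPU RNG (this also seeds CUDA RNGs when available) and
    # optionally force PyTorch to pick deterministic op implementations.
    torch.manual_seed(seed)
    torch.use_deterministic_algorithms(deterministic)
```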
- Update logo and README.
- Update tests with new testing methods.
- Make some methods on Trainer and Manager private.
- Solve bug in callbacks where the handler was not calling them in the appropriate order.
- Remove `ElasticNet` regularization because the implementation was not correct.
- Change `params_dict` in the Trainer to a specific class that tracks the internal state.
- Change README.
- Update tests.
- Change logic of TQDM to be updated in each batch instead of in each epoch.
- Change optimization loop to be a condition loop instead of an iteration loop; that is, the loop is now a `while` loop.
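A condition loop checks a stop predicate each pass instead of iterating a fixed range, which is what lets callbacks such as early stopping end training. A generic control-flow sketch (names are illustrative, not the library's internals):

```python
def run_training(max_epochs, should_stop):
    # Loop while no stop condition holds, rather than over a fixed
    # `range(max_epochs)`; returns the number of epochs actually run.
    epoch = 0
    while epoch < max_epochs and not should_stop(epoch):
        # ... train_step / validation_step would run here ...
        epoch += 1
    return epoch
```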
- Add `Manager` class to handle multiple experiments.
- Add support for computing metrics in the optimization loop via `torchmetrics`.
- Add `GPUStats`, `ReduceLROnPlateau` and `ProgressBarLogger` callbacks.
- Add testing utility to check gradients: `compute_forward_gradient`.
- Add more functions to `utils`: `FastTensorDataLoader`, `check_model_on_cuda`.
- Solve warning where the learning rate scheduler was being called before the loss.
- Change `_compute_penalty` in favour of `compute_penalty`.
- Change `_train` in favour of `train_step`.
- Change `_validate` in favour of `validation_step`.
- Update tests to be correct.
- Added new `reset_parameters` method in the trainer.
- Added requirements file for examples.
- Added `trainer` example in `.py` format.
- Added `manager.ipynb` example.
- Fix error in setup naming.
- Fix moving the tensors to device. Now, it is done in each batch.
- Change the `requirements.txt` to remove unnecessary dependencies.
- Added possibility to use L1 and ElasticNet regularization.
- Added new testing module.
- Added tests for the new functionalities.
- Updated README to add a brief tutorial on how to create regularization algorithms.
- Updated tests for trainer.
- Fixed minor typos in README.
- Added badges from shields.io.
- Added a CHANGELOG.md
- Fixed error in README example syntax.
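The earliest release above introduced L1 and ElasticNet regularization. As a rough illustration of such penalties, here is the common scikit-learn-style convention in plain Python; this is an assumption for exposition, not the formula this library used (its ElasticNet was later removed as incorrect):

```python
def l1_penalty(weights, lam):
    # Classic L1 penalty: lam * sum(|w|).
    return lam * sum(abs(w) for w in weights)


def elastic_net_penalty(weights, lam, l1_ratio):
    # Blend of L1 and L2 penalties, following the common convention
    # lam * (l1_ratio * sum|w| + 0.5 * (1 - l1_ratio) * sum(w**2)).
    # Assumed formula for illustration, not taken from the library.
    l1 = sum(abs(w) for w in weights)
    l2 = sum(w * w for w in weights)
    return lam * (l1_ratio * l1 + 0.5 * (1 - l1_ratio) * l2)
```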