When contributing a PR, please add the title, link, and a short 1-2 line description of the PR to this document. If you are an external contributor, please also add your GitHub handle. You can use Markdown formatting in this document.
Template for contribution summaries: Please use the following to extend the changelog:
- **The PR title [#<number>](https://github.com/<user>/<repo>/pull/<number>)**:
<Short 1-2 line description of the PR>
Info for maintainers: When creating a new release, make sure to update the latest heading
in this file to the released code version, using the name of the GitHub tag (e.g. v0.1.2,
v0.1.2a3, v0.1.2b3, etc.).
- Add embedding ensembling functionality #507: Add `ensemble_embeddings`, which aligns multiple embeddings and combines them into an averaged one.
- Move `max_validation_iterations` from `cebra.CEBRA` to `cebra.metrics.infonce_loss` #527: Move `max_validation_iterations` from `cebra.CEBRA` to `cebra.metrics.infonce_loss` and rename the variable to `num_batches`.
- Add `plot_consistency` and demo notebook #502: Add `plot_consistency` helper function and complete the corresponding notebook.
- Add helpers to use DeepLabCut data with CEBRA #436: Add helpers to preprocess DeepLabCut output data and use it easily with CEBRA.
- Add `compare_models` functionality #460: Multiple trained models can now be plotted together for easier comparison of hyperparameter settings and datasets.
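The alignment-then-average idea behind the embedding ensembling entry above can be sketched with plain NumPy. This is a minimal illustration, not CEBRA's implementation; the `align_and_average` name and the orthogonal-Procrustes alignment step are assumptions:

```python
import numpy as np


def align_and_average(embeddings):
    """Align each embedding to the first one, then average the result.

    Illustrative sketch: alignment uses orthogonal Procrustes, i.e. for
    each embedding we find the rotation that best maps it onto the
    reference before averaging.
    """
    reference = embeddings[0]
    aligned = [reference]
    for embedding in embeddings[1:]:
        # Rotation R minimizing ||embedding @ R - reference||_F:
        # from the SVD of embedding.T @ reference = U S V^T, R = U V^T.
        u, _, vt = np.linalg.svd(embedding.T @ reference)
        aligned.append(embedding @ (u @ vt))
    return np.mean(aligned, axis=0)
```

For embeddings that differ only by a rotation, the averaged result recovers the reference up to numerical error.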
This release contains various additions from the work on three successive release candidates. It is the first official release, distributed along with the publication of the CEBRA paper.
- v0.0.2rc3
- Add `adapt=True` in `CEBRA.fit()` #445: Add the capability to adapt a trained CEBRA model to new sessions of data, potentially with different input dimensions.
- Save/load functionality for sklearn models #408: Add a `save`/`load` function to `cebra.CEBRA` for serialization. Experimental feature for now, which will be refined later on.
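The adaptation idea behind the `adapt=True` entry above (re-fit only the input layer for a new input dimensionality while keeping the deeper weights frozen) can be sketched roughly as follows; all names, shapes, and the least-squares re-fit are illustrative assumptions, not CEBRA internals:

```python
import numpy as np

rng = np.random.default_rng(42)

# A "trained" model, represented here as two weight matrices: an input
# layer fit on 10-dimensional data, and a deeper layer that stays frozen.
weights_deep = rng.normal(size=(8, 4))
weights_in_old = rng.normal(size=(10, 8))

# A new session with a different input dimensionality (12 instead of 10).
session = rng.normal(size=(50, 12))
target_hidden = rng.normal(size=(50, 8))  # hidden activations to match

# Adapt: re-fit only the input layer (here by least squares) and reuse
# the frozen deeper weights unchanged.
weights_in_new, *_ = np.linalg.lstsq(session, target_hidden, rcond=None)
output = (session @ weights_in_new) @ weights_deep
```

The frozen `weights_deep` are shared between the old and adapted model; only the input layer changes shape.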
- v0.0.2rc2
- v0.0.2rc1
- Implementation for general dataloading #305: Implement `load`, a general function to convert any supported data file types to `numpy.array`.
- Add score method #316: Add `score` method to `cebra` to compute the score of the trained model on new data.
- Add quick testing option #318: Add a `slow` marker for longer tests and a quick testing option for pytest and in the GitHub workflow.
- Add CITATION.cff file #339: Add CITATION.cff file for easy-to-use citation of the pre-print paper.
- Update sklearn dependency #317: The sklearn dependency was updated to `scikit-learn`, as discussed in the scikit-learn docs.
- Increase documentation coverage >90% #265: Configure `interrogate` for checking docstring coverage of the codebase. Add docstrings to increase overall coverage to >90%.
- Increase documentation coverage >80% #263: Configure `interrogate` for checking docstring coverage of the codebase. Add docstrings to increase overall coverage to >80%.
- Apply new code and docstring formatting to whole codebase #255: Before enforcing Google-style docstrings with `yapf`, apply `black` for stricter code formatting. Format docstrings with `docformatter`.
- Run formatter during workflow run #217: This addition checks that `make docs` can be run as part of the tests.
- Update documentation and enforce working links #198: Revision and improvement of the current documentation. "nitpicky" mode is now used in Sphinx, which checks that we don't have any broken links or missing references in the documentation.
- Version of the code submitted along with the paper revision