Releases: sdatkinson/NeuralAmpModelerCore
Version 0.5.0
Overview
This release contains the code needed to process NAM's upcoming A2 models.
Optimized processing implementations are in development and will ship in the next patch release.
Developers who want to understand the key new features used by A2 that were not in v0.4.0 should refer to #242, #247, and #249.
What's Changed
- Update README with tools for library usage by @sdatkinson in #229
- Add extensibility test for model registry (issue #230) by @sdatkinson in #231
- Add integration tests workflow for PRs by @sdatkinson in #232
- [FEATURE] Rendering tool by @sdatkinson in #235
- Add inline GEMM optimizations and general performance improvements by @jfsantos in #226
- Refactor factory into separate config parser and unified create_dsp() construction path by @jfsantos in #227
- Optimizations for 3 channel models by @jfsantos in #238
- Fix MSVC build ("restrict" keyword) by @mikeoliphant in #236
- Slimmable interface and SlimmableContainer by @jfsantos in #242
- [WIP] First draft of SlimmableWavenet by @jfsantos in #243
- Get rid of SlimmableWaveNet "architecture" by @sdatkinson in #244
- Formatting by @sdatkinson in #245
- [FEATURE] Add `--slim` to `benchmodel` by @sdatkinson in #246
- [FEATURE, BREAKING] WaveNet: Support different kernel sizes in each Layer by @sdatkinson in #247
- [FEATURE] WaveNet: Model head, layer array head variable kernel size by @sdatkinson in #249
- [BREAKING] Improve organization of WaveNet code & internalized functionality by @sdatkinson in #250
Full Changelog: v0.4.0...v0.5.0
Developed with support from TONE3000. Thank you!
Version 0.4.0
This version of NeuralAmpModelerCore should be able to play Architecture A2 when it is finalized.
What's Changed
- Adding activation functions and fast LUT implementation by @jfsantos in #177
- Added multichannel PReLU by @jfsantos in #179
- Added gating activation classes by @jfsantos in #180
- Benchmarking report by @sdatkinson in #182
- [BREAKING] Conv1D manages its own ring buffer by @sdatkinson in #181
- [FEATURE] Grouped convolutions by @sdatkinson in #183
- [FEATURE] Grouped convolutions for `Conv1x1`, WaveNet `groups_1x1` hyperparameter by @sdatkinson in #184
- [FEATURE] Bottlenecks in WaveNet layers by @sdatkinson in #185
- [FEATURE] Support multi-input, multi-output models by @sdatkinson in #187
- Head 1x1 convolution by @jfsantos in #189
- [FEATURE] Optionally process WaveNet conditions with another WaveNet by @sdatkinson in #190
- [FEATURE] Integrate gating & blending activations into WaveNet by @sdatkinson in #193
- Configurable activations by @jfsantos in #194
- [FEATURE] FiLMs in `wavenet::Layer` by @sdatkinson in #196
- Fix bugs, add an end-to-end test with a model with all new features by @sdatkinson in #198
- Add documentation by @sdatkinson in #200
- Bump .nam file supported to 0.6.0 by @sdatkinson in #203
- [FEATURE] Softsign activation by @sdatkinson in #205
- [FEATURE] WaveNet: Allow different activations, gating modes, and secondary activations in each layer of a layer array by @sdatkinson in #207
- Refine WaveNet constructors by @sdatkinson in #208
- Add features to `wavenet_a2_max.nam` by @sdatkinson in #209
- [FEATURE] Grouped 1x1 convolutions in FiLM modules by @sdatkinson in #211
- [FEATURE] WaveNet: Make `layer1x1` (formerly `1x1`) optional, rename `.nam` key `"head_1x1"` to `"head1x1"` by @sdatkinson in #214
- [BUGFIX] Fix performance hit for grouped convolutions by @sdatkinson in #216
- [ENHANCEMENT] Optimized depthwise convolutions by @sdatkinson in #217
- [BUGFIX] WaveNet Factory: Check that condition_dsp is not null by @jfsantos in #220
- [BUGFIX, BREAKING] Make activation base class abstract, fix PReLU implementation by @sdatkinson in #223
- Add TONE3000 support note in README.md by @sdatkinson in #224
- [BUGFIX] Support no head key in WaveNet config by @sdatkinson in #225
Full Changelog: v0.3.0...v0.4.0
Version 0.3.0
What's Changed
- [BUGFIX] Fix some wrongly-private attributes in WaveNet by @sdatkinson in #139
- [BUGFIX] Eliminate real-time allocations in WaveNet by @sdatkinson in #141
- Update `nlohmann/json` to Version 3.12.0 by @sdatkinson in #152
- Fix the build by @sdatkinson in #153
- Update build to use C++20 by @sdatkinson in #155
- [FEATURE] Ability to register new factories into `get_dsp()` by @sdatkinson in #156
- [FEATURE] Expose some additional attributes by @sdatkinson in #162
- Add build status badge to README by @Khalian in #163
Full Changelog: v0.2.0...v0.3.0
Version 0.2.0
What's Changed
- [BUGFIX] Fix gated activation code by @sdatkinson in #102
- Bug fix renaming param in header to implementation by @dhilanpatel26 in #106
- simplify vector load from json by @shaforostoff in #105
- Fix wavenet head check by @mikeoliphant in #108
- Remove `nam::DSP::finalize_()` by @sdatkinson in #110
- Define `nam::DSP::Reset` and `nam::DSP::ResetAndPrewarm` by @sdatkinson in #111
- More efficient pre-warming using multiple-sample buffers by @sdatkinson in #112
- [BREAKING] Remove `config_path` as input to `GetWeights` by @sdatkinson in #119
- CI: Test loading and running models by @sdatkinson in #120
- [FEATURE] Define input and output level calibration functionality for `DSP` by @sdatkinson in #121
- [ENHANCEMENT] `get_dsp`: Set input and output levels while loading models by @sdatkinson in #122
- [Chore] Formatting by @sdatkinson in #128
- [FEATURE] Add support for LeakyReLU activation by @sdatkinson in #127
- [BUGFIX] Handle when calibration fields are present but null-valued by @sdatkinson in #130
- [BUGFIX] Fix gated activations in WaveNet by @sdatkinson in #131
New Contributors
- @dhilanpatel26 made their first contribution in #106
- @shaforostoff made their first contribution in #105
Full Changelog: v0.1.0...v0.2.0
Version 0.1.0
What's Changed
- Pre-warm WaveNet on creation over the size of the receptive field by @mikeoliphant in #71
- [BREAKING] Remove `dsp/` by @sdatkinson in #75
- [BREAKING] Processing interface cleanup by @mikeoliphant in #78
- [BREAKING] Remove _process_core() and output normalization by @mikeoliphant in #80
- [BREAKING] Remove `TARGET_DSP_LOUDNESS` by @sdatkinson in #85
- [BREAKING] Remove constructors with loudness by @sdatkinson in #87
- [BUGFIX] Fix LSTM input-output reversal by @sdatkinson in #92
- Move pre-warm to DSP and call it in get_dsp() by @mikeoliphant in #90
- [BREAKING] Add `nam` namespace by @sdatkinson in #93
- [BREAKING] Remove parametric modeling code by @sdatkinson in #95
Full Changelog: v0.0.1...v0.1.0
Version 0.0.1
What's Changed
- Added Hardtanh activation function by @mikeoliphant in #14
- Separate core NAM code from other plugin dsp code by @sdatkinson in #21
- Remove iplug2 dependency by @sdatkinson in #22
- Handle PAD chunk by @sdatkinson in #20
- Output loudness normalization by @sdatkinson in #18
- Removed unused numpy code by @mikeoliphant in #23
- fabs->fabsf in fast tanh by @mikeoliphant in #24
- Formatting by @sdatkinson in #25
- Separate out convnet by @mikeoliphant in #26
- Activation function refactor by @mikeoliphant in #28
- Method to determine whether model has loudness data by @pawelKapl in #33
- Support loading WAV files with "fact" chunk by @sdatkinson in #37
- Activation constructor/destructor cleanup by @mikeoliphant in #32
- Read WAV files via allow-list of chunks by @sdatkinson in #39
- Support all v0.5 models regardless of patch version by @sdatkinson in #41
- Add wav error message string function to dsp::wav by @olilarkin in #42
- Define WaveNet and LSTM destructors by @sdatkinson in #44
- dspStruct pull request by @masqutti in #46
- Add fast tanh and fast sigmoid to LSTM by @mikeoliphant in #43
- Add CMake build tools by @mikeoliphant in #34
- Fix minor incompatibilities with clang/g++ build on Linux by @daleonov in #50
- Fix memory issues by @sdatkinson in #55
- Fix compiler warnings by @falkTX in #53
- Implement expected sample rate for DSP class by @sdatkinson in #60
- Allow NAM_SAMPLE_FLOAT to switch model input to float instead of double by @mikeoliphant in #48
- DSPs prepared for loading persisted models by @pawelKapl in #35
- Fix some wav impulse files would sometimes fail to load ... by @fab672000 in #63
- High-pass and low-pass filters by @sdatkinson in #66
- ImpulseResponse::GetSampleRate by @sdatkinson in #70
- Allow defining DSP_SAMPLE_FLOAT to have dsp classes use float I/O by @mikeoliphant in #68
- Virtual destructor to dsp::DSP by @sdatkinson in #72
- Fix some buffers having uninitialized contents after resizing by @daleonov in #51
New Contributors
- @mikeoliphant made their first contribution in #14
- @pawelKapl made their first contribution in #33
- @olilarkin made their first contribution in #42
- @masqutti made their first contribution in #46
- @daleonov made their first contribution in #50
- @falkTX made their first contribution in #53
- @fab672000 made their first contribution in #63
Full Changelog: v0.0.0...v0.0.1
Version 0.0.0
Core library as of NeuralAmpModelerPlugin v0.7.1
What's Changed
- Support loading IEEE 32-bit WAV files by @sdatkinson in #9
- Reduce gain of IRs by @sdatkinson in #11
Full Changelog: https://github.com/sdatkinson/NeuralAmpModelerCore/commits/v0.0.0