Tags: sdatkinson/NeuralAmpModelerCore

v0.5.0

Verified

This commit was created on GitHub.com and signed with GitHub’s verified signature.
[BREAKING] Improve organization of WaveNet code & internalized functionality (#250)

* Refactor WaveNet into NAM/wavenet/ (params, detail, model)

Move configuration types to params.h, implementation classes to detail.h,
and WaveNet/WaveNetConfig/JSON parsing to model.h and model.cpp. The slimmable
code and the tests now include wavenet/model.h; CMake globs NAM/*/*.cpp to pick
up the new path. Update Doxygen RST class paths. Format with clang-format.
Update Doxygen RST class paths. Format with clang-format.

Made-with: Cursor

* Move slimmable WaveNet sources to wavenet/slimmable.{h,cpp}

Relocate NAM/slimmable_wavenet.* alongside the rest of the WaveNet code.
model.cpp now includes slimmable.h from the same directory.

Made-with: Cursor

v0.4.0

[BUGFIX] Support no head key in WaveNet config (#225)

v0.4.0.rc3

[BUGFIX] Support no head key in WaveNet config (#225)

v0.4.0.rc2

Depthwise convolution implementation (#217)

Squashed commit of the following:

commit 79e9f31415cde3ec1430229121751429eb7eff25
Merge: 4d1fd5d 12f93a2
Author: Steven Atkinson <[email protected]>
Date:   Thu Jan 29 00:22:38 2026 -0800

    Merge branch 'main' into 215-group-2

commit 4d1fd5d
Author: Steven Atkinson <[email protected]>
Date:   Thu Jan 29 00:17:36 2026 -0800

    Enhance the Conv1x1 and Conv1D classes to support depthwise convolutions. Add logic to differentiate depthwise from non-depthwise configurations, optimizing weight storage and processing accordingly, and update the weight-setting and processing functions to handle depthwise operations efficiently.

commit 2ad9dec
Author: Steven Atkinson <[email protected]>
Date:   Wed Jan 28 23:56:35 2026 -0800

    Improve grouped convolutions for Conv1D by...ignoring them for now.

commit e3be255
Author: Steven Atkinson <[email protected]>
Date:   Wed Jan 28 23:46:36 2026 -0800

    Revert "Implement std::vector grouped_weights"

    This reverts commit e78e191.

commit e78e191
Author: Steven Atkinson <[email protected]>
Date:   Wed Jan 28 23:41:45 2026 -0800

    Implement std::vector grouped_weights

commit 546f820
Author: Steven Atkinson <[email protected]>
Date:   Wed Jan 28 23:31:28 2026 -0800

    Improve speed of small grouped convolutions with single GEMM

commit c20fb86
Author: Steven Atkinson <[email protected]>
Date:   Wed Jan 28 23:23:28 2026 -0800

    Zero out conv weight matrices after resize

v0.4.0.rc1

Fix bugs, add an end-to-end test with a model with all new features (#198)

* Assert no post-head 1x1 FiLM if there's no head 1x1

* Add groups_input_mixin parameter to wavenet Layer

Adds groups_input_mixin parameter to control grouped convolutions in the
input_mixin Conv1x1 layer. The parameter is propagated through Layer,
LayerArrayParams, and LayerArray constructors. Factory parsing defaults
to 1 if not specified in the model JSON for backward compatibility.

Also fixes a bug in test_real_time_safe where make_layer_all_films was
incorrectly activating head1x1_post_film when head1x1 was inactive.

* Change JSON key from 'groups' to 'groups_input' in WaveNet factory

Aligns the JSON configuration key with the LayerArrayParams attribute name
for consistency. The factory now reads 'groups_input' instead of 'groups'
from the layer configuration.

* Consolidate gating_activation_post_film_params with activation_post_film_params

Removed the separate gating_activation_post_film_params parameter and now use
activation_post_film_params for both gated and blended modes. This simplifies
the API and reduces redundancy since both modes apply FiLM modulation after
activation in the same way.

Changes:
- Removed gating_activation_post_film_params parameter from _Layer,
  _LayerArray, and LayerArrayParams constructors
- Removed _gating_activation_post_film member variable from _Layer
- Updated _Layer::Process() to use _activation_post_film for gated mode
- Updated all test files to use 7 FiLM parameters instead of 8
- Updated weight count in test_real_time_safe.cpp accordingly

* Refactor secondary_activation to use ActivationConfig

Update WaveNet C++ code to handle secondary_activation as ActivationConfig
instead of string for proper type safety. This enables support for complex
activation types with parameters (e.g., PReLU, LeakyHardtanh).

Changes:
- Modify _Layer, _LayerArray, and LayerArrayParams to use typed ActivationConfig
- Update Factory function to parse secondary_activation from JSON as ActivationConfig
- Update all test files to use ActivationConfig for secondary activation parameters

All tests pass successfully.

* Fix Conv1D and Conv1x1 to use groups parameters

Fixed two bugs in _Layer constructor:
- Conv1D was missing groups_input parameter (always defaulted to 1)
- Conv1x1 _1x1 was passing groups_1x1 as bias parameter instead of groups

These fixes enable proper grouped convolutions for reduced computation.

* Add wavenet_a2_max.nam

* Add wavenet_a2_max.nam to end-to-end tests, formatting

* Add real-time safety tests for FiLM.

* Add test for RT safety for Layer with gated activation and post-activation FiLM. Failing.

* Fix test_layer_post_activation_film_gated_realtime_safe test errors

- Fix incorrect parameter comments (lines 713-720): corrected parameter names to match actual Layer constructor
- Fix misleading comment on activation_post_film weight calculation: clarify that FiLM is created with bottleneck as input_dim, shift doubles output channels
- Remove 4 extra placeholder weights that were causing assertion failures
- Apply same fixes to test_layer_post_activation_film_blended_realtime_safe

* Fix real-time safety: eliminate allocations in gated/blended activation paths

- Use Eigen::Ref in FiLM::Process and Conv1x1::process_ to accept block
  expressions without creating temporary matrices
- Add pre-allocated buffers in GatingActivation and BlendingActivation
  to avoid allocating MatrixXf objects in processing loops

* Fix LayerArray head buffer size mismatch when head_1x1 is active

When head_1x1 was active with out_channels != bottleneck, _head_inputs
and _head_rechannel were incorrectly sized using bottleneck instead of
head_1x1.out_channels, causing an Eigen matrix dimension mismatch.

Added _head_output_size member to _LayerArray that correctly computes
the head output size (head_1x1.out_channels if active, else bottleneck).
Updated weight generator to match.

* Remove unused private variable

v0.3.0

Add build status badge to README (#163)

Added build status badge to README.

v0.2.0

[BUGFIX] Fix gated activations in WaveNet (#131)

* Possible fix to gating bug. Haven't tried, needs tests

* Unit test

* Clean up comments

v0.1.0

Bump version to 0.1.0

v0.1.0-rc.1

Bump version to 0.1.0

v0.0.1

Verified, but the signing key has expired.
Update version.h

Bump to v0.0.1