
Apply delay penalty on k2 ctc loss #669

Merged
yaozengwei merged 29 commits into k2-fsa:master from yaozengwei:latency_k2_ctc on Nov 28, 2022

Conversation

@yaozengwei (Collaborator) commented Nov 9, 2022

This PR supports applying a delay penalty on the k2 CTC loss, to reduce the symbol delay. The encoder is a streaming conformer model.
We provide two methods of delay penalty here:
(1) Add penalty scores to the encoder CTC outputs after log-softmax. Use the param --nnet-delay-penalty in train.py.
(2) Add penalty scores to the CTC decoding lattice. Use the param --delay-penalty in train.py. It requires k2-fsa/k2#1086 in k2.

For method (1), we want to add lambda * torch.arange(T) to the CTC blank log-probs (after log-softmax), to encourage blanks to align later and non-blank symbols to align earlier. However, it would also encourage blank over non-blanks, which is not what we want. This effect is stronger for longer utterances, where the penalty scores lambda * torch.arange(T) grow too large. To minimize this "repeat-penalizing effect", we first split utterances into shorter sub-utterances for the purpose of adding the "arange" expression; for instance, we can declare a sub-utterance boundary every time the blank prob is >= 0.99. Specifically, we (see the sketch after these steps):
(1) compute a mask that is 1 when blank_prob >= 0.99 (or t == 0);
(2) compute cummax_out = torch.cummax(torch.arange(T) * mask, dim=-1).values;
(3) compute the sawtooth-like "blank-bonus" values: penalty = torch.arange(T) - cummax_out;
(4) add penalty * lambda to the CTC blank log-probs (after log-softmax).
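
A minimal PyTorch sketch of the sawtooth bonus described above (the function and argument names are illustrative, not the exact ones used in train.py); `log_probs` is the encoder CTC output after log-softmax with shape (N, T, C), and `penalty_scale` plays the role of --nnet-delay-penalty:

```python
import torch

def add_sawtooth_blank_bonus(
    log_probs: torch.Tensor,
    blank_id: int = 0,
    penalty_scale: float = 0.01,
) -> torch.Tensor:
    """Add the sawtooth-like blank bonus to CTC log-probs of shape (N, T, C)."""
    N, T, C = log_probs.shape
    blank_prob = log_probs[:, :, blank_id].exp()  # (N, T)

    # (1) mask is 1 at sub-utterance boundaries: blank_prob >= 0.99 or t == 0
    mask = (blank_prob >= 0.99).to(log_probs.dtype)
    mask[:, 0] = 1.0

    # (2) running maximum of the boundary frame indices
    t_idx = torch.arange(T, device=log_probs.device, dtype=log_probs.dtype)
    cummax_out = torch.cummax(t_idx * mask, dim=1).values  # (N, T)

    # (3) sawtooth-like bonus: frames elapsed since the last boundary
    penalty = t_idx - cummax_out  # (N, T)

    # (4) add penalty * lambda to the blank log-probs only
    bonus = torch.zeros_like(log_probs)
    bonus[:, :, blank_id] = penalty_scale * penalty
    return log_probs + bonus
```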

For method (2), we add penalty scores (the frame offset relative to the middle frame, scaled by lambda) to the non-repeat and non-blank arcs in the CTC decoding lattice, i.e., at the frames where the non-blank symbols are first emitted. This is the same approach as used for the transducer loss (see https://arxiv.org/pdf/2211.00490.pdf).
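
For illustration only, the snippet below sketches the per-frame penalty values described above. The actual attachment of these scores to the non-blank, non-repeat arcs of the decoding lattice happens inside k2 (k2-fsa/k2#1086); the sign convention here is an assumption, chosen so that earlier frames receive a bonus and non-blank symbols are encouraged to be emitted earlier:

```python
import torch

T = 20                # number of frames in the lattice
delay_penalty = 0.01  # value of --delay-penalty (lambda)

t = torch.arange(T, dtype=torch.float32)
# offset of each frame relative to the middle frame, scaled by lambda;
# earlier frames get a larger score, rewarding earlier emission.
penalty = delay_penalty * ((T - 1) / 2 - t)
```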

  • Note: The delay penalty is applied when warmup >= 1.0. Otherwise the loss would not converge.
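
A hypothetical sketch of how this warmup gating might look in the loss computation (variable names are illustrative, not necessarily those in train.py):

```python
# Only enable the delay penalty once the model has warmed up,
# so that the loss can converge first.
ctc_delay_penalty = params.delay_penalty if warmup >= 1.0 else 0.0
```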

@yaozengwei (Collaborator, Author) commented Nov 14, 2022

I trained the models on full LibriSpeech for 25 epochs and decoded with the ctc-decoding method. The following table presents the experimental results of the two penalty methods: (1) adding the "sawtooth" bonus to the blank log-probs; (2) adding frame offsets to the non-blank & non-repeat arcs. It shows that the second method achieves a better WER-delay trade-off than the first.

| Method, chunk-size=0.32s | WER (%) | Mean symbol delay | Comment |
|---|---|---|---|
| Baseline | 4.91 & 12.78 | 0.333s & 0.329s | epoch-25-avg-8 |
| "sawtooth" on blank log-prob, lambda=0.01 | 5.71 & 14.32 | 0.198s & 0.199s | epoch-25-avg-8 |
| "sawtooth" on blank log-prob, lambda=0.02 | 6.66 & 16.12 | 0.125s & 0.129s | epoch-25-avg-8 |
| "sawtooth" on blank log-prob, lambda=0.03 | 7.16 & 17.22 | 0.085s & 0.012s | epoch-25-avg-8 |
| Non-blank & non-repeat arcs, lambda=0.01 | 5.45 & 13.84 | 0.195s & 0.198s | epoch-25-avg-8 |
| Non-blank & non-repeat arcs, lambda=0.02 | 6.11 & 15.1 | 0.081s & 0.095s | epoch-25-avg-8 |
| Non-blank & non-repeat arcs, lambda=0.03 | 6.94 & 16.44 | 0.038s & 0.049s | epoch-25-avg-8 |

@yaozengwei yaozengwei merged commit ece728d into k2-fsa:master Nov 28, 2022
baileyeet pushed a commit to reazon-research/icefall that referenced this pull request Jul 16, 2025
* add init files

* fix bug, apply delay penalty

* fix decoding code and getting timestamps

* add option applying delay penalty on ctc log-prob

* fix bug of streaming decoding

* minor change for bpe-based case

* add test_model.py

* add README.md

* add CI
