It's time to name and shame them: the work of the maintainer, or release captain, is suffering from having to restart test runs over and over again:
| Test Name | Runs | Failures | Flakiness |
|---|---|---|---|
| test_onchain_their_unilateral_out[True] | 143 | 57 | 28.50% |
| test_wss_proxy | 164 | 47 | 22.27% |
| test_rbf_reconnect_tx_construct | 25 | 6 | 19.35% |
| test_penalty_htlc_tx_timeout[True] | 120 | 25 | 17.24% |
| test_penalty_htlc_tx_fulfill[True] | 123 | 25 | 16.89% |
| test_penalty_outhtlc[True] | 141 | 25 | 15.06% |
| test_penalty_rbf_normal[True] | 142 | 25 | 14.97% |
| test_penalty_inhtlc[True] | 147 | 25 | 14.53% |
| test_onchain_middleman_their_unilateral_in[True] | 150 | 25 | 14.29% |
| test_onchain_timeout[True] | 150 | 25 | 14.29% |
| test_onchain_middleman_simple[True] | 152 | 25 | 14.12% |
| test_anchorspend_using_to_remote[True] | 142 | 22 | 13.41% |
If your test is listed here, please go and stabilize it; the maintainers will be thankful. If not, we may have to disable these tests temporarily until they become more stable.
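For reference, the flakiness column appears consistent with failures divided by total attempts, which suggests the Runs column counts successful runs. This is an assumption inferred from the numbers, not a documented definition; a minimal sketch:

```python
def flakiness(successes: int, failures: int) -> float:
    """Failure rate as a percentage of total attempts.

    Assumes `successes` is the Runs column (successful runs only),
    so total attempts = successes + failures.
    """
    return 100 * failures / (successes + failures)

# Checking against the first two rows of the table:
print(f"{flakiness(143, 57):.2f}%")  # test_onchain_their_unilateral_out: 28.50%
print(f"{flakiness(164, 47):.2f}%")  # test_wss_proxy: 22.27%
```

Each row in the table matches this formula, e.g. 57 / (143 + 57) = 28.50%.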