This Godot program generates Householder matrix delay times using concepts from genetic algorithms.
It's not crossbreeding reverbs, but it is generating millions of permutations of random delay matrices, and then it uses a sophisticated fitness function to evaluate which random matrix is the best.
This works on the idea that you can plot a delay matrix as points on a line, one point for each total delay time the matrix can produce, and then turn that line into a picture by wrapping it into successive horizontal rows of an image.
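The wrapping step can be sketched like this. This is illustrative Python rather than the actual GDScript, and the function name and dictionary representation are my own:

```python
def wrap_line_to_image(totals, width, height):
    """Map each total delay time (as an integer position on the line)
    to an (x, y) pixel coordinate by wrapping the line into rows."""
    pixels = {}
    for t in totals:
        if t >= width * height:
            continue  # falls off the end of the picture
        x = t % width
        y = t // width
        pixels[(x, y)] = pixels.get((x, y), 0) + 1
    return pixels

# Example: totals 0, 5 and 13 on an image 8 pixels wide
print(wrap_line_to_image([0, 5, 13], 8, 4))
# {(0, 0): 1, (5, 0): 1, (5, 1): 1}
```

Position 13 wraps to the second row (13 % 8 = 5, 13 // 8 = 1), which is the whole trick: one long timeline becomes a 2D picture.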
As of starting the public repo, the picture drawn uses the green, red and blue channels to represent different kinds of fitness measurement.
Greenness of a pixel represents how many times a combination of delay times ended up at that total. With dense matrices this will end up looking like a cylinder on the screen, because early times can't be reached by the combined Householder times, and then there's a central section where each time gets reached by many different Householder paths. Think of it like this: if it's a 4x4 matrix but all the delays are the same, the only total you can reach is 'four instances of that number in series', so there's no variation. The output won't be 'the delay number in all the delay banks' but four of it, every possible path leads to the same end point, and the program would show a single dot on the display. Use better delay times and you get a 'cloud' of many possible delay returns; try many possibilities and you can find a good-sounding 'cloud'. How? That's what this program does. So greenness is literally 'how many paths led to this point'. For tiny matrices it's just a speckling of spaced dots; for huge matrices it's a cloud, because the result is super dense.
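As a rough Python sketch of the path-counting idea (not the actual GDScript; `green_counts` and the choice of `itertools.product` are mine), including the degenerate all-delays-equal case described above:

```python
from itertools import product
from collections import Counter

def green_counts(delays, passes):
    """Sum every combination of `passes` delays taken in series, and
    count how many distinct paths arrive at each total delay time."""
    return Counter(sum(combo) for combo in product(delays, repeat=passes))

# Degenerate case from the text: all delays equal means every one of the
# 4^4 = 256 possible paths lands on the same single total (one dot).
print(green_counts([100, 100, 100, 100], 4))  # Counter({400: 256})

# Varied delay times spread the paths into a 'cloud' of many totals.
print(len(green_counts([97, 131, 181, 239], 4)))
```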
IntoTheMatrix plots this as a picture, then weights it by summing all the 'results' to the second power, which makes the score higher if the delay totals pile up on each other and lower if they avoid overlapping. The idea is that all this randomness making a smooth cloud probably sounds better: the smoother and more free of spikes (which will be heard as slapback-echo-like effects in the reverb!) the better. That's what's in the green part of the picture. This tends to translate to reverb bloom and fullness; if it's lacking, the verb sounds dead and bad and won't want to sustain very well.
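The sum-of-squares weighting can be shown in a couple of lines. Again a hedged Python sketch, not the program's code; it just demonstrates why squaring punishes pile-ups:

```python
def green_score(counts):
    """Sum of squared per-pixel path counts: pile-ups cost far more than
    the same number of paths spread evenly. Lower is smoother/better."""
    return sum(c * c for c in counts)

# Same total number of paths (8), very different scores:
print(green_score([8, 0, 0, 0]))  # 64: everything stacked on one total (spiky)
print(green_score([2, 2, 2, 2]))  # 16: spread into a smooth cloud
```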
Red is a separate kind of measurement. It goes through the green channel and counts how many points there are in the gap between each echo, then makes a sort of chart of those spacings and shows it on the same picture (in the top left). Bright red is good: it means a spacing showed up more than zero times, but only a lowish number of times across the whole reverb matrix green channel. Darker red means there were many instances of that spacing, which probably means the reverb has a metallic ring to it. It measures all the spacings; for a really dense matrix there'll be long stretches where there's no gap at all (which can be its own kind of long spacing), but there will always be areas with spaced-out delay returns. What you want to see is the widest possible block of red, ideally representing all the spacings you could have without any frequency being unduly weighted. This tends to translate to depth and realism; if it's lacking, the reverb sounds fake and up-front. When it's doing well there's less 'ringyness', and the matrix sounds to the ear like a physical space, one that 'hides' in the mix and doesn't come forward as its own sound.
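Here's my reading of the red measurement as a Python sketch (function name and representation are assumptions, not the script's): walk the occupied positions in the green channel, record the spacing between successive echoes, and histogram those spacings.

```python
from collections import Counter

def gap_histogram(green, length):
    """Histogram the gaps between successive occupied positions in the
    green channel. Many repeats of one gap = a 'ringy' metallic spacing;
    a wide spread of gaps each appearing a few times = a natural space."""
    gaps = []
    prev = None
    for t in range(length):
        if green.get(t, 0) > 0:
            if prev is not None:
                gaps.append(t - prev)
            prev = t
    return Counter(gaps)

# Evenly spaced echoes -> one dominant spacing (darker red, metallic ring)
print(gap_histogram({0: 1, 10: 1, 20: 1, 30: 1}, 40))  # Counter({10: 3})

# Irregular spacing -> many different gaps, each rare (bright, wide red block)
print(gap_histogram({0: 1, 3: 1, 10: 1, 24: 1}, 40))   # Counter({3:1, 7:1, 14:1})
```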
Blue is about tracking the difference between all the loudnesses recorded in Green. I'd been measuring true-peak slew rate on a reference sound to determine whether the reverb sounded more bright or more dark, and figured this would be represented by bigger differences between adjacent positions in the green dataset. So it does the slew measurement, also to the second power, so that high readings contribute heavily to the score (low is best). This measurement goes into the fitness function too, alongside the green and red measurements, and the program keeps track of the best (lowest). You'll see it as a cyan tinge over the green parts of the picture; those are the points in the reverb tail which will sound brightest.
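The slew measure reduces to squared differences between adjacent green values. A minimal Python sketch, assuming a simple first-difference (the actual script may window or scale this differently):

```python
def blue_score(green_row):
    """Sum of squared differences between adjacent green values:
    a slew-rate proxy. High = bright-sounding; low is best."""
    return sum((b - a) ** 2 for a, b in zip(green_row, green_row[1:]))

print(blue_score([4, 4, 4, 4]))  # 0: no slew at all, dark and smooth
print(blue_score([0, 8, 0, 8]))  # 192: big adjacent jumps read as brightness
```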
There are controls to weight how much the fitness function cares about these things: for instance, to make the results all come out brighter, turn the influence of the Blue channel down, and less of it will be added to the total score that's evaluated. Don't mess with the controls once a run is underway, or it'll simply fail to give good results if you've made all the new trials score higher than the current best. The program tries to approach more rigorous grading over time, because there have been situations where it accidentally locked something in (say, measuring green, it hits a random cloud much bigger than desired, notices that has less overlap because it's bigger, and then keeps that as the best one forever), and changing the controls mid-run can do the same thing. On the other hand, if you have all the controls maxed, you can tell whether one is working by setting it lower or to zero: removing that part will give lower scores, and there should immediately be a new 'high score' since you changed the rules :)
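The effect of the weighting controls can be sketched as a weighted sum. The weight names here are mine, not the program's, and the numbers are arbitrary; the point is just that lowering a weight removes that channel's contribution from the total:

```python
def total_score(green, red, blue, w_green=1.0, w_red=1.0, w_blue=1.0):
    """Weighted total fitness (lower is better). Turning a weight down
    means that measurement matters less to which candidate wins."""
    return w_green * green + w_red * red + w_blue * blue

full     = total_score(16, 5, 192)              # all controls at full
brighter = total_score(16, 5, 192, w_blue=0.1)  # blue influence turned down
print(full, brighter)  # 213.0 40.2
```

Note how the same candidate scores 213.0 with everything maxed but 40.2 with blue nearly off, which is exactly why changing a control mid-run instantly produces a new 'high score'.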
That'll do for now: the program may change a lot, but this is how it worked as I uploaded it, and this is the basic principle behind what it's doing. You could probably treat the sequence of delay times as a 'genome' and apply crossover to breed reverbs from other reverbs; for now it simply generates random 'genomes' and applies the fitness function, and to my mind the fun is in adjusting the fitness functions to get the high scorers to produce different sounds. But given sufficient diversity in the populations, you could indeed run a true genetic algorithm with crossover in this. Just a thought :)
Currently, in the scripts search3, search4, search5 and search6, there's a place where a number is divided by sqrt(sqrt(sqrt(total))), where before it had been divided by 'iterations'. What this does is start the search out WAY more likely to try many, many, many different possible reverb settings, to the point that over the first hour or so it will seemingly never stop. This is because I've always run this program over long time periods, but it would hit on its best try rather early and then run for hours without ever changing. So what it does now is sllloooowwwwwlllyyyy become less likely to update, and by the time it's run a million trials without finding a better one, it will have tried pretty much everything. It stores three previous trials (might update that too!) to drift in the direction of crossover from simple mutation and randomness. If this doesn't work I'll try something else. If you want it to settle down quicker, use fewer sqrt() functions in that place and it will rapidly settle on a best try.
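To see why stacking sqrt() calls slows the settling, here's the arithmetic as a Python sketch. This is only my reading of the divisor's growth, not the search scripts themselves:

```python
import math

def divisor(total_trials, sqrts=3):
    """sqrt(sqrt(sqrt(total))) for sqrts=3: equivalent to
    total ** (1 / 2**sqrts), which grows extremely slowly."""
    d = float(total_trials)
    for _ in range(sqrts):
        d = math.sqrt(d)
    return d

# After a million trials, three nested sqrts leave the divisor tiny,
# so the search is still eager to accept new candidates...
print(round(divisor(1_000_000), 3))            # 5.623

# ...while a single sqrt() grows much faster, so it settles quickly.
print(round(divisor(1_000_000, sqrts=1), 1))   # 1000.0
```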