From frm_rustworkx_post_alpha to wip/rustworkx-migration: Code Cleanup - mostly documentation #440
…temporary debugging print stmts that will be deleted in a future commit...
that looped forever.
Correct!
…pulated_graph() where we needed to regenerate the PopulatedGraph because it was altered by the find_balanced_edge_cuts_fn()
Got rid of the "choice" parameter in uniform_spanning_tree() and random_spanning_tree().
The signature previously was:

def _bounded_find_balanced_edge_cuts_fn(*args: object, **kwargs: object) -> list:

and it now looks like:

def _bounded_find_balanced_edge_cuts_fn(
    h: _PopulatedGraph,
    one_sided_cut: bool = False,
    rootnode_choice_fn: Callable = random.choice,
) -> list[_Cut]:
This required adding an interface to describe the find_balanced_edge_cuts_fn()
As of 4/17/2026, I have addressed all of the issues in this PR except the one about the bad loop termination condition for recom(). I have proposed a fix in my response to your (Peter's) comments. I resolved all of the other issues, but presumably you will check whether you also consider them resolved. I just pushed my changes so far, and I am waiting for feedback on my fix to the loop termination issue.
# See if somehow changing the seed will change the result...
random.seed(2025)
# end of debugging code
In the future, we should really switch to seeded generators internally rather than having to modify the global RNG
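One hedged sketch of what that switch could look like (the function name below is illustrative, not the actual gerrychain API): inject a `random.Random` instance instead of seeding the module-level RNG, so each test or chain owns its own generator.

```python
import random


def uniform_spanning_tree_sketch(nodes: list[int], rng: random.Random) -> list[int]:
    """Illustrative stand-in: produce a random node ordering with an injected RNG."""
    shuffled = list(nodes)
    rng.shuffle(shuffled)  # uses only the caller-supplied generator
    return shuffled


# Each caller owns its generator; the global random module state is untouched.
rng_a = random.Random(2025)
rng_b = random.Random(2025)
assert uniform_spanning_tree_sketch([1, 2, 3, 4], rng_a) == uniform_spanning_tree_sketch(
    [1, 2, 3, 4], rng_b
)
```

Two identically seeded generators reproduce each other exactly, which is what makes tests deterministic without `random.seed()` side effects leaking across tests.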
I will leave it to the future (as you say) to change the way we deal with random number seeds.
However, there is an issue here that I think needs some head scratching: this test passes or fails depending on the value of the seed, but that should not be the case. Yes, the test is set up to make it challenging to find the districting plan that has two districts with an opt_value of 10, but the test also generates a great many spanning trees, and when it succeeds, it finds 20+ solutions. I played around with different values of the seed and found that the test passed for seeds 2023, 2025, and 5000, but failed for seeds 4, 5, and 2024. This makes me very uneasy; something seems wrong, but I can't sort out what it might be.
So, I am hoping that you (Peter) have a clue about how to proceed. I am happy to do the work, if you have a strategy for how to suss out what is going on.
So - I will leave in my debugging code for now (both here in the test and in bipartition_tree.py) to help figure out what happens in each attempt at a bipartition.
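One way to get a handle on the seed sensitivity (a sketch; `run_test` is a placeholder for the real test body in test_single_metric.py, and the toy success criterion is an assumption) is to sweep many seeds and measure a pass rate instead of pinning a single lucky seed:

```python
import random


def run_test(rng: random.Random) -> bool:
    """Placeholder for the real test body: a toy search that sometimes fails."""
    # Succeed if any of 50 draws hits a "solution"; stands in for finding a
    # districting plan whose two districts reach the target opt_value.
    return any(rng.random() < 0.05 for _ in range(50))


def seed_sweep(seeds: range) -> float:
    """Return the fraction of seeds for which the test passes."""
    passes = sum(run_test(random.Random(seed)) for seed in seeds)
    return passes / len(seeds)


rate = seed_sweep(range(100))
# A healthy randomized test should pass for (nearly) every seed; a noticeably
# low rate points at a real algorithmic problem rather than an unlucky seed.
assert 0.0 <= rate <= 1.0
```

Reporting a rate like this separates "the test is flaky by construction" from "the spanning-tree search has a genuine bug", which seems to be exactly the question at issue here.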
There are two remaining issues that need your attention. The first relates to the proposed changes to recom(). I am suggesting yet another approach, and again, since it is such an important function, I want a second set of eyes on it. There are extensive comments in the PR describing the issue. The other is that a test in test_single_metric.py is very sensitive to random.seed(): with some values the test passes, and with others it fails. The seed is currently set so that the test passes, but this sensitivity concerns me, and I am hoping you have some insight into whether it is a real issue or not. Again, there is lots of detail in the PR comments. I have pushed my changes, so you will be looking at the latest... -Fred
Peter,
This will be a tedious code-review for you as it is almost all documentation updates.
However, please pay attention to any change that has your name in a TODO: line.
These are questions I was unable to answer myself, and I need your input.
Also - I would like you to confirm that the punchlist for releasing this is:
I am going to take a look now at your reply to the test failure that loops forever...
-Fred