I have suggested an optimization; the rest looks good.
A pool with N members would keep making the same calls N times if no upstream was found (this would be odd, but we wouldn't rule it out) or if there was no public upstream. We should cache this result to avoid the O(N) calls.
We should also add a test for this case checking that the number of calls is 1.
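A minimal sketch of the proposed caching, assuming a hypothetical `upstreamCache` type (none of these names are Gitaly's actual API). `sync.Once` memoizes both the hit and the miss, so a "no upstream found" result is cached just like a found one:

```go
package main

import (
	"fmt"
	"sync"
)

// upstreamCache memoizes a single upstream lookup so a pool with N members
// performs the expensive call once instead of N times. Illustrative only.
type upstreamCache struct {
	once     sync.Once
	upstream string
	found    bool
	calls    int // exposed so a test can assert the call count is 1
}

// lookup runs fetch exactly once and replays the cached result for every
// subsequent member, including the "no public upstream" case.
func (c *upstreamCache) lookup(fetch func() (string, bool)) (string, bool) {
	c.once.Do(func() {
		c.calls++
		c.upstream, c.found = fetch()
	})
	return c.upstream, c.found
}

func main() {
	cache := &upstreamCache{}
	noUpstream := func() (string, bool) { return "", false }
	for i := 0; i < 5; i++ { // five pool members share one result
		cache.lookup(noUpstream)
	}
	fmt.Println("calls:", cache.calls) // prints "calls: 1"
}
```

The suggested test would then assert `calls == 1` after iterating over all pool members.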
Emily Chui (915398f9) at 16 Mar 06:04
Merge branch 'renovate-tools/gitaly-init-cgroups/github.com-contain...
... and 1 more commit
Emily Chui (bd364bb6) at 16 Mar 06:04
This MR contains the following updates:
| Package | Type | Update | Change |
|---|---|---|---|
| github.com/containerd/cgroups/v3 | require | patch | `v3.1.2` -> `v3.1.3` |
⚠️ **Warning**: Some dependencies could not be looked up. Check the Dependency Dashboard for more information.
MR created with the help of gitlab-org/frontend/renovate-gitlab-bot
v3.1.3
Full Changelog: https://github.com/containerd/cgroups/compare/v3.1.2...v3.1.3
This MR has been generated by Renovate Bot.
Needs an update to Go 1.25.0.
Can we restore this as well?
Needs a restore in the praefect version as well.
Closes #7112
One test case for housekeeping.PruneObjects checks that we're able to prune stale loose objects from the repository. Since optimizeRepository() always executes repackIfNeeded() before pruneIfNeeded(), the test needed a way to stop git-repack(1) from taking care of the loose objects. It did this by creating invalid blobs which would be silently ignored by git-repack.
The upstream change 7a8582c82c (reachable: convert to use
odb_for_each_object(), 2026-01-26) changed how reachable objects were
traversed. The new mechanism requires that objects are valid, otherwise
it would produce an error.
Fix the tests to create valid blobs. Since these are loose and unreachable, they won't be discovered by git-repack(1) anyway, so the test is still functionally identical.
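Since the test blobs now have to be valid objects, it helps to recall how git derives a loose blob's object ID: SHA-1 over the header `"blob <size>\x00"` followed by the contents. A sketch (the helper name `blobOID` is mine, not Gitaly's):

```go
package main

import (
	"crypto/sha1"
	"fmt"
)

// blobOID computes the object ID that git-hash-object(1) would assign to a
// blob: SHA-1 over "blob <size>\x00<contents>". Blobs written with correct
// contents like this stay valid for the new reachability traversal while
// remaining loose and unreachable.
func blobOID(contents []byte) string {
	h := sha1.New()
	fmt.Fprintf(h, "blob %d\x00", len(contents))
	h.Write(contents)
	return fmt.Sprintf("%x", h.Sum(nil))
}

func main() {
	// Matches `echo "hello world" | git hash-object --stdin`.
	fmt.Println(blobOID([]byte("hello world\n"))) // → 3b18e512dba79e4c8300dd08aeb37f8e728b8dad
}
```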
Emily Chui (38d78c51) at 13 Mar 01:31
Merge branch '589220-fix-list-commits-pagination-cursor' into 'master'
... and 1 more commit
Contributes to gitlab#589220
Problem
The ListCommits RPC incorrectly returns a PaginationCursor even
when there are no more commits to fetch. This causes clients to
incorrectly report hasNextPage: true on the last page of results.
Solution
Use the "limit + 1" pattern to accurately determine if more data
exists. Stream commits through the chunker while tracking the count.
When we reach the limit and detect another commit exists, we set
hasMoreCommits and stop iterating. The pagination cursor is sent
in the final response only when there are more commits beyond the
current page.
The commitsSender is extended with SetPaginationCursor() to
support including a cursor in the final chunked response.
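A minimal sketch of the "limit + 1" pattern on a plain slice, with hypothetical names (the real RPC streams through a chunker rather than materializing a slice):

```go
package main

import "fmt"

// page applies the "limit + 1" idea: look one item past the limit to learn
// whether another page exists, but only return `limit` items. A cursor is
// produced only when more data remains, so the final page carries no cursor.
func page(commits []string, limit int) (out []string, cursor string) {
	for _, c := range commits {
		if len(out) == limit {
			// A commit exists beyond the limit: more pages remain.
			cursor = out[len(out)-1]
			return out, cursor
		}
		out = append(out, c)
	}
	return out, "" // input exhausted: no cursor on the last page
}

func main() {
	commits := []string{"a", "b", "c"}

	got, cursor := page(commits, 2)
	fmt.Println(got, cursor) // [a b] b — more commits exist, cursor set

	got, cursor = page(commits, 3)
	fmt.Println(got, cursor) // [a b c]  — exactly limit commits, no cursor
}
```

The second call shows the bug being fixed: when exactly `limit` commits exist, no cursor is returned, so clients stop reporting `hasNextPage: true` on the last page.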
N/A - backend change only.
Run the new tests:
go test ./internal/gitaly/service/commit/... -run "TestListCommits/cursor_not_returned" -v
go test ./internal/gitaly/service/commit/... -run "TestListCommits/full_pagination_flow" -v
All tests should pass, verifying the cursor behavior when exactly `limit` commits exist and across the full pagination flow.
The cleanup removed SetTokenFromFile(), and it's not called anywhere other than in tests.
Back to streaming, thank you.
Emily Chui (4306962d) at 13 Mar 00:58
Emily Chui (1a85e39d) at 13 Mar 00:58
Merge branch 'shackermeier/fix-update-refs-quadratic-lookup' into '...
... and 2 more commits
shouldUpdateRef performed a linear scan over the existingRefs slice for every ref passed to ResetRefs, resulting in O(n*m) complexity.
Build a map from existing refs before the loop so shouldUpdateRef can do O(1) lookups, bringing total complexity down to O(n+m). This is consistent with shouldRemoveRef which already uses a map.
Affects both localRepository.ResetRefs and remoteRepository.ResetRefs.
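The map-based lookup can be sketched as follows, with simplified types (the real code works on git references and the actual shouldUpdateRef/shouldRemoveRef signatures differ):

```go
package main

import "fmt"

// Ref is a simplified name/target pair standing in for a git reference.
type Ref struct{ Name, Target string }

// refsToUpdate builds a map over the n existing refs once, then checks each
// of the m incoming refs with an O(1) lookup: O(n+m) total, instead of the
// O(n*m) cost of a linear scan over existingRefs for every incoming ref.
func refsToUpdate(existing, incoming []Ref) []Ref {
	existingByName := make(map[string]string, len(existing))
	for _, r := range existing {
		existingByName[r.Name] = r.Target
	}

	var updates []Ref
	for _, r := range incoming {
		if target, ok := existingByName[r.Name]; !ok || target != r.Target {
			updates = append(updates, r) // new ref, or its target changed
		}
	}
	return updates
}

func main() {
	existing := []Ref{{"refs/heads/main", "abc"}}
	incoming := []Ref{
		{"refs/heads/main", "def"},    // target changed
		{"refs/heads/feature", "abc"}, // not present yet
	}
	fmt.Println(len(refsToUpdate(existing, incoming))) // prints 2
}
```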
Thanks, LGTM.