Backport !227592 into 18.9. This executes all migrations affected by the bug fixed with !224446.
The best way is to test on a fresh Omnibus install as described in this snippet:
pool_repositories
sudo gitlab-rails c session
class TestProject < ApplicationRecord
self.table_name = 'projects'
end
project = TestProject.create!(organization_id: 1, namespace_id: 1, project_namespace_id: 1, name: 'Project 1', path: 'project-1')
sudo gitlab-psql session
drop trigger pool_repositories_loose_fk_trigger on pool_repositories;
insert into pool_repositories (shard_id, source_project_id) select 1, id from projects limit 1;
create trigger pool_repositories_loose_fk_trigger AFTER DELETE ON pool_repositories REFERENCING OLD TABLE AS old_table FOR EACH STATEMENT EXECUTE FUNCTION insert_into_loose_foreign_keys_deleted_records();
pool_repositories will not be updated; deletes now fail with PG::CheckViolation: ERROR: check constraint "check_96233d37c0"...
sudo gitlab-rails db:migrate; all migrations should succeed.

This checklist encourages us to confirm any changes have been analyzed to reduce risks in quality, performance, reliability, security, and maintainability.

The e2e:test-on-omnibus-ee job has succeeded, or if it has failed, investigate the failures. If you determine the failures are unrelated, you may proceed. If you need assistance investigating, reach out to a Software Engineer in Test in #s_developer_experience.

If you have questions about the patch release process, please use the #releases Slack channel (internal only).

@praba.m7n This is ready for review. I will move to preparing the backports meanwhile.
Seems to work fine with namespaced class names:
> Gitlab::BackgroundMigration.const_defined?('RemoteDevelopment::BmDesiredConfigArrayValidator')
=> true
We use send here to avoid the BackgroundMigration/DictionaryFile cop failing with an error because of m.job_class_name. Disabling this rule did not help.
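A minimal sketch of the send workaround described above. The migration object and its job_class_name attribute are stand-ins here, not GitLab's actual classes; the point is only that reading the attribute indirectly keeps the literal call m.job_class_name out of the source the cop scans:

```ruby
# Hypothetical sketch: a cop that pattern-matches on the literal call
# `m.job_class_name` will not flag an indirect read via `send`.
Migration = Struct.new(:job_class_name)

m = Migration.new('RemoteDevelopment::BmDesiredConfigArrayValidator')

# Instead of m.job_class_name, read the attribute dynamically:
name = m.send(:job_class_name)

puts name
```

Since the method is public, public_send would work just as well and is generally preferred when no private access is needed.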
Pick only background migrations compatible with the current migration's schema.
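One way such a filter could look. This is a sketch only; the schema_version field and the comparison against the current migration's timestamp are assumptions for illustration, not GitLab's actual implementation (the timestamp reuses the example from this thread):

```ruby
# Hypothetical sketch: keep only batched background migrations whose
# registered schema version is at or below the schema version of the
# currently running migration.
CURRENT_SCHEMA_VERSION = 20260209093954 # example timestamp from this thread

migrations = [
  { job_class_name: 'BackfillA', schema_version: 20250101000000 },
  { job_class_name: 'BackfillB', schema_version: 20270101000000 },
]

compatible = migrations.select { |m| m[:schema_version] <= CURRENT_SCHEMA_VERSION }

compatible.each { |m| puts m[:job_class_name] } # prints "BackfillA"
```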
Krasimir Angelov (7d450b81) at 18 Mar 00:02
Execute BBM affected by single record table bug
@ahegyi Sorry, I was busy with some other work, will look at this as soon as I can.
@praba.m7n Sorry, I was busy with some other work, will look at this as soon as I can.
@patrickbajao Sorry, I was busy with some other work, will look at this as soon as I can.
@tskorupa-gl Sorry, I was busy with some other work, will look at this as soon as I can.
@zhaochen_li Sorry, I was busy with some other work, will look at this as soon as I can.
I've started working on the above approach in Execute BBM affected by single record table bug (!227592).
We should try to get this into 18.10.1 (as it's too late for 18.10) and then backport to 18.9 and 18.8.
@praba.m7n As Max is away, I may need help from you to review and merge.
Execute BBM affected by single record table bug
Execute BBMs affected by #590848. This is in addition to !225461, which was not enough as it just marked the migrations as paused.
Krasimir Angelov (198ce3f4) at 17 Mar 04:20
Execute BBM affected by single record table bug
While testing this locally I realized this is not going to solve the issue. Even if we set the status of the potentially affected migrations back to active, subsequent migrations like 20260209093954 will still fail because:
A better approach would be to finalize these migrations so they are completed before dependent migrations are executed.
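The finalize-first ordering proposed above can be sketched abstractly. The classes and method names below are illustrative, not GitLab's API; the idea mirrors how dependent Rails migrations guard on a batched background migration being finished:

```ruby
# Hypothetical sketch: a dependent migration must not run until the
# batched background migration (BBM) it depends on reaches 'finished'.
class FakeBbm
  attr_reader :status

  def initialize
    @status = :active
  end

  # Finalizing runs the BBM to completion synchronously.
  def finalize!
    @status = :finished
  end
end

def run_dependent_migration(bbm)
  # Guard clause: refuse to run against incomplete backfill data.
  raise 'BBM not finished' unless bbm.status == :finished
  :migrated
end

bbm = FakeBbm.new
bbm.finalize!                       # finalize first...
puts run_dependent_migration(bbm)   # ...then the dependent migration runs; prints "migrated"
```

Without the finalize! call, the guard raises, which is the failure mode the thread describes for migrations like 20260209093954.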
@patrickbajao Good catch! This table was created a while ago (in 18.0) but partitions were not managed properly as public.merge_request_commits_metadata_id_seq was used. How will this change affect existing instances?