@daniele-gitlab
Remove archive_authentication_events feature flag.
This flag guarded a recurring background worker that deletes rows from the authentication_events table. The worker is now guarded by an application setting instead.
The FF was rolled out in #573714 and has been at 100% for 4 months.
We couldn't remove it until the application setting was implemented and confirmed safe. That's now done, so we're good to remove this flag.
Evaluate this MR against the MR acceptance checklist. It helps you analyze changes to reduce risks in quality, performance, reliability, security, and maintainability.
Related to #573714
Jason Knabl (018f0390) at 17 Mar 22:42
Remove archive_authentication_events feature flag
This route exists in sandbox now; I don't think there's anything left to do.
OauthApplications have organization_id set since 18.9. oauth_applications have a value for organization_id in the .com DB.
Try adding a NOT NULL constraint to oauth_applications.organization_id.
See what breaks in the test suite.
We need to find everywhere in the app that writes to this column.
This MR incorporates changes from individual MRs:
And adds any additional fixes needed to get CI to green.
Issue: #579290 Previous draft: !216043
Related to #579290
organization_id when creating LDAPKeys shipped in !221381 (merged)
organization_id for existing LDAPKeys shipped in !225979 (merged)
LDAPKey records on gitlab.com, so no real consequence.
Looks like we aren't setting keys.organization_id in seed fixtures or the keys factory.
This will cause the db:migrate:multi-version-upgrade jobs to fail and block CI for the NOT NULL constraint: !217884 (comment 2993112780)
This MR sets keys.organization_id everywhere it's needed to unblock the NOT NULL constraint.
The purpose of the MR is to run CI to make sure we're catching all the right places.
Adds a NOT NULL constraint to keys.organization_id.
NOT NULL constraint

main: == [advisory_lock_connection] object_id: 133000, pg_backend_pid: 11893
main: == 20260116211307 AddNotNullToKeysOrganizationId: migrating ===================
main: -- current_schema(nil)
main: -> 0.0006s
main: -- transaction_open?(nil)
main: -> 0.0000s
main: -- transaction_open?(nil)
main: -> 0.0000s
main: -- execute("ALTER TABLE keys\nADD CONSTRAINT check_8933ae4f38\nCHECK ( organization_id IS NOT NULL )\nNOT VALID;\n")
main: -> 0.0022s
main: -- execute("SET statement_timeout TO 0")
main: -> 0.0003s
main: -- execute("ALTER TABLE keys VALIDATE CONSTRAINT check_8933ae4f38;")
main: -> 0.0007s
main: -- execute("RESET statement_timeout")
main: -> 0.0002s
main: == 20260116211307 AddNotNullToKeysOrganizationId: migrated (0.0760s) ==========
main: == [advisory_lock_connection] object_id: 132380, pg_backend_pid: 12621
main: == 20260116211307 AddNotNullToKeysOrganizationId: reverting ===================
main: -- transaction_open?(nil)
main: -> 0.0000s
main: -- transaction_open?(nil)
main: -> 0.0000s
main: -- execute(" ALTER TABLE keys\n DROP CONSTRAINT IF EXISTS check_8933ae4f38\n")
main: -> 0.0015s
main: == 20260116211307 AddNotNullToKeysOrganizationId: reverted (0.0716s) ==========
main: == [advisory_lock_connection] object_id: 132380, pg_backend_pid: 12621
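The log above shows the standard two-step pattern for adding a NOT NULL constraint on a large table: add the CHECK constraint as `NOT VALID` (cheap, no full-table scan while holding the lock), then `VALIDATE CONSTRAINT` separately with the statement timeout lifted. A minimal sketch of the SQL this produces, with table, column, and constraint names taken from the log (the helper method name here is illustrative, not GitLab's migration API):

```ruby
# Sketch of the two-step NOT NULL constraint pattern seen in the migration log:
# step 1 adds the CHECK as NOT VALID, step 2 validates existing rows.
def not_null_constraint_sql(table, column, constraint_name)
  add = <<~SQL
    ALTER TABLE #{table}
    ADD CONSTRAINT #{constraint_name}
    CHECK (#{column} IS NOT NULL)
    NOT VALID;
  SQL
  validate = "ALTER TABLE #{table} VALIDATE CONSTRAINT #{constraint_name};"
  [add, validate]
end

add_sql, validate_sql = not_null_constraint_sql("keys", "organization_id", "check_8933ae4f38")
puts add_sql
puts validate_sql
```

Splitting the operation this way keeps the `ALTER TABLE` lock short; the validation scan runs afterwards without blocking writes on existing rows.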
Evaluate this MR against the MR acceptance checklist. It helps you analyze changes to reduce risks in quality, performance, reliability, security, and maintainability.
Related to #577246
In the discussion with the security team where we landed on the 2048 bit minimum, that specific minimum was chosen to match FIPS 140-3 requirements. Caveat: I am not an expert on FIPS, but I took a look at SP 800-131A Rev. 2, which appears to be the standard relevant to key length. On page 9, the text below table 2 states:
RSA: The length of the modulus n shall be 2048 bits or more to meet the minimum security-strength requirement of 112 bits for Federal Government use.
This is reinforced by >= 2048-bit values being listed as "acceptable" in that same table (Table 2).
The draft of SP 800-131A Rev. 3 does introduce a deprecation timeline for 2048-bit keys after 2030, after which the minimum would become 3072 bits, but the 2048-bit floor remains until then.
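For concreteness, a minimal sketch of what a 2048-bit minimum check looks like using Ruby's OpenSSL bindings. The method and constant names here are illustrative, not the actual code under discussion; only the 2048-bit floor comes from the SP 800-131A discussion above:

```ruby
require "openssl"

# Hypothetical names; the 2048-bit floor matches SP 800-131A Rev. 2.
MINIMUM_RSA_BITS = 2048

# Returns true when the RSA modulus n is shorter than the FIPS minimum.
def rsa_key_too_small?(pem)
  key = OpenSSL::PKey::RSA.new(pem)
  key.n.num_bits < MINIMUM_RSA_BITS
end

weak   = OpenSSL::PKey::RSA.new(1024)
strong = OpenSSL::PKey::RSA.new(2048)
puts rsa_key_too_small?(weak.public_key.to_pem)   # true
puts rsa_key_too_small?(strong.public_key.to_pem) # false
```

Note that the check is on the modulus length (`n.num_bits`), which is exactly the quantity the standard constrains.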
The documentation recommendation of 4096 bits was added very recently in a docs-only MR that did not go through security review, whereas the original 2048-bit code threshold for this warning was the result of deliberate discussion with @mloveless and is grounded in FIPS.
Should we confirm with the security team before using that docs recommendation to change the code threshold?
We did some benchmarking around JSON.safe_parse in this comment thread (see the script and results). Those results show that safe_parse adds significant overhead compared with the old parse, and that the overhead varies in magnitude with payload shape and size. That supports the premise of this MR: a safe_parse with improved performance could be beneficial.
Knowing how the benchmarks translate to actual production environments could help with the decision-making here. For example, is the performance benefit in production minimal, or potentially large? If minimal, it might not be worth the risks of breakage, rollout, and maintaining a C extension, as you said @minac. I do see a lot of risk involved in swapping a parser implementation.
Has anyone measured what percentage of request time safe_parse accounts for on hot paths in production? Hard numbers would help calibrate whether the ~8.5x improvement is meaningful or not relative to the rest of the request.
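As a starting point for that kind of measurement, here is a rough micro-benchmark sketch in the spirit of the thread: time plain JSON.parse against a stand-in "safe" parser across different payload shapes. `slow_safe_parse` below is a hypothetical placeholder (an extra result traversal to simulate overhead), not Gitlab::Json's actual implementation, and the payloads are made up:

```ruby
require "json"
require "benchmark"

# Stand-in for a slower "safe" parser: parse, then walk the whole result.
# This is NOT the real safe_parse; it only simulates per-node overhead.
def slow_safe_parse(payload)
  result = JSON.parse(payload)
  walk = ->(v) { v.is_a?(Enumerable) ? v.each { |e| walk.call(e) } : v }
  walk.call(result)
  result
end

payloads = {
  "flat array"  => (1..1_000).to_a.to_json,
  "nested hash" => { "a" => { "b" => { "c" => [1, 2, 3] } } }.to_json
}

payloads.each do |shape, payload|
  base = Benchmark.realtime { 1_000.times { JSON.parse(payload) } }
  safe = Benchmark.realtime { 1_000.times { slow_safe_parse(payload) } }
  puts format("%-11s parse=%.4fs safe=%.4fs (%.1fx)", shape, base, safe, safe / base)
end
```

Even a toy harness like this makes the shape-dependence visible; production numbers would still need profiling on real request payloads.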